
Category Archives: AR

Khronos releases OpenXR 1.0 for cross-platform AR/VR

The Khronos Group has ratified and released the OpenXR 1.0 specification, along with publicly available implementations. OpenXR is a unifying, royalty-free open standard that provides high-performance, cross-platform access to virtual reality (VR) and augmented reality (AR) — collectively known as XR — platforms and devices. The new specification can be found on the Khronos website and via GitHub.

“The feedback from the community on the provisional specification released in March has been invaluable to getting us to this significant milestone,” says Brent Insko, OpenXR working group chair and lead XR architect at Intel. “Our work continues as we now finalize a comprehensive test suite, integrate key game engine support, and plan the next set of features to evolve a truly vibrant, cross-platform standard for XR platforms and devices. Now is the time for software developers to start putting OpenXR to work.”

After gathering feedback from the XR community during the public review of the provisional specification, improvements were made to the OpenXR input subsystem, game engine editor support and loader. With this 1.0 release, the working group will evolve the standard while maintaining full backward compatibility from this point onward, giving software developers and hardware vendors a solid foundation upon which to deliver portable user experiences.

OpenXR implementations are shipping this week, including Collabora’s open source Monado OpenXR implementation, Microsoft’s OpenXR runtime for Windows Mixed Reality headsets and an Oculus OpenXR implementation supporting the Rift and Oculus Quest. Epic Games also plans to release OpenXR 1.0 support in Unreal Engine.
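For developers wondering what “portable” means in practice, here is a minimal sketch in C of the first call every OpenXR application makes: creating an instance. This is my own illustration against the published 1.0 headers, not sample code from Khronos, and it assumes the OpenXR loader and openxr/openxr.h are installed.

    #include <openxr/openxr.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* Describe the application to the runtime. */
        XrInstanceCreateInfo createInfo;
        memset(&createInfo, 0, sizeof(createInfo));
        createInfo.type = XR_TYPE_INSTANCE_CREATE_INFO;
        strcpy(createInfo.applicationInfo.applicationName, "HelloXR");
        createInfo.applicationInfo.applicationVersion = 1;
        createInfo.applicationInfo.apiVersion = XR_CURRENT_API_VERSION;

        /* The loader dispatches to whichever runtime is active on the
           system (Monado, Windows Mixed Reality, Oculus, ...), so this
           code is unchanged across vendors. */
        XrInstance instance = XR_NULL_HANDLE;
        XrResult result = xrCreateInstance(&createInfo, &instance);
        if (XR_FAILED(result)) {
            fprintf(stderr, "xrCreateInstance failed: %d\n", (int)result);
            return 1;
        }

        /* Report which runtime the loader picked. */
        XrInstanceProperties props;
        memset(&props, 0, sizeof(props));
        props.type = XR_TYPE_INSTANCE_PROPERTIES;
        xrGetInstanceProperties(instance, &props);
        printf("OpenXR runtime: %s\n", props.runtimeName);

        xrDestroyInstance(instance);
        return 0;
    }

A real application would go on to create a session for the headset, enumerate reference spaces and drive the frame loop, but the instance call above is where cross-vendor portability starts.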

Conductor boosts its cloud rendering with Amazon EC2

Conductor Technologies’ cloud rendering platform will now support Amazon Web Services (AWS) and Amazon Elastic Compute Cloud (Amazon EC2), bringing the virtual compute resources of AWS to Conductor customers. This new capability will provide content production studios working in visual effects, animation and immersive media with access to new, secure, powerful resources that will allow them — according to the company — to quickly and economically scale render capacity. Amazon EC2 instances, including cost-effective Spot Instances, are expected to be available via Conductor this summer.

“Our goal has always been to ensure that Conductor users can easily access reliable, secure instances on a massive scale. AWS has the largest and most geographically diverse compute, and the AWS Thinkbox team, which is highly experienced in all facets of high-volume rendering, is dedicated to M&E content production, so working with them was a natural fit,” says Conductor CEO Mac Moore. “We’ve already been running hundreds of thousands of simultaneous cores through Conductor, and with AWS as our preferred cloud provider, I expect we’ll be over the million simultaneous core mark in no time.”

Simple to deploy and highly scalable, Conductor is equally effective as an off-the-shelf solution or customized to a studio’s needs through its API. Conductor’s intuitive UI and accessible analytics provide a wealth of insightful data for keeping studio budgets on track. Apps supported by Conductor include Autodesk Maya and Arnold; Foundry’s Nuke, Cara VR, Katana, Modo and Ocula; Chaos Group’s V-Ray; Pixar’s RenderMan; Isotropix’s Clarisse; Golaem; Ephere’s Ornatrix; Yeti; and Miarmy. Additional software and plug-in support is in progress and may be available upon request.

Some background on Conductor: it’s a secure cloud-based platform that enables VFX, VR/AR and animation studios to seamlessly offload rendering and simulation workloads to the public cloud. As the only rendering service that is scalable to meet the exact needs of even the largest studios, Conductor easily integrates into existing workflows, features an open architecture for customization, provides data insights and can implement controls over usage to ensure budgets and timelines stay on track.


Apple offers augmented reality with Reality Composer

By Barry Goch

In addition to introducing the new Mac Pro and the Pro Display XDR at its Worldwide Developers Conference (WWDC19), Apple had some pretty cool demos. The coolest, in my mind, was the Minecraft augmented reality presentation.

Across the street from the San Jose Convention Center, where the keynote was held, Apple set up “The Studio” in the San Jose Civic. One of the demos there was an AR experience with the new Mac Pro: in reality, you saw only the space frame of Apple’s tower, but in augmented reality you could animate an exploded view. The technology behind this demo is the just-announced ARKit 3 and Reality Composer.

Apple had a couple of stations demoing Reality Composer in The Studio. Apple has applied its famous legacy of enabling content creators by making new technology easy to use. Case in point: Reality Composer. I’ve tried building AR experiences in other apps, and it’s not very straightforward: you have to learn a new interface and coding as well, and then use yet another app for placing your AR environment into the real world. The demo I saw of Reality Composer made it look easy, like working in Motion, with drag-and-drop prebuilt behaviors built into the app, along with multiple ways to target your AR experience in the real world.

AR QuickLook technology is part of iOS, and you can even get an AR experience of the new Mac Pro and Pro Display XDR through Apple’s website. Apple also mentioned its new file format for holding AR elements, USDZ, and has created a tool to convert other 3D file formats to USDZ.

With native AR support across Apple’s ecosystem, there is no better time to experiment and learn about augmented reality.


Barry Goch is a finishing artist at LA’s The Foundation and a UCLA Extension Instructor in post production. You can follow him on Twitter at @Gochya.


Behind the Title: Ntropic Flame artist Amanda Amalfi

NAME: Amanda Amalfi

COMPANY: Ntropic (@ntropic)

CAN YOU DESCRIBE YOUR COMPANY?
Ntropic is a content creator producing work for commercials, music videos and feature films as well as crafting experiential and interactive VR and AR media. We have offices in San Francisco, Los Angeles, New York City and London. Some of the services we provide include design, VFX, animation, editing, color grading and finishing.

WHAT’S YOUR JOB TITLE?
Senior Flame Artist

WHAT DOES THAT ENTAIL?
Being a senior Flame artist involves a variety of tasks that really span the duration of a project: from communicating with directors, agencies and production teams, to helping plan out any visual effects that might be in a project (including serving as VFX supervisor on set), to the actual post process of the job.

Amanda worked on this lipstick branding video for the makeup brand Morphe.

It involves client and team management (as you are often also the 2D lead on a project) and calls for a thorough working knowledge of the Flame itself, both in timeline management and that little thing called compositing. The compositing could cross multiple disciplines — greenscreen keying, 3D compositing, set extension and beauty cleanup to name a few. And it helps greatly to have a good eye for color and to be extremely detail-oriented.

WHAT MIGHT SURPRISE PEOPLE ABOUT YOUR ROLE?
How much it entails. Since this is usually a position that exists in a commercial house, we don’t have as many specialties as there would be in the film world.

WHAT’S YOUR FAVORITE PART OF THE JOB?
First is the artwork. I like that we get to work intimately with the client in the room to set looks. It’s often a very challenging position to be in — having to create something immediately — but the challenge is something that can be very fun and rewarding. Second, I enjoy being the overarching VFX eye on the project; being involved from the outset and seeing the project through to delivery.

WHAT’S YOUR LEAST FAVORITE?
We’re often meeting tight deadlines, so the hours can be unpredictable. But the best work happens when the project team and clients are all in it together until the last minute.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
The evening. I’ve never been a morning person so I generally like the time right before we leave for the day, when most of the office is wrapping up and it gets a bit quieter.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Probably a tactile art form. Sometimes I have the urge to create something that is tangible, not viewed through an electronic device — a painting or a ceramic vase, something like that.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I loved films that were animated and/or used 3D elements growing up and wanted to know how they were made. So I decided to go to a college that had a computer art program with connections in the industry and was able to get my first job as a Flame assistant in between my junior and senior years of college.

ANA Airlines

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Most recently I worked on a campaign for ANA Airlines. It was a fun, creative challenge on set and in post production. Before that I worked on a very interesting project for Facebook’s F8 conference featuring its AR functionality and helped create a lipstick branding video for the makeup brand Morphe.

IS THERE A PROJECT THAT YOU ARE MOST PROUD OF?
I worked on a spot for Vaseline that was a “through the ages” concept, and we had to create looks that would read as being from the 1880s, 1900s, 1940s, 1970s and present day, in locations that varied from the Arctic to the building of the Brooklyn Bridge to a boxing ring. To start, we sent the digitally shot footage with our 3D and comps to a printing house and had it printed and re-digitized. This worked perfectly for the ’70s-era look. Then we did additional work to age it further to the other eras — though my favorite was the Arctic turn-of-the-century look.

NAME SOME TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Flame… first and foremost. It really is the most inclusive software — I can grade, track, comp, paint and deliver all in one program. My monitors, a 4K Eizo and a color-calibrated broadcast monitor, are also essential.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Mostly Instagram.

DO YOU LISTEN TO MUSIC WHILE YOU WORK? 
I generally have music on with clients, so I will put on some relaxing music. If I’m not with clients, I listen to podcasts. I love How Did This Get Made and Conan O’Brien Needs a Friend.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Hiking and cooking are two great de-stressors for me. I love being in nature and working out and then going home and making a delicious meal.


Marvel Studios’ Victoria Alonso to keynote SIGGRAPH 2019

Marvel Studios executive VP of production Victoria Alonso has been named keynote speaker for SIGGRAPH 2019, which will run from July 28 through August 1 in downtown Los Angeles. Registration is now open. The annual SIGGRAPH conference is a melting pot for researchers, artists and technologists, among other professionals.

“Victoria is the ultimate symbol of where the computer graphics industry is headed and a true visionary for inclusivity,” says SIGGRAPH 2019 conference chair Mikki Rose. “Her outlook reflects the future I envision for computer graphics and for SIGGRAPH. I am thrilled to have her keynote this summer’s conference and cannot wait to hear more of her story.”

One of the few women in Hollywood to hold such a prominent title, Alonso has long been admired for her dedication to the industry, which has earned her multiple awards and honors, including the 2015 New York Women in Film & Television Muse Award for Outstanding Vision and Achievement and the 2017 VES Visionary Award (a female first); she was also the Advanced Imaging Society’s first female Harold Lloyd Award recipient. A native of Buenos Aires, she began her career in visual effects, including a four-year stint at Digital Domain.

Alonso’s film credits include productions such as Ridley Scott’s Kingdom of Heaven, Tim Burton’s Big Fish, Andrew Adamson’s Shrek, and numerous Marvel titles — Iron Man, Iron Man 2, Thor, Captain America: The First Avenger, Iron Man 3, Captain America: The Winter Soldier, Captain America: Civil War, Thor: The Dark World, Avengers: Age of Ultron, Ant-Man, Guardians of the Galaxy, Doctor Strange, Guardians of the Galaxy Vol. 2, Spider-Man: Homecoming, Thor: Ragnarok, Black Panther, Avengers: Infinity War, Ant-Man and the Wasp and, most recently, Captain Marvel.

“I’ve been attending SIGGRAPH since before there was a line at the ladies’ room,” says Alonso. “I’m very much looking forward to having a candid conversation about the state of visual effects, diversity and representation in our industry.”

She adds, “At Marvel Studios, we have always tried to push boundaries with both our storytelling and our visual effects. Bringing our work to SIGGRAPH each year offers us the opportunity to help shape the future of filmmaking.”

The 2019 keynote session will be presented as a fireside chat, allowing attendees the opportunity to hear Alonso discuss her life and career in an intimate setting.


Behind the Title: Nice Shoes animator Yandong Dino Qiu

This artist/designer has taken to sketching people on the subway to keep his skills fresh and mind relaxed.

NAME: Yandong Dino Qiu

COMPANY: New York’s Nice Shoes

CAN YOU DESCRIBE YOUR COMPANY?
Nice Shoes is a full-service creative studio. We offer design, animation, VFX, editing, color grading and VR/AR, working with agencies, brands and filmmakers to help realize their creative vision.

WHAT’S YOUR JOB TITLE?
Designer/Animator

WHAT DOES THAT ENTAIL?
Helping our clients to explore different looks in the pre-production stage, while aiding them in getting as close as possible to the final look of the spot. There’s a lot of exploration and trial and error as we try to deliver beautiful still frames that inform the look of the moving piece.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Not so much for the title, but for myself, design and animation can be quite broad. People may assume you’re only 2D, but the job also involves a lot of other skill sets, such as 3D lighting and rendering. It’s pretty close to a generalist role that requires you to know nearly every software package as well as to turn things around very quickly.

WHAT TOOLS DO YOU USE?
Photoshop, After Effects, Illustrator, InDesign — the full Adobe Creative Suite — and Maxon Cinema 4D.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Pitch and exploration. At that stage, all possibilities are open. The job is alive… like a baby. You’re seeing it form and helping to make new life. Before this, you have no idea what it’s going to look like. After this phase, everyone has an idea. It’s very challenging, exciting and rewarding.

WHAT’S YOUR LEAST FAVORITE?
Revisions. Especially toward the end of a project. Everything is set up. One little change will affect everything else.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
2:15pm. It’s right after lunch. You know you have the whole afternoon. The sun is bright. The mood is light. It’s not too late for anything.

Sketching on the subway.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I would be a Manga artist.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
La Mer. Frontline. Friskies. I’ve also been drawing during my commute every day, sketching the people I see on the subway. I’m trying to post every week on Instagram. I think it’s important for artists to keep to a routine. I started up with this at the beginning of 2019, and there’ve been about 50 drawings already. Artists need to keep their pen sharp all the time. By doing these sketches, I’m not only benefiting my drawing skills, but I’m improving my observation of shapes and compositions, which is extremely valuable for work. Being able to break down shapes and components is a key principle of design, and honing that skill helps me in responding to client briefs.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
TED-Ed What Is Time? We had a lot of freedom in figuring out how to animate Einstein’s theories in a fun and engaging way. I worked with our creative director Harry Dorrington to establish the look and then with our CG team to ensure that the feel we established in the style frames was implemented throughout the piece.

TED-Ed What Is Time?

The film was extremely well received. There was a lot of excitement at Nice Shoes when it premiered, and TED-Ed’s audience seemed to respond really warmly as well. It’s rare to see so much positivity in the YouTube comments.

NAME SOME TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My Wacom tablet for drawing and my iPad for reading.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I take time and draw for myself. I love that drawing and creating is such a huge part of my job, but it can get stressful and tiring only creating for others. I’m proud of that work, but when I can draw something that makes me personally happy, any stress or exhaustion from the work day just melts away.


IDEA launches to create specs for next-gen immersive media

The Immersive Digital Experiences Alliance (IDEA) will launch at NAB 2019 with the goal of creating a suite of royalty-free specifications that address all immersive media formats, including emerging light field technology.

Founding members — including CableLabs, Light Field Lab, Otoy and Visby — created IDEA to serve as an alliance of like-minded technology, infrastructure and creative innovators working to facilitate the development of an end-to-end ecosystem for the capture, distribution and display of immersive media.

Such a unified ecosystem must support all displays, including highly anticipated light field panels. Recognizing that the essential launch point would be to create a common media format specification that can be deployed on commercial networks, IDEA has already begun work on the new Immersive Technology Media Format (ITMF).

ITMF will serve as an interchange and distribution format that will enable high-quality conveyance of complex image scenes, including six-degrees-of-freedom (6DoF), to an immersive display for viewing. Moreover, ITMF will enable the support of immersive experience applications including gaming, VR and AR, on top of commercial networks.

Recognized for its potential to deliver an immersive true-to-life experience, light field media can be regarded as the richest and most dense form of visual media, thereby setting the highest bar for features that the ITMF will need to support and the new media-aware processing capabilities that commercial networks must deliver.

Jon Karafin, CEO/co-founder of Light Field Lab, explains that “a light field is a representation describing light rays flowing in every direction through a point in space. New technologies are now enabling the capture and display of this effect, heralding new opportunities for entertainment programming, sports coverage and education. However, until now, there has been no common media format for the storage, editing, transmission or archiving of these immersive images.”
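Karafin’s description matches what the computer graphics literature calls the plenoptic function. As background (my addition, not part of the IDEA announcement), the standard formulation, ignoring time and wavelength, is

    \[ L = L(x, y, z, \theta, \phi) \]

that is, the radiance L along the ray passing through the point (x, y, z) in the direction given by the angles (\theta, \phi). Because radiance is constant along a ray in free space, captured light fields are usually reduced to the 4D two-plane parameterization L(u, v, s, t), which is what camera-array rigs record.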

“We’re working on specifications and tools for a variety of immersive displays — AR, VR, stereoscopic 3D and light field technology, with light field being the pinnacle of immersive experiences,” says Dr. Arianne Hinds, Immersive Media Strategist at CableLabs. “As a display-agnostic format, ITMF will provide near-term benefits for today’s screen technology, including VR and AR headsets and stereoscopic displays, with even greater benefits when light field panels hit the market. If light field technology works half as well as early testing suggests, it will be a game-changer, and the cable industry will be there to help support distribution of light field images with the 10G platform.”

Starting with Otoy’s ORBX scene graph format, a well-established data structure widely used in advanced computer animation and computer games, IDEA will provide extensions to expand the capabilities of ORBX for light field photographic camera arrays, live events and other applications. Further specifications will include network streaming for ITMF and transcoding of ITMF for specific displays, archiving and other applications. IDEA will preserve backward compatibility with the existing ORBX format.

IDEA anticipates releasing an initial draft of the ITMF specification in 2019. The alliance also is planning an educational seminar to explain more about the requirements for immersive media and the benefits of the ITMF approach. The seminar will take place in Los Angeles this summer.

Photo credit: Light Field Lab (all rights reserved). Future Vision concept art of a room-scale holographic display from Light Field Lab, Inc.


Behind the Title: Left Field Labs ECD Yann Caloghiris

NAME: Yann Caloghiris

COMPANY: Left Field Labs (@LeftFieldLabs)

CAN YOU DESCRIBE YOUR COMPANY?
Left Field Labs is a Venice, California-based creative agency dedicated to applying creativity to emerging technologies. We create experiences at the intersection of strategy, design and code for our clients, who include Google, Uber, Discovery and Estée Lauder.

But it’s how we go about our business that has shaped who we have become. Over the past 10 years, we have consciously moved away from the traditional agency model and have grown by deepening our expertise, sourcing exceptional talent and, most importantly, fostering a “lab-like” creative culture of collaboration and experimentation.

WHAT’S YOUR JOB TITLE?
Executive Creative Director

WHAT DOES THAT ENTAIL?
My role is to drive the creative vision across our client accounts, as well as our own ventures. In practice, that can mean anything from providing insights for ongoing work to proposing creative strategies to running ideation workshops. Ultimately, it’s whatever it takes to help the team flourish and push the envelope of our creative work.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Probably that I learn more now than I did at the beginning of my career. When I started, I imagined that the executive CD roles were occupied by seasoned industry veterans, who had seen and done it all, and would provide tried and tested direction.

Today, I think that cliché is out of touch with what’s required from agency culture and where the industry is going. Sure, some aspects of the role remain unchanged — such as being a supportive team lead or appreciating the value of great copy — but the pace of change is such that the role often requires both the ability to leverage past experience and the willingness to accept that sometimes a new paradigm is emerging and assumptions need to be adjusted.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Working with the team, and the excitement that comes from workshopping the big ideas that will anchor the experiences we create.

WHAT’S YOUR LEAST FAVORITE?
The administrative parts of a creative business are not always the most fulfilling. Thankfully, tasks like timesheeting, expense reporting and invoicing are becoming less exhausting thanks to better predictive tools and machine learning.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
The early hours of the morning, usually when inspiration strikes — when we haven’t had to deal with the unexpected day-to-day challenges that come with managing a busy design studio.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I’d probably be somewhere at the intersection between an artist, like my mum was, and an engineer, like my dad. There is nothing more satisfying than applying art to an engineering challenge, or vice versa.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I went to school in France, and there wasn’t much room for anything other than school and homework. When I got my Baccalaureate, I decided that from that point onward, whatever I did would be fun, deeply engaging and done someplace where being creative was an asset.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We recently partnered with ad agency RK Venture to craft a VR experience for the New Mexico Department of Transportation’s ongoing ENDWI campaign, which immerses viewers in a real-life drunk-driving scenario.

ENDWI

To best communicate and tell the human side of this story, we turned to rapid breakthroughs within volumetric capture and 3D scanning. Working with Microsoft’s Mixed Reality Capture Studio, we were able to bring every detail of an actor’s performance to life with volumetric performance capture in a way that previous techniques could not.

Bringing a real actor’s performance into a virtual experience is a game changer because of the emotional connection it creates. For ENDWI, the combination of rich immersion with compelling non-linear storytelling proved to affect the participants at a visceral level — with the goal of changing behavior further down the road.

Throughout this past year, we partnered with the VMware Cloud Marketing Team to create a one-of-a-kind immersive booth experience for VMworld Las Vegas 2018 and Barcelona 2018 called Cloud City. VMware’s cloud offering needed a distinct presence to foster a deeper understanding and greater connectivity between brand, product and customers stepping into the cloud.

Cloud City

Our solution was Cloud City, a destination merging future-forward architecture, light, texture, sound and interactions with VMware Cloud experts to give consumers a window into how the cloud, and more specifically how VMware Cloud, can be an essential solution for them. VMworld is the brand’s premier engagement, where hands-on learning helped showcase its cloud offerings. Cloud City garnered 4,000-plus demos, which led to a 20% lead conversion in 10 days.

Finally, for Google, we designed and built a platform for the hosting of online events anywhere in the world: Google Gather. For its first release, teams across Google, including Android, Cloud and Education, used Google Gather to reach and convert potential customers across the globe. With hundreds of events to date, the platform now reaches enterprise decision-makers at massive scale, spanning far beyond what has been possible with traditional event marketing, management and hosting.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Recently, a friend and I shot and edited a fun video homage to the original technology boom-town: Detroit, Michigan. It features two cultural icons from the region, an original big block ‘60s muscle car and some gritty electro beats. My four-year-old son thinks it’s the coolest thing he’s ever seen. It’s going to be hard for me to top that.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Human flight, the Internet and our baby monitor!

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Instagram, Twitter, Medium and LinkedIn.

CARE TO SHARE YOUR FAVORITE MUSIC TO WORK TO?
Where to start?! Music has always played an important part of my creative process, and the joy I derive from what we do. I have day-long playlists curated around what I’m trying to achieve during that time. Being able to influence how I feel when working on a brief is essential — it helps set me in the right mindset.

Sometimes, it might be film scores when working on visuals, jazz to design a workshop schedule or techno to dial up productivity when doing expenses.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Spend time with my kids. They remind me that there is a simple and unpretentious way to look at life.


From artist to AR technologist: What I learned along the way

By Leon Hui

As an ARwall co-founder and chief technology officer (CTO), I manage all things relating to technology for the company. This includes overseeing software and technology development, design, engineering, IT, troubleshooting and everything in between. When we launched the company, I single-handedly developed the critical pieces of technology required to realize the overall ARwall concept.

I came into augmented reality (AR) as a game development software engineer, and that plays a big role in how I approach this new medium. Stepping into ARwall, it became my job to produce artistic realtime graphics for AR backdrops and settings, while also pursuing technological advancements that will move the AR industry forward.

Rene Amador presents in front of an ARwall screen. The TV monitor in the foreground shows the camera’s perspective.

Alongside CEO Rene Amador, we found that the best way to make sure the company retained its artistic values was to bring on highly talented artists, coders and engineers with diverse skill sets spanning both art and tech. It’s our mission to not let the scales tip one way or the other, and to focus on bringing in both artistic and tech talent.

With the continuing convergence of entertainment and technology, it is vital for a creative technology company to continue to advance, while maintaining and nurturing artistic integrity.

Here is what we have learned along the way in striking this balance:

Diversify Your Hiring
Going into AR, or any other immersive field, it is very important that one understands realtime graphics.

So, while it’s useful for my company to hire engineers that have graphics and coding backgrounds — as many game engineers do — it’s still crucial to hire for the individual strengths of both tech and art. At ARwall, our open roles could be combined for one gifted individual, or isolated with an emphasis on either artistry or coding, for those with specialties.

Because we are dealing with high-quality realtime graphics, the ARwall team looks a lot like the team at a AAA game studio. We never deviated from an artistic trajectory — we just brought technology along for the ride. We think of talent recruitment as a crucial process in our advancement and always have our eyes out for our next game developer, filling roles ranging from technical, environment, material and character artist to graphics, game engine and generalist engineer.

Expand Your Education
If someone with a background in film or TV post production came to work in a new tech industry, like AR, they would need to expand their own education. It’s challenging, but not impossible. While my company’s current emphasis is on game developers and CG artists, the backgrounds of fellow co-founders Rene Amador, Eric Navarrette and Jocelyn Hsu sit in ad agencies, television digital development, post production and beyond.

Jocelyn Hsu on an XR set, a combination of physical set pieces with the CG set extension running in the background.

There are a variety of toolsets and concepts left to learn, including: the software development life cycle; Microsoft Project or Hansoft; Agile methodology; the definition of “realtime graphics” and how it works; the top-dog game engine tools, including Unity and Unreal Engine 4; and digital asset creation pipelines for game engines, among others.

The transition is largely based on one’s game development background but, of course, there is always a learning curve when entering a new industry.

Focus on the Balance
We understand that the core of a “technology company,” as we bill ourselves, is still the foundational technology. However, depending on the type of technology, companies need staffers that have a high-level mastery of the technology in order to demonstrate its full potential to others. It just happens that with AR technology there is an inherently visual aspect, which translates to a need for superior artistry in unison with the precise technology.

For AR technology to show well and look appealing, high-quality artistry is very much needed. This can be a difficult balance to maintain if focus or purpose is lost. For ARwall, we aim to hire talent that excels at art or engineering, or both.

ARwall expanded its offerings to stake its claim as a technology company, but it is built on each founder’s roots as artists, engineers and producers. Tech and art aren’t mutually exclusive; rather, with focus, education and time to search for the right talent, technology companies can excel at invention and keep their creative edge, all at once.


Leon Hui brings to the team more than 20 years of technical experience as a software engineer focusing on realtime 3D graphics, VR/AR and systems architecture. He has held lead/senior technical roles on 15 shipped AAA titles as a veteran of top developers, including EA, Microsoft Studios and Konami Digital Entertainment. He was previously TD at Skydance Interactive. ARwall is based in Burbank.

 

Behind the Title: Lobo EP for Europe Loic Francois Marie Dubois

NAME: Loic Francois Marie Dubois

COMPANY: New York- and São Paulo, Brazil-based Lobo

CAN YOU DESCRIBE YOUR COMPANY?
We are a full-service creative studio offering design, live action, stop motion, 3D & 2D, mixed media, print, digital, AR and VR.

Day One spot Sunshine

WHAT’S YOUR JOB TITLE?
Creative executive producer for Europe and formerly head of production. I’m based in Brazil, but work out of the New York office as well.

WHAT DOES THAT ENTAIL?
Managing and hiring creative teams, designers, producers and directors for international productions (USA, Europe, Asia). Also, I have served as the creative executive director for TBWA Paris on the McDonald’s Happy Meal global campaign for the last five years. Now, as creative EP for Europe, I am also responsible for streamlining information from pre-production to post production between all production parties for a more efficient and prosperous sales outcome.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The patience and the fun psychological side you need to have to handle all the production peeps, agencies, and clients.

WHAT TOOLS DO YOU USE?
Excel, Word, Showbiz, Keynote, Pages, the Adobe suite (Photoshop, Illustrator, After Effects, Premiere, InDesign), Maya, Flame, Nuke and AR/VR technology.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Working with talented creative people on extraordinary projects with a stunning design and working on great narratives, such as the work we have done for clients including Interface, Autism Speaks, Imaginary Friends, Unicef and Travelers, to name a few.

WHAT’S YOUR LEAST FAVORITE?
Monday morning.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Early afternoon between Europe closing down and the West Coast waking up.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Meditating in Tibet…

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Since I was 13 years old. After shooting and editing a student short film (an Oliver Twist adaptation) with a Bolex 16mm on location in London and Paris, I was hooked.

Promoting Lacta 5Star chocolate bars

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
An animated campaign for the candy company Mondelez’s Lacta 5Star chocolate bars; an animated short film for the Imaginary Friends Society; a powerful animated short on the dangers of dating abuse and domestic violence for nonprofit Day One; a mixed media campaign for Chobani called FlipLand; and a broadcast spot for McDonald’s and Spider-Man.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
My three kids 🙂

It’s really hard to choose one project, as they are all equally different and amazing in their own way, but maybe D&AD Wish You Were Here. It stands out for the number of awards it won and the collective creative production process.

NAME PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
The Internet.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Meditation and yoga.

Chaos Group to support Cinema 4D with two rendering products

At the Maxon Supermeet 2018 event, Chaos Group announced its plans to support the Maxon Cinema 4D community with two rendering products: V-Ray for Cinema 4D and Corona for Cinema 4D. Built on V-Ray’s Academy Award-winning raytracing technology, V-Ray for Cinema 4D development will focus on production rendering for high-end visual effects and motion graphics. Corona for Cinema 4D will focus on artist-friendly design visualization.

Chaos Group, which acquired the V-Ray for Cinema 4D product from LAUBlab and will lead development on the product for the first time, will offer current customers free migration to a new update, V-Ray 3.7 for Cinema 4D. All users who move to the new version will receive a free V-Ray for Cinema 4D license, including all product updates, through January 15, 2020. Moving forward, Chaos Group will be providing all support, sales and product development in-house.

In addition to ongoing improvements to V-Ray for Cinema 4D, Chaos Group also released the Corona for Cinema 4D beta 2 at Supermeet, with the final product to follow in January 2019.

Main Image: Daniel Sian created Robots using V-Ray for Cinema 4D.

SIGGRAPH conference chair Roy C. Anthony: VR, AR, AI, VFX, more

By Randi Altman

Next month, SIGGRAPH returns to Vancouver after turns in Los Angeles and Anaheim. This gorgeous city, whose convention center offers a water view, is home to many visual effects studios providing work for film, television and spots.

As usual, SIGGRAPH will host many presentations, showcase artists’ work, display technology and offer a glimpse into what’s on the horizon for this segment of the market.

Roy C. Anthony

Leading up to the show — which takes place August 12-16 — we reached out to Roy C. Anthony, this year’s conference chair. For his day job, Anthony recently joined Ventuz Technology as VP, creative development. There, he leads initiatives to bring Ventuz’s realtime rendering technologies to creators of sets, stages and ProAV installations around the world.

SIGGRAPH is back in Vancouver this year. Can you talk about why it’s important for the industry?
There are 60-plus world-class VFX and animation studios in Vancouver. There are more than 20,000 film and TV jobs, and more than 8,000 VFX and animation jobs in the city.

So, Vancouver’s rich production-centric communities are leading the way in film and VFX production for television and onscreen films. They are also busy with new media content, games work and new workflows, including those for AR/VR/mixed reality.

How many exhibitors this year?
The conference and exhibition will play host to over 150 exhibitors on the show floor, showcasing the latest in computer graphics and interactive technologies, products and services. Due to the amount of new technology that has debuted in the computer graphics marketplace over this past year, almost one quarter of this year’s 150 exhibitors will be presenting at SIGGRAPH for the first time.

In addition to the traditional exhibit floor and conferences, what are some of the can’t-miss offerings this year?
We have increased the presence of virtual, augmented and mixed reality projects and experiences — and we are introducing our new Immersive Pavilion in the east convention center, which will be dedicated to this area. We’ve incorporated immersive tech into our computer animation festival with the inclusion of our VR Theater, back for its second year, as well as inviting a special, curated experience with New York University’s Ken Perlin — he’s a legendary computer graphics professor.

We’ll be kicking off the week in a big VR way with a special session following the opening ceremony featuring Ivan Sutherland, considered by many to be “the father of computer graphics.” That 50-year retrospective will present the history and innovations that sparked our industry.

We have also brought in Syd Mead, a legendary “visual futurist” (Blade Runner, Tron, Star Trek: The Motion Picture, Aliens, Timecop, Tomorrowland, Blade Runner 2049), who will display an arrangement of his art in a special collection called Progressions. This will be seen within our Production Gallery experience, which also returns for its second year. Progressions will exhibit more than 50 years of artwork by Syd, from his academic years to his most current work.

We will have an amazing array of guest speakers, including those featured within the Business Symposium, which is making a return to SIGGRAPH after an absence of a few years. Among these speakers are people from the Disney Technology Innovation Group, Unity and Georgia Tech.

On Tuesday, August 14, our SIGGRAPH Next series will present a keynote speaker each morning to kick off the day with an inspirational talk. These speakers are Tony DeRose, a senior scientist from Pixar; Daniel Szecket, VP of design for Quantitative Imaging Systems; and Bob Nicoll, dean of Blizzard Academy.

There will also be a 25th anniversary showing of the original Jurassic Park, hosted by “Spaz” Williams, a digital artist who worked on the film.

Can you talk about this year’s keynote and why he was chosen?
We’re thrilled to have Rob Bredow, ILM’s head and senior VP/executive creative director, deliver the keynote address this year. Rob is all about innovation — pushing through scary new directions while maintaining the leadership of artists and technologists.

Rob is the ultimate modern-day practitioner, a digital VFX supervisor who has been disrupting ‘the way it’s always been done’ to move to new ways. He truly reflects the spirit of ILM, which was founded in 1975 and is just one year younger than SIGGRAPH.

A large part of SIGGRAPH is its slant toward students and education. Can you discuss how this came about and why this is important?
SIGGRAPH supports education in all sub-disciplines of computer graphics and interactive techniques, and it promotes and improves the use of computer graphics in education. Our Education Committee sponsors a broad range of projects, such as curriculum studies, resources for educators and SIGGRAPH conference-related activities.

SIGGRAPH has always been a welcoming and diverse community, one that encourages mentorship, and acknowledges that art inspires science and science enables advances in the arts. SIGGRAPH was built upon a foundation of research and education.

How are the Computer Animation Festival films selected?
The Computer Animation Festival has two programs, the Electronic Theater and the VR Theater. Because of the large volume of submissions for the Electronic Theater (over 400), there is a triage committee for the first phase. The CAF chair then takes the high-scoring pieces to a jury comprised of industry professionals. The jury’s selections then become the Electronic Theater show pieces.

The selections for the VR Theater are made by a smaller panel comprised mostly of sub-committee members that watch each film in a VR headset and vote.

Can you talk more about how SIGGRAPH is tackling AR/VR/AI and machine learning?
Since SIGGRAPH 2018 is about the theme of “Generations,” we took a step back to look at how we got where we are today in terms of AR/VR, and where we are going with it. Much of what we know today wouldn’t have been possible without the research and creation of Ivan Sutherland’s 1968 head-mounted display. We have a fantastic panel celebrating the 50-year anniversary of his HMD, which is widely considered the first VR HMD.

AI tools are newer, and we created a panel that focuses on trends and the future of AI tools in VFX, called “Future Artificial Intelligence and Deep Learning Tools for VFX.” This panel gains insight from experts embedded in both the AI and VFX industries and gives attendees a look at how different companies plan to further their technology development.

What is the process for making sure that all aspects of the industry are covered in terms of panels?
Every year, new ideas for panels and sessions are submitted by contributors from all over the globe. Those submissions are then reviewed by a jury of industry experts, and it is through this process that panelists and cross-industry coverage are determined.

Each year, the conference chair oversees the program chairs, then each of the program chairs become part of a jury process — this helps to ensure the best program with the most industries represented from across all disciplines.

In the rare case a program committee feels they are missing something key in the industry, they can try to curate a panel in, but we still require that that panel be reviewed by subject matter experts before it would be considered for final acceptance.

 

Lenovo intros 15-inch VR-ready ThinkPad P52

Lenovo’s new ThinkPad P52 is a 15-inch, VR-ready and ISV-certified mobile workstation featuring an Nvidia Quadro P3200 GPU. The all-new hexa-core Intel Xeon CPU doubles the memory capacity to 128GB and increases PCIe storage. Lenovo says the ThinkPad excels in animation and visual effects project storage, the creation of large models and datasets, and realtime playback.

“More and more, M&E artists have the need to create on-the-go,” reports Lenovo senior worldwide industry manager for M&E Rob Hoffmann. “Having desktop-like capabilities in a 15-inch mobile workstation allows artists to remain creative anytime, anywhere.”

The workstation targets traditional ISV workflows, as well as AR and VR content creation or deployment of mobile AI. Lenovo points to Virtalis, a VR and advanced visualization company, as an example of who might take advantage of the workstation.

“Our virtual reality solutions help clients better understand data and interact with it. Being able to take these solutions mobile with the ThinkPad P52 gives us expanded flexibility to bring the technology to life for clients in their unique environments,” says Steve Carpenter, head of solutions development for Virtalis. “The ThinkPad P52 powering our Virtalis Visionary Render software is perfect for engineering and design professionals looking for a portable solution to take their first steps into the endless possibilities of VR.”

The P52 also will feature a 4K UHD display with 400 nits of brightness, 100% Adobe color gamut coverage and 10-bit color depth. There are dual USB-C Thunderbolt ports supporting the display of 8K video, allowing users to take advantage of the ThinkPad Thunderbolt Workstation Dock.

The ThinkPad P52 will be available later this month.

Review: GoPro Fusion 360 camera

By Mike McCarthy

I finally got the opportunity to try out the GoPro Fusion camera I have had my eye on since the company first revealed it in April. The $700 camera uses two offset fish-eye lenses to shoot 360 video and stills, while recording ambisonic audio from four microphones in the waterproof unit. It can shoot a 5K video sphere at 30fps, or a 3K sphere at 60fps for higher motion content at reduced resolution. It records dual 190-degree fish-eye perspectives encoded in H.264 to separate MicroSD cards, with four tracks of audio. The rest of the magic comes in the form of GoPro’s newest application Fusion Studio.

Internally, the unit is recording dual 45Mb H.264 files to two separate MicroSD cards, with accompanying audio and metadata assets. This would be a logistical challenge to deal with manually, copying the cards into folders, sorting and syncing them, stitching them together and dealing with the audio. But with GoPro’s new Fusion Studio app, most of this is taken care of for you. Simply plug in the camera and it will automatically access the footage, and let you preview and select what parts of which clips you want processed into stitched 360 footage or flattened video files.

It also processes the multi-channel audio into ambisonic B-Format tracks, or standard stereo if desired. The app is a bit limited in user-control functionality, but what it does do it does very well. My main complaint is that I can’t find a way to manually set the output filename, but I can rename the exports in Windows once they have been rendered. Trying to process the same source file into multiple outputs is challenging for the same reason.

Setting    Recorded Resolution (Per Lens)    Processed Resolution (Equirectangular)
5Kp30      2704×2624                         4992×2496
3Kp60      1568×1504                         2880×1440
Stills     3104×3000                         5760×2880
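Note that all three processed sizes are exactly 2:1, as expected for equirectangular frames, which map the full 360 degrees of longitude to the width and 180 degrees of latitude to the height. As a sketch of the standard projection math (background, not anything Fusion-specific), a ray direction with longitude \theta \in [-\pi, \pi] and latitude \phi \in [-\pi/2, \pi/2] lands at pixel

    \[ u = W\left(\frac{\theta}{2\pi} + \frac{1}{2}\right), \qquad v = H\left(\frac{1}{2} - \frac{\phi}{\pi}\right), \qquad W = 2H \]

so a 4992×2496 frame spends one pixel column per roughly 0.07 degrees of longitude.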

With the Samsung Gear 360, I researched five different ways to stitch the footage, because I wasn’t satisfied with the included app. Most of those will also work with Fusion footage, and you can read about those options here, but they aren’t really necessary when you have Fusion Studio.

You can choose between H.264, Cineform or ProRes, your equirectangular output resolution and ambisonic or stereo audio. That gives you pretty much every option you should need to process your footage. There is also a “Beta” option to stabilize your footage, which, once I got used to it, I really liked. It should be thought of more as a “remove rotation” option, since it’s not for stabilizing out sharp motions — which still leave motion blur — but for maintaining the viewer’s perspective even if the camera rotates in unexpected ways. Processing was about 6x run-time on my Lenovo ThinkPad P71 laptop, so a 10-minute clip would take an hour to stitch to 360.

The footage itself looks good, higher quality than my Gear 360, and the 60p stuff is much smoother, which is to be expected. While good VR experiences require 90fps to be rendered to the display to avoid motion sickness, that does not necessarily mean that 30fps content is a problem. When rendering the viewer’s perspective, the same frame can be sampled three times, shifting the image as they move their head, even from a single source frame. That said, 60p source content does give smoother results than the 30p footage I am used to watching in VR, but 60p did give me more issues during editorial. I had to disable CUDA acceleration in Adobe Premiere Pro to get Transmit to work with the WMR headset.

Once you have your footage processed in Fusion Studio, it can be edited in Premiere Pro — like any other 360 footage — but the audio can be handled a bit differently. Exporting as stereo will follow the usual workflow, but selecting ambisonic will give you a special spatially aware audio file. Premiere can use this in a 4-track multi-channel sequence to line up the spatial audio with the direction you are looking in VR, and if exported correctly, YouTube can do the same thing for your viewers.

In the Trees
Most GoPro products are intended for use capturing action moments and unusual situations in extreme environments (which is why they are waterproof and fairly resilient), so I wanted to study the camera in its “native habitat.” The most extreme thing I do these days is work on ropes courses, high up in trees or telephone poles. So I took the camera out to a ropes course that I help out with, curious to see how the recording at height would translate into the 360 video experience.

Ropes courses are usually challenging to photograph because of the scale involved. When you are zoomed out far enough to see the entire element, you can’t see any detail, and if you are zoomed in close enough to see faces, you have no good concept of how high up they are — 360 photography is helpful in that it is designed to be panned through when viewed flat. This allows you to give the viewer a better sense of the scale, and they can still see the details of the individual elements or people climbing. And in VR, you should have a better feel for the height involved.

I had the Fusion camera and Fusion Grip extendable tripod handle, as well as my Hero6 kit, which included an adhesive helmet mount. Since I was going to be working at heights and didn’t want to drop the camera, the first thing I did was rig up a tether system. A short piece of 2mm cord fit through a slot in the bottom of the center post and a triple fisherman knot made a secure loop. The cord fit out the bottom of the tripod when it was closed, allowing me to connect it to a shock-absorbing lanyard, which was clipped to my harness. This also allowed me to dangle the camera from a cord for a free-floating perspective. I also stuck the quick release base to my climbing helmet, and was ready to go.

I shot segments in both 30p and 60p, depending on how I had the camera mounted, using higher frame rates for the more dynamic shots. I was worried that the helmet mount would be too close, since GoPro recommends keeping the Fusion at least 20cm away from what it is filming, but the helmet wasn’t too bad. Another inch or two would shrink it significantly from the camera’s perspective, similar to my tripod issue with the Gear 360.

I always climbed up with the camera mounted on my helmet and then switched it to the Fusion Grip to record the guy climbing up behind me and my rappel. Hanging the camera from a cord, even 30 feet below me, worked much better than I expected. It put GoPro’s stabilization feature to the test, but it worked fantastically. With the camera rotating freely, the perspective is static, although you can see the seam lines constantly rotating around you. When I am holding the Fusion Grip, the extended pole is completely invisible to the camera, giving you what GoPro has dubbed “Angel View.” It is as if the viewer is floating freely next to the subject, especially when viewed in VR.

Because I have ways to view 360 video in VR, and because I don’t mind panning around on a flat screen view, I am personally less excited about GoPro’s OverCapture functionality, but I recognize it is a useful feature that will greatly extend the use cases for this 360 camera. It is designed for people using the Fusion as a more flexible camera to produce flat content, instead of to produce VR content. I edited together a couple of OverCapture shots intercut with footage from my regular Hero6 to demonstrate how that would work.

Ambisonic Audio
The other new option that Fusion brings to the table is ambisonic audio. Editing ambisonics works in Premiere Pro using a 4-track multi-channel sequence. The main workflow kink here is that you have to manually override the audio settings every time you import a new clip with ambisonic audio, in order to set the audio channels to Adaptive with a single timeline clip. Turn on Monitor Ambisonics by right-clicking in the monitor panel, match the Pan, Tilt and Roll in the Panner-Ambisonics effect to the values in your VR Rotate Sphere effect (note that they are listed in a different order), and your audio should match the video perspective.
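For those wondering why the sequence is exactly four channels: first-order ambisonic B-format stores the sound field as four spherical-harmonic components. In the traditional FuMa convention (standard audio math I’m adding for background, not something Premiere surfaces directly), a mono source s arriving from azimuth \theta and elevation \phi encodes as

    \[ W = \frac{s}{\sqrt{2}}, \qquad X = s\cos\theta\cos\phi, \qquad Y = s\sin\theta\cos\phi, \qquad Z = s\sin\phi \]

where W is the omnidirectional component and X, Y and Z are front-back, left-right and up-down figure-of-eight components. Rotating the listener’s head amounts to applying a rotation matrix to these four channels, which is why Premiere and YouTube can re-steer the mix to follow the view direction. (YouTube’s spatial audio uses the related AmbiX convention, which differs only in channel ordering and normalization.)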

When exporting an MP4 in the audio panel, set Channels to 4.0 and check the Audio is Ambisonics box. From what I can see, the Fusion Studio conversion process compensates for changes in perspective, including “stabilization” when processing the raw recorded audio for Ambisonic exports, so you only have to match changes you make in your Premiere sequence.

While I could have intercut the footage at both settings together into a 5Kp60 timeline, I ended up creating two separate 360 videos. This also makes it clear to the viewer which shots were 5K/p30 and which were recorded at 3K/p60. They are both available on YouTube, and I recommend watching them in VR for the full effect. But be warned that they were recorded at heights of up to 80 feet, so it may be uncomfortable for some people to watch.

Summing Up
GoPro’s Fusion camera is not the first 360 camera on the market, but it brings more pixels and higher frame rates than most of its direct competitors and, more importantly, it has the software package to assist users in the transition to processing 360 video footage. It also supports ambisonic audio and offers the OverCapture functionality for generating more traditional flat GoPro content.

I found it to be easier to mount and shoot with than my earlier 360 camera experiences, and it is far easier to get the footage ready to edit and view using GoPro’s Fusion Studio program. The Stabilize feature totally changes how I shoot 360 videos, giving me much more flexibility in rotating the camera during movements. And most importantly, I am much happier with the resulting footage that I get when shooting with it.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

The Other Art Fair: Brands as benefactors for the arts

By Tom Westerlin

Last weekend, courtesy of Dell, I had the opportunity to attend The Other Art Fair, presented by Saatchi Art here in New York City. My role at Nice Shoes is creative director for VR/AR/360, and I was interested to see how the worlds of interactive and traditional art would intersect. I was also curious to see what role brands like Dell would play, as I feel that as we’ve transitioned from traditional advertising to branded content, brands have emerged as benefactors for the arts.

It was great to have so many artists represented who had created such high-quality work, and unlike other art shows I’ve attended, everything felt affordable and accessible. Art is often priced out of reach for the average person, and here was an opportunity to get to know artists, learn about their process and possibly walk away with a beautiful piece to bring into the home.

The curators and sponsors created a very welcoming, jovial atmosphere. Kids had an area where they could draw on the walls, and adults had access to a bar area and lounge where they could converse (I suppose adults could have drawn there as well, but some needed a drink or two to loosen up). The human body was also a canvas as there was an artist offering tattoos. Overall, the organizers created an infectious, creative vibe.

A variety of artists were represented. Traditional paintings, photography, collage, sculpture, neon and VR were all on display in the same space. Seeing VR and digital art amongst traditional art was very encouraging. I’ve encountered bits of this at other shows, but in those instances everything felt cordoned off. At The Other Art Fair, every medium felt as if it were being displayed on equal ground, and, in some cases, the lines between physical and digital art were blurred.

Samsung had framed displays that looked like physical paintings. Their high-quality monitors sat flat on the wall, framed and indistinguishable from physical art.

Dell’s 8K monitor looked amazing. The resolution was so high, and the pixel density so tight, that it looked perfect for displaying a high-resolution photo at 100%. I’d be curious to see how galleries take advantage of monitors like these. Traditionally, prints of photographs would be shown, but monitors like these offer new potential for showcasing vivid texture, detail and composition.

Although I didn’t walk out with a painting that night, I did come away with the desire to keep my eye on a number of artists — in particular, Glen Gauthier, Paul Richard, Laura Noel and Beth Radford. They all stood out to me.

As the lines between art and advertising blur, there are always new opportunities for brands and artists to come together to create stunning content, and I expect many brands, agencies, and creative studios to engage these artists in the near future.

Behind the Title: Start VR Producer Ela Topcuoglu

NAME: Ela Topcuoglu

COMPANY: Start VR (@Start_VR)

CAN YOU DESCRIBE YOUR COMPANY?
Start VR is a full-service production studio (with offices in Sydney, Australia and Marina Del Rey, California) specializing in immersive and interactive cinematic entertainment. The studio brings together expertise in entertainment and technology, combining feature-film-quality visuals with interactive content to create original and branded narrative experiences in VR.

WHAT’S YOUR JOB TITLE?
Development Executive and Producer

WHAT DOES THAT ENTAIL?
I am in charge of expanding Start VR’s business in North America. That entails developing strategic partnerships and increasing business development in the entertainment, film and technology sectors.

I am also responsible for finding partners for our original content slate as well as seeking existing IP that would fit perfectly in VR. I also develop relationships with brands and advertising agencies to create branded content. Beyond business development, I also help produce the projects that we move forward with.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The title comes with the responsibility of convincing people to invest in something that is constantly evolving, which is the biggest challenge. My job also requires me to be very creative in coming up with a language native to this new medium. I have to wear many hats to ensure that we create the best experiences out there.

WHAT’S YOUR FAVORITE PART OF THE JOB?
My favorite part of the job is that I get to wear lots of different hats. Being in the emerging field of VR, every day is different. I don’t have a traditional 9-to-5 office job and I am constantly moving and hustling to set up business meetings and stay updated on the latest industry trends.

Also, being in the ever-evolving technology field, I learn something new almost every day, which is essential to my professional growth.

WHAT’S YOUR LEAST FAVORITE?
Convincing people to invest in virtual reality and to see its incredible potential. That usually changes once they experience truly immersive VR, but regardless, selling the future is difficult.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
My favorite part of the day is the morning. I start my day with a much-needed shot of Nespresso, get caught up on emails, take a look at my schedule and take a quick breather before I jump right into the madness.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
If I wasn’t working in VR, I would be investing my time in learning more about artificial intelligence (AI) and using that to advance medicine/health and education.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I loved entertaining people from a very young age, and I was always looking for an outlet to do that, so the entertainment business was the perfect fit. There is nothing like watching someone’s reaction to a great piece of content. Virtual reality is the ultimate entertainment outlet and I knew that I wanted to create experiences that left people with the same awe reaction that I had the moment I experienced it.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I worked in the business and legal affairs department at Media Rights Capital and had the opportunity to work on amazing projects, including House of Cards, Baby Driver and Ozark.

Awake: First Contact

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
The project that I am most proud of to date is the project that I am currently producing at Start VR. It’s called Awake: First Contact. It was a project I read about and said, “I want to work on that.”

I am incredibly proud that I get to work on a virtual reality project that is pushing the boundaries of the medium both technically and creatively.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My phone, laptop and speakers.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Twitter, Facebook and LinkedIn

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Yes, especially if I’m working on a pitch deck. It really keeps me in the moment. I usually listen to my favorite DJ mixes on Soundcloud. It really depends on my vibe that day.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I have recently started surfing, so that is my outlet at the moment. I also meditate regularly. It’s also important for me to make sure that I am always learning something new and unrelated to my industry.

Sonic Union adds Bryant Park studio targeting immersive, broadcast work

New York audio house Sonic Union has launched a new studio and creative lab. The uptown location, which overlooks Bryant Park, will focus on emerging spatial and interactive audio work, as well as continued work with broadcast clients. The expansion is led by principal mix engineer/sound designer Joe O’Connell, who co-founded and helmed sound company Blast and is now partnered with original Sonic Union founders/mix engineers Michael Marinelli and Steve Rosen and their staff, who will work out of both the Union Square and Bryant Park locations.

In other staffing news, mix engineer Owen Shearer advances to also serve as technical director, with an emphasis on VR and immersive audio. Former Blast EP Carolyn Mandlavitz has joined as Sonic Union Bryant Park studio director. Executive creative producer Halle Petro, formerly senior producer at Nylon Studios, will support both locations.

The new studio, which features three Dolby Atmos rooms, was created and developed by Ilan Ohayon of IOAD (Architect of Record), with architectural design by Raya Ani of RAW-NYC. Ani also designed Sonic’s Union Square studio.

“We’re installing over 30 of the new ‘active’ JBL System 7 speakers,” reports O’Connell. “Our order includes some of the first of these amazing self-powered speakers. JBL flew a technician from Indianapolis to personally inspect each one on site to ensure it will perform as intended for our launch. Additionally, we created our own proprietary mounting hardware for the installation as JBL is still in development with their own. We’ll also be running the latest release of Pro Tools (12.8) featuring tools for Dolby Atmos and other immersive applications. These types of installations really are not easy as retrofits. We have been able to do something really unique, flexible and highly functional by building from scratch.”

Working as one team across two locations, this emerging creative audio production arm will also include a roster of talent outside of the core staff engineering roles. The team will now be integrated to handle non-traditional immersive VR, AR and experiential audio planning and coding, in addition to casting, production music supervision, extended sound design and production assignments.

Main Image Caption: (L-R) Halle Petro, Steve Rosen, Owen Shearer, Joe O’Connell, Adam Barone, Carolyn Mandlavitz, Brian Goodheart, Michael Marinelli and Eugene Green.


postPerspective Impact Award winners from SIGGRAPH 2017

Last April, postPerspective announced the debut of our Impact Awards, celebrating innovative products and technologies for the post production and production industries that will influence the way people work. We are now happy to present our second set of Impact Awards, celebrating the outstanding offerings presented at SIGGRAPH 2017.

Now that the show is over, and our panel of VFX/VR/post pro judges has had time to decompress, dig out and think about what impressed them, we are happy to announce our honorees.

And the winners of the postPerspective Impact Award from SIGGRAPH 2017 are:

  • Faceware Technologies for Faceware Live 2.5
  • Maxon for Cinema 4D R19
  • Nvidia for OptiX 5.0  

“All three of these technologies are very worthy recipients of our first postPerspective Impact Awards from SIGGRAPH,” said Randi Altman, postPerspective’s founder and editor-in-chief. “These awards celebrate companies that define the leading-edge of technology while producing tools that actually make users’ working lives easier and projects better, and our winners certainly fall into that category.

“While SIGGRAPH’s focus is on VFX, animation, VR/AR and the like, the types of gear they have on display vary. Some are suited for graphics and animation, while others have uses that slide into post production. We’ve tapped real-world users in these areas to vote for our Impact Awards, and they have determined what tools might be most impactful to their day-to-day work. That’s what makes our awards so special.”

There were many new technologies and products at SIGGRAPH this year, and while only three won an Impact Award, our judges felt there were other updates people should know about as well.

Blackmagic Design’s Fusion 9 was certainly turning heads, and Nvidia’s VRWorks 360 Video was called out as well. Chaos Group also caught our judges’ attention with V-Ray for Unreal Engine 4.

Stay tuned for future Impact Award winners in the coming months — voted on by users for users — from IBC.

Red’s Hydrogen One: new 3D-enabled smartphone

In their always subtle way, Red has stated that “the future of personal communication, information gathering, holographic multi-view, 2D, 3D, AR/VR/MR and image capture just changed forever” with the introduction of Hydrogen One, a pocket-sized, glasses-free “holographic media machine.”

Hydrogen One is a standalone, full-featured, unlocked multi-band smartphone, operating on Android OS, that promises “look around depth in the palm of your hand” without the need for separate glasses or headsets. The device features a 5.7-inch professional hydrogen holographic display that switches between traditional 2D content, holographic multi-view content, 3D content and interactive games, and it supports both landscape and portrait modes. Red has also embedded a proprietary H30 algorithm in the OS system that will convert stereo sound into multi-dimensional audio.

The Hydrogen system incorporates a high-speed data bus to enable a comprehensive and expandable modular component system, including future attachments for shooting high-quality motion, still and holographic images. It will also integrate into the professional Red camera program, working together with Scarlet, Epic and Weapon as a user interface and monitor.

Future users are already talking about this “nifty smartphone with glasses-free 3D,” and one has gone so far as to describe the announcement as “the day 360-video became Betamax, and AR won the race.” Others are more tempered in their enthusiasm, viewing this as a really expensive smartphone with a holographic screen that may or may not kill 360 video. Time will tell.

Initially priced between $1,195 and $1,595, the Hydrogen One is targeted to ship in Q1 of 2018.

Lenovo’s ‘Transform’ event: IT subscriptions and AR

By Claudio Santos

Last week I had the opportunity to attend Lenovo’s “Transform” event, in which the company unveiled its newest releases as well as its plans for the near future. I must say they had quite the lineup ready.

The whole event was divided into two tracks: “Datacenters” and “PC and Smart Devices.” Each focused on its own products and markets, but a single idea permeated all of the day’s announcements: what Lenovo calls the “Fourth Revolution,” the next step in the integration between devices and the cloud. Their vision is that 5G mobile Internet will soon be available, allowing devices to seamlessly connect to the cloud on the go and, more importantly, to always stay connected.

While there were many interesting announcements throughout the day, I will focus on two that seem more closely relatable to most post facilities.

The first is what Lenovo is calling “PC as a service.” They want to sell the bulk of companies’ IT hardware and support needs as subscription-based deals, and that would be awesome! Why? Well, it’s simply a fact of life now that post production happens almost exclusively with the aid of computer software (sorry if you’re still one of the few cutting film by hand, this article won’t be that interesting for you).

Having to choose, buy and maintain computers for our daily work takes a lot of research and, most notably, time. Between software updates, managing different licenses and subscriptions, and hunting down weird quirks of the system, a lot of time is taken away from more important tasks such as editing or client relationships. When you throw a server and a local network into the mix, it becomes a hefty job that takes a lot of maintenance.

That’s why bigger facilities employ IT specialists to deal with all that. But many post facilities aren’t big enough to employ a full-time IT person, nor are their needs complex enough to warrant the investment.

Lenovo sees this as an opportunity to simplify the role of the IT department by selling subscriptions that include the hardware, the software and all the necessary support (including a help desk) to keep the systems running, without having to invest in a large IT department. More importantly, the subscription would be flexible: during periods when you need more stations or support, you can increase the scope of the subscription, then shrink it again when demand drops, freeing you from absorbing the cost of machines and software that would otherwise sit around unused.

I see one big problem in this vision: Lenovo plans to start the service with a minimum of 1,000 seats per deal. That is far, far more staff than most post facilities have, and at that point it would probably be worth hiring a specialist who can also help you automate your workflow and develop customized tools for your projects. It is nonetheless an interesting approach, and I hope to see it trickle down to smaller clients as the model solidifies.

AR
The other announcement that should interest post facilities is Lenovo’s interest in the AR market. As many of you might know, augmented reality is projected to be an even bigger market than its more popular cousin, virtual reality, largely due to its more professional application possibilities.

Lenovo has been investing in AR and has partnered with Metavision to experiment and start working toward real work-environment offerings of the technology. Besides the hand gestures that are always emphasized in AR promo videos, one very simple use case seems to be in Lenovo’s sights, and it’s one I hope to see become marketable very soon: workspace expansion. Instead of needing three or four different monitors to accommodate our ever-growing number of windows and displays while working, with AR we will be able to place windows anywhere around us, essentially giving us a giant spherical display. A very simple problem with a very simple solution, but one that I believe would increase the productivity of editors by a considerable amount.

We should definitely keep an eye on Lenovo as they embark on this new quest for high-efficiency solutions for businesses, because that’s exactly what the post production industry finds itself in need of right now.


Claudio Santos is a sound editor and spatial audio mixer at Silver Sound. Slightly too interested in technology and workflow hacks, he spends most of his waking hours tweaking, fiddling and tinkering away on his computer.

Adobe acquires Mettle’s SkyBox tools for 360/VR editing, VFX

Adobe has acquired all SkyBox technology from Mettle, a developer of 360-degree and virtual reality software. As more media and entertainment companies embrace 360/VR, there is a need for seamless, end-to-end workflows for this new and immersive medium.

The SkyBox toolset is designed exclusively for post production in Adobe Premiere Pro CC and Adobe After Effects CC, and complements Adobe Creative Cloud’s existing 360/VR cinematic production technology. Adobe will integrate SkyBox plugin functionality natively into future releases of Premiere Pro and After Effects.

To further strengthen Adobe’s leadership in 360-degree and virtual reality, Mettle co-founder Chris Bobotis will join Adobe, bringing more than 25 years of production experience to his new role.

“We believe making virtual reality content should be as easy as possible for creators. The acquisition of Mettle SkyBox technology allows us to deliver a more highly integrated VR editing and effects experience to the film and video community,” says Steven Warner, VP of digital video and audio at Adobe. “Editing in 360/VR requires specialized technology, and as such, this is a critical area of investment for Adobe, and we’re thrilled Chris Bobotis has joined us to help lead the charge forward.”

“Our relationship started with Adobe in 2010 when we created FreeForm for After Effects, and has been evolving ever since. This is the next big step in our partnership,” says Bobotis, now director, professional video at Adobe. “I’ve always believed in developing software for artists, by artists, and I’m looking forward to bringing new technology and integration that will empower creators with the digital tools they need to bring their creative vision to life.”

Introduced in April 2015, SkyBox was the first plugin to leverage Mettle’s proprietary 3DNAE technology, and its success quickly led to additional development of 360/VR plugins for Premiere Pro and After Effects.

Today, Mettle’s plugins have been adopted by companies such as The New York Times, CNN, HBO, Google, YouTube, Discovery VR, DreamWorks TV, National Geographic, Washington Post, Apple and Facebook, as well as independent filmmakers and YouTubers.

Technicolor Experience Center launches with HP Mars Home Planet

By Dayna McCallum

Technicolor’s Tim Sarnoff and Marcie Jastrow oversaw the official opening of the Technicolor Experience Center (TEC), with the help of HP’s Sean Young and Rick Champagne, on June 15. The kickoff event also featured the announcement that TEC is teaming up with HP to develop HP Mars Home Planet, an experimental VR experience to reinvent life on Mars for one million humans.

The purpose-built TEC space is located in Blackwelder creative park, a business district designed specifically for the needs of creative and media companies in Culver City. The center, dedicated to bringing artists and scientists together to explore immersive media, covers almost 27,000 square feet, with 3,000 square feet dedicated to motion capture. The TEC serves as a hub connecting Technicolor’s creative houses and research labs across the globe with technology partners such as HP; an R&D team from France even made an appearance during the event via a remote demo.

Sarnoff, Technicolor deputy CEO and president of production services, said, “The TEC is about realizing the aspirations of all the players who are part of the nascent immersive ecosystem we work in, from content creation, to content distribution and content consumption. Designing and delivering immersive experiences will require a massive convergence of artistic, technological and economic talent. They will have to come together productively. That is why the TEC has been formed. It is designed to be a practical place where we take theoretical constructs and move systematically to tactical implementation through a creative and dynamic process of experimentation.”

The HP Mars Home Planet project is a global, immersive media collaboration uniting engineers, architects, designers, artists and students to design an urban area on Mars in a VR environment. The project will be built on the terrain from Fusion’s “Mars 2030” game, which is itself grounded in NASA research, imagery and expertise. In addition to HP, Fusion and TEC, partners include Nvidia, Unreal Engine, Autodesk and HTC Vive. Additional details will be released at SIGGRAPH 2017.

Young, worldwide segment manager for product development and AEC for HP Inc., said of the Mars project, “To ensure fidelity and professional-grade quality and a fantastic end-user experience, the TEC is going to oversee the virtual reality development process of the work that is going to be done by collaborators from all over the world. It is an incredible opportunity for anybody from anywhere in the world that is interested in VR to work with Technicolor.”

VR Audio — Differences between A Format and B Format

By Claudio Santos

A Format and B Format. What is the difference between them after all? Since things can get pretty confusing, especially with such non-descriptive nomenclature, we thought we’d offer a quick reminder of what each is in the spatial audio world.

A Format and B Format are two analog audio standards that are part of the ambisonics workflow.

A Format is the raw recording of the four individual cardioid capsules in ambisonics microphones. Since each microphone has different capsules at slightly different distances, the A Format is somewhat specific to the microphone model.

B Format is the standardized format derived from the A Format. The first channel carries the amplitude information of the signal, while the other channels determine the directionality through phase relationships between each other. Once you get your sound into B Format you can use a variety of ambisonic tools to mix and alter it.

It’s worth remembering that the B Format also has a few variations on the standard itself; the most important to understand are Channel Order and Normalization standards.

Ambisonics in B Format consists of four channels of audio — one channel carries the amplitude signal while the others represent directionality in a sphere through phase relationships. Since this can only be achieved by combining the channels, it is important that:

– The channels follow a known order
– The relative level between the amplitude channel and the others is known, so they can be combined properly

Each of these characteristics has a few variations, the most notable being:

– Channel Order: the Furse-Malham (FuMa) ordering and the ACN ordering
– Normalization (level): the MaxN standard and the SN3D standard

Combining these variations results in the two B Format standards in common use:

– Furse-Malham (FuMa ordering with MaxN normalization) – an older standard that is still supported by a variety of plug-ins and other ambisonic processing tools
– AmbiX (ACN ordering with SN3D normalization) – a modern standard that has been widely adopted by distribution platforms such as YouTube

Regardless of the format you deliver your ambisonics file in, it is vital to keep track of the standards you are using throughout your chain and make the necessary conversions when appropriate. Otherwise, rotations and mirrors will end up in the wrong directions and the whole sound sphere will break down into a mess.
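
To make the conversion concrete, below is a minimal sketch (Python with NumPy, assuming a first-order, four-channel recording) of going from FuMa to ambiX: the channels are reordered from WXYZ to ACN order (W, Y, Z, X), and W is boosted by √2 to move from MaxN to SN3D normalization. At first order the gains of the directional channels already match, so only W needs rescaling.

```python
import numpy as np

def fuma_to_ambix(fuma: np.ndarray) -> np.ndarray:
    """Convert a first-order B Format signal from FuMa (W, X, Y, Z order,
    MaxN normalization) to ambiX (ACN order W, Y, Z, X, SN3D normalization).

    fuma: array of shape (4, num_samples).
    """
    w, x, y, z = fuma
    # MaxN records W at -3dB (a factor of 1/sqrt(2)); SN3D does not,
    # so W is scaled up by sqrt(2). X, Y and Z pass through unchanged.
    return np.stack([w * np.sqrt(2.0), y, z, x])
```

Going the other way (ambiX to FuMa) is simply the inverse reorder, with W scaled by 1/√2.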


Claudio Santos is a sound editor and spatial audio mixer at Silver Sound. Slightly too interested in technology and workflow hacks, he spends most of his waking hours tweaking, fiddling and tinkering away on his computer.

VR audio terms: Gaze Activation v. Focus

By Claudio Santos

Virtual reality brings a lot of new terminology to the post process, and we’re all having a hard time agreeing on the meaning of everything. It’s tricky because clients and technicians sometimes have different understandings of the same term, which is a guaranteed recipe for headaches in post.

Two terms that I’ve seen confused a few times in the spatial audio realm are Gaze Activation and Focus. They are similar enough to be put in the same category, but at the same time different enough that most of the time you have to choose completely different tools and distribution platforms depending on which technology you want to use.

Field of view

Focus
Focus is what the Facebook Spatial Workstation calls this technology, but it is a tricky one to name. As you may know, ambisonics represents a full sphere of audio around the listener. Players like YouTube and Facebook (which uses ambisonics inside its own proprietary .tbe format) can dynamically rotate this sphere so the relative positions of the audio elements are accurate to the direction the audience is looking at. But the sounds don’t change noticeably in level depending on where you are looking.
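
That rotation is cheap to compute on an ambisonic stream, which is what makes it practical for players to do in real time. Here is a minimal sketch (Python with NumPy, assuming a first-order W, X, Y, Z stream; sign conventions vary between tools) of rotating the sphere around the vertical axis as the viewer turns their head:

```python
import numpy as np

def rotate_yaw(bformat: np.ndarray, yaw_rad: float) -> np.ndarray:
    """Rotate a first-order B Format sound field (W, X, Y, Z)
    by yaw_rad radians around the vertical axis."""
    w, x, y, z = bformat
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    # Only the horizontal components X and Y mix; the omni channel W
    # and the height channel Z are unaffected by a yaw rotation.
    return np.stack([w, c * x + s * y, -s * x + c * y, z])
```

Note that the rotation changes direction, not level, which is exactly why sounds stay at the same loudness wherever you look.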

If we take a step back and think about “surround sound” in the real world, it actually makes perfect sense. A hair clipper isn’t particularly louder when it’s in front of our eyes than when it’s trimming the back of our head. Nor can we ignore the annoying person who is loudly talking on their phone on the bus by simply looking away.

But for narrative construction, it can be very effective to emphasize what your audience is looking at. That opens up possibilities, such as presenting the viewer with simultaneous yet completely unrelated situations and letting them choose which one to pay attention to simply by looking in the direction of the chosen event. Keep in mind that in this case, all events are happening simultaneously and will carry on even if the viewer never looks at them.

This technology is not currently supported by YouTube, but it is possible in the Facebook Spatial Workstation with the use of high Focus Values.
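
Conceptually, a focus effect weights each sound’s level by how far it sits from the center of the viewer’s gaze. The toy sketch below (Python with NumPy; the cone width and attenuation floor are illustrative parameters, not the Spatial Workstation’s actual controls) shows the idea:

```python
import numpy as np

def focus_gain(gaze_dir, source_dir, cone_deg=45.0, floor_db=-12.0):
    """Return a linear gain for a source, given the viewer's gaze.

    gaze_dir, source_dir: unit vectors. Sources inside the focus cone
    play at full level; sources behind the viewer are attenuated down
    to floor_db, with a linear ramp in between.
    """
    cos_angle = np.clip(np.dot(gaze_dir, source_dir), -1.0, 1.0)
    angle = np.degrees(np.arccos(cos_angle))
    t = np.clip((angle - cone_deg) / (180.0 - cone_deg), 0.0, 1.0)
    return 10.0 ** ((t * floor_db) / 20.0)
```

A high focus value would then correspond to a narrow cone and a low floor, strongly emphasizing whatever the viewer is looking at.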

Gaze Activation
When we talk about focus, the key thing to keep in mind is that all the events happen regardless of the viewer looking at them or not. If instead you want a certain sound to only happen when the viewer looks at a certain prop, regardless of the time, then you are looking for Gaze Activation.

This concept is much more akin to game audio than to film sound because of the interactivity element it presents. Essentially, you are using the direction of the gaze, and potentially the length of the gaze (if you want your viewer to look in a direction for x amount of seconds before something happens), as a trigger for sound/video playback.
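
A minimal sketch of that trigger logic (Python; the class and parameter names are illustrative, and in practice this would live in a game engine script):

```python
import numpy as np

class GazeTrigger:
    """Fire once when the viewer has looked toward a target
    for dwell_sec seconds, within a cone of cone_deg degrees."""

    def __init__(self, target_dir, cone_deg=15.0, dwell_sec=2.0):
        self.target = np.asarray(target_dir, dtype=float)
        self.target /= np.linalg.norm(self.target)
        self.cos_cone = np.cos(np.radians(cone_deg))
        self.dwell_sec = dwell_sec
        self.held = 0.0
        self.fired = False

    def update(self, gaze_dir, dt):
        """Call once per frame with the gaze unit vector and frame time.
        Returns True on the frame the trigger fires."""
        if self.fired:
            return False
        if np.dot(gaze_dir, self.target) >= self.cos_cone:
            self.held += dt
        else:
            self.held = 0.0  # gaze wandered off the prop; start over
        if self.held >= self.dwell_sec:
            self.fired = True
            return True  # caller starts the sound (or video) playback here
        return False
```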

This is very useful if you want to make it impossible for your audience to miss something because they were looking in the “wrong” direction. Think of a jump scare in a horror experience. It’s not very scary if you’re looking in the opposite direction, is it?

This is currently only supported if you build your experience in a game engine or as an independent app with tools such as InstaVR.

Both concepts are very closely related and I expect many implementations will make use of both. We should all keep an eye on the VR content distribution platforms to see how these tools will be supported and make the best use of them in order to make 360 videos even more immersive.


Claudio Santos is a sound editor and spatial audio mixer at Silver Sound. Slightly too interested in technology and workflow hacks, he spends most of his waking hours tweaking, fiddling and tinkering away on his computer.

VR Workflows: The Studio | B&H panel during NAB

At this year’s NAB Show in Las Vegas, The Studio B&H hosted a series of panels at their booth. One of those panels addressed workflows for virtual reality, including shooting, posting, best practices, hiccups and trends.

The panel, moderated by postPerspective editor-in-chief Randi Altman, was made up of SuperSphere’s Lucas Wilson, ReDesign’s Greg Ciaccio, Local Hero Post’s Steve Bannerman and Jaunt’s Koji Gardiner.

While the panel was streamed live, it also lives on YouTube. Enjoy…

Timecode and GoPro partner to make posting VR easier

Timecode Systems and GoPro’s Kolor team recently worked together to create a new timecode sync feature for Kolor’s Autopano Video Pro stitching software. By combining their technologies, the two companies have developed a VR workflow solution that offers the efficiency benefits of professional standard timecode synchronization to VR and 360 filming.

Time-aligning files from the multiple cameras in a 360° VR rig can be a manual and time-consuming process if there is no easy synchronization point, especially when synchronizing with separate audio. Visually timecode-slating cameras is a disruptive manual process, and using the clap of a slate (or another visual or audio cue) as a sync marker can be unreliable when it comes to the edit process.

The new sync feature, included in the Version 3.0 update to Autopano Video Pro, incorporates full support for MP4 timecode generated by Timecode’s products. The solution is compatible with a range of custom, multi-camera VR rigs, including rigs using GoPro’s Hero 4 cameras with SyncBac Pro for timecode and also other camera models using alternative Timecode Systems products. This allows VR filmmakers to focus on the creative and not worry about whether every camera in the rig is shooting in frame-level synchronization. Whether filming using a two-camera GoPro Hero 4 rig or 24 cameras in a 360° array creating resolutions as high as 32K, the solution syncs with the same efficiency. The end results are media files that can be automatically timecode-aligned in Autopano Video Pro with the push of a button.
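
The alignment itself becomes simple arithmetic once every camera’s files carry timecode. Below is a minimal sketch (Python, assuming non-drop-frame timecode at a fixed frame rate; the function names are illustrative) of computing per-camera trim offsets against the earliest start:

```python
def tc_to_frames(tc: str, fps: int = 30) -> int:
    """Convert a non-drop 'HH:MM:SS:FF' timecode string to a frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def align_offsets(start_tcs: dict, fps: int = 30) -> dict:
    """Return each camera's offset in frames from the earliest start,
    i.e. how much to trim so all clips line up at frame level."""
    frames = {cam: tc_to_frames(tc, fps) for cam, tc in start_tcs.items()}
    base = min(frames.values())
    return {cam: f - base for cam, f in frames.items()}

# Example: align_offsets({"cam1": "01:00:00:12", "cam2": "01:00:00:00"})
# returns {"cam1": 12, "cam2": 0}
```

This is the kind of push-button step a stitching tool can perform automatically when every file already carries matching timecode metadata.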

“We’re giving VR camera operators the confidence that they can start and stop recording all day long without the hassle of having to disturb filming to manually slate cameras; that’s the understated benefit of timecode,” says Paul Bannister, chief science officer of Timecode Systems.

“To create high-quality VR output, using multiple cameras to capture high-quality spherical video isn’t enough; the footage that is captured needs to be stitched together as simply as possible — with ease, speed and accuracy, whatever the camera rig,” explains Alexandre Jenny, senior director of Immersive Media Solutions at GoPro. “Anyone who has produced 360 video will understand the difficulties involved in relying on a clap or visual cue to mark when all the cameras start recording to match up video for stitching. To solve that issue, either you use an integrated solution like GoPro Omni with pixel-level synchronization, or now you have the alternative of using accurate timecode metadata from SyncBac Pro in a custom, scalable multicamera rig. It makes the workflow much easier for professional VR content producers.”

Hobo’s Howard Bowler and Jon Mackey on embracing full-service VR

By Randi Altman

New York-based audio post house Hobo, which offers sound design, original music composition and audio mixing, recently embraced virtual reality by launching a 360 VR division. Wanting to offer clients a full-service solution, they partnered with New York production/post production studios East Coast Digital and Hidden Content, allowing them to provide concepting through production, post, music and final audio mix in an immersive 360 format.

The studio is already working on some VR projects, using their “object-oriented audio mix” skills to enhance the 360 viewing experience.

We touched base with Hobo’s founder/president, Howard Bowler, and post production producer Jon Mackey to get more info on their foray into VR.

Why was now the right time to embrace 360 VR?
Bowler: We saw the opportunity stemming from the advancement of the technology not only in the headsets but also in the tools necessary to mix and sound design in a 360-degree environment. The great thing about VR is that we have many innovative companies trying to establish what the workflow norm will be in the years to come. We want to be on the cusp of those discoveries to test and deploy these tools as the ecosystem of VR expands.

As an audio shop you could have just offered audio-for-VR services only, but instead aligned with two other companies to provide a full-service experience. Why was that important?
Bowler: This partnership provides our clients with added security when venturing out into VR production. Since the medium is relatively new in the advertising and film world, partnering with experienced production companies gives us the opportunity to better understand the nuances of filming in VR.

How does that relationship work? Will you be collaborating remotely? Same location?
Bowler: Thankfully, we are all based in West Midtown, so the collaboration will be seamless.

Can you talk a bit about object-based audio mixing and its challenges?
Mackey: The challenge of object-based mixing is not only mixing in a 360-degree environment, or converting traditional audio into something that moves with the viewer, but also determining which objects will lead the viewer, with their sound cues, into another part of the environment.

Bowler: It’s the creative challenge that inspires us in our sound design. With traditional 2D film, the editor controls what you see with their cuts. With VR, the partnership between sight and sound becomes much more important.

Howard Bowler pictured embracing VR.

How different is your workflow — traditional broadcast or spot work versus VR/360?
Mackey: The VR/360 workflow isn’t much different than traditional spot work. It’s the testing and review that is a game changer. Things generally can’t be reviewed live unless you have a custom rig that runs its own headset. It’s a lot of trial and error in checking the mixes, sound design and spatial mixes. You also have to account for the extra time and instruction your clients need to review a project.

What has surprised you the most about working in this new realm?
Bowler: The great thing about the VR/360 space is the amount of opportunity there is. What surprised us the most is the passion of all the companies that are venturing into this area. It’s different than talking about conventional film or advertising; there’s a new spark, and it’s fueling the rise of the industry and allowing larger companies to connect with smaller ones to create an atmosphere where passion is the only thing that counts.

What tools are you using for this type of work?
Mackey: The audio tools we use are the ones that best fit into our Avid ProTools workflow. This includes plug-ins from G-Audio and others that we are experimenting with.

Can you talk about some recent projects?
Bowler: We’ve completed projects for Samsung with East Coast Digital, and there are more on the way.

Main Image: Howard Bowler and Jon Mackey

Comprimato plug-in manages Ultra HD, VR files within Premiere

Comprimato, makers of GPU-accelerated storage compression and video transcoding solutions, has launched Comprimato UltraPix. This video plug-in offers proxy-free, auto-setup workflows for Ultra HD, VR and more on hardware running Adobe Premiere Pro CC.

The challenge for post facilities finishing in 4K or 8K Ultra HD, or working on immersive 360­ VR projects, is managing the massive amount of data. The files are large, requiring a lot of expensive storage, which can be slow and cumbersome to load, and achieving realtime editing performance is difficult.

Comprimato UltraPix addresses this by building on JPEG2000, a compression format that offers high image quality (including a mathematically lossless mode) and generates smaller versions of each frame as an inherent part of the compression process. Comprimato UltraPix delivers the file at a size that the user’s hardware can accommodate.
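
That multi-resolution behavior falls out of JPEG2000’s wavelet transform: every decomposition level inherently contains a half-resolution approximation of the frame. The rough sketch below illustrates the principle (Python with the PyWavelets library standing in for a real JPEG2000 codec, which uses different wavelets and adds entropy coding on top):

```python
import numpy as np
import pywt

def lowres_preview(frame: np.ndarray, levels: int = 2) -> np.ndarray:
    """Return an approximation of a 2D image plane at roughly
    1/2**levels of its resolution, taken from the wavelet
    approximation band. This is the same property that lets a
    JPEG2000 decoder serve proxies without separate proxy files."""
    coeffs = pywt.wavedec2(frame, "db2", level=levels)
    return coeffs[0]  # coarsest approximation band

# Example: a 4320x7680 luma plane comes back at roughly 1080x1920
# with levels=2.
```

In a real JPEG2000 decoder the savings are larger still, because the detail bands for the skipped levels never need to be decoded at all.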

Once Comprimato UltraPix is loaded on any hardware, it configures itself with auto-setup, requiring no specialist knowledge from the editor who continues to work in Premiere Pro CC exactly as normal. Any workflow can be boosted by Comprimato UltraPix, and the larger the files the greater the benefit.

Comprimato UltraPix is a multi-platform video processing software for instant video resolution in realtime. It is a lightweight, downloadable video plug-in for OS X, Windows and Linux systems. Editors can switch between 4K, 8K, full HD, HD or lower resolutions without proxy-file rendering or transcoding.

“JPEG2000 is an open standard, recognized universally, and post production professionals will already be familiar with it as it is the image standard in DCP digital cinema files,” says Comprimato founder/CEO Jiří Matela. “What we have achieved is a unique implementation of JPEG2000 encoding and decoding in software, using the power of the CPU or GPU, which means we can embed it in realtime editing tools like Adobe Premiere Pro CC. It solves a real issue, simply and effectively.”

“Editors and post professionals need tools that integrate ‘under the hood’ so they can focus on content creation and not technology,” says Sue Skidmore, partner relations for Adobe. “Comprimato adds a great option for Adobe Premiere Pro users who need to work with high-resolution video files, including 360 VR material.”

Comprimato UltraPix plug-ins are currently available for Adobe Premiere Pro CC and Foundry Nuke and will be available on other post and VFX tools soon. You can download a free 30-day trial or buy Comprimato UltraPix for $99 a year.

The importance of audio in VR

By Anne Jimkes

While some might not be aware, sound is 50 percent of the experience in VR, as well as in film, television and games. Because we can’t physically see the audio, it might not get as much attention as the visual side of the medium. But the balance and collaboration between visual and aural is what creates the most effective, immersive and successful experience.

More specifically, sound in VR can be used to ease people into the experience, what we also call “onboarding.” It can be used subtly and subconsciously to guide viewers by motivating them to look in a specific direction of the virtual world, which completely surrounds them.

In every production process, it is important to discuss how sound can be used to benefit the storytelling and the overall experience of the final project. In VR, especially the many low-budget independent projects, it is crucial to keep the importance and use of audio in mind from the start to save time and money in the end. Oftentimes, there are no real opportunities or means to record ADR after a live-action VR shoot, so it is important to give the production mixer ample opportunity to capture the best production sound possible.

Anne Jimkes at work.

This involves capturing wild lines, making sure there is time to plant and check the mics, and recording room tone. These things are already required, albeit not always granted, on regular shoots, but they are even more important on a set where a boom operator cannot be used due to the camera’s 360-degree view. The post process is also very similar to that for TV or film up to the point of actual spatialization. We come across similar issues of having to clean up dialogue and fill in the world through sound. What producers must be aware of, however, is that after all the necessary elements of the soundtrack have been prepared, we have to manually and meticulously place and move around all the “audio objects” and various audio sources throughout the space. Whenever people decide to re-orient the video — meaning when they change what is considered the initial point of facing forward or “north” — we have to rewrite all the information that established the location and movement of the sound, which takes time.

Capturing Audio for VR
To capture audio for virtual reality we have learned a lot about planting and hiding mics as efficiently as possible. Unlike regular productions, it is not possible to use a boom mic, which tends to be the primary and most natural-sounding microphone. Aside from the more common lavalier mics, we also use ambisonic mics, which capture a full sphere of audio that matches the 360 picture — if the mic is placed correctly on axis with the camera. Most of the time we work with Sennheiser and use their Ambeo microphone to capture 360 audio on set, after which we add the rest of the spatialized audio during post production. Playing back the spatialized audio has become easier lately, because more and more platforms and VR apps accept some form of 360 audio playback. There is still a difference between the file formats to which we can encode our audio outputs, meaning that some are more precise and others a little blurrier in their spatialization. With VR, there is not yet a standard for deliverables and specs, unlike the film/television workflow.

What matters most in the end is that people are aware of how the creative use of sound can enhance their experience, and how important it is to spend time on capturing good dialogue on set.


Anne Jimkes is a composer, sound designer, scholar and visual artist from the Netherlands. Her work includes VR sound design at EccoVR and work with the IMAX VR Centre. With a Master’s Degree from Chapman University, Jimkes previously served as a sound intern for the Academy of Television Arts & Sciences.

Sound editor/mixer Korey Pereira on 3D audio workflows for VR

By Andrew Emge

As the technologies for VR and 360 video rapidly advance and become more accessible, media creators are realizing the crucial role that sound plays in achieving realism. Sound designers are exploring this new frontier of 3D audio at the same time that tools for the medium are being developed and introduced. When everything is so new and constantly evolving, how does one learn where to start or decide where to invest time and experimentation?

To better understand this process, I spoke with Korey Pereira, a sound editor and mixer based in Austin, Texas. He recently entered the VR/360 audio world and has started developing a workflow.

Can you provide some background about who you are, the work you’ve done, and what you’ve been up to lately?
I’m the owner/creative director at Soularity Sound, an Austin-based post company. We primarily work with indie filmmakers, but also do some television and ad work. In addition to my work at Soularity, I also work as a sound editor and mixer at a few other Austin post facilities, including Soundcrafter. My credits with them include Richard Linklater’s Boyhood and Everybody Wants Some, as well as TV shows such as Shipping Wars and My 600lb Life.

You recently purchased the Pro Sound Effects NYC Ambisonics library. Can you talk about some VR projects you are working on?
In the coming months I plan to start creating audio content for VR with a local content creator, Deepak Chetty. Over the years we have collaborated on a number of projects; most recently, I worked on his stereoscopic 3D sci-fi/action film, Hard Reset, which won the 2016 “Best 3D Live Action Short” award from the Advanced Imaging Society.

Deepak Chetty shooting a VR project.

I love sci-fi as a genre, because there really are no rules. It lets you really go for it as far as sound. Deepak has been shifting his creative focus toward 360 content and we are hoping to start working together in that aspect in the near future.

Deepak is currently working mostly on non-fiction and documentary-based content in 360 — mainly environment capture with a through line of audio storytelling that serves as the backbone of the piece. He is also looking forward to experimenting with fiction-based narratives in the 360 space, especially with the use of spatial audio to enhance immersion for the viewer.

Prior to meeting Deepak, did you have any experience working with VR/3D audio?
No, this is my first venture into the world of VR audio or 3D audio. I have been mixing in surround for over a decade, but I am excited about the additional possibilities this format brings to the table.

What have been the most helpful sources for studying up and figuring out a workflow?
The Internet! There is such a wealth of information out there, and you kind of just have to dive in. The benefit of 360 audio being a relatively new format is that people are still willing to talk openly about it.

Was there anything particularly challenging to get used to or wrap your head around?
In a lot of ways designing audio for VR is not that different from traditional sound mixing for film. You start with a bed of ambiences and then place elements within a surround space. I guess the most challenging part of the transition is anticipating how the audience might hear your mix. If the viewer decides to watch a whole video facing the surrounds, how will it sound?

Can you describe the workflow you’ve established so far? What are some decisions you’ve made regarding DAW, monitoring, software, plug-ins, tools, formats and order of operation?
I am a Pro Tools guy, so my main goal was finding a solution that works seamlessly inside the Pro Tools environment. As I started looking into different options, the Two Big Ears Spatial Workstation really stood out to me as being the most intuitive and easiest platform to hit the ground running with. (Two Big Ears recently joined Facebook, so Spatial Workstation is now available for free!)

Basically, you install a Pro Tools plug-in that works as a 3D audio engine and gives you a Pro Tools project with all the routing and tracks laid out for you. There are object-based tracks that allow you to place sounds within a 3D environment as well as ambience tracks that allow you to add stereo or ambisonic beds as a basis for your mix.

The coolest thing about this platform is that it includes a 3D video player that runs in sync with Pro Tools. There is a binaural preview pathway in the template that lets you hear the shift in perspective as you move the video around in the player. Pretty cool!

In September 2016, another audio workflow for VR in Pro Tools entered the market from the Dutch company Audio Ease and their 360 pan suite. Much like the Spatial Workstation, the suite offers an object-based panner (360 pan) that when placed on every audio track allows you to pan individual items within the 360-degree field of view. The 360 pan suite also includes the 360 monitor, which allows you to preview head tracking within Pro Tools.

Where the 360 pan suite really stands out is with their video overlay function. By loading a 360 video inside of Pro Tools, Audio Ease adds an overlay on top of the Pro Tools video window, letting you pan each track in real time, which is really useful. For the features it offers, it is relatively affordable. The suite does not come with its own template, but they have a quick video guide to get you up and going fairly easily.

Are there any aspects that you’re still figuring out?
Delivery is still a bit up in the air. You may need to export in multiple formats to be able to upload to Facebook, YouTube, etc. I was glad to see that YouTube is supporting the ambisonic format for delivery, but I look forward to seeing workflows become more standardized across the board.

Any areas in which you see the need for further development, and/or where the tech just isn’t there yet?
I think the biggest limitation with VR is the lack of affordable and easy-to-use 3D audio capture devices. I would love to see a super-portable ambisonic rig that filmmakers can easily use in conjunction with shooting 360 video. Especially as media giants like YouTube are gravitating toward the ambisonic format for delivery, it would be great for them to be able to capture the actual space in the same format.

In January 2017, Røde announced the VideoMic Soundfield — an on-camera ambisonic, 360-degree surround sound microphone — though pricing and release dates have not yet been made public.

One new product I am really excited about is the Sennheiser Ambeo VR mic, which is around $1,650. That’s a bit pricey for the most casual user once you factor in a 4-track recorder, but for the professional user that already has a 788T, the Ambeo VR mic offers a nice turnkey solution. I like that the mic looks a little less fragile than some of the other options on the market. It has a built-in windscreen/cage similar to what you would see on a live handheld microphone. It also comes with a Rycote shockmount and cable to 4-XLR, which is nice.

Some leading companies have recently selected ambisonics as the standard spatial audio format — can you talk a bit about how you use ambisonics for VR?
Yeah, I think this is a great decision. I like the “future proof” nature of the ambisonic format. Even in traditional film mixing, I like having the option to export to stereo, 5.1 or 7.1 depending on the project. Until ambisonic becomes more standardized, I like that the Two Big Ears/FB 360 encoder allows you to export to the .tbe B-Format (FuMa or ambiX/YouTube) as well as quad-binaural.

I am a huge fan of the ambisonic format in general. The Pro Sound Effects NYC Ambisonics Library (and now Chicago and Tokyo as well) was my first experience using the format and I was blown away. In a traditional mixing environment it adds another level of depth to the backgrounds. I really look forward to being able to bring it to the VR format as well.


Andrew Emge is operations manager at Pro Sound Effects.

Quick Chat: Scott Gershin from The Sound Lab at Technicolor

By Randi Altman

Veteran sound designer and feature film supervising sound editor Scott Gershin is leading the charge at the recently launched The Sound Lab at Technicolor, which, in addition to film and television work, focuses on immersive storytelling.

Gershin has more than 100 films to his credit, including American Beauty (which earned him a BAFTA nomination), Guillermo del Toro’s Pacific Rim and Dan Gilroy’s Nightcrawler. But films aren’t the only genre that Gershin has tackled — in addition to television work (he has an Emmy nom for the TV series Beauty and the Beast), this audio post pro has created the sound for game titles such as Resident Evil, Gears of War and Fable. One of his most recent projects was contributing to id Software’s Doom.

We recently reached out to Gershin to find out more about his workflow and this new Burbank-based audio entity.

Can you talk about what makes this facility different than what Technicolor has at Paramount? 
The Sound Lab at Technicolor works in concert with our other audio facilities, tackling film, broadcast and gaming projects. In doing so we are able to use Technicolor’s world-class dubbing, ADR and Foley stages.

One of the focuses of The Sound Lab is to identify and use cutting-edge technologies and workflows not only in traditional mediums, but in those new forms of entertainment such as VR, AR, 360 video/films, as well as dedicated installations using mixed reality. The Sound Lab at Technicolor is made up of audio artists from multiple industries who create a “brain trust” for our clients.

Scott Gershin and The Sound Lab team.

As an audio industry veteran, how has the world changed since you started?
I was one of the first sound people to use computers in the film industry. When I moved from the music industry into film post production, I brought that knowledge and experience with me. It gave me access to a huge number of tools that helped me tell better stories with audio. The same happened when I expanded into the game industry.

Learning the interactive tools of gaming is now helping me navigate into these new immersive industries, combining my film experience to tell stories and my gaming experience using new technologies to create interactive experiences.

One of the biggest changes I’ve seen is that there are so many opportunities for the audience to ingest entertainment — creating competition for their time — whether it’s traveling to a theatre, watching TV (broadcast, cable and streaming) on a new 60- or 70-inch TV, or playing video games alone on a phone or with friends on a console.

There are so many choices, which means that the creators and publishers of content have to share a smaller piece of the pie. This forces budgets to be smaller since the potential audience size is smaller for that specific project. We need to be smarter with the time that we have on projects and we need to use the technology to help speed up certain processes — allowing us more time to be creative.

Can you talk about your favorite tools?
There are so many great technologies out there. Each one adds a different color to my work and provides me with information that is crucial to my sound design and mix. For example, Nugen has great metering and loudness tools that help me zero in on my clients’ LKFS requirements. With each client having their own loudness requirements, the tools allow me to stay creative and still meet their specs.

Audi’s The Duel

What are some recent projects you’ve worked on?
I’ve been working on a huge variety of projects lately. Recently, I finished a commercial for Audi called The Duel, a VR piece called My Brother’s Keeper, 10 Webisodes of The Strain and a VR music piece for Pentatonix. Each one had a different requirement.

What is your typical workflow like?
When I get a job in, I look at what the project is trying to accomplish. What is the story or the experience about? I ask myself, how can I use my craft, shaping audio, to better enhance the experience? Once I understand how I am going to approach the project creatively, I look at what the release platform will be. What are the technical challenges, and what frequencies and spatial options are open to me? Whether that means a film in Dolby Atmos or a VR project on the Rift, once I understand both the creative and technical challenges, I start working within the schedule allotted me.

Speed and flow are essential… the tools need to be like musical instruments to me, where it goes from brain to fingers. I have a bunch of monitors in front of me, each one supplying me with different and crucial information. It’s one of my favorite places to be — flying the audio starship and exploring the never-ending vista of the imagination. (Yeah, I know it’s corny, but I love what I do!)

HPA Tech Retreat takes on realities of virtual reality

By Tom Coughlin

The HPA Tech Retreat, run by the Hollywood Professional Association in association with SMPTE, began with an insightful one-day VR seminar — Integrating Virtual Reality/Augmented Reality into Entertainment Applications. Lucas Wilson from SuperSphere kicked off the sessions and helped with much of the organization of the seminar.

The seminar addressed virtual reality (VR), augmented reality (AR) and mixed reality (MR, a subset of AR where the real world and the digital world interact, like Pokémon Go). As in traditional planar video, 360-degree video still requires a director to tell a story and direct the eye to see what is meant to be seen. Successful VR requires understanding how people look at things and how they perceive reality, and using that understanding to help tell a story. Some things that may help with this are reinforcement of the viewer’s gaze with color and sound that may vary with the viewer — e.g., these may be different for the “good guy” and the “bad guy.”

VR workflows are quite different from traditional ones, with many elements changing with multiple-camera content. For instance, it is much more difficult to keep a camera crew out of the image, and providing proper illumination for all the cameras can be a challenge. The image below from Jaunt shows their 360-degree workflow, including the use of their cloud-based computational image service to stitch the images from the multiple cameras.

Snapchat is the biggest MR application, said Wilson, and Snapchat Stories could be the basis of future post tools.

Because stand-alone headsets (head-mounted displays, or HMDs) are expensive, most users of VR rely on smartphone-based displays. There are also some places that allow one or more people to experience VR, such as the IMAX center in Los Angeles. Activities such as VR viewing will be one of the big drivers for higher-resolution mobile device displays.

Tools that allow artists and directors to get fast feedback on their shots are still in development. But progress is being made, and today over 50 percent of VR is used for video viewing rather than games. Participants in a VR/AR market session, moderated by the Hollywood Reporter’s Carolyn Giardina and including Marcie Jastrow, David Moretti, Catherine Day and Phil Lelyveld, seemed to agree that the biggest immediate opportunity is probably with AR.

Koji Gardiner from Jaunt gave a great talk on their approach to VR. He discussed the various ways that 360-degree video can be captured and the processing required to create finished stitched video. For an array of cameras with some separation between them (no common axis point for the imaging cameras), there will be regions between camera images that must be stitched together using common reference points, as well as blind spots near the cameras where no images are captured.

If there is a single axis for all of the cameras, then there are effectively no blind spots and no stitching required, as shown in the image below. Covering the full 360-degree space, however, requires additional cameras located on that same axis.

The Fraunhofer Institute, in Germany, has for several years been showing a 360-degree video camera that gives several cameras an effective single axis, as shown below. They do this using mirrors to reflect images to the individual cameras.

As the number of cameras is increased, the mathematical work to stitch the 360-degree images together is reduced.

Stitching
There are two approaches commonly used in VR stitching of multiple camera videos. The easiest to implement is a geometric approach that uses known geometries and distances to objects. It requires limited computational resources but results in unavoidable ghosting artifacts at seams from the separate images.

The Optical Flow approach synthesizes every pixel by computing correspondences between neighboring cameras. This approach eliminates the ghosting artifacts at the seams but has its own more subtle artifacts and requires significantly more processing capability. The Optical Flow approach requires computational capabilities far beyond those normally available to content creators. This has led to a growing market to upload multi-camera video streams to cloud services that process the stitching to create finished 360-degree videos.
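
Jaunt has not published its solver, but the gist of the optical-flow approach can be sketched with OpenCV's off-the-shelf Farneback estimator. The sketch below is a simplification under two assumptions: the two views have already been projected into a common overlap region of the same size, and only one image is warped halfway toward the other before a crossfade. A production stitcher would use a more robust solver and warp both views toward the midpoint.

```python
import cv2
import numpy as np

def optical_flow_blend(left, right):
    """Blend the overlap between two neighboring camera views by
    warping along estimated per-pixel correspondences, rather than
    hard-cutting at a geometric seam (which causes ghosting).
    Assumes both views are already projected onto a common
    overlap region of the same size."""
    gray_l = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)

    # Dense per-pixel flow from the right view to the left view.
    flow = cv2.calcOpticalFlowFarneback(
        gray_r, gray_l, None, pyr_scale=0.5, levels=4, winsize=21,
        iterations=3, poly_n=7, poly_sigma=1.5, flags=0)

    h, w = gray_r.shape
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))

    # Warp the left view halfway toward the right view's geometry.
    map_x = (gx + 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (gy + 0.5 * flow[..., 1]).astype(np.float32)
    half_warped_left = cv2.remap(left, map_x, map_y, cv2.INTER_LINEAR)

    # Crossfade the half-warped view with the other camera.
    return cv2.addWeighted(half_warped_left, 0.5, right, 0.5, 0)
```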

Files from the Jaunt One camera system are first downloaded and organized on a laptop computer and then uploaded to Jaunt’s cloud server to be processed and create the stitching to make a 360 video. Omni-directionally captured audio can also be uploaded and mixed ambisonically, resulting in advanced directionality in the audio tied to the VR video experience.

Google and Facebook also have cloud-based resources for computational photography used for this sort of image stitching.

The Jaunt One 360-degree camera has a 1-inch 20MP rolling-shutter sensor, with frame rates up to 60fps, a 3200 ISO maximum and 29dB SNR at ISO 800. Each camera module offers 10 stops of dynamic range, a 130-degree diagonal FOV and f/2.9 optics, with up to 16K resolution (8K per eye). At 60fps, the Jaunt One produces 200GB per minute uncompressed, which can fill a 1TB SSD in five minutes, so they are forced to use compression to work with currently affordable storage devices. Compressed, the camera generates 11GB per minute, which fills a 1TB SSD in 90 minutes.
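
The arithmetic behind those fill times is easy to verify (treating 1TB as 1,000GB):

```python
# Sanity-checking the storage math quoted above (1 TB ~ 1,000 GB).
uncompressed_gb_per_min = 200   # Jaunt One at 60fps, uncompressed
compressed_gb_per_min = 11      # after compression
ssd_gb = 1000

print(ssd_gb / uncompressed_gb_per_min)  # 5.0   -> fills 1 TB in 5 minutes
print(ssd_gb / compressed_gb_per_min)    # ~90.9 -> roughly 90 minutes
```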

The actual stitched image, laid out flat, looks like a distorted projection. But when viewed in a stereoscopic viewer it appears as a natural image of the world around the viewer, giving an immersive experience. At any point in time the viewer does not see the entire image, only the restricted region they are looking at directly, as shown by the red box in the figure below.

The full 360-degree image can be pretty high resolution, but unless special steps are taken, the resolution inside the region being viewed at any point in time will be much less than the resolution of the overall scene.

The image below shows that for a 4K 360-degree video, the resolution in the field of view (FOV) may be only about 1K, much lower resolution and quite perceptible to the human eye.

In order to provide a better viewing experience in the FOV, either the resolution of the entire view must be higher (e.g., the high-resolution Jaunt One delivers 8K per eye, and thus 16K total displayed resolution), or there must be a way to increase the resolution in the most significant FOV in a video, so that at least there the resolution leads to a greater feeling of reality.
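
The underlying arithmetic is simple: an equirectangular frame spreads its width across 360 degrees, so a headset sees only the slice of pixels covering its horizontal FOV. A small illustration, assuming a roughly 90-degree horizontal FOV (actual headset FOVs vary):

```python
# Approximate horizontal pixels landing inside the viewer's FOV,
# assuming an equirectangular frame and a ~90-degree horizontal FOV.
def pixels_in_fov(frame_width, fov_deg=90.0):
    return frame_width * fov_deg / 360.0

print(pixels_in_fov(3840))    # "4K" frame  -> ~960 px, i.e. ~1K in view
print(pixels_in_fov(15360))   # "16K" frame -> ~3840 px, i.e. 4K in view
```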

Virtual reality, augmented reality and mixed reality create new ways of interacting with the world around us and will drive consumer technologies and the need for 360-degree video. New tools and stitching software, much of this cloud-based, will enable these workflows for folks who want to participate in this revolution in content. The role of a director is as important as ever as new methods are needed to tell stories and guide the viewer to engage in this story.

2017 Creative Storage Conference
You can learn more about the growth in VR content in professional video and how this will drive new digital storage demand and technologies to support the high data rates needed for captured content and cloud-based VR services at the 2017 Creative Storage Conference — taking place May 24, 2017 in Culver City.


Thomas M. Coughlin of Coughlin Associates is a storage analyst and consultant. He has over 30 years in the data storage industry and is the author of Digital Storage in Consumer Electronics: The Essential Guide.

Rick & Morty co-creator Justin Roiland to keynote VRLA

Justin Roiland, co-creator of Rick & Morty from Cartoon Network’s Adult Swim, will be delivering VRLA’s Saturday keynote. The expo, which takes place April 14 and 15 at the LA Convention Center, will include demos, educational sessions, experimental work and presentations.

The exhibit floor will feature hardware and software developers, content creators and prototype technology that can only be seen at VRLA. Registration is currently open, with the business-focused two-day “Pro” pass at $299 and a one-day pass for Saturday priced at $40.

Roiland, also the newly minted founder of the VR studio Squanchtendo, aims to dive into the surreally funny possibilities of the medium in his keynote, remarking, "What does the future of VR hold? Will there be more wizard games? Are grandmas real? What is a wizard really? Are there wizard grandmas? How does this factor into VR? Please come to my incredible keynote address on the state of VR."

VRLA is currently accepting applications for its Indie Zone, which offers complimentary exhibition space to small teams who have raised less than $500,000 in venture capital funding or generated less than that amount in revenue. Click here to apply.

Chris Hill & Sami Tahari

Imaginary Forces expands with EP Chris Hill and director of biz dev Sami Tahari

Imaginary Forces has added executive producer Chris Hill and director of business development Sami Tahari to its Los Angeles studio. The additions come at a time when the creative studio is looking to further expand its cross-platform presence with projects that mix VR/AR/360 with traditional, digital and social media.

Celebrating 20 years in business this year, the independently owned Imaginary Forces is a creative company specializing in brand strategy and visual storytelling encompassing many disciplines, including full-service design, production and post production. Being successful for that long in this business means they are regularly innovating and moving where the industry takes them. This led to the hiring of Hill and Tahari, whose diverse backgrounds will help strengthen the company’s long-standing relationships, as well as its continuous expansion into emerging markets.

Recent work of note includes main titles for Netflix’s beloved Stranger Things, the logo reveal for Michael Bay’s Transformers: The Last Knight and an immersive experience for the Empire State Building.

Hill’s diverse production experience includes commercials, experience design, entertainment marketing and branding for such clients as HBO Sports, Google, A&E and the Jacksonville Jaguars, among others. He joins Imaginary Forces after recently presiding over the broadcast division of marketing agency BPG.

Tahari brings extensive marketing, business and product development experience spanning the tech and entertainment spaces. His resume includes time at Lionsgate and Google, where he was an instrumental leader in the creative development and marketing of Google Glass.

“Imaginary Forces has a proven ability to use design and storytelling across any medium or industry,” adds Hill. “We can expand that ability to new markets, whether it’s emerging technologies, original content or sports franchises. When you consider, for example, the investment in massive screens and new technologies in stadiums across the country, it demands [that] same high level of brand strategy and visual storytelling.”

Our Main Image: L-R: Chris Hill and Sami Tahari.

HPA Tech Retreat takes on VR/AR at Tech Retreat Extra

The long-standing HPA Tech Retreat is always a popular destination for tech-focused post pros, and while they have touched on virtual reality and augmented reality in the past, this year they are dedicating an entire day to the topic — February 20, the day before the official Retreat begins. TR-X (Tech Retreat Extra) will feature VR experts and storytellers sharing their knowledge and experiences. The traditional HPA Tech Retreat runs from February 21-24 in Indian Wells, California.

TR-X VR/AR is co-chaired by Lucas Wilson (Founder/Executive Producer at SuperSphereVR) and Marcie Jastrow (Senior VP, Immersive Media & Head of Technicolor Experience Center), who will lead a discussion focused on the changing VR/AR landscape in the context of rapidly growing integration into entertainment and applications.

Marcie Jastrow

Experts and creative panelists will tackle questions such as: What do you need to understand to enable VR in your environment? How do you adapt? What are the workflows? Storytellers, technologists and industry leaders will provide an overview of the technology and discuss how to harness emerging technologies in the service of the artistic vision. A series of diverse case studies and creative explorations — from NASA to the NFL — will examine how to engage the audience.

The TR-X program, along with the complete HPA Tech Retreat program, is available here. Additional sessions and speakers will be announced.

TR-X VR/AR Speakers and Panel Overview
Monday, February 20

Opening and Introductions
Seth Hallen, HPA President

Technical Introduction: 360/VR/AR/MR
Lucas Wilson

Panel Discussion: The VR/AR Market
Marcie Jastrow
David Moretti, Director of Corporate Development, Jaunt
Catherine Day, Head of VR/AR, Missing Pieces
Phil Lelyveld, VR/AR Initiative Program Lead, Entertainment Technology Center at USC

Acquisition Technology
Koji Gardiner, VP, Hardware, Jaunt

Live 360 Production Case Study
Andrew McGovern, VP of VR/AR Productions, Digital Domain

Live 360 Production Case Study
Michael Mansouri, Founder, Radiant Images

Interactive VR Production Case Study
Tim Dillon, Head of VR & Immersive Content, MPC Advertising USA

Immersive Audio Production Case Study
Kyle Schember, CEO, Subtractive

Panel Discussion: The Future
Alan Lasky, Director of Studio Product Development, 8i
Ben Grossmann, CEO, Magnopus
Scott Squires, CTO, Creative Director, Pixvana
Jen Dennis, EP of Branded Content, RSA
Moderator: Lucas Wilson

Panel Discussion: New Voices: Young Professionals in VR
Anne Jimkes, Sound Designer and Composer, Ecco VR
Jyotsna Kadimi, USC Graduate
Sho Schrock, Chapman University Student
Brian Handy, USC Student

TR-X also includes an ATSC 3.0 seminar, focusing on the next-generation television broadcast standard, which is nearing completion and offers a wide range of new content delivery options to the TV production community. This session will explore the expanding possibilities that the new standard provides in video, audio, interactivity and more. Presenters and panelists will also discuss the complex next-gen television distribution ecosystem that content must traverse, and the technologies that will bring the content to life in consumers’ homes.

Early registration is highly recommended for TR-X and the HPA Tech Retreat, which is a perennially sold-out event. Attendees can sign up for TR-X VR/AR, TR-X ATSC or the HPA Tech Retreat.

Main Image: Lucas Wilson.

Virtual Reality Roundtable

By Randi Altman

Virtual reality is seemingly everywhere, especially this holiday season. Just one look at your favorite electronics store’s website and you will find VR headsets from the inexpensive, to the affordable, to the “if I win the lottery” ones.

While there are many companies popping up to service all aspects of VR/AR/360 production, for the most part traditional post and production companies are starting to add these services to their menu, learning best practices as they go.

We reached out to a sampling of pros who are working in this area to talk about the problems and evolution of this burgeoning segment of the industry.

Nice Shoes Creative Studio: Creative director Tom Westerlin

What is the biggest issue with VR productions at the moment? Is it lack of standards?
A big misconception is that a VR production is like a standard 2D video/animation commercial production. There are some similarities, but it gets more complicated when we add interaction, different hardware options, realtime data and multiple distribution platforms. It actually takes a lot more time and man hours to create a 360 video or VR experience relative to a 2D video production.


Tom Westerlin

More development time needs to be scheduled for research, user experience and testing. We’re adding more stages to the overall production. None of this should discourage anyone from exploring a concept in virtual reality, but there is a lot of consideration and research that should be done in the early stages of a project. The lack of standards presents some creative challenges for brands and agencies considering a VR project. The hardware and software choices made for distribution can have an impact on the size of the audience you want to reach as well as the approach to build it.

The current landscape provides the following options:
YouTube and Facebook can reach a ton of people with a 360 video, but they offer limited VR functionality; a WebVR experience works within certain browsers, like Chrome or Firefox, but not others, limiting your audience; and a custom app or experimental installation using the Oculus or HTC Vive allows for experiences with full interactivity, but presents the issue of audience limitations. There is currently no one best way to create a VR experience. It's still very much a time of discovery and experimentation.

What should clients ask of their production and post teams when embarking on their VR project?
We shouldn’t just apply what we’ve all learned from 2D filmmaking to the creation of a VR experience, so it is crucial to include the production, post and development teams in the design phase of a project.

The majority of clients are coming from traditional production, where standard constructs such as quick camera moves, fast cuts and extreme close-ups are routine; in VR, those same choices can have negative physiological implications (nausea, disorientation). The impact of seemingly simple creative or design decisions can have huge repercussions on complexity, time, cost and the user experience. It's important for clients to be open to telling a story in a different manner than they're used to.

What is the biggest misconception about VR — content, process or anything relating to VR?
The biggest misconception is clients thinking that 360 video and VR are the same. As we've started to introduce this technology to our clients, we've worked to explain the core differences between these extremely different experiences: VR is interactive and most of the time a full CG environment, while 360 is video and, although immersive, a more passive experience. Each has its own unique challenges and rewards, so as we think about the end user's experiences, we can determine what will work best.

There’s also the misconception that VR will make you sick. If executed poorly, VR can make a user sick, but the right creative ideas executed with the right equipment can result in an experience that’s quite enjoyable and nausea free.

Nice Shoes’ ‘Mio Garden’ 360 experience.

Another misconception is that VR is capable of anything. While many may confuse VR and 360 and think an experience is limited to passively looking around, there are others who have bought into the hype and inflated promises of a new storytelling medium. That's why it's so important to understand the limitations of different devices at the early stages of a concept, so that creative, production and post can all work together to deliver an experience that takes advantage of VR storytelling, rather than falling victim to the limitations of a specific device.

The advent of affordable systems that are capable of interactivity, like the Google Daydream, should lead to more popular apps that show off a higher level of interactivity. Even sharing video of people experiencing VR while interacting with their virtual worlds could have a huge impact on the understanding of the difference between passively watching and truly reaching out and touching.

How do we convince people this isn’t stereo 3D?
In one word: Interactivity. By definition VR is interactive and giving the user the ability to manipulate the world and actually affect it is the magic of virtual reality.

Assimilate: CEO Jeff Edson

What is the biggest issue with VR productions at the moment? Is it lack of standards?
The biggest issue in VR is straightforward workflows — from camera to delivery — and then, of course, delivery to what? Compared to a year ago, shooting 360/VR video today has made big steps in ease of use because more people have experience doing it. But it is a LONG way from point and shoot. As integrated 360/VR video cameras come to market more and more, VR storytelling will become much more straightforward and the creators can focus more on the story.

Jeff Edson

And then delivery to what? There are many online platforms for 360/VR video playback today: Facebook, YouTube 360 and others for mobile headset viewing, and then there is delivery to a PC for non-mobile headset viewing. The viewing perspective is different for all of these, which means extra work to ensure continuity on all the platforms. To cover all possible viewers one needs to publish to all. This is not an optimal business model, which is really the crux of this issue.

Can standards help in this? Standards as we have known them in the video world? Yes and no. The standards for 360/VR video are happening by default, such as equirectangular and cubic formats, and delivery formats like H.264, MOV and more. Standards would help, but they are not the limiting factor for growth. The market is not waiting on a defined set of formats because demand for VR is quickly moving forward. People are busy creating.

What should clients ask of their production and post teams when embarking on their VR project?
We hear from our customers that the best results come when the director, DP and post supervisor collaborate on the expectations for look and feel, as well as the possible creative challenges and resolutions. Experience and budget are big contributors, too. A key issue is: what camera/rig requirements are needed for your targeted platform(s)? For example, how many cameras, what type of cameras (4K, 6K, GoPro, etc.) and what lighting? And what about sound, which plays a key role in the viewer's VR experience?


This Yael Naim mini-concert was posted in Scratch VR by Alex Regeffe at Neotopy.

What is the biggest misconception about VR — content, process or anything relating to VR?
I see two. One: The perception that VR is a flash in the pan, just a fad. What we see today is just the launch pad. The applications for VR are vast within entertainment alone, and then there is the extensive list of other markets like training and learning in such fields as medical, military, online universities, flight, manufacturing and so forth. Two: That VR post production is a difficult process with too many steps and tools. This definitely doesn't need to be the case. Our Scratch VR customers are getting high-quality results within a single, simplified VR workflow.

How do we convince people this isn’t stereo 3D?
The main issue with stereo 3D is that it has really never scaled beyond a theater experience. With VR, it may end up being just the opposite. It's unclear if VR can be a true theater experience other than via classical technologies like domes and simulators. 360/VR video in the near term is, in general, a short-form media play. It's clear that sooner rather than later smartphones will be able to shoot 360/VR video as a standard feature, and usage will skyrocket overnight. And when that happens, the younger demographic will never shoot anything that is not 360, so the Snapchat/Instagram kinds of platforms will be filled with 360 snippets. VR headsets based upon mobile devices make the pure number of displays significant. The initial tethered devices are not insignificant in numbers, but with the next generation of higher-resolution and untethered devices, maybe most significantly at a much lower price point, we will see the numbers become massive. None of this was ever the case with stereo 3D film/video.

Pixvana: Executive producer Aaron Rhodes

What is the biggest issue with VR productions at the moment? Is it lack of standards?
There are many issues with VR productions, and many of them are just growing pains: not being able to see a live stitch, how to direct without being in the shot, what to do about lighting — but these are all part of the learning curve and evolution of VR as a craft. Resolution and management around big data are the biggest issues I see on the set. Pixvana is all about resolution — it plays a key role in better immersion. Many of the cameras out there only master at 4K, and that just doesn't cut it. But when they do shoot 8K and above, the data management is extreme. Don't underestimate the responsibility you are giving to your DIT!


Aaron Rhodes

The biggest issue is that these are early days for VR capture. We're used to a century of 2D filmmaking and a decade of high-definition capture with an assortment of camera gear. All current VR camera rigs have compromises, and will until technology catches up. It's too early for standards since we're still learning and this space is changing rapidly. VR production and post also require different approaches. In some cases we have to unlearn what worked in standard 2D filmmaking.

What should clients ask of their production and post teams when embarking on their VR project?
Give me a schedule, and make it realistic. Stitching takes time, and unless you have a fleet of render nodes at your disposal, rendering your shot locally is going to take time — and everything you need to update or change it will take more time. VR post has lots in common with a non-VR spot, but the magnitude of data and rendering is much greater — make sure you plan for it.

Other questions to ask, because you really can’t ask enough:
• Why is this project being done as VR?
• Does the client have team members who understand the VR medium?
• If not will they be willing to work with a production team to design and execute with VR in mind?
• Has this project been designed for VR rather than just a 2D project in VR?
• Where will this be distributed? (Headsets? Which ones? YouTube? Facebook? Etc.)
• Will this require an app or will it be distributed to headsets through other channels?
• If it is an app, who will build the app and submit it to the VR stores?
• Do they want to future proof it by finishing greater than 4K?
• Is this to be mono or stereo? (If it’s stereo it better be very good stereo)
• What quality level are they aiming for? (Seamless stitches? Good stereo?)
• Is there time and budget to accomplish the quality they want?
• Is this to have spatialized audio?

What is the biggest misconception about VR — content, process or anything relating to VR?
VR is a narrative component, just like any actor or plot line. It’s not something that should just be done to do it. It should be purposeful to shoot VR. It’s the same with stereo. Don’t shoot stereo just because you can — sure, you can experiment and play (we need to do that always), but don’t without purpose. The medium of VR is not for every situation.
Other misconceptions, because there are a lot out there:
• it’s as easy as shooting normal 2D.
• you need to have action going on constantly in 360 degrees.
• everything has to be in stereo.
• there are fixed rules.
• you can simply shoot with a VR camera and it will be interesting, without any idea of specific placement, story or design.

How do we convince people this isn't stereo 3D?
Education. There are tiers of immersion with VR, and stereo 3D is one of them. I see these tiers starting with the desktop experience and going up in immersion from there, and it's important to understand the strengths and weaknesses of each:
• YouTube/Facebook on the desktop [low immersion]
• Cardboard, GearVR, Daydream 2D/3D low-resolution
• Headset Rift and Vive 2D/3D 6 degrees of freedom [high immersion]
• Computer generated experiences [high immersion]

Maxon US: President/CEO Paul Babb


Paul Babb

What is the biggest issue with VR productions at the moment? Is it lack of standards?
Project file size. Huge files. Lots of pixels. Telling a story. How do you get the viewer to look where you want them to look? How do you tell and drive a story in a 360 environment?

What should clients ask of their production and post teams when embarking on their VR project?
I think it’s more that production teams are going to have to ask the questions to focus what clients want out of their VR. Too many companies just want to get into VR (buzz!) without knowing what they want to do, what they should do and what the goal of the piece is.

What is the biggest misconception about VR — content, process or anything relating to VR? How do we convince people this isn’t stereo 3D?
Oh boy. Let me tell you, that’s a tough one. People don’t even know that “3D” is really “stereography.”

Experience 360°: CEO Ryan Moore

What is the biggest issue with VR productions at the moment? Is it lack of standards?
One of the biggest issues plaguing the current VR production landscape is the lack of true professionals in the field. While a vast majority of independent filmmakers are doing their best at adapting their current techniques, they have been unsuccessful in perceiving how films and VR experiences genuinely differ. This apparent lack of virtual understanding generally leads to poor UX creation within finalized VR products.

Given the novelty of virtual reality and 360 video, standards are only just being determined in terms of minimum quality and image specifications. These, however, are constantly changing. In order to keep a finger on the pulse, VR companies are encouraged to stay plugged into 360 video communities through social media platforms. It is through this essential interaction that they can keep pace with continually evolving VR production technology.

What should clients ask of their production and post teams when embarking on their VR project?
When first embarking on a VR project, it is highly beneficial to walk prospective clients through the entirety of the process, before production actually begins. This allows the client a full understanding of how the workflow is used, while also ensuring client satisfaction with the eventual partnership. It’s vital that production partners convey an ultimate understanding of VR and its use, and explain their tactics in “cutting” VR scenes in post — this can affect the user’s experience in a pronounced way.

‘The Backwoods Tennessee VR Experience’ via Experience 360.

What is the biggest misconception about VR — content, process or anything relating to VR? How do we convince people that this isn’t stereo 3D?
The biggest misconception about VR and 360 video is that it is an offshoot of traditional storytelling and can be used in ways similar to the cinematic and documentary worlds. When a VR producer makes that equation, it can often limit the potential of the user's experience to that of a voyeur only. Content producers need to think much farther out of this box and begin to embrace pairing images with interaction and interactivity. It helps to keep in mind that the intended user will feel as if these VR experiences are very personal to them, because they are usually isolated in an HMD when viewing the final product.

VR is being met with appropriate skepticism and is widely still considered a "fad" within the media landscape. This is often because the critic has not actually had a chance to try a virtual reality experience firsthand and does not understand the wide-reaching potential of immersive media. Three years in, a majority of the adults in the United States have never had a chance to try VR themselves, relying on what they understand from TV commercials and online reviews. One of the best ways to convince a doubtful viewer is to give them a chance to try a VR headset themselves.

Radeon Technologies Group at AMD: Head of VR James Knight

What is the biggest issue with VR productions at the moment? Is it lack of standards?
The biggest issue for us is (or was) probably stitching and the excessive amount of time it takes, but we're tackling that head on with Project Loom. We have realtime stitching with Loom. You can already download an early version of it on GPUopen.com. But you're correct, there is a lack of standards in VR/360 production. It's mainly because there are no really established common practices. That's to be expected, though, when you're shooting for a new medium. Hollywood and entertainment professionals are showing up to the space in a big way, so I suspect we'll all be working out lots of the common practices on sets in 2017.

James Knight

What should clients ask of their production and post teams when embarking on their VR project?
Double-check that they have experience shooting 360, and ask them for a detailed post production pipeline outline. Occasionally, we hear horror stories of people awarding projects to companies that think they can shoot 360 without having personally explored 360 shooting themselves and made mistakes. You want to use an experienced crew that has made the mistakes and is cognizant of what works and what doesn't. The caveat there, though, is that, again, there are no established rules necessarily, so people should be willing to try new things… sometimes it takes someone not knowing they shouldn't do something to discover something great, if that makes sense.

What is the biggest misconception about VR — content, process or anything relating to VR? How do we convince people this isn’t stereo 3D?
That’s a fun question. The overarching misconception for me, honestly, is just as though a cliché politician might, for example, make a fleeting judgment that video games are bad for society, people are often times making assumptions that VR if for kids or 16 year old boys at home in their boxer shorts. It isn’t. This young industry is really starting to build up a decent library of content, and the payoff is huge when you see well produced content! It’s transformative and you can genuinely envision the potential when you first put on a VR headset.

The biggest way to convince them this isn't 3D is to get a naysayer to put the headset on… let's agree we all look rather silly with a VR headset on, and once you get over that, you'll find out what's inside. It's magical. I had the CEO of BAFTA LA, Chantal Rickards, tell me upon seeing VR for the first time, "I remember when my father arrived home on Christmas Eve with a color TV set in the 1960s and the excitement that brought to me and my siblings. The thrill of seeing virtual reality for the first time was like seeing color TV for the first time, but times 100!"

Missing Pieces: Head of AR/VR/360 Catherine Day

Catherine Day

What is the biggest issue with VR productions at the moment?
The biggest issue with VR production today is the fact that everything keeps changing so quickly. Every day there’s a new camera, a new set of tools, a new proprietary technology and new formats to work with. It’s difficult to understand how all of these things work, and even harder to make them work together seamlessly in a deadline-driven production setting. So much of what is happening on the technology side of VR production is evolving very rapidly. Teams often reinvent the wheel from one project to the next as there are endless ways to tell stories in VR, and the workflows can differ wildly depending on the creative vision.

The lack of funding for creative content is also a huge issue. There’s ample funding to create in other mediums, and we need more great VR content to drive consumer adoption.

Is it lack of standards?
In any new medium and any pioneering phase of an industry, it's dangerous to create standards too early. You don't want to stifle people from trying new things. As an example, with our recent NBA VR project we broke all of the conventional rules that exist around VR — there was a linear narrative, fast-cut edits, it was over 25 minutes long — yet it was still very well received. So it's not a lack of standards, just a lack of bravery.

What should clients ask of their production and post teams when embarking on their VR project?
Ask to see what kind of work that team has done in the past. They should also delve in and find out exactly who completed the work and how much, if any, of it was outsourced. There is a curtain that often closes between the client and the production/post company and it closes once the work is awarded. Clients need to know who exactly is working on their project, as much of the legwork involved in creating a VR project — stitching, compositing etc. — is outsourced.

It’s also important to work with a very experienced post supervisor — one with a very discerning eye. You want someone who really knows VR that can evaluate every aspect of what a facility will assemble. Everything from stitching, compositing to editorial and color — the level of attention to detail and quality control for VR is paramount. This is key not only for current releases, but as technology evolves — and as new standards and formats are applied — you want your produced content to be as future-proofed as possible so that if it requires a re-render to accommodate a new, higher-res format in the future, it will still hold up and look fantastic.

What is the biggest misconception about VR — content, process or anything relating to VR?
On the consumer level, the biggest misconception is that people think that 360 video on YouTube or Facebook is VR. Another misconception is that regular filmmakers are the creative talents best suited to create VR content. Many of them are great at it, but traditional filmmakers have the luxury of being in control of everything, and in a VR production setting you have no box to work in and you have to think about a billion moving parts at once. So it either requires a creative that is good with improvisation, or a complete control freak with eyes in the back of their head. It’s been said before, but film and theater are as different as film and VR. Another misconception is that you can take any story and tell it in VR — you actually should only embark on telling stories in VR if they can, in some way, be elevated through the medium.

How do we convince people this isn’t stereo 3D?
With stereo 3D, there was no simple, affordable path for consumer adoption. We’re still getting there with VR, but today there are a number of options for consumers and soon enough there will be a demand for room-scale VR and more advanced immersive technologies in the home.

VR Audio: Virtual and spatial soundscapes

By Beth Marchant

The first things most people think of when starting out in VR is which 360-degree camera rig they need and what software is best for stitching. But virtual reality is not just a Gordian knot for production and post. Audio is as important — and complex — a component as the rest. In fact, audio designers, engineers and composers have been fascinated and challenged by VR’s potential for some time and, working alongside future-looking production facilities, are equally engaged in forging its future path. We talked to several industry pros on the front lines.

Howard Bowler

Music industry veteran and Hobo Audio founder Howard Bowler traces his interest in VR back to the groundbreaking film Avatar. "When that movie came out, I saw it three times in the same week," he says. "I was floored by the technology. It was the first time I felt like you weren't just watching a film, but actually in the film." As close to virtual reality as 3D films had gotten to that point, it was the blockbuster's evolved process of motion capture and virtual cinematography that ultimately delivered its breathtaking result.

“Sonically it was extraordinary, but visually it was stunning as well,” he says. “As a result, I pressed everyone here at the studio to start buying 3D televisions, and you can see where that has gotten us — nowhere.” But a stepping stone in technology is more often a sturdy bridge, and Bowler was not discouraged. “I love my 3D TVs, and I truly believe my interest in that led me and the studio directly into VR-related projects.”

When discussing the kind of immersive technology Hobo Sound is involved with today, Bowler — like others interviewed for this series — clearly defines VR's parallel deliverables. "First, there's 360 video, which is passive viewing, but still puts you in the center of the action. You just don't interact with it. The second type, more truly immersive VR, lets you interact with the virtual environment as in a video game. The third area is augmented reality," like the Pokemon Go phenomenon of projecting virtual objects and views onto your actual, natural environment. "It's really important to know what you're talking about when discussing these types of VR with clients, because there are big differences."

With each segment comes related headsets, lenses and players. “Microsoft’s HoloLens, for example, operates solely in AR space,” says Hobo producer Jon Mackey. “It’s a headset, but will project anything that is digitally generated, either on the wall or to the space in front of you. True VR separates you from all that, and really good VR separates all your senses: your sight, your hearing and even touch and feeling, like some of those 4D rides at Disney World.” Which technology will triumph? “Some think VR will take it, and others think AR will have wider mass adoption,” says Mackey. “But we think it’s too early to decide between either one.”

Boxed Out

‘Boxed Out’ is a Hobo indie project about how gentrification is affecting artists studios in the Gowanus section of Brooklyn.

Those kinds of end-game obstacles are beside the point, says Bowler. “The main reason why we’re interested in VR right now is that the experiences, beyond the limitations of whatever headset you watch it on, are still mind-blowing. It gives you enough of a glimpse of the future that it’s incredible. There are all kinds of obstacles it presents just because it’s new technology, but from our point of view, we’ve honed it to make it pretty seamless. We’re digging past a lot of these problem areas, so at least from the user standpoint, it seems very easy. That’s our goal. Down the road, people from medical, education and training are going to need to understand VR for very productive reasons. And we’re positioning ourselves to be there on behalf of our clients.”

Hobo’s all-in commitment to VR has brought changes to its services as well. “Because VR is an emerging technology, we’re investing in it globally,” says Bowler. “Our company is expanding into complete production, from concepting — if the client needs it — to shooting, editing and doing all of the audio post. We have the longest experience in audio post, but we find that this is just such an exciting area that we wanted to embrace it completely. We believe in it and we believe this is where the future is going to be. Everybody here is completely on board to move this forward and sees its potential.”

To ramp up on the technology, Hobo teamed up with several local students who were studying at specialty schools. “As we expanded out, we got asked to work with a few production companies, including East Coast Digital and End of Era Productions, that are doing the video side of it. We’re bundling our services with them to provide a comprehensive set of services.” Hobo is also collaborating with Hidden Content, a VR production and post production company, to provide 360 audio for premium virtual reality content. Hidden Content’s clients include Samsung, 451 Media, Giant Step, PMK-BNC, Nokia and Popsugar.

There is still plenty of magic sauce in VR audio that continues to make it a very tricky part of the immersive experience, but Bowler and his team are engineering their way through it. “We’ve been developing a mixing technique that allows you to tie the audio to the actual object,” he says. “What that does is disrupt the normal stereo mix. Say you have a public speaker in the center of the room; normally that voice would turn with you in your headphones if you turn away from him. What we’re able to do is to tie the audio of the speaker to the actual object, so when you turn your head, it will pan to the right earphone. That also allows you to use audio as signaling devices in the storyline. If you want the viewer to look in a certain direction in the environment, you can use an audio cue to do that.”
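
Hobo hasn't detailed its technique, but the core idea, re-panning a source against head rotation so it stays anchored in the world, can be sketched in a few lines. The following is a hypothetical, headphone-stereo simplification; real VR audio pipelines render binaurally with HRTFs:

```python
import numpy as np

def world_locked_pan(mono, source_azimuth_deg, head_yaw_deg):
    """Constant-power stereo pan that keeps a mono source fixed in
    the world as the listener turns. Azimuths are in degrees,
    positive to the listener's right. A hypothetical simplification
    of object-locked audio; real engines render via HRTFs."""
    # Where the source sits relative to the current head direction.
    rel = np.radians(source_azimuth_deg - head_yaw_deg)
    # Map the relative angle to a pan position in [-1, 1], then
    # apply a constant-power (equal-energy) pan law.
    pan = np.sin(rel)
    theta = (pan + 1.0) * np.pi / 4.0
    return mono * np.cos(theta), mono * np.sin(theta)

# A speaker dead ahead: turning the head 90 degrees to the right
# moves the voice fully into the left ear.
# left, right = world_locked_pan(voice, 0.0, 90.0)
```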

Hobo engineer Diego Jimenez drove a lot of that innovation, says Mackey. “He’s a real VR aficionado and just explored a lot of the software and mixing techniques required to do audio in VR. We started out just doing a ton of tests and they all proved successful.” Jimenez was always driven by new inspiration, notes Bowler. “He’s certainly been leading our sound design efforts on a lot of fronts, from creating instruments to creating all sorts of unusual and original sounds. VR was just the natural next step for him, and for us. For example, one of the spots that we did recently was to create a music video and we had to create an otherworldly environment. And because we could use our VR mixing technology, we could also push the viewer right into the experience. It was otherworldly, but you were in that world. It’s an amazing feeling.”


‘Boxed Out’

What advice do Bowler and Mackey have for those interested in VR production and post? “360 video is to me the entry point to all other versions of immersive content,” says Bowler. “It’s the most basic, and it’s passive, like what we’re used to — television and film. But it’s also a completely undefined territory when it comes to production technique.” So what’s the way in? “You can draw on some of the older ways of doing productions,” he says, “but how do you storyboard in 360? Where does the director sit? How do you hide the crew? How do you light this stuff? All of these things have to be considered when creating 360 video. That also includes everyone on camera: all the viewer has to do is look around the virtual space to see what’s going on. You don’t want anything that takes the viewer out of that experience.”

Bowler thinks 360 video is also the perfect entry point to VR for marketers and advertisers creating branded VR content, and Hobo’s clients agree. “When we’ve suggested 360 video on certain projects and clients want to try it out, what that does is it allows the technology to breathe a little while it’s underwritten at the same time. It’s a good way to get the technology off the ground and also to let clients get their feet wet in it.”

Any studio or client contemplating VR, adds Mackey, should first find what works for them and develop an efficient workflow. "This is not really a solidified industry yet," he says. "Nothing is standard, and everyone's waiting to see who comes out on top and who falls by the wayside. What's the file standard going to be? Or the export standard? Will it be custom-made apps on (Google) YouTube or Facebook? We'll see Facebook and Google battle it out in the near term. Facebook has recently acquired an audio company to help them produce audio in 360 for their video app, and Google has the Daydream platform," though neither platform's codec is compatible with the other, he points out. "If you mix your audio to Facebook audio specs, you can actually have your audio come out in 360. For us, it's been trial and error, where we've experimented with these different mixing techniques to see what fits and what works."

Still, Bowler concedes, there is no true business yet in VR. “There are things happening and people getting things out there, but it’s still so early in the game. Sure, our clients are intrigued by it, but they are still a little mystified by what the return will be. I think this is just part of what happens when you deal with new technology. I still think it’s a very exciting area to be working in, and it wouldn’t surprise me if it doesn’t touch across many, many different subjects, from history to the arts to original content. Think about applications for geriatrics, with an aging population that gets less mobile but still wants to experience the Caribbean or our National Parks. The possibilities are endless.”

At one point, he admits, it may even become difficult to distinguish one’s real memory from one’s virtual memory. But is that really such a bad thing? “I’m already having this problem. I was watching an immersive video of Cuban music, that was pretty beautifully done, and by the end of the five-minute spot, I had the visceral experience that I was actually there. It’s just a very powerful way of experiencing content. Let me put it another way: 3D TVs were at the rabbit hole, and immersive video will take you down the rabbit hole into the other world.”

Source Sound
LA-based Source Sound has provided supervision and sound design on a number of Jaunt-produced cinematic VR experiences, including a virtual fashion show, a horror short and a Godzilla short film written and directed by Oscar-winning VFX artist Ian Hunter, as well as final Atmos audio mastering for the early immersive release Sir Paul McCartney Live. The studio is ready for the spatial mixes to come. That wasn't initially the case.


Tim Gedemer

“When Jaunt first got into this space three years ago, they went to Dolby to try to figure out the audio component,” says Source Sound owner/supervising sound designer/editor Tim Gedemer. “I got a call from Dolby, who told me about what Jaunt was doing, and the first thing I said was, ‘I have no idea what you are talking about!’ Whatever it is, I thought, there’s really no budget and I was dragging my feet. But I asked them to show me exactly what they were doing. I was getting curious at that point.”

After meeting the team at Jaunt, who strapped some VR goggles on him and showed him some footage, Gedemer was hooked. “It couldn’t have been more than 30 seconds in and I was just blown away. I took off the headset and said, ‘What the hell is this?! We have to do this right now.’ They could have reached out to a lot of people, but I was thrilled that we were able to help them by seizing the moment.”

Gedemer says Source Sound’s business has expanded in multiple directions in the past few years, and VR is still a significant part of the studio’s revenue. “People are often surprised when I tell them VR counts for about 15-20 percent of our business today,” he says. “It could be a lot more, but we’d have to allocate the studios differently first.”

With a background in mixing and designing sound for film, gaming and theatrical trailers, Gedemer and his studio have a very focused definition of immersive experiences, and it all includes spatial audio. "Stereo 360 video with mono audio is not VR. For us, there's cinematic, live-action VR, then straight-up game development that can easily migrate into a virtual reality world and, finally, VR for live broadcast." Mass adoption of VR won't happen, he believes, until enterprise and job training applications jump on the bandwagon with entertainment. "I think virtual reality may also be a stopover before we get to a world where augmented reality is commonplace. It makes more sense to me that we'll just overlay all this content onto our regular days, instead of escaping from one isolated experience to the next."

On set for the European launch of the Nokia Ozo VR camera in London, which featured live musical performances captured in 360 VR.

For now, Source Sound’s VR work is completed in dedicated studios configured with gear for that purpose. “It doesn’t mean that we can’t migrate more into other studios, and we’re certainly evolving our systems to be dual-purpose,” he says. “About a year ago we were finally able to get a grip on the kinds of hardware and software we needed to really start coagulating this workflow. It was also clear from the beginning of our foray into VR that we needed to partner with manufacturers, like Dolby and Nokia. Both of those companies’ R&D divisions are on the front lines of VR in the cinematic and live broadcast space, with Dolby’s Atmos for VR and Nokia’s Ozo camera.”

What missing tools and technology have to be developed to achieve VR audio nirvana? “We delivered a wish list to Dolby, and I think we got about a quarter of the list,” he says. “But those guys have been awesome in helping us out. Still, it seems like just about every VR project that we do, we have to invent something to get us to the end. You definitely have to have an adventurous spirit if you want to play in this space.”

The work has already influenced his approach to more traditional audio projects, he says, and he now notices the lack of spatial sound everywhere. "Everything out there is a boring rectangle of sound. It's on my phone, on my TV, in the movie theater. I didn't notice it as much before, but it really pops out at me now. The actual creative work of designing and mixing immersive sound has realigned the way I perceive it."

Main Image: One of Hobo’s audio rooms, where the VR magic happens.


Beth Marchant has been covering the production and post industry for 21 years. She was the founding editor-in-chief of Studio/monthly magazine and the co-editor of StudioDaily.com. She continues to write about the industry.

 

VR Audio: What you need to know about Ambisonics

By Claudio Santos

The explosion of virtual reality as a new entertainment medium has been largely discussed in the filmmaking community in the past year, and there is still no consensus about what the future will hold for the technology. But regardless of the predictions, it is a fact that more and more virtual reality content is being created and various producers are experimenting to find just how the technology fits into the current market.

Out of the vast possibilities of virtual reality, there is one segment that is particularly close to us filmmakers, and that is 360 video. These videos are becoming more and more popular on platforms such as YouTube and Facebook, and they present the distinct advantage that — besides playing in VR headsets, such as the GearVR or the Daydream — they can also be played on standalone mobile phones, tablets and stationary desktops. This considerably expands the potential audience when compared to the relatively small group of people who own virtual reality headsets.

But simply making the image immerse the viewer into a 360 environment is not enough. Without accompanying spatial audio the illusion is very easily broken, and it becomes very difficult to cue the audience to look in the direction in which the main action of each moment is happening. While there are technically a few ways to design and implement spatial audio into a 360 video, I will share some thoughts and tips on how to work with Ambisonics, the spatial audio format chosen as the standard for platforms such as YouTube.

VR shoot in Bryce Canyon with Google for the Hidden Worlds of the National Parks project. Credit: Hunt Beaty.

First, what is Ambisonics and why are we talking about it?
Ambisonics is a sound format that is slightly different from your usual stereo/surround paradigm because its channels are not attached to speakers. Instead, an Ambisonics recording represents the whole spherical soundfield around a point. In practice, this means you can represent sound coming from all directions around a listening position and, using an appropriate decoder, play back the same recording on any set of speakers, with any number of channels, arranged around the listener horizontally or vertically. That is exactly why it is so interesting when we are working with spatial sound for VR.

The biggest challenge of VR audio is that you can't predict which direction the viewer will be looking at any given time. Using Ambisonics, we can design the whole sound sphere, and the VR player decodes the sound to match the direction of the video in realtime, rendering it to binaural for accurate headphone playback. The best part is that the decoding process is relatively light on processing power, which makes this a suitable option for platforms with limited resources, such as smartphones.
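
Part of why that realtime step is cheap: rotating a first-order soundfield to follow the viewer's head is just a small matrix multiply, with yaw mixing only the two horizontal channels. A sketch using the traditional B-format axes (X front, Y left, Z up); to compensate for head tracking, you rotate by the negative of the head's yaw:

```python
import numpy as np

def rotate_soundfield_yaw(w, x, y, z, yaw_rad):
    """Rotate a first-order Ambisonics soundfield about the vertical
    axis (traditional B-format axes: X front, Y left, Z up).
    To counter head tracking, pass the negative of the head yaw.
    W (omni) and Z (height) are unchanged by a yaw rotation."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    return w, c * x - s * y, s * x + c * y, z
```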

In order to work with Ambisonics we have two options. We can record the sound on location with an Ambisonics microphone, which gives us a very realistic representation of the sound in the location and is very well suited to ambience recordings, for example. Or we can encode other sound formats, such as mono and stereo, into Ambisonics and then manipulate the sound in the sphere from there, which gives us great flexibility in post production to use sound libraries and create interesting effects by carefully adjusting the positioning and width of a sound in the sphere.

Example: Mono “voice of God” placement. The left shows the soundfield completely filled, which gives the “in-head” illusion.

There are plenty of resources online explaining the technical nature of Ambisonics, and I definitely recommend reading them so you can better understand how to work with it and how the spatiality is achieved. But there aren’t many discussions yet about the creative decisions and techniques used in sound for 360 videos with Ambisonics, so that’s what we will be focusing on from now on.

What to do with mono “in-head” sources such as VO?
That was one of the first tricky challenges we found with Ambisonics. It is not exactly intuitive to place a sound source equally in all directions of the soundfield. The easiest solution comes more naturally once you understand how the four channels of the Ambisonics audio track interact with each other.

The first channel of the Ambisonics audio, named W, is omnidirectional and contains the level information of the sound. The other three channels describe the position of the sound in the soundfield through phase relationships. Each of these channels represents one dimension, which enables the positioning of sounds in three dimensions.

Now, if we want the sound to play at the same level and centered from every direction, what we want is for the sound source to be at the center of the soundfield "sphere," where the listener's head is. In practice, that means that if you play the sound out of the first channel only, with no information in any of the other three channels, the sound will play "in-head."
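
In code, both general placement and the "in-head" trick reduce to simple per-channel gains. Here is a minimal sketch using the AmbiX convention (ACN channel order W, Y, Z, X with SN3D scaling, which is what YouTube expects); note that classic FuMa B-format scales W by 1/√2 instead, so check which convention your plug-in chain uses:

```python
import numpy as np

def encode_first_order(mono, azimuth_rad=0.0, elevation_rad=0.0):
    """Encode a mono signal into first-order Ambisonics, AmbiX
    convention (ACN order W, Y, Z, X; SN3D scaling). Azimuth is
    counterclockwise from front; elevation is positive upward.
    Classic FuMa B-format scales W by 1/sqrt(2) instead."""
    w = mono                                                # omni
    y = mono * np.sin(azimuth_rad) * np.cos(elevation_rad)  # left-right
    z = mono * np.sin(elevation_rad)                        # up-down
    x = mono * np.cos(azimuth_rad) * np.cos(elevation_rad)  # front-back
    return np.stack([w, y, z, x])

def encode_in_head(mono):
    """'Voice of God' placement: signal on W only, silence on the
    directional channels, so it decodes identically from every
    viewing direction and reads as in-head."""
    silence = np.zeros_like(mono)
    return np.stack([mono, silence, silence, silence])
```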

What to do with stereo non-diegetic music?
This is the natural question that follows knowing what to do with mono sources, and the answer is a bit trickier. The mono, first-channel trick doesn't work perfectly with stereo sources, because for that to work you would have to first sum the stereo to mono, which might be undesirable depending on your track.

If you want to maintain the stereo width of the source, one good option we found is to mirror the sound in two directions. Some plug-in suites, such as ambiX, offer the functionality to mirror hemispheres of the soundfield. That could also be accomplished with careful positioning of a copy of the source, but the plug-in makes things easier.

Example of sound placed in the “left” of the soundfield in Ambisonics.

Generally, what you want is to place the center of the stereo source at the focus of the action your audience will be looking at, and mirror the top-bottom and the front-back. This keeps the music playing at the same level regardless of the direction the viewer looks, while keeping the spatiality of the source. The downside is that the sound is not anchored to the viewer’s head, so the apparent direction of the sources shifts as the viewer turns around, most noticeably inverting the sides when they look toward the back. I usually find this to be an interesting effect nonetheless, and it doesn’t distract the audience too much. If the directionality is too noticeable, you can always mix a bit of the mono sum of the music into both channels to reduce the perceived width of the track.
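
Here is a minimal sketch of the mirroring idea, again in Python with NumPy under the same traditional B-format assumptions (the names and default spread angle are mine, and for brevity it is horizontal-only; the same idea extends to the top-bottom mirror). Each stereo channel is encoded at its spread angle around the focus direction and again at the mirrored position behind the listener; summing the mirrored pair cancels the front/back channel, so the level stays constant as the viewer turns. The width parameter blends in the mono sum to narrow the image, as described above.

```python
# Illustrative front/back-mirrored stereo placement for non-diegetic music.
import numpy as np

def encode_h(src, azimuth):
    """First-order horizontal encode (traditional B-format: W, X, Y)."""
    return src / np.sqrt(2.0), src * np.cos(azimuth), src * np.sin(azimuth)

def encode_stereo_mirrored(left, right, spread=np.pi / 6, width=1.0):
    mono = 0.5 * (left + right)
    l_sig = width * left + (1.0 - width) * mono   # width < 1 narrows the image
    r_sig = width * right + (1.0 - width) * mono
    positions = [(l_sig, spread), (l_sig, np.pi - spread),   # left + rear mirror
                 (r_sig, -spread), (r_sig, spread - np.pi)]  # right + rear mirror
    w = x = y = 0.0
    for sig, az in positions:
        ew, ex, ey = encode_h(sig, az)
        w, x, y = w + 0.5 * ew, x + 0.5 * ex, y + 0.5 * ey   # average the copies
    return w, x, y  # X cancels by symmetry: level is constant with listener yaw
```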

How to creatively use reverberation in Ambisonics?
There is a lot you can do with reverberation in Ambisonics. This is just one trick I find very useful when dealing with scenes that have one big obstacle in one direction (such as a wall) and no obstacles in the opposite direction.

In this situation, the sound reflects off the barrier and returns to the listener from one direction, while on the opposite side there are no significant reflections because of the open field. You can simulate this by placing a slightly delayed reverb coming only from the direction of the barrier. Adjust the width of the reflected sound to match the perceived size of the barrier, and set the delay based on the barrier’s distance from the viewer. The effect usually works better with drier reverbs that have defined early reflections but not a lot of late reflections.
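
A sketch of the one-sided reflection trick under the same assumptions (Python/NumPy, traditional B-format, names are mine; apply_reverb is a placeholder for whatever reverb processor you actually use). The delay is derived from the out-and-back path to the wall, and the wet signal is encoded only from the wall’s direction.

```python
# Illustrative one-sided reflection: a delayed, reverberant copy of the dry
# signal encoded only from the barrier's direction.
import numpy as np

SPEED_OF_SOUND = 343.0  # metres per second

def barrier_reflection(dry, sample_rate, wall_azimuth, wall_distance_m,
                       apply_reverb, gain=0.5):
    delay = int(round(2.0 * wall_distance_m / SPEED_OF_SOUND * sample_rate))
    delayed = np.concatenate([np.zeros(delay), dry])[:len(dry)]  # out-and-back delay
    wet = gain * apply_reverb(delayed)  # favor defined early reflections
    # First-order encode from the wall's direction (horizontal, traditional B-format)
    return wet / np.sqrt(2.0), wet * np.cos(wall_azimuth), wet * np.sin(wall_azimuth)
```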

Once you experiment with this technique, you can use variations of it to simulate a variety of spaces and achieve even more realistic mixes that will fool anyone into believing the sounds you placed in post production were recorded on location.

Main Caption: VR shoot in Hawaii with Google for the Hidden Worlds of the National Parks project. Credit: Hunt Beaty.


Claudio Santos is a sound editor at Silver Sound/SilVR in New York.

Missing Pieces hires head of VR/AR/360, adds VR director

Production company Missing Pieces has been investing in VR recently by way of additional talent. Catherine Day has joined the studio as head of VR/AR/360. She was most recently at Jaunt VR where she was executive producer/head of unscripted. VR director Sam Smith has also joined the company as part of its VR directing team.

This bi-coastal studio has a nice body of VR work under its belt. They are responsible for Dos Equis’ VR Masquerade and for bringing a president into VR with Bill Clinton’s Inside Impact series. They also created Follow My Lead: The Story of the NBA 2016 Finals, a VR sports documentary for the NBA and Oculus.

In her new role, Day (pictured) will drive VR/AR/360 efforts from the studio’s Los Angeles office and oversee several original VR series that will be announced jointly with WME and partners in the coming months. In her previous role at Jaunt VR, Day led projects for ABC News, RYOT/Huffington Post, Camp 4 Collective, XRez, Tastemade, Outside TV, Civic Nation and Conservation International.

Smith is a creative director and VR director who previously worked with MediaMonks on projects for Expedia, Delta, Converse and YT. He has an extensive background in commercial visual effects and a deep understanding of post and VFX, which is helpful when developing VR/360 projects. He will also act as technical advisor.

Margarita Mix’s Pat Stoltz gives us the low-down on VR audio

By Randi Altman

Margarita Mix, one of Los Angeles’ long-standing audio and video post facilities, has taken on virtual reality with the addition of 360-degree sound rooms at its facilities in Santa Monica and Hollywood. The FotoKem company now offers sound design, mixing and final print masters for VR video, as well as remixing existing spots for a full-surround environment.

Workflows for VR are new and developing every day — there is no real standard. So creatives are figuring it out as they go, but they can also learn from those who were early to the party, like Margarita Mix. They recently worked on a full-length VR concert film with the band Eagles of Death Metal and director/producer Art Haynie of Big Monkey Films. The band’s 2015 tour came to an abrupt end after playing the Bataclan concert hall during last year’s terrorist attacks in Paris. The film is expected to be available online and via apps shortly.

Eagles of Death Metal film.

We reached out to Margarita Mix’s senior technical engineer, Pat Stoltz, to talk about his experience and see how the studio is tackling this growing segment of the industry.

Why was now the right time to open VR-dedicated suites?
VR/AR is an exciting emerging market and online streaming is a perfect delivery format, but VR pre-production, production and post are in their infancy. We are bringing sound design, editorial and mixing expertise to the next level, based on our long history of industry-recognized work, and elevating audio for VR from a gaming platform to one suitable for the cinematic and advertising realms, where VR content production is exploding.

What is the biggest difference between traditional audio post and audio post for VR?
Traditional cinematic audio has always played a very important part in supporting the visuals. Sound effects, Foley, background ambiance, clear dialog and mood-setting music have all aided in pulling the viewer into the story. With VR and AR, you are not just pulled into the story, you are in the story! Having the ability to accurately recreate the audio of the filmed environment through higher-order Ambisonics, or object-based mixing, is crucial. Audio not only plays an important part in supporting the visuals, but is now a director’s tool to help draw the viewer’s gaze to what he or she wants the audience to experience. Audio for VR is a critical component of storytelling that needs to be considered early in the production process.

What question are you asked the most by clients in terms of sound for VR?
Surprisingly, none! VR/AR is so new that directors and producers are just figuring things out as they go. On a traditional production set, you have audio mixers and boom operators capturing audio. On a VR/AR set, there is no hiding: no boom operators or audio mixers can be visible while capturing high-quality audio of the performance.

Some productions have relied on the onboard camera microphones. Unfortunately, in most cases, this turns out to be completely unusable. When the client gets all the way to audio post, there is a realization that hidden wireless mics on all the actors would have yielded a better result. In VR especially, we recommend starting the sound consultation in pre-production, so that we can offer advice and guide decisions toward the best-quality product.

What question should clients ask before embarking on VR?
They should ask what they want the viewer to get out of the experience. In VR, no two people are going to walk away with the same viewing experience, so we recommend staying focused on the major points they would like the viewer to walk away with. They should then expand on that to answer: What do I have to do in VR to drive that point home, not only mentally, but by drawing the viewer’s gaze for visual support? Based on the genre of the project, considerations should be made to “physically” pull the audience in the direction that tells the story best, whether through visual stepping stones, narration, audio pre-cues, etc.

What tools are you using on VR projects?
Because this is a nascent field, new tools are becoming available by the day, and we assess and use the best option for achieving the highest quality. To properly address this question, we ask: Where is your project going to be viewed? If the content is going to be distributed via a general Web streaming site, it will need to be delivered in the audio format that site supports.

Numerous companies are writing quite good plug-ins for delivering these formats. If you are delivering to a site that supports Dolby VR (an object-based proprietary format), such as Jaunt, you will need to generate the proper audio file for that platform. Facebook (higher-order Ambisonics) requires yet another format. We are currently working in all of these formats, as well as working closely with leaders in VR sound to create and test new workflows and guide developments in this new frontier.

What’s the one thing you think everyone should know about working and viewing VR?
As we go through life, we each have our own experiences, or what we choose to experience. Our frame of reference directs our focus toward the things that are most interesting to us. When putting on VR goggles, the individual becomes the director. The wonderful thing about VR is that now you can take individuals anywhere they want to go, both in this world and out of it. Directors and producers should think about how much can be packed into a story to draw people into the endless ways they perceive their world.