An artist’s view of SIGGRAPH 2019

By Andy Brown

While I’ve been lucky enough to visit NAB and IBC several times over the years, this was my first SIGGRAPH. Of course, there are similarities. There are lots of booths, lots of demos, lots of branded T-shirts, lots of pairs of black jeans and a lot of beards. I fit right in. I know we’re not all the same, but we certainly looked like it. (The stats regarding women and diversity in VFX are pretty poor, but that’s another topic.)

Andy Brown

You spend your whole career in one industry and I guess you all start to look more and more like each other. That’s partly the problem for the people selling stuff at SIGGRAPH.

There were plenty of compositing demos from all sorts of software vendors. (Blackmagic was running a hands-on class for 20 people at a time.) I’m a Flame artist, so I think that Autodesk’s offering is best, obviously. Everyone’s compositing tool can play back large files and color correct, composite, edit, track and deliver, so in the midst of a buzzy trade show, the differences feel far fewer than the similarities.

Mocap
Take the world of tracking and motion capture as another example. There were more booths demonstrating tracking and motion capture than anything else in the main hall, and all that tech came in different shapes and sizes, with an interesting mix of hardware and software.

The motion capture solution required for a Hollywood movie isn’t the same as the one needed to create a live avatar on your phone, however. That’s where it gets interesting. There are solutions that can capture and translate the movement of everything from your fingers to your entire body, using hardware from an iPhone X to a full 360-camera array. Some solutions used tracking ball markers, some used strips in the bodysuit and some used tiny proximity sensors, but the results were all really impressive.

Vicon

Some tracking solution companies had different versions of their software and hardware. If you don’t need all of the cameras and all of the accuracy, then there’s a basic version for you. But if you need everything to be perfectly tracked in real time, then go for the full-on pro version with all the bells and whistles. I had a go at live-animating a monkey using just my hands, and apart from ending with him licking a banana in a highly inappropriate manner, I think it worked pretty well.

AR/VR
AR and VR were everywhere, too. You couldn’t throw a peanut across the room without hitting someone wearing a VR headset. They’d probably be able to bat it away whilst thinking they were Joe Root or Max Muncy (I had to Google him), with the real peanut being replaced with a red or white leather projectile. Haptic feedback made a few appearances, too, so expect to be able to feel those virtual objects very soon. Some of the biggest queues were at the North stand, where the company was showing glasses that looked like the ones everyone was already wearing (like mine, obviously), except with a head-up display built in. I have mixed feelings about this. Google Glass didn’t last very long for a reason, although I don’t think North’s glasses have a camera in them, which makes things feel a bit more comfortable.

Nvidia

Data
One of the central themes for me was data, data and even more data. Whether you are interested in how to capture it, store it, unravel it, play it back or distribute it, there was a stand for you. This mass of data was being managed by really intelligent components and software. I was expecting to be writing all about artificial intelligence and machine learning from the show, and it’s true that there was a lot of software that used machine learning and deep neural networks to create things that looked really cool. Environments created using simple tools looked fabulously realistic because of deep learning. Basic pen strokes could be translated into beautiful pictures because of the power of neural networks. But most of that machine learning is in the background; it’s just doing the work that needs to be done to create the imagery, lighting and physical reactions that make up convincing, realistic images.

The Experience Hall
The Experience Hall was really great because no one was trying to sell me anything. It felt much more like an art gallery than a trade show. There were long waits for some of the exhibits (although not for the golf swing improver that I tried), and it was all really fascinating. I didn’t want to take part in the experiment that recorded your retina scan and made some art out of it, because, well, you know, it’s my retina scan. I also felt a little reluctant to check out the booth that made light-based animated artwork derived from your date of birth, time of birth and location of birth. But maybe all of these worries are because I’ve just finished watching the Netflix documentary The Great Hack. I can’t help but think that a better source of the data might be something a little less sinister.

The walls of posters back in the main hall described research projects that hadn’t yet made it into full production and gave more insight into what the future might bring. It was all about refinement, creating better algorithms, creating more realistic results. These uses of deep learning and virtual reality were applied to subjects as diverse as translating verbal descriptions into character design, virtual reality therapy for post-stroke patients, relighting portraits and haptic feedback anesthesia training for dental students. The range of the projects was wide. Yet everyone started from the same place, analyzing vast datasets to give more useful results. That brings me back to where I started. We’re all the same, but we’re all different.

Main Image Credit: Mike Tosti


Andy Brown is a Flame artist and creative director of Jogger Studios, a visual effects studio with offices in Los Angeles, New York, San Francisco and London.

Apple offers augmented reality with Reality Composer

By Barry Goch

In addition to introducing the new Mac Pro and the Pro Display XDR at its Worldwide Developers Conference (WWDC19), Apple had some pretty cool demos. The coolest, in my mind, was the Minecraft augmented reality presentation.

Across the street from the San Jose Convention Center, where the keynote was held, Apple set up “The Studio” in the San Jose Civic. One of the demos there was an AR experience with the new Mac Pro: in reality, you saw only the space frame of Apple’s tower, but in augmented reality you could animate an exploded view. The technology behind this demo is the just-announced ARKit 3 and Reality Composer.

Apple had a couple of stations demoing Reality Composer in The Studio. Apple has applied its famous legacy of enabling content creators by making new technology easy to use, and Reality Composer is a case in point. I’ve tried building AR experiences in other apps, and it’s not very straightforward: you have to learn a new interface and coding as well, and then use yet another app for anchoring your AR environment in the real world. The demo I saw of Reality Composer made it look easy (like working in Motion), with drag-and-drop prebuilt behaviors built into the app, along with multiple ways to anchor your AR experience in the real world.

AR Quick Look technology is part of iOS, and you can even get an AR experience of the new Mac Pro and Pro Display XDR through Apple’s website. Apple also mentioned its new file format for holding AR elements, usdz, and has created a tool to convert other 3D file formats to usdz.
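
For readers who want to try the pipeline, Apple distributes the converter as a command-line tool called usdzconvert (part of its USDZ Tools download). A minimal way to batch-convert models from Python might look like this; the folder and file names are placeholders, and any flags beyond input/output are deliberately omitted:

```python
# Hypothetical batch conversion to usdz by shelling out to Apple's
# usdzconvert tool; check the USDZ Tools documentation for exact options.
import pathlib
import subprocess

for obj in pathlib.Path("models").glob("*.obj"):
    out = obj.with_suffix(".usdz")
    subprocess.run(["usdzconvert", str(obj), str(out)], check=True)
    print(f"wrote {out}")
```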

With native AR support across Apple’s ecosystem, there is no better time to experiment and learn about augmented reality.


Barry Goch is a finishing artist at LA’s The Foundation and a UCLA Extension Instructor in post production. You can follow him on Twitter at @Gochya.

IDEA launches to create specs for next-gen immersive media

The Immersive Digital Experiences Alliance (IDEA) will launch at NAB 2019 with the goal of creating a suite of royalty-free specifications that address all immersive media formats, including emerging light field technology.

Founding members — including CableLabs, Light Field Lab, Otoy and Visby — created IDEA to serve as an alliance of like-minded technology, infrastructure and creative innovators working to facilitate the development of an end-to-end ecosystem for the capture, distribution and display of immersive media.

Such a unified ecosystem must support all displays, including highly anticipated light field panels. Recognizing that the essential launch point would be to create a common media format specification that can be deployed on commercial networks, IDEA has already begun work on the new Immersive Technology Media Format (ITMF).

ITMF will serve as an interchange and distribution format that will enable high-quality conveyance of complex image scenes, including six-degrees-of-freedom (6DoF), to an immersive display for viewing. Moreover, ITMF will enable the support of immersive experience applications including gaming, VR and AR, on top of commercial networks.

Recognized for its potential to deliver an immersive, true-to-life experience, light field media can be regarded as the richest and densest form of visual media. It thereby sets the highest bar for the features that the ITMF will need to support and for the new media-aware processing capabilities that commercial networks must deliver.

Jon Karafin, CEO/co-founder of Light Field Lab, explains that “a light field is a representation describing light rays flowing in every direction through a point in space. New technologies are now enabling the capture and display of this effect, heralding new opportunities for entertainment programming, sports coverage and education. However, until now, there has been no common media format for the storage, editing, transmission or archiving of these immersive images.”
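
For the mathematically inclined, Karafin’s description corresponds to what researchers call the plenoptic function, which records the radiance L seen at every position and in every direction; in one common notation (mine, not an IDEA specification):

$$L = L(x, y, z, \theta, \phi)$$

In free space this reduces to the four-dimensional two-plane form $L(u, v, s, t)$ used by most capture and display systems, and that dimensionality is exactly why light field media is so much denser than a conventional 2D frame.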

“We’re working on specifications and tools for a variety of immersive displays — AR, VR, stereoscopic 3D and light field technology, with light field being the pinnacle of immersive experiences,” says Dr. Arianne Hinds, Immersive Media Strategist at CableLabs. “As a display-agnostic format, ITMF will provide near-term benefits for today’s screen technology, including VR and AR headsets and stereoscopic displays, with even greater benefits when light field panels hit the market. If light field technology works half as well as early testing suggests, it will be a game-changer, and the cable industry will be there to help support distribution of light field images with the 10G platform.”

Starting with Otoy’s ORBX scene graph format, a well-established data structure widely used in advanced computer animation and computer games, IDEA will provide extensions to expand the capabilities of ORBX for light field photographic camera arrays, live events and other applications. Further specifications will include network streaming for ITMF and transcoding of ITMF for specific displays, archiving and other applications. IDEA will preserve backward compatibility with the existing ORBX format.

IDEA anticipates releasing an initial draft of the ITMF specification in 2019. The alliance also is planning an educational seminar to explain more about the requirements for immersive media and the benefits of the ITMF approach. The seminar will take place in Los Angeles this summer.

Photo Credit: Light Field Lab, all rights reserved. Future Vision concept art of a room-scale holographic display from Light Field Lab, Inc.

Behind the Title: Left Field Labs ECD Yann Caloghiris

NAME: Yann Caloghiris

COMPANY: Left Field Labs (@LeftFieldLabs)

CAN YOU DESCRIBE YOUR COMPANY?
Left Field Labs is a Venice, California-based creative agency dedicated to applying creativity to emerging technologies. We create experiences at the intersection of strategy, design and code for our clients, who include Google, Uber, Discovery and Estée Lauder.

But it’s how we go about our business that has shaped who we have become. Over the past 10 years, we have consciously moved away from the traditional agency model and have grown by deepening our expertise, sourcing exceptional talent and, most importantly, fostering a “lab-like” creative culture of collaboration and experimentation.

WHAT’S YOUR JOB TITLE?
Executive Creative Director

WHAT DOES THAT ENTAIL?
My role is to drive the creative vision across our client accounts, as well as our own ventures. In practice, that can mean anything from providing insights for ongoing work to proposing creative strategies to running ideation workshops. Ultimately, it’s whatever it takes to help the team flourish and push the envelope of our creative work.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Probably that I learn more now than I did at the beginning of my career. When I started, I imagined that the executive CD roles were occupied by seasoned industry veterans, who had seen and done it all, and would provide tried and tested direction.

Today, I think that cliché is out of touch with what’s required from agency culture and where the industry is going. Sure, some aspects of the role remain unchanged — such as being a supportive team lead or appreciating the value of great copy — but the pace of change is such that the role often requires both the ability to leverage past experience and the willingness to accept that sometimes a new paradigm is emerging and assumptions need to be adjusted.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Working with the team, and the excitement that comes from workshopping the big ideas that will anchor the experiences we create.

WHAT’S YOUR LEAST FAVORITE?
The administrative parts of a creative business are not always the most fulfilling. Thankfully, tasks like timesheeting, expense reporting and invoicing are becoming less exhausting thanks to better predictive tools and machine learning.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
The early hours of the morning, usually when inspiration strikes — when we haven’t had to deal with the unexpected day-to-day challenges that come with managing a busy design studio.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I’d probably be somewhere at the intersection between an artist, like my mum was, and an engineer, like my dad. There is nothing more satisfying than applying art to an engineering challenge or vice versa.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I went to school in France, and there wasn’t much room for anything other than school and homework. When I got my Baccalaureate, I decided that, from that point onward, whatever I did would be fun, deeply engaging and at a place where being creative was an asset.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We recently partnered with ad agency RK Venture to craft a VR experience for the New Mexico Department of Transportation’s ongoing ENDWI campaign, which immerses viewers into a real-life drunk-driving scenario.

ENDWI

To best communicate and tell the human side of this story, we turned to rapid breakthroughs within volumetric capture and 3D scanning. Working with Microsoft’s Mixed Reality Capture Studio, we were able to bring every detail of an actor’s performance to life with volumetric performance capture in a way that previous techniques could not.

Bringing a real actor’s performance into a virtual experience is a game changer because of the emotional connection it creates. For ENDWI, the combination of rich immersion with compelling non-linear storytelling proved to affect the participants at a visceral level — with the goal of changing behavior further down the road.

Throughout this past year, we partnered with the VMware Cloud Marketing Team to create a one-of-a-kind immersive booth experience for VMworld Las Vegas 2018 and Barcelona 2018 called Cloud City. VMware’s cloud offering needed a distinct presence to foster a deeper understanding and greater connectivity between brand, product and customers stepping into the cloud.

Cloud City

Our solution was Cloud City, a destination merging future-forward architecture, light, texture, sound and interactions with VMware Cloud experts to give consumers a window into how the cloud, and more specifically how VMware Cloud, can be an essential solution for them. VMworld is the brand’s premier engagement, where hands-on learning helped showcase its cloud offerings. Cloud City garnered 4,000-plus demos, which led to a 20% lead conversion in 10 days.

Finally, for Google, we designed and built a platform for the hosting of online events anywhere in the world: Google Gather. For its first release, teams across Google, including Android, Cloud and Education, used Google Gather to reach and convert potential customers across the globe. With hundreds of events to date, the platform now reaches enterprise decision-makers at massive scale, spanning far beyond what has been possible with traditional event marketing, management and hosting.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Recently, a friend and I shot and edited a fun video homage to the original technology boom-town: Detroit, Michigan. It features two cultural icons from the region, an original big block ‘60s muscle car and some gritty electro beats. My four-year-old son thinks it’s the coolest thing he’s ever seen. It’s going to be hard for me to top that.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Human flight, the Internet and our baby monitor!

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Instagram, Twitter, Medium and LinkedIn.

CARE TO SHARE YOUR FAVORITE MUSIC TO WORK TO?
Where to start?! Music has always played an important part in my creative process and in the joy I derive from what we do. I have day-long playlists curated around what I’m trying to achieve during that time. Being able to influence how I feel when working on a brief is essential — it helps set me in the right mindset.

Sometimes, it might be film scores when working on visuals, jazz to design a workshop schedule or techno to dial up productivity when doing expenses.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Spend time with my kids. They remind me that there is a simple and unpretentious way to look at life.

30 Ninjas’ Julina Tatlock to keynote SMPTE 2018, will focus on emerging tech

30 Ninjas CEO Julina Tatlock, an award-winning writer-producer, virtual reality director and social TV specialist, will present the keynote address at the SMPTE 2018 conference, which takes place October 22-25 in downtown Los Angeles. Tatlock’s keynote will take place on October 23 at 9am, immediately following the SMPTE annual general membership meeting.

Tatlock specializes in producing and directing VR, creating social media and web-based narrative games for movies and broadcast, as well as collaborating with developers on integrating new tech intellectual property into interactive stories.

During her keynote, she will discuss the ways that content creation and entertainment production can leverage emerging technologies. Tatlock will also address topics such as how best to evaluate what might be the next popular entertainment technology and platform, as well as how to write, direct and build for technology and platforms that don’t exist yet.

Tatlock’s 30 Ninjas is an award-winning immersive-entertainment company she founded with director Doug Liman (The Bourne Identity, Mr. & Mrs. Smith, Edge of Tomorrow, American Made). 30 Ninjas creates original narratives and experiences in new technologies such as virtual reality, augmented reality, mixed reality and location-based entertainment for clients such as Warner Bros., USA Network, Universal Cable Productions and HarperCollins.

Tatlock also is the executive producer and director of episodes three and four of the six-part VR miniseries “Invisible,” with production partners Condé Nast Entertainment, Jaunt VR and Samsung.

Before founding 30 Ninjas, she spent eight years at Oxygen Media, where she was VP of programming strategy. In an earlier role with Martha Stewart Living Omnimedia, Tatlock wrote and produced more than 100 segments for NBC’s Martha Stewart Living morning show.

Registration is open for both SMPTE 2018 and for the SMPTE 2018 Symposium, an all-day session that will precede the technical conference and exhibition on Oct. 22. Pre-registration pricing is available through Oct. 13. Further details are available at smpte2018.org.

Lenovo intros 15-inch VR-ready ThinkPad P52

Lenovo’s new ThinkPad P52 is a 15-inch, VR-ready and ISV-certified mobile workstation featuring an Nvidia Quadro P3200 GPU. The all-new hexa-core Intel Xeon CPU doubles the memory capacity to 128GB and increases PCIe storage. Lenovo says the ThinkPad excels at animation and visual effects project storage, the creation of large models and datasets, and realtime playback.

“More and more, M&E artists have the need to create on-the-go,” reports Lenovo senior worldwide industry manager for M&E Rob Hoffmann. “Having desktop-like capabilities in a 15-inch mobile workstation allows artists to remain creative anytime, anywhere.”

The workstation targets traditional ISV workflows, as well as AR and VR content creation or deployment of mobile AI. Lenovo points to Virtalis, a VR and advanced visualization company, as an example of who might take advantage of the workstation.

“Our virtual reality solutions help clients better understand data and interact with it. Being able to take these solutions mobile with the ThinkPad P52 gives us expanded flexibility to bring the technology to life for clients in their unique environments,” says Steve Carpenter, head of solutions development for Virtalis. “The ThinkPad P52 powering our Virtalis Visionary Render software is perfect for engineering and design professionals looking for a portable solution to take their first steps into the endless possibilities of VR.”

The P52 also will feature a 4K UHD display with 400 nits of brightness, 100% Adobe color gamut coverage and 10-bit color depth. There are dual USB-C Thunderbolt ports supporting the display of 8K video, allowing users to take advantage of the ThinkPad Thunderbolt Workstation Dock.

The ThinkPad P52 will be available later this month.

VR at NAB 2018: A Parisian’s perspective

By Alexandre Regeffe

Even though my cab driver from the airport to my hotel offered these words of wisdom — “What happens in Vegas, stays in Vegas” — I’ve decided not to listen to him and instead share with you the things that impressed me in the VR world at NAB 2018.

Back in September of 2017, I shared with you my thoughts on the VR offerings at the IBC show in Amsterdam. In case you don’t remember my story, I’m a French guy who jumped into the VR stuff three years ago and started a cinematic VR production company called Neotopy with a friend. Three years is like a century in VR. Indeed, this medium is constantly evolving, both technically and financially.

So what has become of VR today? Lots of different things. VR is a big bag where people throw AR, MR, 360, LBE, 180 and 3D. And from all of that, XR (Extended Reality) was born, which means everything.

Insta360 Titan

But if this blurred concept leads to some misunderstanding, is it really good for consumers? Even those of us who are pros find it difficult to explain what exactly VR is right now.

While at NAB, I saw a presentation from Nick Bicanic during which he used the term “frameless media.” And, thank you, Nick, because I think that is exactly what’s in this big bag called VR… or XR. Today, we consume a lot of content through a frame, which is our TV, computer, smartphone or cinema screen. VR allows us to go beyond the frame, and this is a very important shift for cinematographers and content creators.

But enough concepts and ideas, let us start this journey on the NAB show floor! My first stop was the VR pavilion, also called the “immersive storytelling pavilion” this year.

My next stop was to see SGO Mistika. For over a year, the SGO team has been delivering incredible stitching software in Mistika VR. In my opinion, there is a “before” and an “after” this tool. Thanks to its optical flow capabilities, you can achieve a seamless stitch 99% of the time, even in very difficult shooting situations. The latest version of the software added features like stabilization, keyframe capabilities, more camera presets and easy integration with Kandao and Insta360 camera profiles. VR pros used Mistika’s booth as a sort of base camp, meeting the development team directly.
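
To make the optical-flow idea concrete, here is a minimal Python/OpenCV sketch of the underlying principle: warp one camera’s overlap region onto its neighbor along a dense flow field before blending, so parallax errors melt away at the seam. This is my own simplified illustration, not SGO’s implementation:

```python
# Illustrative optical-flow seam blending for 360 stitching (not Mistika's
# code): align the right camera's overlap to the left camera's geometry
# along a dense Farneback flow field, then cross-blend the two strips.
import cv2
import numpy as np

def flow_blend(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    g1 = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)
    # For each left pixel, the displacement to its match in the right image
    flow = cv2.calcOpticalFlowFarneback(g1, g2, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = g1.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # Resample the right strip so its features land where the left's are
    aligned = cv2.remap(right, xs + flow[..., 0], ys + flow[..., 1],
                        cv2.INTER_LINEAR)
    return cv2.addWeighted(left, 0.5, aligned, 0.5, 0)
```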

A few steps from Mistika was Insta360, with a large, yellow booth. This Chinese company is a success story thanks to the consumer product Insta360 One, a small 360 camera for the masses. But I was more interested in the Insta360 Pro, their 8K stereoscopic 3D360 flagship camera used by many content creators.

At the show, Insta360’s big announcement was Titan, a premium version of the Insta360 Pro offering better lenses and sensors. It will be available later this year. Oh, and there was the light field camera prototype, the company’s first step into the volumetric capture world.

Another interesting camera manufacturer at the show was HumanEyes Technologies, presenting its Vuze+. With this affordable 3D360 camera you can dive into stereoscopic 360 content and learn the basics of this technology. Side note: The Vuze+ was chosen by National Geographic to shoot some stunning sequences aboard the International Space Station.

Kandao Obsidian

My favorite VR camera company, Kandao, was at NAB showing new features for its Obsidian R and S cameras. One of the best is its 6DoF capability. With this technology, you can generate a depth map directly from the camera in Kandao Studio, the stitching software that comes free when you buy an Obsidian. With the combination of a 360 stitched image and a depth map, you can “walk” into your movie. It’s an awesome technique for better immersion. For me, this was by far the best innovation in VR technology presented on the show floor.
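
For the curious, the geometry behind that effect is simple: every pixel of an equirectangular frame defines a ray direction, and the depth map scales each ray into a 3D point that can be reprojected as the viewer moves. A minimal sketch of that mapping (my own, not Kandao Studio’s code):

```python
# Turn an equirectangular depth map into a cloud of 3D points; a renderer
# can then reproject the stitched image for a translated (6DoF) viewpoint.
import numpy as np

def equirect_depth_to_points(depth: np.ndarray) -> np.ndarray:
    """depth: (H, W) distance per pixel. Returns (H, W, 3) points."""
    h, w = depth.shape
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi   # -pi .. pi
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi   # pi/2 .. -pi/2
    lon, lat = np.meshgrid(lon, lat)
    dirs = np.stack([np.cos(lat) * np.sin(lon),          # unit ray per pixel
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    return dirs * depth[..., None]
```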

The live capabilities of Obsidian cameras have also been improved, with dedicated Kandao Live software that allows you to live stream 4K stereoscopic 360 with optical flow stitching on the fly! And, of course, do not forget their new Qoocam camera. With its three-lens-equipped little stick, you can do either VR 180 stereoscopic or 360 monoscopic, while using depth map technology to refocus or replace the background in post — all with a simple click. Thanks to all these innovations, Kandao is now a top player in the cinematic VR industry.

One Kandao competitor is ZCam. They were there with a couple of new products. The first was the ZCam V1, a 3D360 camera with a tiny form factor. It’s very interesting for shooting scenes where things are very close to the camera, as it keeps good stereoscopy even on nearby objects, which is a major issue with most VR cameras and rigs. The second was the small E2 – while it’s not really a VR camera, it can be used as an underwater rig, for example.

ZCam K1 Pro

The ZCam product range is really impressive and completely targeting professionals, from ZCam S1 to ZCam V1 Pro. Important note: take a look at their K1 Pro, a VR 180 camera, if you want to produce high-end content for the Google VR180 ecosystem.

Another VR camera at NAB was Samsung’s Round, offering stereoscopic capabilities. This relatively compact device comes with a proprietary software suite for stitching and viewing 360 shots. Thanks to its IP65 rating, you can use this camera outdoors in difficult weather conditions, like rain, dust or snow. It was great to see live streaming of 4K 3D360 operating on the show floor, using several Round cameras combined with powerful NextComputing hardware.

VR Post
Adobe Creative Cloud 2018 remains the must-have tool to achieve VR post production without losing your mind. Numerous 360-specific functionalities have been added during the last year, after Adobe bought the Mettle Skybox suite. The most impressive feature is that you can now stay in your 360 environment for editing. You just put on your Oculus Rift headset, manipulate your Premiere timeline with the Touch controllers and proceed to edit your shots. Think of it as a Minority Report-style editing interface! I am sure we can expect more amazing VR tools from Adobe this year.

Google’s Lightfield technology

Mettle was at the Dell booth showing their new Adobe CC 360 plugin, called Flux. After an impressive Mantra release last year, Flux is now available for VR artists, allowing them to do 3D volumetric fractals and to create entire futuristic worlds. It was awesome to see the results in a headset!

Distributing VR
So once you have produced your cinematic VR content, how can you distribute it? One option is to use the Liquid Cinema platform. They were at NAB with a major update and some new features, including seamless transitions between a “flat” video and a 360 video. As a content creator you can also manage your 360 movies in a very smart CMS linked to your app and instantly add language versions, thumbnails, geoblocking, etc. Another exciting thing is built-in 6DoF capability right in the editor with a compatible headset — allowing you to walk through your titles, graphics and more!

I can’t leave without mentioning Voysys for live-streaming VR; Kodak PixPro and its new cameras; Google’s next move into light field technology; Bonsai’s launch of a new version of the Excalibur rig; and many other great manufacturers, software editors and partners.

See you next time, Sin City.

Behind the Title: Start VR Producer Ela Topcuoglu

NAME: Ela Topcuoglu

COMPANY: Start VR (@Start_VR)

CAN YOU DESCRIBE YOUR COMPANY?
Start VR is a full-service production studio (with offices in Sydney, Australia and Marina Del Rey, California) specializing in immersive and interactive cinematic entertainment. The studio brings together expertise in entertainment and technology, combining feature-film-quality visuals with interactive content to create original and branded narrative experiences in VR.

WHAT’S YOUR JOB TITLE?
Development Executive and Producer

WHAT DOES THAT ENTAIL?
I am in charge of expanding Start VR’s business in North America. That entails developing strategic partnerships and increasing business development in the entertainment, film and technology sectors.

I am also responsible for finding partners for our original content slate as well as seeking existing IP that would fit perfectly in VR. I also develop relationships with brands and advertising agencies to create branded content. Beyond business development, I also help produce the projects that we move forward with.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The title comes with the responsibility of convincing people to invest in something that is constantly evolving, which is the biggest challenge. My job also requires me to be very creative in coming up with a language native to this new medium. I have to wear many hats to ensure that we create the best experiences out there.

WHAT’S YOUR FAVORITE PART OF THE JOB?
My favorite part of the job is that I get to wear lots of different hats. Being in the emerging field of VR, every day is different. I don’t have a traditional 9-to-5 office job and I am constantly moving and hustling to set up business meetings and stay updated on the latest industry trends.

Also, being in the ever-evolving technology field, I learn something new almost every day, which is essential to my professional growth.

WHAT’S YOUR LEAST FAVORITE?
Convincing people to invest in virtual reality and to see its incredible potential. That usually changes once they experience truly immersive VR, but regardless, selling the future is difficult.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
My favorite part of the day is the morning. I start my day with a much-needed shot of Nespresso, get caught up on emails, take a look at my schedule and take a quick breather before I jump right into the madness.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
If I wasn’t working in VR, I would be investing my time in learning more about artificial intelligence (AI) and using that to advance medicine/health and education.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I loved entertaining people from a very young age, and I was always looking for an outlet to do that, so the entertainment business was the perfect fit. There is nothing like watching someone’s reaction to a great piece of content. Virtual reality is the ultimate entertainment outlet and I knew that I wanted to create experiences that left people with the same awe reaction that I had the moment I experienced it.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I worked in the business and legal affairs department at Media Rights Capital and had the opportunity to work on amazing projects, including House of Cards, Baby Driver and Ozark.

Awake: First Contact

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
The project that I am most proud of to date is the project that I am currently producing at Start VR. It’s called Awake: First Contact. It was a project I read about and said, “I want to work on that.”

I am incredibly proud that I get to work on a virtual reality project that is pushing the boundaries of the medium both technically and creatively.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My phone, laptop and speakers.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Twitter, Facebook and LinkedIn

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Yes, especially if I’m working on a pitch deck. It really keeps me in the moment. I usually listen to my favorite DJ mixes on SoundCloud. It really depends on my vibe that day.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I have recently started surfing, so that is my outlet at the moment. I also meditate regularly. It’s also important for me to make sure that I am always learning something new and unrelated to my industry.

Red’s Hydrogen One: new 3D-enabled smartphone

In their always subtle way, Red has stated that “the future of personal communication, information gathering, holographic multi-view, 2D, 3D, AR/VR/MR and image capture just changed forever” with the introduction of Hydrogen One, a pocket-sized, glasses-free “holographic media machine.”

Hydrogen One is a standalone, full-featured, unlocked multi-band smartphone, operating on Android OS, that promises “look around depth in the palm of your hand” without the need for separate glasses or headsets. The device features a 5.7-inch professional hydrogen holographic display that switches between traditional 2D content, holographic multi-view content, 3D content and interactive games, and it supports both landscape and portrait modes. Red has also embedded a proprietary H30 algorithm in the OS system that will convert stereo sound into multi-dimensional audio.

The Hydrogen system incorporates a high-speed data bus to enable a comprehensive and expandable modular component system, including future attachments for shooting high-quality motion, still and holographic images. It will also integrate into the professional Red camera program, working together with Scarlet, Epic and Weapon as a user interface and monitor.

Future users are already talking about this “nifty smartphone with glasses-free 3D,” and one has gone so far as to describe the announcement as “the day 360-video became Betamax, and AR won the race.” Others are more tempered in their enthusiasm, viewing this as a really expensive smartphone with a holographic screen that may or may not kill 360 video. Time will tell.

Initially priced between $1,195 and $1,595, the Hydrogen One is targeted to ship in Q1 of 2018.

Lenovo’s ‘Transform’ event: IT subscriptions and AR

By Claudio Santos

Last week I had the opportunity to attend Lenovo’s “Transform” event, in which the company unveiled its newest releases as well as its plans for the near future. I must say they had quite the lineup ready.

The whole event was divided into two tracks: “Datacenters” and “PC and Smart Devices.” Each focused on its own products and markets, but a single idea permeated all the day’s announcements: what Lenovo calls the “Fourth Revolution,” the next step in integration between devices and the cloud. Their vision is that 5G mobile Internet will soon be available, allowing devices to seamlessly connect to the cloud on the go and, more importantly, always stay connected.

While there were many interesting announcements throughout the day, I will focus on two that seem more closely relatable to most post facilities.

The first is what Lenovo is calling “PC as a service.” They want to sell the bulk of the IT hardware and support needs for companies as subscription-based deals, and that would be awesome! Why? Well, it’s simply a fact of life now that post production happens almost exclusively with the aid of computer software (sorry, if you’re still one of the few cutting film by hand, this article won’t be that interesting for you).

Having to choose, buy and maintain computers for our daily work takes a lot of research and, most notably, time. Between software updates, managing different licenses, subscriptions and hunting down weird quirks of the system, a lot of time is taken away from more important tasks such as editing or client relationships. When you throw a server and a local network in the mix, it becomes a hefty job that takes a lot of maintenance.

That’s why bigger facilities employ IT specialists to deal with all that. But many post facilities aren’t big enough to employ a full-time IT person, nor are their needs complex enough to warrant the investment.

Lenovo sees this as an opportunity to simplify the role of the IT department by selling subscriptions that include the hardware, the software and all the necessary support (including a help desk) to keep the systems running without having to invest in a large IT department. More importantly, the subscription would be flexible: during periods in which you need more stations/support, you can increase the scope of the subscription, then shrink it again when demand drops, freeing you from absorbing the cost of machines/software that would just sit around unused.

I see one big problem in this vision: Lenovo plans to start the service with a minimum of 1,000 seats per deal. That is far, far more staff than most post facilities have, and at that point it would probably just be worth hiring a specialist who can also help you automate your workflow and develop customized tools for your projects. It is nonetheless an interesting approach, and I hope to see it trickle down to smaller clients as it solidifies as a feasible model.

AR
The other announcement that should interest post facilities is Lenovo’s interest in the AR market. As many of you might know, augmented reality is projected to be an even bigger market than its more popular cousin, virtual reality, largely due to its more professional application possibilities.

Lenovo has been investing in AR and has partnered up with Metavision to experiment and start working towards real work-environment offerings of the technology. Besides the hand gestures that are always emphasized in AR promo videos, one very simple use-case seems to be in Lenovo’s sights, and that’s one I hope to see being marketable very soon: workspace expansion. Instead of needing three or four different monitors to accommodate our ever-growing number of windows and displays while working, with AR we will be able to place windows anywhere around us, essentially giving us a giant spherical display. A very simple problem with a very simple solution, but one that I believe would increase the productivity of editors by a considerable amount.

We should definitely keep an eye on Lenovo as they embark on this new quest for high-efficiency solutions for businesses, because that’s exactly what the post production industry finds itself in need of right now.


Claudio Santos is a sound editor and spatial audio mixer at Silver Sound. Slightly too interested in technology and workflow hacks, he spends most of his waking hours tweaking, fiddling and tinkering away on his computer.

What was new at GTC 2017

By Mike McCarthy

I, once again, had the opportunity to attend Nvidia’s GPU Technology Conference (GTC) in San Jose last week. The event has become much more focused on AI supercomputing and deep learning as those industries mature, but there was also a concentration on VR for those of us from the visual world.

The big news was that Nvidia released the details of its next-generation GPU architecture, code-named Volta. The flagship chip will be the Tesla V100 with 5,120 CUDA cores and 15 teraflops of computing power. It is a huge 815mm² chip, created with a 12nm manufacturing process for better energy efficiency. Most of its unique architectural improvements are focused on AI and deep learning, with specialized execution units for Tensor calculations, which are foundational to those processes.
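
Those headline numbers hang together under the usual peak-FLOPS arithmetic, where each CUDA core retires one fused multiply-add (two floating-point ops) per clock. A quick sanity check in Python, with the boost clock being my illustrative assumption rather than a quoted spec:

```python
# Back-of-envelope check on the quoted 15-teraflop figure.
cuda_cores = 5120
ops_per_core_per_clock = 2   # one fused multiply-add = two FP ops
boost_clock_hz = 1.45e9      # assumed boost clock, for illustration only
tflops = cuda_cores * ops_per_core_per_clock * boost_clock_hz / 1e12
print(f"{tflops:.1f} TFLOPS")  # ~14.8, in line with the quoted 15
```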

Tesla V100

Similar to last year’s GP100, the new Volta chip will initially be available in Nvidia’s SXM2 form factor for dedicated GPU servers like their DGX-1, which uses the NVLink bus, now running at 300GB/s. The new GPUs will be a direct swap-in replacement for the current Pascal-based GP100 chips. There will also be a 150W version of the chip on a PCIe card similar to their existing Tesla lineup, but only requiring a single half-length slot.

Assuming that Nvidia puts similar processing cores into their next generation of graphics cards, we should be looking at a 33% increase in maximum performance at the top end. The intermediate stages are more difficult to predict, since that depends on how they choose to tier their cards. But the increased efficiency should allow more significant increases in performance for laptops, within existing thermal limitations.

Nvidia is continuing its pursuit of GPU-enabled autonomous cars with its Drive PX2 and Xavier systems for vehicles. The newest version will have a 512-core Volta GPU and a dedicated deep learning accelerator chip that they are going to open source for other devices. They are targeting larger vehicles now, specifically the trucking industry this year, with an AI-enabled semi-truck in their booth.

They also had a tractor showing off Blue River’s AI-enabled spraying rig, targeting individual plants for fertilizer or herbicide. It seems like farm equipment would be an optimal place to implement autonomous driving, allowing perfectly straight rows and smooth grades, all in a flat controlled environment with few pedestrians or other dynamic obstructions to be concerned about (think Interstellar). But I didn’t see any reference to them looking in that direction, even with a giant tractor in their AI booth.

On the software and application front, software company SAP showed an interesting implementation of deep learning that analyzes broadcast footage and other content looking to identify logos and branding, in order to provide quantifiable measurements of the effectiveness of various forms of brand advertising. I expect we will continue to see more machine learning implementations of video analysis, for things like automated captioning and descriptive video tracks, as AI becomes more mature.

Nvidia also released an “AI-enabled” version of Iray that uses image prediction to increase the speed of interactive ray tracing renders. I am hopeful that similar technology could be used to effectively increase the resolution of video footage as well. Basically, a computer sees a low-res image of a car and says, “I know what that car should look like,” and fills in the rest of the visual data. The possibilities are pretty incredible, especially in regard to VFX.

Iray AI

On the VR front, Nvidia announced a new SDK that allows live GPU-accelerated image stitching for stereoscopic VR processing and streaming. It scales from HD to 5K output, splitting the workload across one to four GPUs. The stereoscopic version is doing much more than basic stitching, processing for depth information and using that to filter the output to remove visual anomalies and improve the perception of depth. The output was much cleaner than any other live solution I have seen.

I also got to try my first VR experience recorded with a light field camera. This not only gives the user a 360 stereo look-around capability, but also the ability to move their head to shift their perspective within a limited range (based on the size of the recording array). The project they were using to demo the technology didn’t highlight the amazing results until the very end of the piece, but when it did, it was the most impressive VR implementation I have had the opportunity to experience yet.

Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been working on new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

SMPTE’s ETCA conference takes on OTT, cloud, AR/VR, more

SMPTE has shared program details for its Entertainment Technology in the Connected Age (ETCA) conference, taking place in Mountain View, California, May 8-9 at the Microsoft Silicon Valley Campus.

Called “Redefining the Entertainment Experience,” this year’s conference will explore emerging technologies’ impact on current and future delivery of compelling connected entertainment experiences.

Bob DeHaven, GM of worldwide communications & media at Microsoft Azure, will present the first conference keynote, titled “At the Edge: The Future of Entertainment Carriage.” The growth of on-demand programming and mobile applications, the proliferation of the cloud and the advent of the “Internet of things” demand that video content be available closer to the end user to improve both availability and the quality of the experience.

DeHaven will discuss the relationships taking shape to embrace these new requirements and will explore the roles network providers, content delivery networks (CDNs), network optimization technologies and cloud platforms will play in achieving the industry’s evolving needs.

Hanno Basse, chief technology officer at Twentieth Century Fox Film, will present “Next-Generation Entertainment: A View From the Fox.” Fox distributes content via multiple outlets, ranging from cinema to Blu-ray, over-the-top (OTT) and even VR. Basse will share his views on the technical challenges of enabling next-generation entertainment in a connected age and how Fox plans to address them.

The first conference session, “Rethinking Content Creation and Monetization in a Connected Age,” will focus on multiplatform production and monetization using the latest creation, analytics and search technologies. The session “Is There a JND in It for Me?” will take a second angle, exploring what new content creation, delivery and display technology innovations will mean for the viewer. Panelists will discuss the parameters required to achieve original artistic intent while maintaining a just noticeable difference (JND) quality level for the consumer viewing experience.

“Video Compression: What’s Beyond HEVC?” will explore emerging techniques and innovations, outlining evolving video coding techniques and their ability to handle new types of source material, including HDR and wide color gamut content, as well as video for VR/AR.

Moving from content creation and compression into delivery, “Linear Playout: From Cable to the Cloud” will discuss the current distribution landscape, looking at the consumer apps, smart TV apps, and content aggregators/curators that are enabling cord-cutters to watch linear television, as well as the new business models and opportunities shaping services and the consumer experience. The session will explore tools for digital ad insertion, audience measurement and monetization while considering the future of cloud workflows.

“Would the Internet Crash If Everyone Watched the Super Bowl Online?” will shift the discussion to live streaming, examining the technologies that enable today’s services as well as how technologies such as transparent caching, multicast streaming, peer-assisted delivery and User Datagram Protocol (UDP) streaming might enable live streaming at a traditional broadcast scale and beyond.

“Adaptive Streaming Technology: Entertainment Plumbing for the Web” will focus specifically on innovative technologies and standards that will enable the industry to overcome the Internet’s inconsistent bitrate quality.
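
The core mechanism those standards share is easy to state: encode each title as a ladder of bitrates and let the player pick, segment by segment, the highest rung the measured throughput can sustain. A toy illustration of the selection step (real HLS/DASH players add buffer models, smoothing and hysteresis):

```python
# Toy adaptive-bitrate rung selection; the ladder values are illustrative.
LADDER_KBPS = [400, 1200, 2500, 5000, 8000]

def pick_rung(measured_kbps: float, safety: float = 0.8) -> int:
    """Choose the highest bitrate that fits under a safety margin."""
    usable = measured_kbps * safety
    fitting = [r for r in LADDER_KBPS if r <= usable]
    return max(fitting) if fitting else LADDER_KBPS[0]

print(pick_rung(4300))  # -> 2500
```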

“IP and Thee: What’s New in 2017?” will delve into the upgrade to Internet Protocol infrastructure and the impact of next-generation systems such as the ATSC 3.0 digital television broadcast system, the Digital Video Broadcast (DVB) suite of internationally accepted open standards for digital television, and fifth-generation mobile networks (5G wireless) on Internet-delivered entertainment services.

Moving into the cloud, “Weather Forecast: Clouds and Partly Scattered Fog in Your Future” examines how local networking topologies, dubbed “the fog,” are complementing the cloud by enabling content delivery and streaming via less traditional — and often wireless — communication channels such as 5G.

“Giving Voice to Video Discovery” will highlight the ways in which voice is being added to pay television and OTT platforms to simplify searches.

In a session that explores new consumption models, “VR From Fiction to Fact” will examine current experimentation with VR technology, emerging use cases across mobile devices and high-end headsets, and strategies for addressing the technical demands of this immersive format.

You can register for the conference here.

HPA Tech Retreat takes on VR/AR at Tech Retreat Extra

The long-standing HPA Tech Retreat is always a popular destination for tech-focused post pros, and while they have touched on virtual reality and augmented reality in the past, this year they are dedicating an entire day to the topic — February 20, the day before the official Retreat begins. TR-X (Tech Retreat Extra) will feature VR experts and storytellers sharing their knowledge and experiences. The traditional HPA Tech Retreat runs from February 21-24 in Indian Wells, California.

TR-X VR/AR is co-chaired by Lucas Wilson (Founder/Executive Producer at SuperSphereVR) and Marcie Jastrow (Senior VP, Immersive Media & Head of Technicolor Experience Center), who will lead a discussion focused on the changing VR/AR landscape in the context of rapidly growing integration into entertainment and applications.

Marcie Jastrow

Experts and creative panelists will tackle questions such as: What do you need to understand to enable VR in your environment? How do you adapt? What are the workflows? Storytellers, technologists and industry leaders will provide an overview of the technology and discuss how to harness emerging technologies in the service of the artistic vision. A series of diverse case studies and creative explorations — from NASA to the NFL — will examine how to engage the audience.

The TR-X program, along with the complete HPA Tech Retreat program, is available here. Additional sessions and speakers will be announced.

TR-X VR/AR Speakers and Panel Overview
Monday, February 20

Opening and Introductions
Seth Hallen, HPA President

Technical Introduction: 360/VR/AR/MR
Lucas Wilson

Panel Discussion: The VR/AR Market
Marcie Jastrow
David Moretti, Director of Corporate Development, Jaunt
Catherine Day, Head of VR/AR, Missing Pieces
Phil Lelyveld, VR/AR Initiative Program Lead, Entertainment Technology Center at USC

Acquisition Technology
Koji Gardiner, VP, Hardware, Jaunt

Live 360 Production Case Study
Andrew McGovern, VP of VR/AR Productions, Digital Domain

Live 360 Production Case Study
Michael Mansouri, Founder, Radiant Images

Interactive VR Production Case Study
Tim Dillon, Head of VR & Immersive Content, MPC Advertising USA

Immersive Audio Production Case Study
Kyle Schember, CEO, Subtractive

Panel Discussion: The Future
Alan Lasky, Director of Studio Product Development, 8i
Ben Grossmann, CEO, Magnopus
Scott Squires, CTO, Creative Director, Pixvana
Jen Dennis, EP of Branded Content, RSA
Moderator: Lucas Wilson

Panel Discussion: New Voices: Young Professionals in VR
Anne Jimkes, Sound Designer and Composer, Ecco VR
Jyotsna Kadimi, USC Graduate
Sho Schrock, Chapman University Student
Brian Handy, USC Student

TR-X also includes an ATSC 3.0 seminar, focusing on the next-generation television broadcast standard, which is nearing completion and offers a wide range of new content delivery options to the TV production community. This session will explore the expanding possibilities that the new standard provides in video, audio, interactivity and more. Presenters and panelists will also discuss the complex next-gen television distribution ecosystem that content must traverse, and the technologies that will bring the content to life in consumers’ homes.

Early registration is highly recommended for TR-X and the HPA Tech Retreat, which is a perennially sold-out event. Attendees can sign up for TR-X VR/AR, TR-X ATSC or the HPA Tech Retreat.

Main Image: Lucas Wilson.

Missing Pieces hires head of VR/AR/360, adds VR director

Production company Missing Pieces has been investing in VR recently by way of additional talent. Catherine Day has joined the studio as head of VR/AR/360. She was most recently at Jaunt VR where she was executive producer/head of unscripted. VR director Sam Smith has also joined the company as part of its VR directing team.

This bi-coastal studio has a nice body of VR work under its belt. They are responsible for Dos Equis’ VR Masquerade and for bringing a president into VR with Bill Clinton’s Inside Impact series. They also created Follow My Lead: The Story of the 2016 NBA Finals, a VR sports documentary for the NBA and Oculus.

In her new role, Day (pictured) will drive VR/AR/360 efforts from the studio’s Los Angeles office and oversee several original VR series that will be announced jointly with WME and partners in the coming months. In her previous role at Jaunt VR, Day led projects for ABC News, RYOT/Huffington Post, Camp 4 Collective, XRez, Tastemade, Outside TV, Civic Nation and Conservation International.

Smith is a creative director and VR director who previously worked with MediaMonks on projects for Expedia, Delta, Converse and YT. He also has an extensive background in commercial visual effects and a deep understanding of post and VFX, which is helpful when developing VR/360 projects. He will also act as technical advisor.

Margarita Mix’s Pat Stoltz gives us the low-down on VR audio

By Randi Altman

Margarita Mix, one of Los Angeles’ long-standing audio and video post facilities, has taken on virtual reality with the addition of 360-degree sound rooms at its facilities in Santa Monica and Hollywood. The FotoKem company now offers sound design, mix and final print masters for VR video, as well as remixing of current spots for a full-surround environment.

Workflows for VR are new and developing every day — there is no real standard. So creatives are figuring it out as they go, but they can also learn from those who were early to the party, like Margarita Mix. The studio recently worked on a full-length VR concert film with the band Eagles of Death Metal and director/producer Art Haynie of Big Monkey Films. The band’s 2015 tour came to an abrupt end when its show at Paris’ Bataclan concert hall was targeted in last year’s terrorist attacks. The film is expected to be available online and via apps shortly.

Eagles of Death Metal film.

We reached out to Margarita Mix’s senior technical engineer, Pat Stoltz, to talk about his experience and see how the studio is tackling this growing segment of the industry.

Why was now the right time to open VR-dedicated suites?
VR/AR is an exciting emerging market and online streaming is a perfect delivery format, but VR pre-production, production and post are in their infancy. We are bringing sound design, editorial and mixing expertise to the next level, based on our long history of industry-recognized work, and elevating audio for VR from a gaming platform to one suitable for the cinematic and advertising realms, where VR content production is exploding.

What is the biggest difference between traditional audio post and audio post for VR?
Traditional cinematic audio has always played a very important part in support of the visuals. Sound effects, Foley, background ambiance, dialog and music clarity to set the mood have aided in pulling the viewer into the story. With VR and AR you are not just pulled into the story, you are in the story! Having the ability to accurately recreate the audio of the filmed environment through higher order ambisonics, or object-based mixing, is crucial. Audio does not only play an important part in support of the visuals, but is now a director’s tool to help draw the viewer’s gaze to what he or she wants the audience to experience. Audio for VR is a critical component of storytelling that needs to be considered early in the production process.
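
To give a flavor of the ambisonic approach Stoltz mentions: in classic first-order ambisonics (B-format), a mono source $S$ placed at azimuth $\theta$ and elevation $\phi$ is encoded into four channels (higher orders add more spherical-harmonic channels for sharper localization):

$$W = \frac{S}{\sqrt{2}}, \qquad X = S\cos\theta\cos\phi, \qquad Y = S\sin\theta\cos\phi, \qquad Z = S\sin\phi$$

Because the whole sound field lives in these channels, it can be rotated in realtime as the viewer turns their head, which is what makes the format a natural fit for VR.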

What question do you get asked the most by clients in terms of sound for VR?
Surprisingly, none! VR/AR is so new that directors and producers are just figuring things out as they go. On a traditional production set, you have audio mixers and boom operators capturing sound. On a VR/AR set, there is no hiding: no boom operator or audio mixer can be visible while capturing high-quality audio of the performance.

Some productions have relied on the onboard camera microphones. Unfortunately, in most cases, this audio turns out to be completely unusable. By the time the client gets to audio post, there is a realization that hidden wireless mics on all the actors would have yielded a better result. In VR especially, we recommend starting the sound consultation in pre-production, so that we can offer advice and guide decisions toward the best-quality product.

What question should clients ask before embarking on VR?
They should ask what they want the viewer to get out of the experience. In VR, no two people will walk away with the same viewing experience, so we recommend staying focused on the major points they would like the viewer to take away. They should then expand that to answer: What do I have to do in VR to drive that point home, not only mentally, but by drawing the viewer's gaze for visual support? Based on the genre of the project, consideration should be given to "physically" pulling the audience in the direction that tells the story best, whether through visual stepping stones, narration, audio pre-cues, etc.

What tools are you using on VR projects?
Because this is a nascent field, new tools are becoming available by the day, and we assess and use the best option for achieving the highest quality. To properly address this question, we ask: Where is your project going to be viewed? If the content is going to be distributed via a general Web streaming site, then it needs to be delivered in the audio format that platform supports.

There are numerous companies writing quite good plug-ins to deliver these formats. If you will be delivering to a site that supports Dolby VR (an object-based, proprietary format), such as Jaunt, then you will need to generate the proper audio file for that platform. Facebook (higher-order ambisonics) requires yet another format. We are currently working in all of these formats, as well as working closely with leaders in VR sound to create and test new workflows and guide developments in this new frontier.

What’s the one thing you think everyone should know about working and viewing VR?
As we go through life, we each have our own experiences, or what we choose to experience; our frame of reference directs our focus to the things that interest us most. When individuals put on VR goggles, they become the director. The wonderful thing about VR is that you can now take them anywhere they want to go… both in this world and out of it. Directors and producers should think about how much can be packed into a story to draw people into the endless ways they perceive their world.

Ronen Tanchum brought on to run The Artery’s new AR/VR division

New York City’s The Artery has named Ronen Tanchum head of its newly launched virtual reality/augmented reality division. He will serve as creative director/technical director.

Tanchum has a rich VFX background, having produced complex effects setups and overseen digital tools development for feature films including Deadpool, Transformers, The Amazing Spider-Man, Happy Feet 2, Teenage Mutant Ninja Turtles and The Wolverine. He is also the creator of the original VR film When We Land: Young Yosef. His work on The Future of Music, a 360-degree virtual experience from director Greg Barth and Phenomena Labs that immerses the viewer in a surrealist musical space, won the D&AD Silver Award in the "Best Branded Content" category in 2016.

“VR today stands at just the tip of the iceberg,” says Tanchum. “Before VR came along, we were just observers and controlled our worlds through a mouse and a keyboard. Through the VR medium, humans become active participants in the virtual world — we get to step into our own imaginations with a direct link to our brains for the first time, experiencing the first impressions of a virtual world. As creators, VR offers us a very powerful tool by which to present a unique new experience.”

Tanchum says the first thing he asks a potential new VR client is, "Why VR? What is the role of VR in your story?" "Coming from our long experience in the CG world, working on highly demanding creative visual projects, we at The Artery have evolved our collective knowledge and developed a strong pipeline for this new VR platform," he explains, adding that The Artery's new division is currently gearing up for a big VR project for a major brand. "We are using it to its fullest to tell stories. We inform our clients that VR shouldn't be created just because it's 'cool.' The new VR platform should play an integral part in the storyline itself: a well-crafted VR experience should embellish and complement the story."

AES Conference focuses on immersive audio for VR/AR

By Mel Lambert

The AES Convention, held at the Los Angeles Convention Center in early October, attracted a broad cross section of production and post professionals looking to discuss the latest technologies and creative offerings. The convention had approximately 13,000 registered attendees and more than 250 brands showing wares in the exhibit halls and demo rooms.

Convention Committee co-chairs Valerie Tyler and Michael MacDonald, along with their team, created the comprehensive schedule of workshops, panels and special events for this year’s show. “The Los Angeles Convention Center’s West Hall was a great new location for the AES show,” said MacDonald. “We also co-located the AVAR conference, and that brought 3D audio for gaming and virtual reality into the mainstream of the AES.”

“VR seems to be the next big thing,” added AES executive director Bob Moses, “[with] the top developers at our event, mapping out the future.”

The two-day, co-located Audio for Virtual and Augmented Reality Conference was expected to attract about 290 attendees, but with aggressive marketing and outreach to the VR and AR communities, pre-registration closed at just over 400.

Aimed squarely at the fast-growing field of virtual/augmented reality audio, this conference focused on the creative process, applications workflow and product development. “Film director George Lucas once stated that sound represents 50 percent of the motion picture experience,” said conference co-chair Andres Mayo. “This conference demonstrates that convincing VR and AR productions require audio that follows the motions of the subject and produces a realistic immersive experience.”

Spatial sound that follows head orientation, powered by dedicated DSP, game engines or smartphones, opens up exciting opportunities for VR and AR producers. Oculus Rift, HTC Vive, PlayStation VR and other systems are attracting added consumer interest for the coming holiday season. Many immersive-audio innovators, including DTS and Dolby, are offering variants of their cinema systems targeted at this booming consumer marketplace via binaural headphone playback.
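
As a rough illustration of what that head tracking involves, the sketch below counter-rotates a first-order B-format sound field by the listener's yaw before binaural decode. Shipping systems apply full 3D rotations and HRTF rendering, so treat this as a sketch of the principle only.

import numpy as np

# Counter-rotate a first-order B-format field by the head's yaw
# (radians) so sources stay fixed in the world as the listener turns.
def rotate_yaw(w, x, y, z, yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    x_rot = c * x + s * y
    y_rot = -s * x + c * y
    return w, x_rot, y_rot, z   # W and Z are unaffected by yaw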

Sennheiser's remarkable new Ambeo VR microphone (pictured left) can capture 3D sound that is then post produced to deliver different spatial perspectives, a perfect adjunct for AR/VR offerings. At the high end, Nokia unveiled its Ozo VR camera, equipped with eight camera sensors and eight microphones, as an alternative to a DIY assembly of GoPro cameras, for example.

Two fascinating keynotes bookended the AVAR Conference. The opening keynote, "The Journey into Virtual and Augmented Reality," presented by Philip Lelyveld, VR/AR initiative program manager at the USC Entertainment Technology Center in Los Angeles, defined how virtual, augmented and mixed reality will impact entertainment, learning and social interaction. "Virtual, Augmented and Mixed Reality have the potential of delivering interactive experiences that take us to places of emotional resonance, give us agency to form our own experiential memories, and become part of the everyday lives we will live in the future," he explained.

“Just as TV programming progressed from live broadcasts of staged performances to today’s very complex language of multithread long-form content,” Lelyveld stressed, “so such media will progress from the current early days of projecting existing media language with a few tweaks to a headset experience into a new VR/AR/MR-specific language that both the creatives and the audience understand.”

In his closing keynote, "Future Nostalgia, Here and Now: Let's Look Back on Today from 20 Years Hence," George Sanger, director of sonic arts at Magic Leap, attempted to predict where VR/AR/MR will be in two decades. "Two decades of progress can change how we live and think in ways that boggle the mind," he acknowledged. "Twenty years ago, the PC had rudimentary sound cards; now the entire 'multitrack recording studio' lives on our computers. By 2036, we will be wearing lightweight portable devices all day. Our media experience will seamlessly merge the digital and physical worlds; how we listen to music will change dramatically. We live in the Revolution of Possibilities."

According to conference co-chair Linda Gedemer, “It has been speculated by Wall Street [pundits] that VR/AR will be as game changing as the advent of the PC, so we’re in for an incredible journey!”

Mel Lambert, who also gets photo credit on pictures from the show, is principal of Content Creators, an LA-based copywriting and editorial service. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

AR/VR audio conference taking place with AES show in fall


The AES is tackling the augmented reality and virtual reality creative process, applications workflow and product development for the first time with a dedicated conference that will take place on 9/30-10/1 during the 141st AES Convention at the LA Convention Center’s West Hall.

The two-day program of technical papers, workshops, tutorials and a manufacturers' expo will highlight the creative and technical challenges of providing immersive spatial audio to accompany virtual reality and augmented reality media.

The conference will attract content developers, researchers, manufacturers, consultants and students, in addition to audio engineers seeking to expand their knowledge about sound production for virtual and augmented reality. The companion expo will feature displays from leading-edge manufacturers and service providers looking to secure industry metrics for this emerging field.

“Film director George Lucas once stated that sound represents 50 percent of the motion picture experience,” shares conference co-chair Andres Mayo. “This conference will demonstrate that VR and AR productions, using a variety of playback devices, require audio that follows the motions of the subject, and produces a realistic immersive experience. Our program will spotlight the work of leading proponents in this exciting field of endeavor, and how realistic spatial audio can be produced from existing game console and DSP engines.”

Proposed topics include object-based audio mixing for VR/AR, immersive audio in VR/AR broadcast, live VR audio production, developing audio standards for VR/AR, cross platform audio considerations in VR and streaming immersive audio content.

Costs range from $195 for a one-day pass ($295 for a two-day pass) for AES members and $125 for accredited students, to $280/$435 for non-members. Early-bird discounts are also available.

Conference registrants can also attend the 141st AES Convention’s companion exhibition, select educational sessions and special events free of charge with an exhibits-plus badge.

Talking VR content with Phillip Moses of studio Rascali

Phillip Moses, head of VR content developer Rascali, has been working in visual effects for over 25 years. His resume boasts some big-name films, including Alice in Wonderland, Speed Racer and Spider-Man 3, just to name a few. Seven years ago he launched a small boutique visual effects studio, called The Resistance VFX, with VFX supervisor Jeff Goldman.

Two years ago, after getting a demo of an Oculus pre-release Dev Kit 2, Moses realized that “we were poised on the edge of not just a technological breakthrough, but what will ultimately be a new platform for consuming content. To me, this was a shift almost as big as the smartphone, and an exciting opportunity for content creators to begin creating in a whole new ecosystem.”

Phillip Moses

Shortly after that, his friends James Chung and Taehoon Oh launched Reload Studios, with the vision of creating the first independently-developed first-person shooter game, designed from the ground up for VR. “As one of the first companies formed around the premise of VR, they attracted quite a bit of interest in the non-gaming sector as well,” he explains. “Last year, they asked me to come aboard and direct their non-gaming division, Rascali. I saw this as a huge opportunity to do what I love best: explore, create and innovate.”

Rascali has been busy. They recently debuted trailers for their first episodic VR projects, Raven and The Storybox Project, on YouTube, Facebook/Oculus Video, Jaunt, Littlstar, Vrideo and Samsung MilkVR. Let’s find out more…

You recently directed two VR trailers. How is directing for VR different than directing for traditional platforms?
Directing for VR is a tricky beast, demanding a level of technical knowledge of the whole process that directors would not normally need. To be fair, today's directors are a very savvy bunch, and most have a solid working knowledge of how visual effects are used in the process. However, the way I have chosen to shoot the series requires a pretty solid understanding of not just what can be done, but how to actually do it. Being able to previsualize the process and, ultimately, the end result in your head first is critical to communicating that vision down the line.

Also, from a script and performance perspective, I think it's important to start with the essential question of "Why VR?" Once you believe you have a compelling answer, you need to start thinking about how to use VR in your story. Will you require interaction and participation from the viewer? Will you involve the viewer in any way? Or will you simply allow VR to serve as an additional element of presence and immersion for the viewer?

While you gain many things in VR, you also have to go into the process with a full knowledge of what you ultimately lose. The power of lenses, for example, to capture nuance and to frame an image to evoke an emotional response, is all but lost. You find yourself going back to exploring what works best in a real-world framing — almost like you are directing a play in an intimate theater.

What is the biggest challenge in the post workflow for VR?
Rendering! Everything we are producing for Raven is at 4K left eye, 4K right eye and 60fps. The rendering alone guarantees that the process will take longer than you hoped. It also guarantees that you will need more data storage than you ever thought necessary.
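
Some back-of-the-envelope math shows why. The exact raster and bit depth aren't specified here, so assume, for illustration, 4096x2048 per eye at 8-bit RGB:

# Rough uncompressed data-rate math for stereo "4K" VR at 60fps.
# Resolution and bit depth are assumptions for illustration only.
width, height = 4096, 2048   # assumed per-eye resolution
bytes_per_pixel = 3          # assumed 8-bit RGB
fps, eyes = 60, 2

frame_bytes = width * height * bytes_per_pixel
per_second = frame_bytes * fps * eyes
print(f"per frame:  {frame_bytes / 1e6:.1f} MB")      # ~25.2 MB
print(f"per second: {per_second / 1e9:.2f} GB/s")     # ~3.02 GB/s
print(f"per minute: {per_second * 60 / 1e9:.0f} GB")  # ~181 GB

Even with heavy compression, moving that much imagery between render farm, storage and headset review explains the pain.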

But other than rendering, I find that the editorial process is also more challenging. With VR, those shots that you thought you were holding onto way too long are actually still too short, and it takes an elaborate process to conform everything for review in a headset between revisions. In many ways, it's similar to the old process of making your edit decisions, then walking the print into the screening room. You forget how tedious that can be.
By the way, I'm looking forward to integrating some realtime 360 review into the editorial process. Make it happen, Adobe/Avid!

These trailers are meant to generate interest from production partners to green light these as full episodic series. What is the intended length of each episode, and what’s the projected length of time from concept to completion for each episode of the all-CG Storybox, and live-action Raven?
Each one of these projects is designed for completely different audiences, so the answer is a bit different for each one. For Storybox, we are looking to keep each episode under five minutes, with the intention that it is a fairly easy-to-consume piece of content that is accessible to a broad spectrum of ages. We really hope to make the experiences fun, playful and surprising for the viewer, and to create a context for telling these stories that fuels the imagination of kids.

For Storybox, I believe that we can start delivering finished episodes before the end of the third quarter — with a full season representing 12 to 15 episodes. Raven, on the other hand, is a much more complex undertaking. While the VR market is being developed, we are betting on the core VR consumers to really want stories and experiences that range closer to 12 to 15 minutes in duration. We feel this is enough time to tell more complex stories, but still make each episode feel like a fantastic experience that they could not experience anywhere else. If green-lit tomorrow, I believe we would be looking at a four-month production schedule for the pilot episode.

Rascali is a division of Reload Studios, which is developing VR games. Is there a technology transfer of workflows and pipelines and shared best practices across production for entertainment content and games within the company?
Absolutely! While VR is a new technology, there is such a rich heritage of knowledge present at Reload Studios. For example, one question that VR directors are asking themselves is: “How can I direct my audience’s attention to action in ways that are organic and natural?” While this is a new question for film directors — who typically rely on camera to do this work for them — this is a question that the gaming community has been answering for years. Having some of the top designers in the game industry at our disposal is an invaluable asset.

That being said, Reload is much different than most independent game companies. One of their first hires was senior Disney animator Nik Ranieri. Our producing team is composed of top animation producers from Marvel and DC. We have a deep bench of people who give the whole company a very comprehensive knowledge of how content of all types is created.

What was the equipment set-up for the Raven VR shoot? Which camera was used? What tools were used in the post pipeline?
Much of the creative IP for Raven is still in development, including designs, characters, etc. For this reason, we elected to construct a teaser that highlighted the immersive VR vistas you can expect in the world we are creating. This required us to lean very heavily on the visual effects/CG production process. The VFX pipeline included Autodesk 3ds Max, rendering in V-Ray, with some assistance from Nuke and even Softimage XSI. The entire project was edited in Adobe Premiere.

Our one live-action element was shot with a single Red camera and then projected onto geometry for accurate stereo integration.

Where do you think the prevailing future of VR content is? Narrative, training, therapy, gaming, etc.?
I think your question represents the future of VR. Games, for sure, are going to be leading the charge, as this demographic is the only one on a large scale that will be purchasing the devices required to build a viable market. But much more than games, I’m excited to see growth in all of the areas you listed above, including, most significantly, education. Education could be a huge winner in the growing VR/AR ecosystem.

The reason I elected to join Rascali is to help pave the way for solutions in markets that mostly don't yet exist. It's exciting to be part of a new industry that has the power to improve and benefit so many aspects of the global community.

NAB 2016: VR/AR/MR and light field technology impressed

By Greg Ciaccio

The NAB 2016 schedule included its usual share of evolutionary developments, which are truly exciting (HDR, cloud hosting/rendering, etc.). One, however, was a game changer with reach far beyond media and entertainment.

This year's NAB floor plan featured a Virtual Reality Pavilion in the North Hall. In addition, the ETC (USC's Entertainment Technology Center) held a Virtual Reality Summit that featured many great panel discussions and opened quite a few minds. At least that's what I gathered from the standing-room-only crowds that filled the suite. The ETC's Ken Williams and Erik Weaver, among others, should be credited for delivering quite a program. While VR itself is not a new development, the availability of relatively inexpensive viewers (with Google Cardboard the most accessible) will put VR in the hands of practically everyone.

Programs included discussions on where VR/AR (Augmented Reality) and now MR (Mixed Reality) are heading, business cases and, not to be forgotten, audio. Keep in mind that with headset VR experiences, multi-channel directional sound must be perceivable with just our two ears.

The panels included experts in the field from Dolby, DTS, Nokia, NextVR, Fox and CNN. In fact, Juan Santillian from Vantage.tv mentioned that Coachella is streaming live in VR. Concerts and other live events have a fixed audience size, and many fans can't attend because of cost or sold-out venues. VR can allow a much more intimate and immersive experience than being almost anywhere but onstage.

One example, from Fox Sports’ Michael Davies, involved two friends in different cities virtually attending a football game in a third city. They sat next to each other and chatted during the game, with their audio correctly mapped to their seats. There are no limits to applications for VR/AR/MR, and, by all accounts, once you experience it, there is no doubt that this tech is here to stay.

I've heard many times this year that mobile will be the monetary driver for wide adoption of VR. Halsey Minor of Voxelus estimates that 85 percent of VR usage will be via a mobile device. Given that far more photos and videos are shot on our phones than on dedicated cameras, this is not surprising. Some of the latest crop of mobile phones are not only fast, with high-dynamic-range, wide-color-gamut displays; they also feature high-end audio processing from Dolby and others. Plus, our reliance on our phones ensures that we'll never forget to bring them with us.

Light Field Imaging
On both Sunday and Tuesday of NAB 2016, programs were devoted to light field imaging. I was already familiar with this truly revolutionary tech, having learned about Lytro, Inc. a few years ago from Internet ads for an early consumer camera. I was intrigued by the idea of controlling focus after shooting. I visited www.lytro.com and was impressed, but the resolution was low, so, for me, it was mainly a proof of concept. Fast-forward three years, and Lytro now has a cinema camera!

Jon Karafin (pictured right), Lytro’s head of Light Field Imaging, not only unveiled the camera onstage, but debuted their short Life, produced in association with The Virtual Reality Company (VRC). Life takes us through a man’s life and is told with no dialog, letting us take in the moving images without distraction. Jon then took us through all the picture aspects using Nuke plug-ins, and minds started blowing. The short is directed by Academy Award-winner Robert Stromberg, and shot by veteran cinematographer David Stump, who is chief imaging scientist at VRC.

Many of us are familiar with camera raw capture and know that ISO, color temperature and other picture aspects can be changed post-shooting. This has proven to be very valuable. However, thanks to light field technology, things like focus, f-stop, shutter angle and many other parameters can now be changed as well; think of it as an X-ray compared to an MRI. In the interest of keeping a complicated technology relatively simple: sensors in the camera capture light fields not only in X and Y space, but in two more "angular" directions, forming what Lytro calls 4D space. The result is accurate depth mapping, which opens up many options for filmmakers.
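
For the technically curious, the classic trick this 4D capture enables, refocusing after the fact, reduces to a shift-and-sum over the angular samples. Here is a minimal sketch assuming a grayscale light field stored as a numpy array L[u, v, y, x]; this is the textbook synthetic-aperture approach, not Lytro's actual pipeline.

import numpy as np

def refocus(L, alpha):
    """Synthetic refocus over a 4D light field L[u, v, y, x]:
    shift each angular view in proportion to its offset from the
    center view, then average. alpha selects the focal plane."""
    U, V, H, W = L.shape
    out = np.zeros((H, W), dtype=np.float64)
    cu, cv = (U - 1) / 2, (V - 1) / 2
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(L[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

Sweeping alpha moves the focal plane, and averaging fewer angular samples simulates a smaller aperture, which is how f-stop becomes a post decision.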


Lytro Cinema Camera

For those who may think that this opens up too many options in post, all parameters can be locked so only those who are granted access can make edits. Some of the parameters that can be changed in post include: Focus, F-Stop, Depth of Field, Shutter Speed, Camera Position, Shutter Angle, Shutter Blade Count, Aperture Aspect Ratio and Fine Control of Depth (for mattes/comps).

Yes, this camera generates a lot of data. The good news is that you can make changes anywhere with an Internet connection, thanks to proxy mode in Nuke and processing handled in the cloud. Jon demoed this, and images were quickly processed using Google's cloud.

The camera itself is very large, and Lytro knows it will need to reduce the size (from around seven feet long) to a more maneuverable form factor. However, this is a huge step in proving that a light field cinema camera and a powerful, manageable workflow are not only possible, but will no doubt prove valuable to filmmakers who want the power and control offered by light field cinematography.

Greg Ciaccio is a technologist focused on finding new technology and workflow solutions for motion picture and television clients. He has served in technical management roles in the Creative Services divisions of both Deluxe and Technicolor.

IKinema at SIGGRAPH with tech preview of natural language interface

IKinema, a provider of realtime animation software for motion capture, games and virtual reality using inverse kinematics, has launched a new natural language interface designed to enable users to produce animation using descriptive commands based on everyday language. The technology, code-named Intimate, is currently in prototype as part of a two-year project with backing by the UK government’s Innovate UK program.
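
For context, inverse kinematics works a character's joint angles out backward from a goal position. The minimal two-bone case has a closed-form solve, sketched below; this is the generic textbook method, not IKinema's solver.

import math

# Analytic two-bone IK in 2D: find shoulder and elbow angles so a
# chain with bone lengths l1, l2 reaches the target (tx, ty).
# Clamped so unreachable targets leave the chain fully extended.
def two_bone_ik(l1, l2, tx, ty):
    d2 = tx * tx + ty * ty
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    cos_elbow = max(-1.0, min(1.0, cos_elbow))
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(ty, tx) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow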

The new interface supplements virtual reality technology such as Magic Leap and Microsoft HoloLens, offering new methods for creating animation that are suitable for professionals but also simple enough for a mass audience. The user can bring in a character and then animate the character from an extensive library of cloud animation, simply by describing what the character is supposed to do.

Intimate is targeted at many applications, including pre-production, games, virtual production, virtual and augmented reality and more. The technology is expected to become commercially available in 2016, and the aim is to make an SDK available for any animation package. Currently, the company has a working prototype and has engaged with top studios to validate and develop the technology.

ILMxLAB formed, focusing on immersive entertainment

Industrial Light & Magic (ILM) and parent company Lucasfilm have formed the ILM Experience Lab (ILMxLAB), a new division that will combine the expertise of Lucasfilm, ILM and Skywalker Sound to create immersive entertainment experiences.

For several years, the company has been investing in realtime graphics — building a foundation that allows ILMxLAB to deliver interactive imagery at high-quality levels. ILMxLAB will develop virtual reality, augmented reality, realtime cinema, theme park entertainment and narrative-based experiences for future platforms.

Lucasfilm EVP and ILM president Lynwen Brennan states, “The combination of ILM, Skywalker Sound and Lucasfilm’s story group is unique and that creative collaboration will lead to captivating immersive experiences in the Star Wars universe and beyond. ILMxLAB brings an incredible group of creatives and technologists together to push the boundaries and explore new ways to tell stories. We have a long history of collaborating with the most visionary filmmakers and storytellers and we look forward to continuing these partnerships in this exciting space.”

VP of new media for Lucasfilm Rob Bredow adds, “The pioneering spirit that inspired storytellers and technical artists to improvise, innovate and help imagine a galaxy far, far away is in the DNA of ILMxLAB. We see xLAB as a laboratory for immersive entertainment. It’s amazing to be working in a new medium where we get to help invent how stories are told and experienced, connecting artists with their audiences like never before.”

John Gaeta

"Cinema is a master storyteller's art form," says ILMxLAB creative director John Gaeta. "Until recently, a 'fourth wall' has contained this form. Soon, however, we will break through this fourth wall, and cinema will become a portal leading to new and immersive platforms for expression. ILMxLAB is a platform for this expansion. We want you to step inside our stories."