Category Archives: Virtual Reality

The-Artery embraces a VR workflow for Mercedes spots

The-Artery founder and director Vico Sharabani recently brought together an elite group of creative artists and skilled technologists to create a cross-continental VR production pipeline for Mercedes-Benz’s Masters tournament brand campaign called “What Makes Us.”

Emmy-nominated cinematographer Paul Cameron (Westworld) and VFX supervisor Rob Moggach co-directed the project, which features a series of six intense broadcast commercials — including two fully CGI spots that were “shot” in a completely virtual world.

The agency and The-Artery team, including Vico Sharabani (third from the right).

This pair of 30-second commercials, First and Can’t, are the first to be created using a novel, realtime collaborative VR software application called Nu Design with Atom View technology. While in Los Angeles, Cameron worked within a virtual world, choosing camera bodies and lenses inside the space that allowed him to “shoot” for POV and angles that would have taken weeks to complete in the real world.

The software enabled him to grab and move the camera while all artistic camera direction was recorded virtually and used for final renders. This allowed both Sharabani, who was in NYC, and Moggach, who was in Toronto, to interact live and in realtime as if they were standing together on a physical set.

We reached out to Sharabani, Cameron and Moggach for details on VR workflow, and how they see the technology impacting production and creativity.

How did you come to know about Nurulize and the Nu Design Atom View technology?
Vico Sharabani: Scott Metzger, co-founder of Nurulize, is a long-time friend, colleague and collaborator. We have all been supporting each other’s careers and initiatives, so as soon as the alpha version of Nu Design was operational, we jumped on the opportunity of deploying it in real production.

How does the ability to shoot in VR change the production paradigm moving forward?
Rob Moggach: From scout to pre-light to shoot, through to dailies and editorial, it allows us to collaborate on digital productions in a traditional filmmaking process with established roles and procedures that are known to work.

Instead of locking animated productions into a rigid board, previs, animation workflow, a director can make decisions on editorial and find unexpected moments in the capture that wouldn’t necessarily be boarded and animated otherwise. Being able to do all of this without geographical restriction and still feel like you’re together in the same room is remarkable.

What types of projects are ideal for this new production pipeline?
Sharabani: The really beautiful thing for The-Artery, as a first-time user of this technology, is to prove that this workflow can be used by companies like us on every project, and not only in films by Steven Spielberg and James Cameron. The obvious ideal fit is for projects like fully CGI productions; previs of big CGI environments that need to be considered in photography; virtual previs of scouted locations in remote or dangerous places; blocking of digital sets in pre-existing greenscreen or partially built stages; and multiple remote creative teams that need to share a vision and input.

What are the specific benefits?
Moggach: With a virtual pipeline, we are able to…
1) Work much faster than traditional previs to quickly capture multiple camera setups.
2) Visualize environments and CGI with a camera in-hand to find shots you didn’t know were there on screen.
3) Interact closely regardless of location and truly feel together in the same place.
4) Use known filmmaking processes, allowing us to capitalize on established wisdom and experience.

What impacts will it have to creativity?
Paul Cameron: For me, the VR workflow had a great impact on the overall creative approach for both commercials. It enabled me to go into the environment and literally grab a camera, move around the car, be in the middle of the car, pull the camera over the car. Basically, it allowed me to put the camera in places I always wanted to put the camera, but it would take hours to get cranes or scaffold for different positions.

The other fascinating thing is that you are able to scale the set up and down. For instance, I was able to scale the car down to 25% of its normal size and make a very drastic camera move over the car, handheld with a VR camera, and with the combination of slowing it down, and smoothing it down a bit, we were able to design camera moves that were very organic and very natural.

I think it also allowed me to achieve a greater understanding of the set size and space, the geometry of the set and the relationship of the car to the set. In the past, it would be a process of going through a wireframe, waiting for the rendering — in this case, the car — and programming camera moves. It basically helps with conceptualization of camera moves and shot design in a new way for me.

Also being a director of photography, it is very empowering to be able to grab the camera literally with a controller and move through that space. Again, it just takes a matter of seconds to make very dramatic camera moves, whereas even on set it could take upwards of an hour or two to move a technocrane and actually get a feel for that shot, so it is very empowering overall.

What does it now allow directors to achieve?
Cameron: One of the better features of the VR workflow is that you can actually just teleport yourself around the set while you are inside of it. So, basically, you picture yourself inside this set, and with a controller in each hand, you have the ability to teleport yourself to different perspectives. In this case, you see the automobile and the wireframe geometry of the set, so it gives you a very good idea of the perspectives from different angles and you can move around really quickly.

The other thing that I found fascinating was that not only can you move around this set, in this case, I was able to fly… upwards of about 150 feet and look down on the set. This was, while you are immersed in the VR world, quite intoxicating. You are literally flying and hovering above the set, and it kind of feels like you are standing on a beam with no room to move forward or backward without falling.

Paul Cameron

So the ability to move around in an endless set perspective-wise and teleport yourself around and above the set looking down, was amazing. In the case of the Can’t commercial, I was able to teleport on the other side of the wind turbine and look back at the automobile.

Although we had the 3D CADs of sets in the past, and we were able to travel around and look at camera positions, somehow the immediacy and the power of being in the VR environment with the two controllers was quite powerful. I think for one of the sessions I had the glasses on for almost four hours straight. We recorded multiple camera moves, and everybody was quite shocked that I was in the environment for that long. But for me, it was like being on a set, almost like a pre-pre-light or something, where I was able to have my space as a director and move around and get to see my angles and design my shots.

What other tools did you use?
Sharabani: Houdini for CG, Redshift (with support from GridMarkets) for rendering, Nuke for compositing, Flame for finishing, Resolve for color grading and Premiere for editing.

NextComputing, Z Cam, Assimilate team on turnkey VR studio

NextComputing, Z Cam and Assimilate have teamed up to create a complete turnkey VR studio. The Foundation VR Studio is designed to cover all aspects of the immersive production process and help creatives be more creative.

According to Assimilate CEO Jeff Edson, “Partnering with Z Cam last year was an obvious opportunity to bring together the best of integrated 360 cameras with a seamless workflow for both live and post productions. The key is to continue to move the market from a technology focus to a creative focus. Integrated cameras took the discussions up a level of integration away from the pieces. There have been endless discussions regarding capable platforms for 360; the advantage we have is we work with just about every computer maker as well as the component companies, like CPU and GPU manufacturers. These are companies that are willing to create solutions. Again, this is all about trying to help the market focus on the creative as opposed to debates about the technology, and letting creative people create great experiences and content. Getting the technology out of their way and providing solutions that just work helps with this.”

The companies are offering two configurations of the system.

The Foundation VR Studio, which costs $8,999 and is available now, includes:
• NextComputing Edge T100 workstation
o CPU: 6-core Intel Core i7-8700K 3.7GHz processor
o Memory: 16GB DDR4 2666MHz RAM
• Z Cam S1 6K professional VR camera
• Z Cam WonderStitch software for offline stitching and profile creation
• Assimilate Scratch VR Z post software and live streaming for Z Cam

Then there is the Power VR Studio, for $10,999, which is also available now. It includes:
• NextComputing Edge T100 workstation
o CPU: 10-core Intel Core i9-7900X 3.3GHz processor
o Memory: 32GB DDR4 2666MHz RAM
• Z Cam S1 6K professional VR camera
• Z Cam WonderStitch software for offline stitching and profile creation
• Assimilate Scratch VR Z post software and live streaming for Z Cam

These companies will be at NAB demoing the systems.


GTC embraces machine learning and AI

By Mike McCarthy

I had the opportunity to attend GTC 2018, Nvidia‘s 9th annual technology conference in San Jose this week. GTC stands for GPU Technology Conference, and GPU stands for graphics processing unit, but graphics makes up a relatively small portion of the show at this point. The majority of the sessions and exhibitors are focused on machine learning and artificial intelligence.

And the majority of the graphics developments are centered around analyzing imagery, not generating it. Whether that is classifying photos on Pinterest or giving autonomous vehicles machine vision, it is based on the capability of computers to understand the content of an image. Now DriveSim, Nvidia’s new simulator for virtually testing autonomous drive software, dynamically creates imagery for the other system in the Constellation pair of servers to analyze and respond to, but that is entirely machine-to-machine imagery communication.

The main exception to this non-visual usage trend is Nvidia RTX, which allows raytracing to be rendered in realtime on GPUs. RTX can be used through Nvidia’s OptiX API, as well as Microsoft’s DirectX RayTracing API, and eventually through the open source Vulkan cross-platform graphics solution. It integrates with Nvidia’s AI Denoiser to use predictive rendering to further accelerate performance, and can be used in VR applications as well.

Nvidia RTX was first announced at the Game Developers Conference last week, but the first hardware to run it was just announced here at GTC, in the form of the new Quadro GV100. This $9,000 card replaces the existing Pascal-based GP100 with a Volta-based solution. It retains the same PCIe form factor, the quad DisplayPort 1.4 outputs and the NV-Link bridge to pair two cards at 200GB/s, but it jumps the GPU RAM per card from 16GB to 32GB of HBM2 memory. The GP100 was the first Quadro offering since the K6000 to support double-precision compute processing at full speed, and the increase from 3,584 to 5,120 CUDA cores should provide a 40% increase in performance, before you even look at the benefits of the 640 Tensor Cores.

Hopefully, we will see simpler versions of the Volta chip making their way into a broader array of more budget-conscious GPU options in the near future. The fact that the new Nvidia RTX technology is stated to require Volta architecture GPUs leads me to believe that they must be right on the horizon.

Nvidia also announced a new all-in-one GPU supercomputer — the DGX-2 supports twice as many Tesla V100 GPUs (16) with twice as much RAM each (32GB) compared to the existing DGX-1. This provides 81,920 CUDA cores addressing 512GB of HBM2 memory, over a fabric of new NV-Link switches, as well as dual Xeon CPUs, Infiniband or 100GbE connectivity, and 32TB of SSD storage. This $400K supercomputer is marketed as the world’s largest GPU.
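
For readers who want to check the math on those spec-sheet numbers, here is a quick back-of-the-envelope sketch in Python; the core counts and memory sizes are simply the figures quoted above.

gp100_cores = 3584   # Pascal-based Quadro GP100
gv100_cores = 5120   # Volta-based Quadro GV100
increase = (gv100_cores / gp100_cores - 1) * 100
print(f"GV100 CUDA-core increase over GP100: {increase:.0f}%")   # ~43%, roughly the 40% cited

dgx2_gpus = 16            # Tesla V100s in a DGX-2
cores_per_v100 = 5120
hbm2_per_v100_gb = 32
print(f"DGX-2 CUDA cores: {dgx2_gpus * cores_per_v100:,}")        # 81,920
print(f"DGX-2 HBM2 memory: {dgx2_gpus * hbm2_per_v100_gb} GB")    # 512 GB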

Nvidia and their partners had a number of cars and trucks on display throughout the show, showcasing various pieces of technology that are being developed to aid in the pursuit of autonomous vehicles.

Also on display in the category of “actually graphics related” was the new Max-Q version of the mobile Quadro P4000, which is integrated into PNY’s first mobile workstation, the Prevail Pro. Besides supporting professional VR applications, the HDMI and dual DisplayPort outputs allow a total of three external displays up to 4K each. It isn’t the smallest or lightest 15-inch laptop, but it is the only system under 17 inches I am aware of that supports the P4000, which is considered the minimum spec for professional VR implementation.

There are, of course, lots of other vendors exhibiting their products at GTC. I had the opportunity to watch 8K stereo 360 video playing off of a laptop with an external GPU. I also tried out the VRHero 5K Plus enterprise-level HMD, which brings the VR experience to a whole other level. Much more affordable is TP-Cast’s $300 wireless upgrade for Vive and Rift HMDs, the first of many untethered VR solutions. HTC has also recently announced the Vive Pro, which will be available in April for $800. It increases the resolution by a third in both dimensions, to 2880×1600 total, and moves from HDMI to DisplayPort 1.2 and USB-C. Besides VR products, there were also all sorts of robots in various forms on display.

Clearly the world of GPUs has extended far beyond the scope of accelerating computer graphics generation, and Nvidia is leading the way in bringing massive information processing to a variety of new and innovative applications. And if that leads us to hardware that can someday raytrace in realtime at 8K in VR, then I suppose everyone wins.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.


Supersphere offering flypacks for VR/360 streaming

Supersphere, a VR/360° production studio, will be at NAB this year debuting 12G glass-to-glass flypacks optimized for live VR/360° streaming. These multi-geometry (mesh/rectilinear/equirectangular) flypacks can handle 360°, 180°, 4K or HD production and seamlessly mix and match each geometry. They also include built-in VDN (video distribution network) encoding and delivery for live streaming to any platform or custom player.

“Live music, both in streaming and in ticket sales, has posted consistent growth in the US and worldwide. It’s a multibillion-dollar industry and only getting bigger. We are investing in the immersive streaming market because we see that trend reflected in our client requests,” explains Supersphere founder/EP Wilson. “Clients always want to provide audiences with the most engaging experience possible. An immersive environment is the way to do it.”

Each flypack comes standard with Z Cam K1 Pro 180° cameras and Z Cam S1 Pro 360° cameras, and is customizable to any camera as productions demand. They are also equipped with Blackmagic’s latest ATEM Production Studio 4K live production switchers to facilitate multi-camera live production across a range of video sources. The included Assimilate Scratch VR Z enables realtime geometry, stitching, color grading, finishing and ambisonic audio. The system also offers fully integrated transcoding and delivery — Teleos Media’s VDN (Video Distribution Network) delivers immersive experiences to any device with an instant-start experience, sustained 16Mbps at high frame rates and 4K + VR resolutions. This allows clients to easily build custom 360° video players on their websites or apps as a destination for live-streamed content, in addition to streaming directly to YouTube, Facebook and other popular platforms.

“These flypacks provide an incredibly robust workflow that takes the complexity out of immersive live production — capable of handling the data required for stunning high-resolution projects in one flexible end-to-end package,” says Wilson. “Plus with Teleos’ VDN capabilities, we make it easy for any client to live stream high-end content directly to whatever device or app best suits their customers’ needs, including the option to quickly build custom, fully integrated 360° live players.”


Z Cam, Assimilate reduce price of S1 VR camera/Scratch VR bundle

The Z Cam S1 VR camera/WonderStitch/Assimilate Scratch VR Z bundle, an integrated VR production workflow offering, is now $3,999, down from $4,999.

The Z Cam S1/Scratch VR Z bundle provides acquisition via Z Cam’s S1 pro VR camera, stitching via the WonderStitch software and a streamlined VR post workflow via Assimilate’s realtime Scratch VR Z tools.

Here are some details:
If streaming live 360 from the Z Cam S1 through Scratch VR Z, users can take advantage of realtime features such as inserting/compositing graphics/text overlays, including animations, and keying for elements like greenscreen — all streaming live to Facebook Live 360.

Scratch VR Z can be used to do live camera preview prior to shooting with the S1. During the shoot, Scratch VR Z is used for dailies and data management, including metadata. It connects directly to the PC, and then to the camera via a high-speed Ethernet port. Stitching of the imagery is done in Z Cam’s WonderStitch, now integrated into Scratch VR Z; then comes traditional editing, color grading, compositing, multichannel audio from the S1 or adding external ambisonic sound, finishing and then publishing to all final online or stand-alone 360 platforms.

The Z Cam S1/Scratch VR Z bundle is available now.


Behind the Title: Light Sail VR’s Matthew Celia

NAME: Matthew Celia

COMPANY: LA’s Light Sail VR (@lightsailvr)

CAN YOU DESCRIBE YOUR COMPANY?
Light Sail VR is a virtual reality production company specializing in telling immersive narrative stories. We’ve built a strong branded content business over the last two years working with clients such as Google and GoPro, and studios like Paramount and ABC.

Whether it’s 360 video, cinematic VR or interactive media, we’ve built an end-to-end pipeline to go from script to final delivery. We’re now excited to be moving into creating original IP and more interactive content that fuses cinematic live-action film footage with game engine mechanics.

WHAT’S YOUR JOB TITLE?
Creative Director and Managing Partner

WHAT DOES THAT ENTAIL?
A lot! We’re a small boutique shop so we all wear many hats. First and foremost, I am a director and work hard to deliver a compelling story and emotional connection to the audience for each one of our pieces. Story first is our motto, and I try and approach every technical problem with a creative solution. Figuring out execution is a large part of that.

In addition to the production side, I also carry a lot of the technical responsibilities in post production, such as keeping our post pipeline humming and inventing new workflows. Most recently, I have been dabbling in programming interactive cinema using the Unity game engine.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I am in charge of washing the lettuce when we do our famous “Light Sail VR Sandwich Club” during lunch. Yes, you get fed for free if you work with us, and I make an amazing Italian sandwich.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Hard to say. I really like what I do. I like being on set and working with actors because VR is such a great medium for them to play in, and it’s exciting to collaborate with such creative and talented people.

National Park Service

WHAT’S YOUR LEAST FAVORITE?
Render times and computer crashes. My tech life is in constant beta. Price we pay for being on the bleeding edge, I guess!

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I like the early morning because it is quiet, my brain is fresh, and I haven’t yet had 20 people asking something of me.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Probably the same, but at a large company. If I left the film business I’d probably teach. I love working with kids.

WHY DID YOU CHOOSE THIS PROFESSION?
I feel like I’ve wanted to be a filmmaker since I could walk. My parents like to drag out the home movies of me asking to look in my dad’s VHS video camera when I was 4. I spent most of high school in the theater and most people assumed I would be an actor. But senior year I fell in love with film when I shot and cut my first 16mm reversal stock on an old reel-to-reel editing machine. The process was incredibly fun and rewarding and I was hooked. I only recently discovered VR, but in many ways it feels like the right path for me because I think cinematic VR is the perfect intersection of filmmaking and theater.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
On the branded side, we just finished up two tourism videos: one for the National Park Service, a 360 tour of the Channel Islands with Jordan Fisher, and the other a 360 piece for Princess Cruises. VR is really great to show people the world. The last few months of my life have been consumed by Light Sail VR’s first original project, Speak of the Devil.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Speak of the Devil is at the top of that list. It’s the first live-action interactive project I’ve worked on and it’s massive. Crafted using the GoPro Odyssey camera in partnership with Google Jump, it features over 50 unique locations, 13 different endings and is currently taking up about 80TB of storage (and counting). It is the largest project I’ve worked on to date, and we’ve done it all on a shoestring budget thanks to the gracious contributions of talented creative folks who believed in our vision.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My instant-read grill meat thermometer, my iPhone and my Philips Hue bulbs. Seriously, if you have a baby, it’s a life saver being able to whisper, “Hey, Siri, turn off the lights.”

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I’m really active on several Facebook groups related to 360 video production. You can get a lot of advice and connect directly with vendors and software engineers. It’s a great community.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
I tend to pop on some music when I’m doing repetitive mindless tasks, but when I have to be creative or solve a tough tech problem, the music is off so that I can focus. My favorite music to work to tends to be Dave Matthews Band live albums. They get into 20-minute long jams and it’s great.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
De-stressing is really hard when you own your own company. I like to go walking, but if that doesn’t work, I’ll try diving into some cooking for my family, which forces me to focus on something not work related. I tend to feel better after eating a really good meal.


Rogue takes us on VR/360 tour of Supermodel Closets

Rogue is a NYC-based creative boutique that specializes in high-end production and post for film, advertising and digital. Since its founding two years ago, executive creative director Alex MacLean and his team have produced a large body of work providing color grading, finishing and visual effects for clients such as HBO, Vogue, Google, Vice, Fader and more. For the past three years, MacLean has also been at the forefront of VR/360 content for narratives and advertising.

MacLean recently wrapped up post production on four five-minute episodes of 360-degree tours of Supermodel Closets. The series is a project of Conde Nast Entertainment and Vogue for Vogue’s 125th anniversary. If you’re into fashion, this VR tour gives you a glimpse at what supermodels wear in their daily lives. Viewers can look up, down and all around to feel immersed in the closet of each model as she shows her favorite fashions and shares the stories behind her most prized pieces.

Tours include the closets of Lily Aldridge, Cindy Crawford, Kendall Jenner and Amber Valletta.

MacLean worked with director Julina Tatlock, who is a co-founder and CEO of 30 Ninjas, a digital entertainment company that develops, writes and produces VR, multi-platform and interactive content. Rogue and 30 Ninjas worked together to determine the best workflow for the series. “I always think it’s best practice to collaborate with the directors, DPs and/or production companies in advance of a VR shoot to sort out any technical issues and pre-plan the most efficient production process from shoot to edit, stitching through all the steps of post-production,” reports MacLean. “Foresight is everything; it saves a lot of time, money, and frustration for everyone, especially when working in VR, as well as 3D.”

According to MacLean, they worked with a new camera format, the YI Halo camera, which is designed for professional VR data acquisition. “I often turn to the Assimilate team to discuss the format issues because they always support the latest camera formats in their Scratch VR tools. This worked well again because I needed to define an efficient VR and 3D workflow that would accommodate the conforming, color grading, creating of visual effects and the finishing of a massive amount of data at 6.7K x 6.7K resolution.”

The Post
“The post production process began by downloading 30 Ninjas’ editorial, stitched footage from the cloud to ingest into our MacBook Pro workstations to do the conform at 6K x 6K,” explains MacLean. “Organized data management is a critical step in our workflow, and Scratch VR is a champ at that. We were simultaneously doing the post for more than one episode, as well as other projects within the studio, so data efficiency is key.”

“We then moved the conformed 6.7K x 6.7K raw footage to our HP Z840 workstations to do the color grading, visual effects, compositing and finishing. You really need powerful workstations when working at this resolution and with this much data,” reports MacLean. “Spherical VR/360 imagery requires focused concentration, and then we’re basically doing everything twice when working in 3D. For these episodes, and for all VR/360 projects, we create a lat/long that breaks out the left eye and right eye into two spherical images. We then replicate the work from one eye to the next, and color correct any variances. The result is seamless color grading.

“We’re essentially using the headset as a creative tool with Scratch VR, because we can work in realtime in an immersive environment and see the exact results of work in each step of the post process,” he continues. “This is especially useful when doing any additional compositing, such as clean-up for artifacts that may have been missed or adding or subtracting data. Working in realtime eases the stress and time of doing a new composite of 360 data for the left eye and right eye 3D.”

Playback of content in the studio is very important to MacLean and team, and he calls the choice of multiple headsets another piece of the VR/360 puzzle. “The VR/3D content can look different in each headset, so we need to determine a mid-point aesthetic look that displays well in each headset. We have our own playback black box that we use to preview the color grading and visual effects before committing to rendering. And then we do a final QC review of the content, and for these episodes we did so in Google Daydream (untethered), HTC Vive (tethered) and the Oculus Rift (tethered).”

MacLean sees rendering as one of their biggest challenges. “It’s really imperative to be diligent throughout all the internal and client reviews prior to rendering. It requires being very organized in your workflow from production through finishing, and a solid QC check. Content at 6K x 6K, VR/360 and 3D means extremely large files and numerous hours of rendering, so we want to restrict re-rendering as much as possible.”
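
As a rough illustration of the lat/long stereo handling MacLean describes (splitting each spherical frame into left-eye and right-eye images and replicating the same grade on both), here is a minimal NumPy sketch. It assumes a top/bottom stereo layout and uses a simple lift/gain operation as a stand-in for a real grade; both are illustrative assumptions, not Rogue's actual pipeline.

import numpy as np

def split_stereo(frame):
    # Assumes an over/under layout: top half = left eye, bottom half = right eye.
    half = frame.shape[0] // 2
    return frame[:half], frame[half:]

def apply_grade(eye, gain=1.1, lift=5.0):
    # Stand-in for a real color grade: simple lift/gain, clipped to 8-bit range.
    return np.clip(eye.astype(np.float32) * gain + lift, 0, 255).astype(np.uint8)

frame = np.zeros((6720, 6720, 3), dtype=np.uint8)   # placeholder 6.7K x 6.7K stereo frame
left, right = split_stereo(frame)
# Replicate the identical grade on both eyes so the stereo pair stays matched.
left_graded, right_graded = apply_grade(left), apply_grade(right)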


Storage in the Studio: VFX Studios

By Karen Maierhofer

It takes talent and the right tools to generate visual effects of all kinds, whether it’s building breathtaking environments, creating amazing creatures or crafting lifelike characters cast in a major role for film, television, games or short-form projects.

Indeed, we are familiar with industry-leading content creation tools such as Autodesk’s Maya, Foundry’s Mari and more, which, when placed in the hands of creatives, result in pure digital magic. In fact, there is quite a bit of technological magic that occurs at visual effects facilities, including one kind in particular that may not have the inherent sparkle of modeling and animation tools but is just as integral to the visual effects process: storage. Storage solutions are the unsung heroes behind most projects, working behind the scenes to accommodate artists and keep their productive juices flowing.

Here we examine three VFX facilities and their use of various storage solutions and setups as they tackle projects large and small.

Framestore
Since it was founded in 1986, Framestore has placed its visual stamp on a plethora of Oscar-, Emmy- and British Academy Film Award-winning visual effects projects, including Harry Potter, Gravity and Guardians of the Galaxy. With increasingly more projects, Framestore expanded from its original UK location in London to North American locales such as Montreal, New York, Los Angeles and Chicago, handling films as well as immersive digital experiences and integrated advertisements for iconic brands, including Guinness, Geico, Coke and BMW.

Beren Lewis

As the company and its workload grew and expanded into other areas, including integrated advertising, so, too, did its storage needs. “Innovative changes, such as virtual-reality projects, brought on high demand for storage and top-tier performance,” says NYC-based Beren Lewis, CTO of advertising and applied technologies at Framestore. “The team is often required to swiftly accommodate multiple workflows, including stereoscopic 4K and VR.”

Without hesitation, Lewis believes storage is typically the most challenging aspect of technology within the VFX workflow. “If the storage isn’t working, then neither are the artists,” he points out. Furthermore, any issues with storage can potentially lead to massive financial implications for the company due to lost time and revenue.

According to Lewis, Framestore uses its storage solution — a Pixit PixStor General Parallel File System (GPFS) storage cluster using the NetApp E-Series hardware – for all its project data. This includes backups to remote co-location sites, video preprocessing, decompression, disaster recovery preparation, scalability and high performance for VFX, finishing and rendering workloads.

The studio moved all the integrated advertising teams over to the PixStor GPFS clusters this past spring. Currently, Framestore has five primary PixStor clusters using NetApp E-Series, one at each office in London, New York, LA, Chicago and Montreal.

According to Lewis, Framestore partnered with Pixit Media and NetApp to take on increasingly complicated and resource-hungry VR projects. “This partnership has provided the global integrated advertising team with higher performance and nonstop access to data,” he says. “The Pixit Media PixStor software-defined scale-out storage solution running on NetApp E-Series systems brings fast, reliable data access for the integrated advertising division so the team can embrace performance and consistency across all five sites, take a cost-effective, simplified approach to disaster recovery and have a modular infrastructure to support multiple workflows and future expansion.”

BMW

Framestore selected its current solution after reviewing several major storage technologies. It was looking for a single namespace that was very stable, while providing great performance, but it also had to be scalable, Lewis notes. “The PixStor ticked all those boxes and provided the right balance between enterprise-grade hardware and support, and open-source standards,” he explains. “That balance allowed us to seamlessly integrate the PixStor into our network, while still maintaining many of the bespoke tools and services that we had developed in-house over the years, with minimum development time.”

In particular, the storage solution provides the required high performance so that the studio’s VFX, finishing and rendering workloads can all run “full-out with no negative effect on the finishing editors’ or graphic artists’ user experience,” Lewis says. “This is a game-changing capability for an industry that typically partitions off these three workloads to keep artists from having to halt operations. PixStor running on E-Series consolidates all three workloads onto a single IT infrastructure with streamlined end-to-end production of projects, which reduces both time to completion and operational costs, while both IT acquisition and maintenance costs are reduced.”

At Framestore, integrating storage into the workflow is simple. The first step after a project is green-lit is the establishment of a new file set on the PixStor GPFS cluster, where ingested footage and all the CG artist-generated project data will live. “The PixStor is at the heart of the integrated advertising storage workflow from start to finish,” Lewis says. Because the PixStor GPFS cluster serves as the primary storage for all integrated advertising project data, the division’s workstations, renderfarm, editing and finishing stations connect to the cluster for review, generation and storage of project content.

Prior to the move to PixStor/NetApp, Framestore had been using a number of different storage offerings. According to Lewis, they all suffered from the same issues in terms of scalability and degradation of performance under render load — and that load was getting heavier and more unpredictable with every project. “We needed a technology that scaled and allowed us to maintain a single namespace but not suffer from continuous slowdowns for artists due to renderfarm load during crunch times or project delivery.”

Geico

As Lewis explains, with the PixStor/NetApp solution, processing was running up to 270,000 IOPS (I/O operations per second), which was at least several times what Framestore’s previous infrastructure would have been able to handle in a single namespace. “Notably, the development workflow for a major theme-park ride was unhindered by all the VR preprocessing, while backups to remote co-location sites synched every two hours without compromising the artist, rendering or finishing workloads,” he says. “This provided a cost-effective, simplified approach to disaster recovery, and Framestore now has a fast, tightly integrated platform to support its expansion plans.”

To stay at the top of its game, Framestore is always reviewing new technologies, and storage is often part of that conversation. To this end, the studio plans to build on the success it has had with PixStor by expanding the storage to handle some additional editorial playback and render workloads using an all-Non-Volatile Memory Express (NVMe) flash tier. Other projects include a review of object storage technology for use as a long-term, off-premises storage target for archival data.

Without question, the industry’s visual demands are rapidly changing. Not long ago, Framestore could easily predict storage and render requirements for a typical project. But that is no longer the case, and the studio finds itself working in ever-increasing resolutions and frame rates. Whereas projects may have been as small as 3TB in the recent past, nowadays the studio regularly handles multiple projects of 300TB or larger. And the storage must be shared with other projects of varying sizes and scope.

“This new ‘unknowns’ element of our workflow puts many strains on all aspects of our pipeline, but especially the storage,” Lewis points out. “Knowing that our storage can cope with the load and can scale allows us to turn our attention to the other issues that these new types of projects bring to Framestore.”

As Lewis notes, working with high-resolution images and large renderfarms creates a unique set of challenges for any storage technology, challenges not seen in many other fields. VFX work will often test any storage technology well beyond what other industries are capable of. “If there’s an issue or a break point, we will typically find it in spectacular fashion,” he adds.

Rising Sun Pictures
As a contributor to the design and execution of computer-generated effects on more than 100 feature films since its inception 22 years ago, Rising Sun Pictures (RSP) has pushed the technical bar many times over in film as well as television projects. Based in Adelaide, South Australia, RSP has built a top team of VFX artists who have tackled such box-office hits as Thor: Ragnarok, X-Men and Game of Thrones, as well as the Harry Potter and Hunger Games franchises.

Mark Day

Such demanding, high-level projects require demanding, high-level effects, which, in turn, demand a high-performance, reliable storage solution capable of handling varying data I/O profiles. “With more than 200 employees accessing and writing files in various formats, the need for a fast, reliable and scalable solution is paramount to business continuity,” says Mark Day, director of engineering at RSP.

Recently, RSP installed an Oracle ZS5 storage appliance to handle this important function. This high-performance, unified storage system provides NAS and SAN cloud-converged storage capabilities that enable on-premises storage to seamlessly access Oracle Public Cloud. Its advanced hardware and software architecture includes a multi-threading SMP storage operating system for running multiple workloads and advanced data services without performance degradation. The offering also caches data on DRAM or flash cache for optimal performance and efficiency, while keeping data safely stored on high-capacity SSD (solid state disk) or HDD (hard disk drive) storage.

Previously, the studio had been using a Dell EMC Isilon storage cluster with Avere caching appliances, and the company is still employing that solution for parts of its workflow.

When it came time to upgrade to handle RSP’s increased workload, the facility ran a proof of concept with multiple vendors in September 2016 and benchmarked their systems. Impressed with Oracle, RSP began installation in early 2017. According to Day, RSP liked the solution’s ability to support larger packet sizes — now up to 1MB. In addition, he says its “exceptional” analytics engine gives introspection into a render job.

“It has a very appealing [total cost of ownership], and it has caching right out of the box, removing the need for additional caching appliances,” says Day. Storage is at the center of RSP’s workflow, storing all the relevant information for every department — from live-action plates that are turned over from clients, scene setup files and multi-terabyte cache files to iterations of the final product. “All employees work off this storage, and it needs to accommodate the needs of multiple projects and deadlines with zero downtime,” Day adds.

Machine Room

“Visual effects scenes are getting more complex, and in turn, data sizes are increasing. Working in 4K quadruples file sizes and, therefore, impacts storage performance,” explains Day. “We needed a solution that could cope with these requirements and future trends in the industry.”

According to Day, the data RSP deals with is broad, from small setup files to terabyte geocache files. A one-minute 2K DPX sequence is 17GB for the final pass, while 4K is 68GB. “Keep in mind this is only the final pass; a single shot could include hundreds of passes for a heavy computer-generated sequence,” he points out.
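
Those per-minute figures are consistent with straightforward DPX arithmetic. Here is a minimal sketch, assuming 10-bit RGB DPX (one pixel packed into 4 bytes), 24fps and full-aperture frame sizes; the exact resolutions and frame rate are my assumptions for illustration, not RSP's stated settings.

def dpx_minute_gb(width, height, fps=24, bytes_per_pixel=4):
    # 10-bit RGB DPX packs one pixel into a 32-bit word (4 bytes).
    frames = fps * 60
    return width * height * bytes_per_pixel * frames / 2**30

print(f"2K (2048x1556): {dpx_minute_gb(2048, 1556):.0f} GB per minute")  # ~17 GB
print(f"4K (4096x3112): {dpx_minute_gb(4096, 3112):.0f} GB per minute")  # ~68 GB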

Thus, high-performance storage is important to the effective operation of a visual effects company like RSP. In fact, storage helps the artists stay on the creative edge by enabling them to iterate through the creative process of crafting a shot and a look. “Artists are required to iterate their creative process many times to perfect the look of a shot, and if they experience slowdowns when loading scenes, this can have a dramatic effect on how many iterations they can produce. And in turn, this affects employees’ efficiency and, ultimately, the profitability of the company,” says Day.

Thor: Ragnarok

Most recently, RSP used its new storage solution for work on the blockbuster Thor: Ragnarok, in particular for the Val’s Flashback sequence, which was extremely complex and involved extensive lighting and texture data, as well as high-frame-rate plates (sometimes more than 1,000fps for multiple live-action footage plates). “Before our storage refresh, early versions of this shot could take up to 24 hours to render on our server farm. But since installing our new storage, we saw this drastically reduced to six hours — that’s a 3x improvement, which is a fantastic outcome,” says Day.

Outpost VFX
A full-service VFX studio for film, broadcast and commercials, Outpost VFX, based in Bournemouth, England, has been operational since late 2012. Since that time, the facility has been growing by leaps and bounds, taking on major projects, including Life, Nocturnal Animals, Jason Bourne and 47 Meters Down.

Paul Francis

Due to this fairly rapid expansion, Outpost VFX has seen the need for increased capacity in its storage needs. “As the company grows and as resolution increases and HDR comes in, file sizes increase, and we need much more capacity to deal with that effectively,” says CTO Paul Francis.

When setting up the facility five years ago, the decision was made to go with PixStor from Pixit Media and Synology’s NAS for its storage solution. “It’s an industry-recognized solution that is extremely resilient to errors. It’s fast, robust and the team at Pixit provides excellent support, which is important to us,” says Francis.

Foremost, the solution had to provide high capacity and high speeds. “We need lots of simultaneous connections to avoid bottlenecks and ensure speedy delivery of data,” Francis adds. “This is the only one we’ve used, really. It has proved to be stable enough to support us through our growth over the last couple of years — growth that has included a physical office move and an increase in artist capacity to 80 seats.”

Outpost VFX mainly works with image data and project files for use with Autodesk’s Maya, Foundry’s Nuke, Side Effects’ Houdini and other VFX and animation tools. The challenge this presents is twofold, both large and small: concern for large file sizes, and problems the group can face with small files, such as metadata. Francis explains: “Sequentially loading small files can be time-consuming due to the current technology, so moving to something that can handle both of these areas will be of great benefit to us.”

Locally, artists use a mix of HDDs from a number of different manufacturers to store reference imagery and so forth — older-generation PCs have mostly Western Digital HDDs while newer PCs have generic SSDs. When replacing or upgrading equipment, Outpost VFX uses Samsung 900 Series SSDs, depending on the required performance and current market prices.

Life

Like many facilities, Outpost VFX is always weighing its options when it comes to finding the best solution for its current and future needs. Presently, it is looking at splitting up some of its storage solutions into smaller segments for greater resilience. “When you only have one storage solution and it fails, everything goes down. We’re looking to break our setup into smaller, faster solutions,” says Francis.

Additionally, security is a concern for Outpost VFX when it comes to its clients. According to Francis, certain shows need to be annexed, meaning the studio will need a separate storage solution outside of its main network to handle that data.

When Outpost VFX begins a job, the group ingests all the plates it needs to work on, and they reside in a new job folder created by production and assigned to a specific drive for active jobs. This folder then becomes the go-to for all assets, elements and shot iterations created throughout the production. For security purposes, these areas of the server are only visible to and accessible by artists, who in turn cannot access the Internet; this ensures that the files are “watertight and immune to leaks,” says Francis, adding that with PixStor, the studio is able to set up different partitions for different areas that artists can jump between easily.

How important is storage to Outpost VFX? “Frankly, there’d be no operation without storage!” Francis says emphatically. “We deal with hundreds of terabytes of data in visual effects, so having high-capacity, reliable storage available to us at all times is absolutely essential to ensure a smooth and successful operation.”

47 Meters Down

Because the studio delivers visual effects across film, TV and commercials simultaneously, storage is an important factor no matter what the crew is working on. A recent film project like 47 Meters Down required the full gamut of visual effects work, as Outpost VFX was the sole vendor for the project. So, the studio needed the space and responsiveness of a storage system that enabled them to deliver more than 420 shots, a number of which featured heavy 3D builds and multiple layers of render elements.

“We had only about 30 artists at that point, so having a stable solution that was easy for our team to navigate and use was crucial,” Francis points out.

Main Image: From Outpost VFX’s Domestos commercial out of agency MullenLowe London.


Review: GoPro Fusion 360 camera

By Mike McCarthy

I finally got the opportunity to try out the GoPro Fusion camera I have had my eye on since the company first revealed it in April. The $700 camera uses two offset fish-eye lenses to shoot 360 video and stills, while recording ambisonic audio from four microphones in the waterproof unit. It can shoot a 5K video sphere at 30fps, or a 3K sphere at 60fps for higher motion content at reduced resolution. It records dual 190-degree fish-eye perspectives encoded in H.264 to separate MicroSD cards, with four tracks of audio. The rest of the magic comes in the form of GoPro’s newest application Fusion Studio.

Internally, the unit is recording dual 45Mb H.264 files to two separate MicroSD cards, with accompanying audio and metadata assets. This would be a logistical challenge to deal with manually: copying the cards into folders, sorting and syncing them, stitching them together and dealing with the audio. But with GoPro’s new Fusion Studio app, most of this is taken care of for you. Simply plug in the camera and it will automatically access the footage, and let you preview and select what parts of which clips you want processed into stitched 360 footage or flattened video files.

It also processes the multi-channel audio into ambisonic B-Format tracks, or standard stereo if desired. The app is a bit limited in user-control functionality, but what it does do it does very well. My main complaint is that I can’t find a way to manually set the output filename, but I can rename the exports in Windows once they have been rendered. Trying to process the same source file into multiple outputs is challenging for the same reason.

Setting    Recorded resolution (per lens)    Processed resolution (equirectangular)
5Kp30      2704×2624                         4992×2496
3Kp60      1568×1504                         2880×1440
Stills     3104×3000                         5760×2880

With the Samsung Gear 360, I researched five different ways to stitch the footage, because I wasn’t satisfied with the included app. Most of those will also work with Fusion footage, and you can read about those options here, but they aren’t really necessary when you have Fusion Studio.

You can choose between H.264, Cineform or ProRes, your equirectangular output resolution, and ambisonic or stereo audio. That gives you pretty much every option you should need to process your footage. There is also a “Beta” option to stabilize your footage, which, once I got used to it, I really liked. It should be thought of more as a “remove rotation” option, since it’s not for stabilizing out sharp motions — which still leave motion blur — but for maintaining the viewer’s perspective even if the camera rotates in unexpected ways. Processing was about 6x run-time on my Lenovo ThinkPad P71 laptop, so a 10-minute clip would take an hour to stitch to 360.
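
To make the “remove rotation” idea concrete, here is a minimal sketch of the yaw-only case in Python/NumPy: an equirectangular frame maps 360 degrees of heading across its width, so undoing a yaw is just a circular horizontal shift. This is my simplified illustration, not how Fusion Studio actually implements its stabilizer, which handles full three-axis rotation.

import numpy as np

def remove_yaw(equirect, camera_yaw_deg):
    # 360 degrees of heading span the full image width, so compensating for a
    # camera yaw is a circular horizontal shift (sign depends on yaw convention).
    height, width = equirect.shape[:2]
    shift = int(round(-camera_yaw_deg / 360.0 * width))
    return np.roll(equirect, shift, axis=1)

frame = np.zeros((2496, 4992, 3), dtype=np.uint8)  # 5K-mode equirectangular output size
stabilized = remove_yaw(frame, camera_yaw_deg=90.0)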

The footage itself looks good, higher quality than my Gear 360, and the 60p stuff is much smoother, which is to be expected. While good VR experiences require 90fps to be rendered to the display to avoid motion sickness, that does not necessarily mean that 30fps content is a problem. When rendering the viewer’s perspective, the same frame can be sampled three times, shifting the image as they move their head, even from a single source frame. That said, 60p source content does give smoother results than the 30p footage I am used to watching in VR, but 60p did give me more issues during editorial. I had to disable CUDA acceleration in Adobe Premiere Pro to get Transmit to work with the WMR headset.
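
To illustrate that point, here is a toy sketch of how a 90Hz headset can reuse 30fps source frames: each display refresh grabs the newest available video frame but re-renders it with the current head pose, so the view stays responsive even though the content only updates at 30fps. This is a general illustration of the idea, not any particular runtime's implementation.

def source_frame_for_refresh(refresh_index, display_hz=90, source_fps=30):
    # Each 90Hz refresh reuses the newest 30fps frame, so every source frame is
    # shown for three consecutive refreshes; head orientation is still sampled
    # fresh for each refresh.
    return int(refresh_index / display_hz * source_fps)

# Refreshes 0,1,2 show frame 0; refreshes 3,4,5 show frame 1; and so on.
assert [source_frame_for_refresh(i) for i in range(6)] == [0, 0, 0, 1, 1, 1]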

Once you have your footage processed in Fusion Studio, it can be edited in Premiere Pro — like any other 360 footage — but the audio can be handled a bit differently. Exporting as stereo will follow the usual workflow, but selecting ambisonic will give you a special spatially aware audio file. Premiere can use this in a 4-track multi-channel sequence to line up the spatial audio with the direction you are looking in VR, and if exported correctly, YouTube can do the same thing for your viewers.

In the Trees
Most GoPro products are intended for use capturing action moments and unusual situations in extreme environments (which is why they are waterproof and fairly resilient), so I wanted to study the camera in its “native habitat.” The most extreme thing I do these days is work on ropes courses, high up in trees or telephone poles. So I took the camera out to a ropes course that I help out with, curious to see how the recording at height would translate into the 360 video experience.

Ropes courses are usually challenging to photograph because of the scale involved. When you are zoomed out far enough to see the entire element, you can’t see any detail, or if you are zoomed in close enough to see faces, you have no good concept of how high up they are — 360 photography is helpful in that it is designed to be panned through when viewed flat. This allows you to give the viewer a better sense of the scale, and they can still see the details of the individual elements or people climbing. And in VR, you should have a better feel for the height involved.

I had the Fusion camera and Fusion Grip extendable tripod handle, as well as my Hero6 kit, which included an adhesive helmet mount. Since I was going to be working at heights and didn’t want to drop the camera, the first thing I did was rig up a tether system. A short piece of 2mm cord fit through a slot in the bottom of the center post and a triple fisherman knot made a secure loop. The cord fit out the bottom of the tripod when it was closed, allowing me to connect it to a shock-absorbing lanyard, which was clipped to my harness. This also allowed me to dangle the camera from a cord for a free-floating perspective. I also stuck the quick release base to my climbing helmet, and was ready to go.

I shot segments in both 30p and 60p, depending on how I had the camera mounted, using higher frame rates for the more dynamic shots. I was worried that the helmet mount would be too close, since GoPro recommends keeping the Fusion at least 20cm away from what it is filming, but the helmet wasn’t too bad. Another inch or two would shrink it significantly from the camera’s perspective, similar to my tripod issue with the Gear 360.

I always climbed up with the camera mounted on my helmet and then switched it to the Fusion Grip to record the guy climbing up behind me and my rappel. Hanging the camera from a cord, even 30 feet below me, worked much better than I expected. It put GoPro’s stabilization feature to the test, but it worked fantastically. With the camera rotating freely, the perspective is static, although you can see the seam lines constantly rotating around you. When I am holding the Fusion Grip, the extended pole is completely invisible to the camera, giving you what GoPro has dubbed “Angel View.” It is as if the viewer is floating freely next to the subject, especially when viewed in VR.

Because I have ways to view 360 video in VR, and because I don’t mind panning around on a flat screen view, I am personally less excited about GoPro’s OverCapture functionality, but I recognize it is a useful feature that will greatly extend the use cases for this 360 camera. It is designed for people using the Fusion as a more flexible camera to produce flat content, instead of to produce VR content. I edited together a couple of OverCapture shots intercut with footage from my regular Hero6 to demonstrate how that would work.

Ambisonic Audio
The other new option that Fusion brings to the table is ambisonic audio. Editing ambisonics works in Premiere Pro using a 4-track multi-channel sequence. The main workflow kink here is that you have to manually override the audio settings every time you import a new clip with ambisonic audio, in order to set the audio channels to Adaptive with a single timeline clip. Turn on Monitor Ambisonics by right-clicking in the monitor panel and match the Pan, Tilt and Roll in the Panner-Ambisonics effect to the values in your VR Rotate Sphere effect (note that they are listed in a different order), and your audio should match the video perspective.

When exporting an MP4, set Channels to 4.0 in the audio panel and check the Audio is Ambisonics box. From what I can see, the Fusion Studio conversion process compensates for changes in perspective, including “stabilization,” when processing the raw recorded audio for ambisonic exports, so you only have to match changes you make in your Premiere sequence.
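
For the curious, rotating the sound field itself is simple for first-order ambisonics: a yaw change only mixes the X and Y channels, while W (omni) and Z (height) pass through untouched. Below is a minimal sketch assuming traditional B-format WXYZ channel ordering; the ordering and sign conventions are my assumptions, and Premiere's Panner-Ambisonics effect handles this for you internally.

import numpy as np

def rotate_bformat_yaw(w, x, y, z, yaw_deg):
    # First-order B-format: W = omni, X = front/back, Y = left/right, Z = up/down.
    # A yaw rotation mixes X and Y only (sign depends on the rotation direction).
    theta = np.radians(yaw_deg)
    x_rot = np.cos(theta) * x - np.sin(theta) * y
    y_rot = np.sin(theta) * x + np.cos(theta) * y
    return w, x_rot, y_rot, z

# w, x, y, z would be the four channels of the ambisonic WAV exported from
# Fusion Studio, loaded as NumPy arrays (e.g. via soundfile.read).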

While I could have intercut the footage at both settings together into a single 5Kp60 timeline, I ended up creating two separate 360 videos. This also makes it clear to the viewer which shots were 5Kp30 and which were recorded at 3Kp60. They are both available on YouTube, and I recommend watching them in VR for the full effect. But be warned that they were recorded at heights of up to 80 feet, so it may be uncomfortable for some people to watch.

Summing Up
GoPro’s Fusion camera is not the first 360 camera on the market, but it brings more pixels and higher frame rates than most of its direct competitors, and more importantly it has the software package to assist users in the transition to processing 360 video footage. It also supports ambisonic audio and offers the OverCapture functionality for generating more traditional flat GoPro content.

I found it to be easier to mount and shoot with than my earlier 360 camera experiences, and it is far easier to get the footage ready to edit and view using GoPro’s Fusion Studio program. The Stabilize feature totally changes how I shoot 360 videos, giving me much more flexibility in rotating the camera during movements. And most importantly, I am much happier with the resulting footage that I get when shooting with it.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Behind the Title: Start VR Producer Ela Topcuoglu

NAME: Ela Topcuoglu

COMPANY: Start VR (@Start_VR)

CAN YOU DESCRIBE YOUR COMPANY?
Start VR is a full-service production studio (with offices in Sydney, Australia and Marina Del Rey, California) specializing in immersive and interactive cinematic entertainment. The studio brings together expertise in entertainment and technology, pairing feature-film-quality visuals with interactive content to create original and branded narrative experiences in VR.

WHAT’S YOUR JOB TITLE?
Development Executive and Producer

WHAT DOES THAT ENTAIL?
I am in charge of expanding Start VR’s business in North America. That entails developing strategic partnerships and increasing business development in the entertainment, film and technology sectors.

I am also responsible for finding partners for our original content slate as well as seeking existing IP that would fit perfectly in VR. I also develop relationships with brands and advertising agencies to create branded content. Beyond business development, I also help produce the projects that we move forward with.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The title comes with the responsibility of convincing people to invest in something that is constantly evolving, which is the biggest challenge. My job also requires me to be very creative in coming up with a native language to this new medium. I have to wear many hats to ensure that we create the best experiences out there.

WHAT’S YOUR FAVORITE PART OF THE JOB?
My favorite part of the job is that I get to wear lots of different hats. Being in the emerging field of VR, every day is different. I don’t have a traditional 9-to-5 office job and I am constantly moving and hustling to set up business meetings and stay updated on the latest industry trends.

Also, being in the ever-evolving technology field, I learn something new almost every day, which is extremely essential to my professional growth.

WHAT’S YOUR LEAST FAVORITE?
Convincing people to invest in virtual reality and to see its incredible potential. That usually changes once they experience truly immersive VR, but regardless, selling the future is difficult.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
My favorite part of the day is the morning. I start my day with a much-needed shot of Nespresso, get caught up on emails, take a look at my schedule and take a quick breather before I jump right into the madness.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
If I wasn’t working in VR, I would be investing my time in learning more about artificial intelligence (AI) and using that to advance medicine/health and education.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I loved entertaining people from a very young age, and I was always looking for an outlet to do that, so the entertainment business was the perfect fit. There is nothing like watching someone’s reaction to a great piece of content. Virtual reality is the ultimate entertainment outlet and I knew that I wanted to create experiences that left people with the same awe reaction that I had the moment I experienced it.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I worked and assisted in the business and legal affairs department at Media Rights Capital and had the opportunity to work on amazing TV projects, including House of Cards, Baby Driver and Ozark.

Awake: First Contact

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
The project that I am most proud of to date is the project that I am currently producing at Start VR. It’s called Awake: First Contact. It was a project I read about and said, “I want to work on that.”

I am incredibly proud that I get to work on a virtual reality project that is pushing the boundaries of the medium both technically and creatively.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My phone, laptop and speakers.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Twitter, Facebook and LinkedIn

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Yes, especially if I’m working on a pitch deck. It really keeps me in the moment. I usually listen to my favorite DJ mixes on Soundcloud. It really depends on my vibe that day.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I have recently started surfing, so that is my outlet at the moment. I also meditate regularly. It’s also important for me to make sure that I am always learning something new and unrelated to my industry.