Category Archives: VR

Rogue takes us on VR/360 tour of Supermodel Closets

Rogue is a NYC-based creative boutique that specializes in high-end production and post for film, advertising and digital. Since its founding two years ago, executive creative director Alex MacLean and his team have produced a large body of work, providing color grading, finishing and visual effects for clients such as HBO, Vogue, Google, Vice, Fader and more. For the past three years, MacLean has also been at the forefront of VR/360 content for narratives and advertising.

MacLean recently wrapped up post production on four five-minute episodes of 360-degree tours of Supermodel Closets. The series is a project of Conde Nast Entertainment and Vogue for Vogue’s 125th anniversary. If you’re into fashion, this VR tour gives you a glimpse at what supermodels wear in their daily lives. Viewers can look up, down and all around to feel immersed in the closet of each model as she shows her favorite fashions and shares the stories behind her most prized pieces.

 

Tours include the closets of Lily Aldridge, Cindy Crawford, Kendall Jenner and Amber Valletta.

MacLean worked with director Julina Tatlock, who is a co-founder and CEO of 30 Ninjas, a digital entertainment company that develops, writes and produces VR, multi-platform and interactive content. Rogue and 30 Ninjas worked together to determine the best workflow for the series. “I always think it’s best practice to collaborate with the directors, DPs and/or production companies in advance of a VR shoot to sort out any technical issues and pre-plan the most efficient production process from shoot to edit, stitching through all the steps of post-production,” reports MacLean. “Foresight is everything; it saves a lot of time, money, and frustration for everyone, especially when working in VR, as well as 3D.”

According to MacLean, they worked with a new camera format, the YI Halo camera, which is designed for professional VR data acquisition. “I often turn to the Assimilate team to discuss the format issues because they always support the latest camera formats in their Scratch VR tools. This worked well again because I needed to define an efficient VR and 3D workflow that would accommodate the conforming, color grading, creating of visual effects and the finishing of a massive amount of data at 6.7K x 6.7K resolution.”

 

The Post
“The post production process began by downloading 30 Ninjas’ editorial, stitched footage from the cloud to ingest into our MacBook Pro workstations to do the conform at 6K x 6K,” explains MacLean. “Organized data management is a critical step in our workflow, and Scratch VR is a champ at that. We were simultaneously doing the post for more than one episode, as well as other projects within the studio, so data efficiency is key.”

“We then moved the conformed 6.7K x 6.7K raw footage to our HP Z840 workstations to do the color grading, visual effects, compositing and finishing. You really need powerful workstations when working at this resolution and with this much data,” reports MacLean. “Spherical VR/360 imagery requires focused concentration, and then we’re basically doing everything twice when working in 3D. For these episodes, and for all VR/360 projects, we create a lat/long that breaks out the left eye and right eye into two spherical images. We then replicate the work from one eye to the next, and color correct any variances. The result is seamless color grading.
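To picture the lat/long step MacLean describes, here is a minimal sketch, not Rogue's Scratch VR setup, that splits a single over/under stereo equirectangular frame into separate left-eye and right-eye spherical images using Python and Pillow; the over/under layout and the filenames are assumptions for illustration.

```python
# Minimal sketch (not Rogue's Scratch VR workflow): split an over/under stereo
# equirectangular frame into separate left-eye and right-eye lat/long images,
# assuming the left eye occupies the top half of the frame.
from PIL import Image

def split_stereo_latlong(path_in, path_left, path_right):
    frame = Image.open(path_in)
    w, h = frame.size
    left_eye = frame.crop((0, 0, w, h // 2))    # top half of the frame
    right_eye = frame.crop((0, h // 2, w, h))   # bottom half of the frame
    left_eye.save(path_left)
    right_eye.save(path_right)

# Hypothetical filenames for illustration only.
split_stereo_latlong("stereo_frame.png", "left_eye.png", "right_eye.png")
```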

 

“We’re essentially using the headset as a creative tool with Scratch VR, because we can work in realtime in an immersive environment and see the exact results of work in each step of the post process,” he continues. “This is especially useful when doing any additional compositing, such as clean-up for artifacts that may have been missed or adding or subtracting data. Working in realtime eases the stress and time of doing a new composite of 360 data for the left eye and right eye 3D.”

Playback of content in the studio is very important to MacLean and team, and he calls the choice of multiple headsets another piece to the VR/360 puzzle. “The VR/3D content can look different in each headset so we need to determine a mid-point aesthetic look that displays well in each headset. We have our own playback black box that we use to preview the color grading and visual effects, before committing to rendering. And then we do a final QC review of the content, and for these episodes we did so in Google Daydream (untethered), HTC Vive (tethered) and the Oculus Rift (tethered).”

MacLean sees rendering as one of their biggest challenges. “It’s really imperative to be diligent throughout all the internal and client reviews prior to rendering. It requires being very organized in your workflow from production through finishing, and a solid QC check. Content at 6K x 6K, VR/360 and 3D means extremely large files and numerous hours of rendering, so we want to restrict re-rendering as much as possible.”

Storage in the Studio: VFX Studios

By Karen Maierhofer

It takes talent and the right tools to generate visual effects of all kinds, whether it’s building breathtaking environments, creating amazing creatures or crafting lifelike characters cast in a major role for film, television, games or short-form projects.

Indeed, we are familiar with industry-leading content creation tools such as Autodesk’s Maya, Foundry’s Mari and more, which, when placed into the hands of creatives, result in pure digital magic. In fact, there is quite a bit of technological magic that occurs at visual effects facilities, including one kind in particular that may not have the inherent sparkle of modeling and animation tools but is just as integral to the visual effects process: storage. Storage solutions are the unsung heroes behind most projects, working behind the scenes to accommodate artists and keep their productive juices flowing.

Here we examine three VFX facilities and their use of various storage solutions and setups as they tackle projects large and small.

Framestore
Since it was founded in 1986, Framestore has placed its visual stamp on a plethora of Oscar-, Emmy- and British Academy Film Award-winning visual effects projects, including Harry Potter, Gravity and Guardians of the Galaxy. With increasingly more projects, Framestore expanded from its original UK location in London to North American locales such as Montreal, New York, Los Angeles and Chicago, handling films as well as immersive digital experiences and integrated advertisements for iconic brands, including Guinness, Geico, Coke and BMW.

Beren Lewis

As the company and its workload grew and expanded into other areas, including integrated advertising, so, too, did its storage needs. “Innovative changes, such as virtual-reality projects, brought on high demand for storage and top-tier performance,” says NYC-based Beren Lewis, CTO of advertising and applied technologies at Framestore. “The team is often required to swiftly accommodate multiple workflows, including stereoscopic 4K and VR.”

Without hesitation, Lewis believes storage is typically the most challenging aspect of technology within the VFX workflow. “If the storage isn’t working, then neither are the artists,” he points out. Furthermore, any issues with storage can potentially lead to massive financial implications for the company due to lost time and revenue.

According to Lewis, Framestore uses its storage solution — a Pixit PixStor General Parallel File System (GPFS) storage cluster using the NetApp E-Series hardware — for all its project data. This includes backups to remote co-location sites, video preprocessing, decompression, disaster recovery preparation, scalability and high performance for VFX, finishing and rendering workloads.

The studio moved all the integrated advertising teams over to the PixStor GPFS clusters this past spring. Currently, Framestore has five primary PixStor clusters using NetApp E-Series in use across its offices in London, New York, LA, Chicago and Montreal.

According to Lewis, Framestore partnered with Pixit Media and NetApp to take on increasingly complicated and resource-hungry VR projects. “This partnership has provided the global integrated advertising team with higher performance and nonstop access to data,” he says. “The Pixit Media PixStor software-defined scale-out storage solution running on NetApp E-Series systems brings fast, reliable data access for the integrated advertising division so the team can embrace performance and consistency across all five sites, take a cost-effective, simplified approach to disaster recovery and have a modular infrastructure to support multiple workflows and future expansion.”

BMW

Framestore selected its current solution after reviewing several major storage technologies. It was looking for a single namespace that was very stable, while providing great performance, but it also had to be scalable, Lewis notes. “The PixStor ticked all those boxes and provided the right balance between enterprise-grade hardware and support, and open-source standards,” he explains. “That balance allowed us to seamlessly integrate the PixStor into our network, while still maintaining many of the bespoke tools and services that we had developed in-house over the years, with minimum development time.”

In particular, the storage solution provides the required high performance so that the studio’s VFX, finishing and rendering workloads can all run “full-out with no negative effect on the finishing editors’ or graphic artists’ user experience,” Lewis says. “This is a game-changing capability for an industry that typically partitions off these three workloads to keep artists from having to halt operations. PixStor running on E-Series consolidates all three workloads onto a single IT infrastructure with streamlined end-to-end production of projects, which reduces both time to completion and operational costs, while both IT acquisition and maintenance costs are reduced.”

At Framestore, integrating storage into the workflow is simple. The first step after a project is green-lit is the establishment of a new file set on the PixStor GPFS cluster, where ingested footage and all the CG artist-generated project data will live. “The PixStor is at the heart of the integrated advertising storage workflow from start to finish,” Lewis says. Because the PixStor GPFS cluster serves as the primary storage for all integrated advertising project data, the division’s workstations, renderfarm, editing and finishing stations connect to the cluster for review, generation and storage of project content.

Prior to the move to PixStor/NetApp, Framestore had been using a number of different storage offerings. According to Lewis, they all suffered from the same issues in terms of scalability and degradation of performance under render load — and that load was getting heavier and more unpredictable with every project. “We needed a technology that scaled and allowed us to maintain a single namespace but not suffer from continuous slowdowns for artists due to renderfarm load during crunch times or project delivery.”

Geico

As Lewis explains, with the PixStor/NetApp solution, processing was running up to 270,000 IOPS (I/O operations per second), which was at least several times what Framestore’s previous infrastructure would have been able to handle in a single namespace. “Notably, the development workflow for a major theme-park ride was unhindered by all the VR preprocessing, while backups to remote co-location sites synched every two hours without compromising the artist, rendering or finishing workloads,” he says. “This provided a cost-effective, simplified approach to disaster recovery, and Framestore now has a fast, tightly integrated platform to support its expansion plans.”

To stay at the top of its game, Framestore is always reviewing new technologies, and storage is often part of that conversation. To this end, the studio plans to build on the success it has had with PixStor by expanding the storage to handle some additional editorial playback and render workloads using an all-Non-Volatile Memory Express (NVMe) flash tier. Other projects include a review of object storage technology for use as a long-term, off-premises storage target for archival data.

Without question, the industry’s visual demands are rapidly changing. Not long ago, Framestore could easily predict storage and render requirements for a typical project. But that is no longer the case, and the studio finds itself working in ever-increasing resolutions and frame rates. Whereas projects may have been as small as 3TB in the recent past, nowadays the studio regularly handles multiple projects of 300TB or larger. And the storage must be shared with other projects of varying sizes and scope.

“This new ‘unknowns’ element of our workflow puts many strains on all aspects of our pipeline, but especially the storage,” Lewis points out. “Knowing that our storage can cope with the load and can scale allows us to turn our attention to the other issues that these new types of projects bring to Framestore.”

As Lewis notes, working with high-resolution images and large renderfarms creates a unique set of challenges for any storage technology, challenges that are not seen in many other fields. VFX will often test any storage technology well beyond what other industries are capable of. “If there’s an issue or a break point, we will typically find it in spectacular fashion,” he adds.

Rising Sun Pictures
As a contributor to the design and execution of computer-generated effects on more than 100 feature films since its inception 22 years ago, Rising Sun Pictures (RSP) has pushed the technical bar many times over in film as well as television projects. Based in Adelaide, South Australia, RSP has built a top team of VFX artists who have tackled such box-office hits as Thor: Ragnarok, X-Men and Game of Thrones, as well as the Harry Potter and Hunger Games franchises.

Mark Day

Such demanding, high-level projects require demanding, high-level effects, which, in turn, demand a high-performance, reliable storage solution capable of handling varying data I/O profiles. “With more than 200 employees accessing and writing files in various formats, the need for a fast, reliable and scalable solution is paramount to business continuity,” says Mark Day, director of engineering at RSP.

Recently, RSP installed an Oracle ZS5 storage appliance to handle this important function. This high-performance, unified storage system provides NAS and SAN cloud-converged storage capabilities that enable on-premises storage to seamlessly access Oracle Public Cloud. Its advanced hardware and software architecture includes a multi-threading SMP storage operating system for running multiple workloads and advanced data services without performance degradation. The offering also caches data on DRAM or flash cache for optimal performance and efficiency, while keeping data safely stored on high-capacity SSD (solid state disk) or HDD (hard disk drive) storage.

Previously, the studio had been using a Dell EMC Isilon storage cluster with Avere caching appliances, and the company is still employing that solution for parts of its workflow.

When it came time to upgrade to handle RSP’s increased workload, the facility ran a proof of concept with multiple vendors in September 2016 and benchmarked their systems. Impressed with Oracle, RSP began installation in early 2017. According to Day, RSP liked the solution’s ability to support larger packet sizes — now up to 1MB. In addition, he says its “exceptional” analytics engine gives introspection into a render job.

“It has a very appealing [total cost of ownership], and it has caching right out of the box, removing the need for additional caching appliances,” says Day. Storage is at the center of RSP’s workflow, storing all the relevant information for every department — from live-action plates that are turned over from clients, scene setup files and multi-terabyte cache files to iterations of the final product. “All employees work off this storage, and it needs to accommodate the needs of multiple projects and deadlines with zero downtime,” Day adds.

Machine Room

“Visual effects scenes are getting more complex, and in turn, data sizes are increasing. Working in 4K quadruples file sizes and, therefore, impacts storage performance,” explains Day. “We needed a solution that could cope with these requirements and future trends in the industry.”

According to Day, the data RSP deals with is broad, from small setup files to terabyte geocache files. A one-minute 2K DPX sequence is 17GB for the final pass, while 4K is 68GB. “Keep in mind this is only the final pass; a single shot could include hundreds of passes for a heavy computer-generated sequence,” he points out.
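Those figures check out if you assume a 10-bit DPX frame packed into 32-bit words (4 bytes per pixel) at 24fps: a 2K (2048x1556) frame is roughly 12.7MB, which works out to about 17GiB per minute, with 4K at four times that. The quick sketch below runs the arithmetic; the exact frame dimensions and the 24fps rate are assumptions for illustration.

```python
# Rough sanity check of the figures Day quotes, assuming 10-bit DPX frames
# packed into 32-bit words (4 bytes per pixel) at 24fps; GiB = 2**30 bytes.
def dpx_minute_gib(width, height, fps=24, bytes_per_pixel=4, seconds=60):
    frame_bytes = width * height * bytes_per_pixel
    return frame_bytes * fps * seconds / 2**30

print(f"2K (2048x1556): {dpx_minute_gib(2048, 1556):.1f} GiB per minute")  # ~17
print(f"4K (4096x3112): {dpx_minute_gib(4096, 3112):.1f} GiB per minute")  # ~68
```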

Thus, high-performance storage is important to the effective operation of a visual effects company like RSP. In fact, storage helps the artists stay on the creative edge by enabling them to iterate through the creative process of crafting a shot and a look. “Artists are required to iterate their creative process many times to perfect the look of a shot, and if they experience slowdowns when loading scenes, this can have a dramatic effect on how many iterations they can produce. And in turn, this affects employees’ efficiency and, ultimately, the profitability of the company,” says Day.

Thor: Ragnarok

Most recently, RSP used its new storage solution for work on the blockbuster Thor: Ragnarok, in particular for the Val’s Flashback sequence — which was extremely complex and involved extensive lighting and texture data, as well as high-frame-rate plates (sometimes more than 1,000fps across multiple live-action footage plates). “Before our storage refresh, early versions of this shot could take up to 24 hours to render on our server farm. But since installing our new storage, we saw this drastically reduced to six hours — that’s a 4x improvement, which is a fantastic outcome,” says Day.

Outpost VFX
A full-service VFX studio for film, broadcast and commercials, Outpost VFX, based in Bournemouth, England, has been operational since late 2012. Since that time, the facility has been growing by leaps and bounds, taking on major projects, including Life, Nocturnal Animals, Jason Bourne and 47 Meters Down.

Paul Francis

Due to this fairly rapid expansion, Outpost VFX has seen the need for increased storage capacity. “As the company grows and as resolution increases and HDR comes in, file sizes increase, and we need much more capacity to deal with that effectively,” says CTO Paul Francis.

When setting up the facility five years ago, the decision was made to go with PixStor from Pixit Media and Synology’s NAS for its storage solution. “It’s an industry-recognized solution that is extremely resilient to errors. It’s fast, robust and the team at Pixit provides excellent support, which is important to us,” says Francis.

Foremost, the solution had to provide high capacity and high speeds. “We need lots of simultaneous connections to avoid bottlenecks and ensure speedy delivery of data,” Francis adds. “This is the only one we’ve used, really. It has proved to be stable enough to support us through our growth over the last couple of years — growth that has included a physical office move and an increase in artist capacity to 80 seats.”

Outpost VFX mainly works with image data and project files for use with Autodesk’s Maya, Foundry’s Nuke, Side Effects’ Houdini and other VFX and animation tools. The challenge this presents is twofold: the sheer size of large files, and the problems the group can face with masses of small files, such as metadata. Francis explains: “Sequentially loading small files can be time-consuming due to the current technology, so moving to something that can handle both of these areas will be of great benefit to us.”

Locally, artists use a mix of HDDs from a number of different manufacturers to store reference imagery and so forth — older-generation PCs have mostly Western Digital HDDs while newer PCs have generic SSDs. When replacing or upgrading equipment, Outpost VFX uses Samsung 900 Series SSDs, depending on the required performance and current market prices.

Life

Like many facilities, Outpost VFX is always weighing its options when it comes to finding the best solution for its current and future needs. Presently, it is looking at splitting up some of its storage solutions into smaller segments for greater resilience. “When you only have one storage solution and it fails, everything goes down. We’re looking to break our setup into smaller, faster solutions,” says Francis.

Additionally, security is a concern for Outpost VFX when it comes to its clients. According to Francis, certain shows need to be annexed, meaning the studio will need a separate storage solution outside of its main network to handle that data.

When Outpost VFX begins a job, the group ingests all the plates it needs to work on, and they reside in a new job folder created by production and assigned to a specific drive for active jobs. This folder then becomes the go-to for all assets, elements and shot iterations created throughout the production. For security purposes, these areas of the server are only visible to and accessible by artists, who in turn cannot access the Internet; this ensures that the files are “watertight and immune to leaks,” says Francis, adding that with PixStor, the studio is able to set up different partitions for different areas that artists can jump between easily.
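As a generic illustration only — this is not Outpost VFX's actual layout, and the mount point and folder names below are hypothetical — a job-ingest step like the one described above often boils down to a small script that creates the job folder and its subfolders on the active-jobs drive.

```python
# Hypothetical illustration of a job-ingest folder setup; the mount point and
# subfolder names are assumptions, not Outpost VFX's real structure.
from pathlib import Path

ACTIVE_JOBS = Path("/mnt/active_jobs")   # assumed mount point for illustration

def create_job(job_name):
    job_root = ACTIVE_JOBS / job_name
    # Per-discipline subfolders for plates, assets, elements and shot iterations.
    for sub in ("plates", "assets", "elements", "shots", "deliveries"):
        (job_root / sub).mkdir(parents=True, exist_ok=True)
    return job_root

create_job("example_show_001")
```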

How important is storage to Outpost VFX? “Frankly, there’d be no operation without storage!” Francis says emphatically. “We deal with hundreds of terabytes of data in visual effects, so having high-capacity, reliable storage available to us at all times is absolutely essential to ensure a smooth and successful operation.”

47 Meters Down

Because the studio delivers visual effects across film, TV and commercials simultaneously, storage is an important factor no matter what the crew is working on. A recent film project like 47 Meters Down required the full gamut of visual effects work, as Outpost VFX was the sole vendor for the project. So, the studio needed the space and responsiveness of a storage system that enabled them to deliver more than 420 shots, a number of which featured heavy 3D builds and multiple layers of render elements.

“We had only about 30 artists at that point, so having a stable solution that was easy for our team to navigate and use was crucial,” Francis points out.

Main Image: From Outpost VFX’s Domestos commercial out of agency MullenLowe London.


Review: GoPro Fusion 360 camera

By Mike McCarthy

I finally got the opportunity to try out the GoPro Fusion camera I have had my eye on since the company first revealed it in April. The $700 camera uses two offset fish-eye lenses to shoot 360 video and stills, while recording ambisonic audio from four microphones in the waterproof unit. It can shoot a 5K video sphere at 30fps, or a 3K sphere at 60fps for higher motion content at reduced resolution. It records dual 190-degree fish-eye perspectives encoded in H.264 to separate MicroSD cards, with four tracks of audio. The rest of the magic comes in the form of GoPro’s newest application Fusion Studio.

Internally, the unit is recording dual 45Mb/s H.264 files to two separate MicroSD cards, with accompanying audio and metadata assets. This would be a logistical challenge to deal with manually: copying the cards into folders, sorting and syncing them, stitching them together and dealing with the audio. But with GoPro’s new Fusion Studio app, most of this is taken care of for you. Simply plug in the camera and it will automatically access the footage, and let you preview and select what parts of which clips you want processed into stitched 360 footage or flattened video files.

It also processes the multi-channel audio into ambisonic B-Format tracks, or standard stereo if desired. The app is a bit limited in user-control functionality, but what it does do it does very well. My main complaint is that I can’t find a way to manually set the output filename, but I can rename the exports in Windows once they have been rendered. Trying to process the same source file into multiple outputs is challenging for the same reason.

Setting    Recorded Resolution (Per Lens)    Processed Resolution (Equirectangular)
5Kp30      2704×2624                         4992×2496
3Kp60      1568×1504                         2880×1440
Stills     3104×3000                         5760×2880

With the Samsung Gear 360, I researched five different ways to stitch the footage, because I wasn’t satisfied with the included app. Most of those will also work with Fusion footage, and you can read about those options here, but they aren’t really necessary when you have Fusion Studio.

You can choose between H.264, Cineform or ProRes, your equirectangular output resolution and ambisonic or stereo audio. That gives you pretty much every option you should need to process your footage. There is also a “Beta” option to stabilize your footage, which, once I got used to it, I really liked. It should be thought of more as a “remove rotation” option since it’s not for stabilizing out sharp motions — which still leave motion blur — but for maintaining the viewer’s perspective even if the camera rotates in unexpected ways. Processing was about 6x run-time on my Lenovo Thinkpad P71 laptop, so a 10-minute clip would take an hour to stitch to 360.

The footage itself looks good, higher quality than my Gear 360, and the 60p stuff is much smoother, which is to be expected. While good VR experiences require 90fps to be rendered to the display to avoid motion sickness, that does not necessarily mean that 30fps content is a problem. When rendering the viewer’s perspective, the same frame can be sampled three times, shifting the image as they move their head, even from a single source frame. That said, 60p source content does give smoother results than the 30p footage I am used to watching in VR, but 60p did give me more issues during editorial. I had to disable CUDA acceleration in Adobe Premiere Pro to get Transmit to work with the WMR (Windows Mixed Reality) headset.

Once you have your footage processed in Fusion Studio, it can be edited in Premiere Pro — like any other 360 footage — but the audio can be handled a bit differently. Exporting as stereo will follow the usual workflow, but selecting ambisonic will give you a special spatially aware audio file. Premiere can use this in a 4-track multi-channel sequence to line up the spatial audio with the direction you are looking in VR, and if exported correctly, YouTube can do the same thing for your viewers.

In the Trees
Most GoPro products are intended for capturing action moments and unusual situations in extreme environments (which is why they are waterproof and fairly resilient), so I wanted to study the camera in its “native habitat.” The most extreme thing I do these days is work on ropes courses, high up in trees or telephone poles. So I took the camera out to a ropes course that I help out with, curious to see how the recording at height would translate into the 360 video experience.

Ropes courses are usually challenging to photograph because of the scale involved. When you are zoomed out far enough to see the entire element, you can’t see any detail, and if you are zoomed in close enough to see faces, you have no good concept of how high up they are — 360 photography is helpful in that it is designed to be panned through when viewed flat. This allows you to give the viewer a better sense of the scale, and they can still see the details of the individual elements or people climbing. And in VR, you should have a better feel for the height involved.

I had the Fusion camera and Fusion Grip extendable tripod handle, as well as my Hero6 kit, which included an adhesive helmet mount. Since I was going to be working at heights and didn’t want to drop the camera, the first thing I did was rig up a tether system. A short piece of 2mm cord fit through a slot in the bottom of the center post, and a triple fisherman’s knot made a secure loop. The cord fit out the bottom of the tripod when it was closed, allowing me to connect it to a shock-absorbing lanyard, which was clipped to my harness. This also allowed me to dangle the camera from a cord for a free-floating perspective. I also stuck the quick release base to my climbing helmet, and was ready to go.

I shot segments in both 30p and 60p, depending on how I had the camera mounted, using higher frame rates for the more dynamic shots. I was worried that the helmet mount would be too close, since GoPro recommends keeping the Fusion at least 20cm away from what it is filming, but the helmet wasn’t too bad. Another inch or two would shrink it significantly from the camera’s perspective, similar to my tripod issue with the Gear 360.

I always climbed up with the camera mounted on my helmet and then switched it to the Fusion Grip to record the guy climbing up behind me and my rappel. Hanging the camera from a cord, even 30 feet below me, worked much better than I expected. It put GoPro’s stabilization feature to the test, but it worked fantastically. With the camera rotating freely, the perspective is static, although you can see the seam lines constantly rotating around you. When I am holding the Fusion Grip, the extended pole is completely invisible to the camera, giving you what GoPro has dubbed “Angel View.” It is as if the viewer is floating freely next to the subject, especially when viewed in VR.

Because I have ways to view 360 video in VR, and because I don’t mind panning around on a flat screen view, I am personally less excited about GoPro’s OverCapture functionality, but I recognize it is a useful feature that will greatly extend the use cases for this 360 camera. It is designed for people using the Fusion as a more flexible camera to produce flat content, instead of to produce VR content. I edited together a couple of OverCapture shots intercut with footage from my regular Hero6 to demonstrate how that would work.

Ambisonic Audio
The other new option that Fusion brings to the table is ambisonic audio. Editing ambisonics works in Premiere Pro using a 4-track multi-channel sequence. The main workflow kink here is that you have to manually override the audio settings every time you import a new clip with ambisonic audio in order to set the audio channels to Adaptive with a single timeline clip. Turn on Monitor Ambisonics by right-clicking in the monitor panel and match the Pan, Tilt and Roll in the Panner-Ambisonics effect to the values in your VR Rotate Sphere effect (note that they are listed in a different order), and your audio should match the video perspective.

When exporting an MP4 in the audio panel, set Channels to 4.0 and check the Audio is Ambisonics box. From what I can see, the Fusion Studio conversion process compensates for changes in perspective, including “stabilization” when processing the raw recorded audio for Ambisonic exports, so you only have to match changes you make in your Premiere sequence.
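If you want to verify the export before uploading, one option is a quick channel count with ffprobe. This isn't part of GoPro's or Adobe's documented workflow, just a sanity check that the MP4 actually carries a 4-channel audio stream; it assumes ffmpeg/ffprobe is installed, and the filename below is a placeholder.

```python
# Sanity check before upload (assumes ffprobe is on the PATH): confirm the
# exported MP4 carries a 4-channel audio stream for ambisonic B-format.
import subprocess

def audio_channels(path):
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "a:0",
         "-show_entries", "stream=channels", "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

print(audio_channels("fusion_export.mp4"))  # expect 4 for ambisonic exports
```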

While I could have intercut the footage at both settings together into a 5Kp60 timeline, I ended up creating two separate 360 videos. This also makes it clear to the viewer which shots were 5Kp30 and which were recorded at 3Kp60. They are both available on YouTube, and I recommend watching them in VR for the full effect. But be warned that they are recorded at heights of up to 80 feet, so it may be uncomfortable for some people to watch.

Summing Up
GoPro’s Fusion camera is not the first 360 camera on the market, but it brings more pixels and higher frame rates than most of its direct competitors, and more importantly it has the software package to assist users in the transition to processing 360 video footage. It also supports ambisonic audio and offers the OverCapture functionality for generating more traditional flat GoPro content.

I found it to be easier to mount and shoot with than my earlier 360 camera experiences, and it is far easier to get the footage ready to edit and view using GoPro’s Fusion Studio program. The Stabilize feature totally changes how I shoot 360 videos, giving me much more flexibility in rotating the camera during movements. And most importantly, I am much happier with the resulting footage that I get when shooting with it.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.


Behind the Title: Start VR Producer Ela Topcuoglu

NAME: Ela Topcuoglu

COMPANY: Start VR (@Start_VR)

CAN YOU DESCRIBE YOUR COMPANY?
Start VR is a full-service production studio (with offices in Sydney, Australia and Marina Del Rey, California) specializing in immersive and interactive cinematic entertainment. The studio brings together expertise in entertainment and technology, combining feature-film-quality visuals with interactive content to create original and branded narrative experiences in VR.

WHAT’S YOUR JOB TITLE?
Development Executive and Producer

WHAT DOES THAT ENTAIL?
I am in charge of expanding Start VR’s business in North America. That entails developing strategic partnerships and increasing business development in the entertainment, film and technology sectors.

I am also responsible for finding partners for our original content slate as well as seeking existing IP that would fit perfectly in VR. I also develop relationships with brands and advertising agencies to create branded content. Beyond business development, I also help produce the projects that we move forward with.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The title comes with the responsibility of convincing people to invest in something that is constantly evolving, which is the biggest challenge. My job also requires me to be very creative in coming up with a language native to this new medium. I have to wear many hats to ensure that we create the best experiences out there.

WHAT’S YOUR FAVORITE PART OF THE JOB?
My favorite part of the job is that I get to wear lots of different hats. Being in the emerging field of VR, every day is different. I don’t have a traditional 9-to-5 office job and I am constantly moving and hustling to set up business meetings and stay updated on the latest industry trends.

Also, being in the ever-evolving technology field, I learn something new almost every day, which is extremely essential to my professional growth.

WHAT’S YOUR LEAST FAVORITE?
Convincing people to invest in virtual reality and to see its incredible potential. That usually changes once they experience truly immersive VR, but regardless, selling the future is difficult.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
My favorite part of the day is the morning. I start my day with a much-needed shot of Nespresso, get caught up on emails, take a look at my schedule and take a quick breather before I jump right into the madness.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
If I wasn’t working in VR, I would be investing my time in learning more about artificial intelligence (AI) and using that to advance medicine/health and education.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I loved entertaining people from a very young age, and I was always looking for an outlet to do that, so the entertainment business was the perfect fit. There is nothing like watching someone’s reaction to a great piece of content. Virtual reality is the ultimate entertainment outlet and I knew that I wanted to create experiences that left people with the same awe reaction that I had the moment I experienced it.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I worked and assisted in the business and legal affairs department at Media Rights Capital and had the opportunity to work on amazing projects, including House of Cards, Baby Driver and Ozark.

Awake: First Contact

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
The project that I am most proud of to date is the project that I am currently producing at Start VR. It’s called Awake: First Contact. It was a project I read about and said, “I want to work on that.”

I am incredibly proud that I get to work on a virtual reality project that is pushing the boundaries of the medium both technically and creatively.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My phone, laptop and speakers.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Twitter, Facebook and LinkedIn

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Yes, especially if I’m working on a pitch deck. It really keeps me in the moment. I usually listen to my favorite DJ mixes on Soundcloud. It really depends on my vibe that day.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I have recently started surfing, so that is my outlet at the moment. I also meditate regularly. It’s also important for me to make sure that I am always learning something new and unrelated to my industry.


A Closer Look: VR solutions for production and post

By Alexandre Regeffe

Back in September, I traveled to Amsterdam to check out new tools relating to VR and 360 production and post. As a producer based in Paris, France, I have been working in the virtual reality part of the business for over two years. While IBC took place in September, the information I have to share is still quite relevant.

KanDao

I saw some very cool technology at the show regarding VR and 360 video, especially within the cinematic VR niche. And niche is the perfect word — I see the market slightly narrowing after the wave of hype that happened a couple of years ago. Personally, I don’t think the public has been reached yet, but pardon my French pessimism. Let’s take a look…

Cameras
One new range of products I found amazing was the Obsidian line of cameras from manufacturer Kandao. This Chinese brand has a smart product line with its 3D/360 cameras. Starting with the Obsidian Go, they reach pro cinematic levels with the Obsidian R (for resolution, which is 8K per eye) and the Obsidian S (for speed, which can capture at 120fps). The cameras offer a small radial form factor with only six lenses to produce very smooth stereoscopy, and a very high resolution per eye, which is one of the keys to reaching a good feeling of immersion in an HMD.

Kandao’s features are promising, including handling 6DoF with depth map generation. To me, this is the future of cinematic VR production — you will be able to have more freedom as the viewer, slightly translating your point of view to see behind objects with natural parallax distortion in realtime! Let me call it “extended” stereoscopic 360.

I can’t speak about professional 360 cameras without also mentioning the Ozo from Nokia. Considered by users to be the first pro VR camera, the Ozo+ version launched this year with a new ISP and offers astonishing new features, especially when you transfer your shots into the Ozo Creator tool, which is in version 2.1.

Nokia Ozo+

Powerful tools, like highlight and shadow recovery, haze removal, auto stabilization and better denoising, are there to improve the overall image quality. Another big thing at the Nokia booth was version 2.0 of the Ozo Live system. Yes, you can now webcast your live event in stereoscopic 360 with a 4K-per-eye resolution! And you can simply use a (boosted) laptop to do it! All the VR tools from Nokia are part of what they call Ozo Reality, an integrated ecosystem where you can create, deliver and experience cinematic VR.

VR Post
When you talk about VR post you have to talk about stitching — assembling all sources to obtain a 360 image. As a French-educated man, you know I have to complain somehow: I hate stitching. And I often yell at these guys who shoot at wrong camera positions. Spending hours (and money) dealing with seam lines is not my tasse de thé.

A few months before IBC, I found my saving grace: Mistika VR from SGO. Well known for their color grading tool Mistika Ultima (which is one of the finest in stereoscopic work), SGO launched a stitching tool for 360 video. Fantastic results. Fantastic development team.

In this very intuitive tool, you can stitch sources from almost all existing cameras and rigs available on the market now, from the Samsung Gear 360 to Jaunt. With amazing optical flow algorithms, seam line fine adjustments, color matching and many other features, it is to me by far the best tool for outputting a clean, seamless equirectangular image. And the upcoming Mistika VR 3D for stitching stereoscopic sources is very promising. You know what? Thanks to Mistika VR, the stitching process could be fun. Even for me.

In general, optical flow is a huge improvement for stitching, and we can find this parameter in the Kandao Studio stitching tool (designed only for Obsidian cameras), for instance. When you’re happy with your stitch, you can then edit, color grade and maybe add VFX and interactivity in order to bring a really good experience to viewers.

Immersive video within Adobe Premiere.

Today, Adobe CC leads the editing scene with its specific 360 tools, such as its contextual viewer. But the big hit was when Adobe acquired the Skybox plugin suite from Mettle, which will be integrated natively in the next Adobe CC version (for Premiere and After Effects).

With this set of tools you can easily manipulate your equirectangular sources, do tripod removal, sky replacements and all the invisible effects that were tricky to do without Skybox. You can then add contextual 360 effects like text, blur, transitions, greenscreen and much more, in monoscopic and even stereoscopic mode. All this while viewing your timeline directly in your Oculus Rift and in realtime! And, incredibly, it works — I use these tools all day long.

So let’s talk about the Mettle team. Created by two artists back in 1992, they joined the VR movement three years ago with the Skybox suite. They understood they had to bring tech to creative people. As a result, they made smart tools with very well-designed GUIs. For instance, look at Mettle’s new Mantra creative toolset for After Effects and Premiere. It is incredible to work with because you get the power to create very artistic designs in 360 in Adobe CC. And if you’re a solid VFX tech, wait for their Volumatrix depth-related VR FX software tools. Working in collaboration with Facebook, Mettle will launch the next big tool for doing VFX in 3D/360 environments using camera-generated depth maps. It will open awesome new possibilities for content creators.

You know, the current main issue in cinematic 360 is image quality. Of course, we could talk about resolution or pixels per eye, but I think we should focus on color grading. This task is very creative — bringing emotions to the viewers. For me, the best 360 color grading tool for achieving these goals with uncompromised quality is Scratch VR from Assimilate. Beautiful. Formidable. Scratch is a very powerful color grading system, always on top in terms of technology. Now that they’ve added VR capabilities, you can color grade your stereoscopic equirectangular sources as easily as normal sources. My favorite is the mask repeater function, so you can naturally handle masks even across the back seam, which is almost impossible in other color grading tools. And you can also view your results directly in your HMD.

Scratch VR and ZCam collaboration.

At NAB 2017, they introduced Scratch VR Z, an integrated workflow in collaboration with ZCam, the manufacturer of the S1 and S1 Pro. In this workflow you can, for instance, stitch sources directly into Scratch and do super high-quality color grading with realtime live streaming, along with logo insertion, greenscreen capabilities, layouts, etc. Crazy. For finishing, the Scratch VR output module is also very useful, enabling you to render your result in ProRes even on Windows, or in 10-bit H.264, among many other formats.

Finishing and Distribution
So your cinematic VR experience is finished (you’ll notice I’ve skipped the sound part of the process, but since it’s not the part I work on I will not speak about this essential stage). But maybe you want to add some interactivity for a better user experience?

I visited IBC’s Future Zone to talk with the Liquid Cinema team. What is it? Simply, it’s a set of tools enabling you to enhance your cinematic VR experience. One important word is storytelling — with Liquid Cinema you can add an interactive layer to your story. The first tool needed is the authoring application, where you drop in your sources, which can be movies, stills, 360 and 2D stuff. Then create and enjoy.

For example, you can add graphic layers and enable the viewer’s gaze function, create multibranching scenarios based on intelligent timelines, and play with forced perspective features so your viewer never misses an important thing… you must try it.

The second part of the suite is about VR distribution. As a content creator you want your experience to be on all existing platforms, HMDs, channels … not an easy feat, but with Liquid Cinema it’s possible. Their player is compatible with Samsung Gear VR, Oculus Rift, HTC Vive, iOS, Android, Daydream and more. It’s coming to Apple TV soon.

IglooVision

The third part of the suite is the management of your content. Liquid Cinema has a CMS tool, which is very simple and allows changes, like geoblocking, to be made easily, and provides useful analytics tools like heat maps. And you can use your Vimeo Pro account as a CDN if needed. Perfect.

Also in the Future Zone was the igloo from IglooVision. This is one of the best “social” ways to experience cinematic VR that I have ever seen. Enter this room with your friends and you can watch 360 all around and finish your drink (try this with an HMD). Comfortable, isn’t it? You can also use it as a “shared VR production suite” by connecting Adobe Premiere or your favorite tool directly to the system. Boom. You now have an immersive 360-degree monitor around you and your post production team.

So that was my journey into the VR stuff of IBC 2017. Of course, this is a non-exhaustive list of tools, with nothing about sound (which is very important in VR), but it’s my personal choice. Period.

One last thing: VR people. I have met a lot of enthusiastic, smart, interesting and happy women and men, helping content producers like me to push their creative limits. So thanks to all of them and see ya.


Paris-based Alexandre Regeffe is a 25-year veteran of TV and film. He is currently VR post production manager at Neotopy, a VR studio, as well as a VR effects specialist working on After Effects and the entire Adobe suite. His specialty is cinematic VR post workflows.


Sonic Union adds Bryant Park studio targeting immersive, broadcast work

New York audio house Sonic Union has launched a new studio and creative lab. The uptown location, which overlooks Bryant Park, will focus on emerging spatial and interactive audio work, as well as continued work with broadcast clients. The expansion is led by principal mix engineer/sound designer Joe O’Connell, now partnered with original Sonic Union founders/mix engineers Michael Marinelli and Steve Rosen and their staff, who will work out of both its Union Square and Bryant Park locations. O’Connell helmed sound company Blast as co-founder, and has now teamed up with Sonic Union.

In other staffing news, mix engineer Owen Shearer advances to also serve as technical director, with an emphasis on VR and immersive audio. Former Blast EP Carolyn Mandlavitz has joined as Sonic Union Bryant Park studio director. Executive creative producer Halle Petro, formerly senior producer at Nylon Studios, will support both locations.

The new studio, which features three Dolby Atmos rooms, was created and developed by Ilan Ohayon of IOAD (Architect of Record), with architectural design by Raya Ani of RAW-NYC. Ani also designed Sonic’s Union Square studio.

“We’re installing over 30 of the new ‘active’ JBL System 7 speakers,” reports O’Connell. “Our order includes some of the first of these amazing self-powered speakers. JBL flew a technician from Indianapolis to personally inspect each one on site to ensure it will perform as intended for our launch. Additionally, we created our own proprietary mounting hardware for the installation as JBL is still in development with their own. We’ll also be running the latest release of Pro Tools (12.8) featuring tools for Dolby Atmos and other immersive applications. These types of installations really are not easy as retrofits. We have been able to do something really unique, flexible and highly functional by building from scratch.”

Working as one team across two locations, this emerging creative audio production arm will also include a roster of talent outside of the core staff engineering roles. The team will now be integrated to handle non-traditional immersive VR, AR and experiential audio planning and coding, in addition to casting, production music supervision, extended sound design and production assignments.

Main Image Caption: (L-R) Halle Petro, Steve Rosen, Owen Shearer, Joe O’Connell, Adam Barone, Carolyn Mandlavitz, Brian Goodheart, Michael Marinelli and Eugene Green.

 


Tackling VR storytelling challenges with spatial audio

By Matthew Bobb

From virtual reality experiences for brands to top film franchises, VR is making a big splash in entertainment and evolving the way creators tell stories. But, as with any medium and its production, bringing a narrative to life is no easy feat, especially when it’s immersive. VR comes with its own set of challenges unique to the platform’s capacity to completely transport viewers into another world and replicate reality.

Making high-quality immersive experiences, especially for a film franchise, is extremely challenging. Creators must place the viewer into a storyline crafted by the studios and properly guide them through the experience in a way that allows them to fully grasp the narrative. One emerging strategy is to emphasize audio — specifically, 360 spatial audio. VR offers a sense of presence no other medium today can offer. Spatial audio offers an auditory presence that augments a VR experience, amplifying its emotional effects.

My background as audio director for VR experiences includes top film franchises such as Warner Bros. and New Line Cinema’s IT: Float — A Cinematic VR Experience, The Conjuring 2 — Experience Enfield VR 360, Annabelle: Creation VR — Bee’s Room, and the upcoming Greatest Showman VR experience for 20th Century Fox. In the emerging world of VR, I have seen production teams encounter numerous challenges that call for creative solutions. For some of the most critical storytelling moments, it’s crucial for creators to understand the power of spatial audio and its potential to solve some of the most prevalent challenges that arise in VR production.

Most content creators — even some of those involved in VR filmmaking — don’t fully know what 360 spatial audio is or how its implementation within VR can elevate an experience. With any new medium, there are early adopters who are passionate about the process. As the next wave of VR filmmakers emerge, they will need to be informed about the benefits of spatial audio.

Guiding Viewers
Spatial audio is an incredible tool that helps make a VR experience feel believable. It can present sound from several locations, which allows viewers to identify their position within a virtual space in relation to the surrounding environment. With the ability to provide location-based sound from any direction and distance, spatial audio can then be used to produce directional auditory cues that grasp the viewer’s attention and coerce them to look in a certain direction.

VR is still unfamiliar territory for a lot of people, and the viewing process isn’t as straightforward as a 2D film or game, so dropping viewers into an experience can leave them feeling lost and overwhelmed. Inexperienced viewers are also more apprehensive and rarely move around or turn their heads while in a headset. Spatial audio cues prompting them to move or look in a specific direction are critical, steering them to instinctively react and move naturally. On Annabelle: Creation VR — Bee’s Room, viewers go into the experience knowing it’s from the horror genre and may be hesitant to look around. We strategically used audio cues, such as footsteps, slamming doors and a record player that mysteriously turns on and off, to encourage viewers to turn their head toward the sound and the chilling visuals that await.

Lacking Footage
Spatial audio can also be a solution for challenging scene transitions, or when there is a dearth of visuals to work with in a sequence. Well-crafted aural cues can paint a picture in a viewer’s mind without bombarding the experience with visuals that are often unnecessary.

A big challenge when creating VR experiences for beloved film franchises is the need for the VR production team to work in tandem with the film’s production team, making recording time extremely limited. When working on IT: Float, we were faced with the challenge of having a time constraint for shooting Pennywise the Clown. Consequently, there was not an abundance of footage of him to place in the promotional VR experience. Beyond a lack of footage, they also didn’t want to give away the notorious clown’s much-anticipated appearance before the film’s theatrical release. The solution to that production challenge was spatial audio. Pennywise’s voice was strategically used to lead the experience and guide viewers throughout the sewer tunnels, heightening the suspense while also providing the illusion that he was surrounding the viewer.

Avoiding Visual Overkill
Similar to film and video games, sound is half of the experience in VR. With the unique perspective the medium offers, creators no longer have to fully rely on a visually-heavy narrative, which can overwhelm the viewer. Instead, audio can take on a bigger role in the production process and make the project a well-rounded sensory experience. In VR, it’s important for creators to leverage sensory stimulation beyond visuals to guide viewers through a story and authentically replicate reality.

As VR storytellers, we are reimagining ways to immerse viewers in new worlds. It is crucial for us to leverage the power of audio to smooth out bumps in the road and deliver a vivid sense of physical presence unique to this medium.


Matthew Bobb is the CEO of the full-service audio company Spacewalk Sound. He is a spatial audio expert whose work can be seen in top VR experiences for major film franchises.


Editing 360 Video in VR (Part 2)

By Mike McCarthy

In the last article I wrote on this topic, I looked at the options for shooting 360-degree video footage, and what it takes to get footage recorded on a Gear 360 ready to review and edit on a VR-enabled system. The remaining steps in the workflow will be similar regardless of which camera you are using.

Previewing your work is important, so if you have a VR headset, you will want to make sure it is installed and functioning with your editing software. I will be basing this article on using an Oculus Rift to view my work in Adobe Premiere Pro 11.1.2 on a Thinkpad P71 with an Nvidia Quadro P5000 GPU. Premiere requires an extra set of plugins to interface with the Rift headset. Adobe acquired Mettle’s Skybox VR Player plugin back in June, and has made it available to Creative Cloud users upon request, which you can do here.

Skybox VR player

Skybox can project the Adobe UI to the Rift, as well as the output, so you could leave the headset on when making adjustments, but I have not found that to be as useful as I had hoped. Another option is to use the GoPro VR Player plugin to send the Adobe Transmit output to the Rift, which can be downloaded for free here (use the 3.0 version or above). I found this to have slightly better playback performance, but fewer options (no UI projection, for example). Adobe is expected to integrate much of this functionality into the next release of Premiere, which should remove the need for most of the current plugins and increase the overall functionality.

Once our VR editing system is ready to go, we need to look at the footage we have. In the case of the Gear 360, the dual spherical image file recorded by the camera is not directly usable in most applications and needs to be processed to generate a single equirectangular projection, stitching the images from both cameras into a single continuous view.

There are a number of ways to do this. One option is to use the application Samsung packages with the camera: Action Director 360. You can download the original version here, but will need the activation code that came with the camera in order to use it. Upon import, the software automatically processes the original stills and video into equirectangular 2:1 H.264 files. Instead of exporting from that application, I pull the temp files that it generates on media import, and use them in Premiere. (C:\Users\[Username]\Documents\CyberLink\ActionDirector\1.0\360) is where they should be located by default. While this is the simplest solution for PC users, it introduces an extra transcoding step to H.264 (after the initial H.265 recording), and I frequently encountered an issue where there was a black hexagon in the middle of the stitched image.

Action Director

Activating Automatic Angle Compensation in the Preferences->Editing panel gets around this bug, while also trying to stabilize your footage to some degree. I later discovered that Samsung had released a separate Version 2 of Action Director, available for Windows or Mac, which solves this issue. But I couldn’t get the stitched files to work directly in the Adobe apps, so I had to export them, which added yet another layer of video compression. You will need the Samsung activation code that came with the Gear 360 to use any of the versions, and both versions took about twice a clip’s run time to stitch it on my P71 laptop.
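If you do go the temp-file route instead of exporting, a few lines of script make those files easy to round up for a Premiere import. This is just a rough sketch, assuming the default CyberLink folder quoted above and that the temp clips are .mp4 files; verify both on your own system.

```python
# List the stitched temp files Action Director leaves behind, newest last, so they
# are easy to grab for a Premiere import. Rough sketch: assumes the default
# CyberLink folder quoted above and that the temp clips are .mp4 files.
from pathlib import Path

temp_dir = Path.home() / "Documents" / "CyberLink" / "ActionDirector" / "1.0" / "360"

if temp_dir.exists():
    for clip in sorted(temp_dir.glob("*.mp4"), key=lambda f: f.stat().st_mtime):
        print(f"{clip.stat().st_size / 1e9:6.2f} GB  {clip.name}")
else:
    print(f"No Action Director temp folder found at {temp_dir}")
```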

An option that gives you more control over the stitching process is to do it in After Effects. Adobe’s recent acquisition of Mettle’s SkyBox VR toolset makes this much easier, but it is still a process. Currently you have to manually request and install your copy of the plugins as a Creative Cloud subscriber. There are three separate installers, and while this stitching process only requires Skybox Suite AE, I would install both the AE and Premiere Pro versions for use in later steps, as well as the Skybox VR player if you have an HMD to preview with. Once you have them installed, you can use the Skybox Converter effect in After Effects to convert from the Gear 360’s fisheye files to the equirectangular assets that Premiere requires for editing VR.

Unfortunately, Samsung’s format is not one of the default conversions supported by the effect, so it requires a little more creativity. The two sensor images have to be cropped into separate comps, with the plugin applied to each of them. Setting the input to fisheye and the output to equirectangular for each image will give the desired distortion. A feathered mask applied to the image circle adjusts the seam, and the overlap can be tuned with the FOV and re-orient camera values.
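For those curious what the converter is doing under the hood, the math is roughly the following. This is only a sketch of an idealized equidistant fisheye-to-equirectangular remap, not Mettle’s actual code, and the function name, output size and 195-degree field of view are my own assumptions for the Gear 360.

```python
# Idealized fisheye-to-equirectangular remap for one Gear 360 lens -- a sketch of
# the underlying math, not Mettle's actual implementation. Assumes an equidistant
# lens model with the image circle centered and filling a square input frame.
import numpy as np
import cv2  # pip install opencv-python

def fisheye_to_equirect(fisheye, out_w=2048, out_h=2048, fov_deg=195.0):
    """Map one fisheye circle onto the front 180 degrees of an equirectangular
    frame (coverage beyond 180 degrees is ignored here; that overlap is where
    the feathered seam would be blended)."""
    in_h, in_w = fisheye.shape[:2]
    cx, cy = in_w / 2.0, in_h / 2.0
    radius = min(cx, cy)                    # image-circle radius in pixels
    max_theta = np.radians(fov_deg) / 2.0   # half the lens field of view

    # Longitude/latitude of every output pixel (front hemisphere only).
    lon = (np.arange(out_w) / out_w - 0.5) * np.pi       # -90 to +90 degrees
    lat = (0.5 - np.arange(out_h) / out_h) * np.pi       # +90 to -90 degrees
    lon, lat = np.meshgrid(lon, lat)

    # Unit view vectors, with the lens looking down +z.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)

    # Equidistant fisheye: radius in the source image is proportional to the
    # angle between the view ray and the optical axis.
    theta = np.arccos(np.clip(z, -1.0, 1.0))
    r = radius * theta / max_theta
    norm = np.sqrt(x * x + y * y) + 1e-9
    map_x = (cx + r * x / norm).astype(np.float32)
    map_y = (cy - r * y / norm).astype(np.float32)

    return cv2.remap(fisheye, map_x, map_y, cv2.INTER_LINEAR)
```

Real lenses never match the ideal model exactly, which is why the FOV, re-orient and mask adjustments described above are still needed to close the seam.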

Since this can be challenging to set up, I have posted an AE template that is already configured for footage from the Gear 360. The included directions should be easy to follow, and the projection, overlap and stitch can be further tweaked by adjusting the position, rotation and mask settings in the sub-comps, and the re-orientation values in the Skybox Converter effects. Hopefully, once you find the correct adjustments for your individual camera, they should remain the same for all of your footage, unless you want to mask around an object crossing the stitch boundary. More info on those types of fixes can be found here. It took me five minutes to export 60 seconds of 360 video using this approach, and there is no stabilization or other automatic image analysis.

VideoStitch Studio

Orah makes VideoStitch Studio, which is a similar stitching product but with a slightly different feature set and approach. One limitation I couldn’t find a way around is that the program expects the various fisheye source images to be in separate files, and unlike Kolor AVP, I couldn’t get the source cropping tool to work without first rendering the dual fisheye image into separate square video source files. There should be a way to avoid that step, but I couldn’t find one. (You can use the crop effect to remove 1920 pixels from one side or the other to make the conversions in Media Encoder relatively quickly.) Splitting the source file and rendering separate fisheye spheres adds a workflow step and render time, and my one-minute clip took 11 minutes to export. This is a slower option, which might be significant if you have hours of footage to process instead of minutes.
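If you’d rather script that crop-and-split step than build it in Media Encoder, something like this would do it. A sketch only: it assumes a 3840x1920 dual-fisheye source, ffmpeg on your PATH, and placeholder file names and quality settings.

```python
# Split a dual-fisheye Gear 360 file into two square fisheye movies with ffmpeg,
# instead of setting up the crop in Media Encoder. Sketch only: assumes a
# 3840x1920 source and ffmpeg on the PATH; names and quality are placeholders.
import subprocess

SRC = "gear360_dual_fisheye.mp4"

# crop=w:h:x:y -- the left image circle, then the right one.
crops = {"left_fisheye.mp4": "crop=1920:1920:0:0",
         "right_fisheye.mp4": "crop=1920:1920:1920:0"}

for out_name, crop in crops.items():
    subprocess.run(
        ["ffmpeg", "-y", "-i", SRC,
         "-vf", crop,
         "-c:v", "libx264", "-crf", "16",  # near-lossless intermediate
         "-an",                            # audio isn't needed for stitching
         out_name],
        check=True)
```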

Clearly, there are a variety of ways to get your raw footage stitched for editing. The results vary greatly between the different programs, so I made a video to compare the different stitching options on the same source clip. My first attempt was with a locked-off shot in the park, but that shot was too simple to show the differences, and it didn’t allow for comparison of the stabilization options available in some of the programs. So I shot some footage from a moving vehicle to see how well the motion and shake would be handled by the various programs. The result is now available on YouTube, fading between each of the five labeled options over the course of the minute-long clip. I would categorize this as testing how well the various applications can handle non-ideal source footage, which happens a lot in the real world.

I didn’t feel that any of the stitching options were perfect solutions, so hopefully we will see further developments in that regard in the future. You may want to explore them yourself to determine which one best meets your needs. Once your footage is correctly mapped to equirectangular projection, ideally in a 2:1 aspect ratio, and the projects are rendered and exported (I recommend Cineform or DNxHR), you are ready to edit your processed footage.

Launch Premiere Pro and import your footage as you normally would. If you are using the Skybox Player plugin, turn on Adobe Transmit with the HMD selected as the only dedicated output (in the Skybox VR configuration window, I recommend setting the hot corner to top left, to avoid accidentally hitting the start menu, desktop hide or application close buttons during preview). In the playback monitor, you may want to right click the wrench icon and select Enable VR to preview a pan-able perspective of the video, instead of the entire distorted equirectangular source frame. You can cut, trim and stack your footage as usual, and apply color corrections and other non-geometry-based effects.

In version 11.1.2 of Premiere, there is basically one VR effect (VR Projection), which allows you to rotate the video sphere along all three axes. If you have the Skybox Suite for Premiere installed, you will have some extra VR effects. The Skybox Rotate Sphere effect is basically the same. You can add titles and graphics and use the Skybox Project 2D effect to project them into the sphere where you want. Skybox also includes other effects for blurring and sharpening the spherical video, as well as denoise and glow. If you have Kolor AVP installed, that adds two new effects as well. GoPro VR Horizon is similar to the other sphere-rotation effects, but allows you to drag the image around in the monitor window to rotate it, instead of manually adjusting the axis values, so it is faster and more intuitive. The GoPro VR Reframe effect is applied to equirectangular footage to extract a flat perspective from within it. The field of view can be adjusted and rotated around all three axes.

Most of the effects are pretty easy to figure out, but Skybox Project 2D may require some experimentation to get the desired results. Avoid placing objects near the edges of the 2D frame that you apply it to, to keep them facing toward the viewer. The rotate projection values control where the object is placed relative to the viewer. The rotate source values rotate the object at the location it is projected to. Personally, I think they should be placed in the reverse order in the effects panel.

Encoding the final output is not difficult: just send it to Adobe Media Encoder using either the H.264 or H.265 format. Make sure the “Video is VR” box is checked at the bottom of the Video Settings pane, and in this case that the frame layout is set to monoscopic. There are presets for some of the common frame sizes, but I would recommend lowering the bitrates, at least if you are using Gear 360 footage. Also, if you have ambisonic audio, set the channels to 4.0 in the Audio pane.
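For reference, roughly the same delivery encode can be scripted with ffmpeg instead of AME. This is a sketch with assumed file names and an assumed bitrate, and unlike AME’s “Video is VR” checkbox it does not write any spherical metadata (see the next step).

```python
# Roughly the same monoscopic 360 delivery encode, scripted with ffmpeg instead
# of AME. Sketch only: file names and the 30Mb/s bitrate are assumptions, and
# unlike AME's "Video is VR" checkbox this writes no spherical metadata.
import subprocess

subprocess.run(
    ["ffmpeg", "-y", "-i", "edit_export.mov",
     "-c:v", "libx264", "-b:v", "30M",   # drop this further for Gear 360 footage
     "-pix_fmt", "yuv420p",
     "-c:a", "aac", "-b:a", "384k",
     "360_final.mp4"],
    check=True)
```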

Once the video is encoded, you can upload it directly to Facebook. If you want to upload to YouTube, exports from AME with the VR box checked should work fine, but for videos from other sources you will need to modify the metadata with this app here. Once your video is uploaded to YouTube, you can embed it on any webpage that supports 2D web videos. And YouTube videos can be streamed directly to your Rift headset using the free DeoVR video player.
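One widely used tool for this step is Google’s open-source Spatial Media Metadata Injector, which also has a command-line form that can be scripted; whether or not that is the exact app linked above, the idea is the same. A sketch, assuming a local checkout of the spatial-media project, its basic “-i” inject usage and placeholder file names.

```python
# Inject spherical metadata before a YouTube upload using the command-line form
# of Google's Spatial Media Metadata Injector. Sketch only: assumes it is run
# from a checkout of the spatial-media project and that the basic "-i" inject
# usage applies; file names are placeholders.
import subprocess

subprocess.run(
    ["python", "spatialmedia", "-i",
     "360_final.mp4",             # encoded file without metadata
     "360_final_injected.mp4"],   # output YouTube should recognize as 360
    check=True)
```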

That should give you a 360-video production workflow from start to finish. I will post more updated articles as new software tools are developed, and as I get new 360 cameras with which to test and experiment.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.


GoPro intros Hero6 and its first integrated 360 solution, Fusion

By Mike McCarthy

Last week, I traveled to San Francisco to attend GoPro’s launch event for its new Hero6 and Fusion cameras. The Hero6 is the next logical step in the company’s iteration of action cameras, increasing the supported frame rates to 4Kp60 and 1080p240, as well as adding integrated image stabilization. The Fusion, on the other hand, is a totally new product for them: an action-cam for 360-degree video. GoPro has developed a variety of other 360-degree video capture solutions in the past, based on rigs using many of their existing Hero cameras, but Fusion is their first integrated 360-video solution.

While the Hero6 is available immediately for $499, the Fusion is expected to ship in November for $699. Although we got to see the Fusion and its footage, most of the hands-on aspects of the launch event revolved around the Hero6. Each of the attendees was provided a Hero6 kit to record the rest of the day’s events. My group was given a ride on the RocketBoat through the San Francisco Bay. This adventure took advantage of a number of the camera’s features, including the waterproofing, the slow motion and the image stabilization.

The Hero6

The big change within the Hero6 is the inclusion of GoPro’s new custom-designed GP1 image processing chip. This allows them to process and encode higher frame rates, and allows for image stabilization at many frame-rate settings. The camera itself is physically similar to the previous generations, so all of your existing mounts and rigs will still work with it. It is an easy swap out to upgrade the Karma drone with the new camera, which also got a few software improvements. It can now automatically track the controller with the camera to keep the user in the frame while the drone is following or stationary. It can also fly a circuit of 10 waypoints for repeatable shots, and overcoming a limitation I didn’t know existed, it can now look “up.”

There were fewer precise details about the Fusion. It is stated to be able to record a 5.2K video sphere at 30fps and a 3K sphere at 60fps. This is presumably the circumference of the sphere in pixels, and therefore the width of an equirectangular output. That would lead us to conclude that the individual fisheye recording is about 2,600 pixels wide, plus a little overlap for the stitch. (In this article, GoPro’s David Newman details how the company arrives at 5.2K.)
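Working those numbers through (my own back-of-the-envelope arithmetic, not GoPro’s published specs):

```python
# Back-of-the-envelope numbers behind the 5.2K claim -- my arithmetic, not
# GoPro's published specs.
equirect_width = 5200                  # "circumference" of the sphere in pixels
equirect_height = equirect_width // 2  # a 2:1 equirectangular frame
per_lens = equirect_width // 2         # each fisheye covers roughly 180 degrees
print(equirect_height, per_lens)       # 2600 2600 -- plus some overlap per lens
```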

GoPro Fusion for 360

The sensors are slightly laterally offset from one another, allowing the camera to be thinner and decreasing the parallax shift at the side seams, but adding a slight offset at the top and bottom seams. If the camera is oriented upright, those seams are the least important areas in most shots. They also appear to have a good solution for hiding the camera support pole within the stitch, based on the demo footage they were showing. It will be interesting to see what effect the Fusion camera has on the “culture” of 360 video. It is not the first affordable 360-degree camera, but it will definitely bring 360 capture to new places.

A big part of the equation for 360 video is the supporting software and the need to get the footage from the camera to the viewer in a usable way. GoPro already acquired Kolor’s Autopano Video Pro a few years ago to support image stitching for their larger 360 video camera rigs, so certain pieces of the underlying software ecosystem to support 360-video workflow are already in place. The desktop solution for processing the 360 footage will be called Fusion Studio, and is listed as coming soon on their website.

They have a pretty slick demonstration of flat image extraction from the video sphere, which they are marketing as “OverCapture.” This allows a cellphone to pan around the 360 sphere, which is pretty standard these days, but by recording that view in realtime they can output standard flat videos from the 360 sphere. This is a much simpler and more intuitive approach to virtual cinematography than trying to control the view with angles and keyframes in a desktop app.

This workflow should result in a very fisheye-looking flat video, similar to the more traditional GoPro shots, due to the similar lens characteristics. There are a variety of possible approaches to handling the fisheye look. GoPro’s David Newman was explaining to me some of the solutions he has been working on to re-project GoPro footage into a sphere, to reframe or alter the field of view in a virtual environment. Based on their demo reel, it looks like they also have some interesting tools coming that exploit the unique functionality 360 makes available to content creators, using various 360 projections for creative purposes within a flat video.

GoPro Software
On the software front, GoPro has also been developing tools to help its camera users process and share their footage. One of the inherent issues of action-camera footage is that there is basically no trigger discipline. You hit record long before anything happens, and then get back to the camera after the event in question is over. I used to get one-hour roll-outs that had 10 seconds of usable footage within them. The same is true when recording many attempts to do something before one of them succeeds.

Remote control of the recording process has helped with this a bit, but regardless you end up with tons of extra footage that you don’t need. GoPro is working on software tools that use AI and machine learning to sort through your footage and find the best parts automatically. The next logical step is to start cutting together the best shots, which is what Quikstories in their mobile app is beginning to do. As someone who edits video for a living, and is fairly particular and precise, I have a bit of trouble with the idea of using something like that for my videos, but for someone to whom the idea of “video editing” is intimidating, this could be a good place to start. And once the tools get to a point where their output can be trusted, automatically sorting footage could make even very serious editing a bit easier when there is a lot of potential material to get through. In the meantime though, I find their desktop tool Quik to be too limiting for my needs and will continue to use Premiere to edit my GoPro footage, which is the response I believe they expect of any professional user.

There are also a variety of new camera mount options available, including small extendable tripod handles in two lengths, as well as a unique “Bite Mount” (pictured, left) for POV shots. It includes a colorful padded float in case it pops out of your mouth while shooting in the water. The tripods are extra important for the forthcoming Fusion, to support the camera with minimal obstruction of the shot. And I wouldn’t recommend using the Fusion on the Bite Mount, unless you want a lot of head in the shot.

Ease of Use
Ironically, as someone who has processed and edited hundreds of hours of GoPro footage, and even worked for GoPro for a week on paper (as an NAB demo artist for Cineform during their acquisition), I don’t think I had ever actually used a GoPro camera. The fact that at this event we were all handed new cameras with zero instructions and expected to go out and shoot is a testament to how confident GoPro is that their products are easy to use. I didn’t have any difficulty with it, but the engineer within me wanted to know the details of the settings I was adjusting. Bouncing around with water hitting you in the face is not the best environment for learning how to do new things, but I was able to use pretty much every feature the camera had to offer during that ride with no prior experience. (Obviously I have extensive experience with video, just not with GoPro usage.) And I was pretty happy with the results. Now I want to take it sailing, skiing and other such places, just like a “normal” GoPro user.

I have pieced together a quick highlight video of the various features of the Hero6:


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Making the jump to 360 Video (Part 1)

By Mike McCarthy

VR headsets have been available for over a year now, and more content is constantly being developed for them. We should expect that rate to increase as new headset models are being released from established technology companies, prompted in part by the new VR features expected in Microsoft’s next update to Windows 10. As the potential customer base increases, the software continues to mature, and the content offerings broaden. And with the advances in graphics processing technology, we are finally getting to a point where it is feasible to edit videos in VR, on a laptop.

While a full VR experience requires true 3D content in order to render a custom perspective based on the position of the viewer’s head, there is a “video” version of VR, which is called 360 Video. The difference between “Full VR” and “360 Video” is that while both allow you to look around in every direction, 360 Video is pre-recorded from a particular point, and you are limited to the view from that spot. You can’t move your head to see around behind something, like you can in true VR. But 360 video can still offer a very immersive experience and arguably better visuals, since they aren’t being rendered on the fly. 360 video can be recorded in stereoscopic or flat, depending on the capabilities of the cameras used.

Stereoscopic is obviously more immersive, less of a video dome and inherently supported by the nature of VR HMDs (Head Mounted Displays). I expect that stereoscopic content will be much more popular in 360 Video than it ever was for flat screen content. Basically the viewer is already wearing the 3D glasses, so there is no downside, besides needing twice as much source imagery to work with, similar to flat screen stereoscopic.

There are a variety of options for recording 360 video, from a single ultra-wide fisheye lens on the Fly360, to dual 180-degree lens options like the Gear 360, Nikon KeyMission and Garmin Virb. GoPro is releasing the Fusion, which will fall into this category as well. The next step up is more lenses, with cameras like the Orah 4i or the Insta360 Pro. Beyond that, you are stepping into much more expensive rigs with lots of lenses and lots of stitching, but usually much higher final image quality, like the GoPro Omni or the Nokia Ozo. There are also countless rigs that use an array of standard cameras to capture 360 degrees, but these solutions are much less integrated than the all-in-one products that are now entering the market. Regardless of the camera you use, you are going to be recording one or more files in a pixel format fairly unique to that camera, and they will need to be processed before they can be used in the later stages of the post workflow.

Affordable cameras

The simplest and cheapest 360 camera option I have found is the Samsung Gear 360. There are two totally different models with the same name, usually differentiated by the year of their release. I am using the older 2016 model, which has a higher resolution sensor, but records UHD instead of the slightly larger full 4K video of the newer 2017 model.

The Gear 360 records two fisheye views that are just over 180 degrees, from cameras situated back to back in a 2.5-inch sphere. Both captured image circles are recorded onto a single frame, side by side, resulting in files with a 2:1 aspect ratio. These are encoded into JPEG (7776×3888 stills) or HEVC (3840×1920 video) at 30Mb and saved onto a MicroSD card. The camera is remarkably simple to use, with only three buttons and a tiny UI screen to select recording mode and resolution. If you have a Samsung Galaxy phone, there are a variety of other functions available, like remote control and streaming the output to the phone as a viewfinder. Even without a Galaxy phone, the camera did everything I needed to generate 360 footage to stitch and edit with, but it was cool to have a remote viewfinder for the driving shots.
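A quick ffprobe call will confirm what the camera actually wrote before you start stitching. A sketch: it assumes ffprobe is installed, and the file name is a placeholder.

```python
# Confirm a Gear 360 clip's codec and 2:1 dual-fisheye frame size with ffprobe.
# Sketch: assumes ffprobe is installed; the file name is a placeholder.
import json
import subprocess

out = subprocess.run(
    ["ffprobe", "-v", "error", "-select_streams", "v:0",
     "-show_entries", "stream=codec_name,width,height",
     "-of", "json", "gear360_clip.mp4"],
    capture_output=True, text=True, check=True)

stream = json.loads(out.stdout)["streams"][0]
print(stream["codec_name"], stream["width"], "x", stream["height"])
# Expect something like: hevc 3840 x 1920 (one 2:1 frame holding both fisheyes)
```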

Pricier cameras

One of the big challenges of shooting with any 360 camera is how to avoid getting gear and rigging in the shot, since the camera records everything around it. Even the tiny integrated tripod on the Gear 360 is visible in the shots, and putting it on the plate of my regular DSLR tripod fills the bottom of the footage. My solution was to use the thinnest support I could to keep the rest of the rigging as far from the camera as possible, and therefore smaller from its perspective. I created a couple of options to shoot with, which are pictured below, and the rigging is much less intrusive in the resulting images. Besides the camera support, there is obviously the issue of everything else in the shot, including the operator. Since most 360 videos are locked off, an operator may not be needed, but there is no “behind the camera” for hiding gear or anything else. Your set needs to be considered in every direction, since it will all be visible to your viewer. If you can see the camera, it can see you.

There are many different approaches to storing 360 images, which are inherently spherical, as a video file, which is inherently flat. This is the same issue that cartographers have faced for hundreds of years — creating flat paper maps of a planet that is inherently curved. While there are sphere map, cube map and pyramid projection options (among others) based on the way VR headsets work, the equirectangular format has emerged as the standard for editing and distribution encoding, while other projections are occasionally used for certain effects processing or other playback options.
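For reference, the equirectangular mapping itself is simple enough to state in a few lines. This is a generic sketch of the projection convention, not any particular tool’s implementation, and the function name is mine.

```python
# The equirectangular convention in a nutshell: x maps linearly to longitude and
# y maps linearly to latitude. A generic sketch of the projection, not any
# particular tool's implementation.
import math

def equirect_pixel_to_direction(px, py, width, height):
    """Convert a pixel in a 2:1 equirectangular frame to a unit view vector."""
    lon = (px / width) * 2.0 * math.pi - math.pi     # -180 to +180 degrees
    lat = math.pi / 2.0 - (py / height) * math.pi    # +90 to -90 degrees
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

# The center of a 3840x1920 frame looks straight ahead:
print(equirect_pixel_to_direction(1920, 960, 3840, 1920))  # ~(0.0, 0.0, 1.0)
```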

Usually the objective of the stitching process is to get the images from all of your lenses combined into a single frame with the least amount of distortion and the fewest visible seams. There are a number of software solutions that do this, from After Effects plugins, to dedicated stitching applications like Kolor AVP and Orah VideoStitch Studio, to unique utilities for certain cameras. Once you have your 360 video footage in the equirectangular format, most of the other steps of the workflow are similar to their flat counterparts, besides VFX. You can cut, fade, title and mix your footage in an NLE and then encode it in the standard H.264 or H.265 formats with a few changes to the metadata.

Technically, the only thing you need to add to an existing 4K editing workflow in order to make the jump to 360 video is a 360 camera. Everything else could be done in software, but the other thing you will want is a VR headset or HMD. It is possible to edit 360 video without an HMD, but it is a lot like grading a film using scopes but no monitor. The data and tools you need are all right there, but without being able to see the results, you can’t be confident of what the final product will be like. You can scroll around the 360 video in the view window, or see the whole projected image all distorted, but it won’t have the same feel as experiencing it in a VR headset.

360 Video is not as processing intensive as true 3D VR, but it still requires a substantial amount of power to provide a good editing experience. I am using a Thinkpad P71 with an Nvidia Quadro P5000 GPU to get smooth performance during all these tests.

Stay tuned for Part 2 where we focus on editing 360 Video.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been working on new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.