Category Archives: 360

Mocha VR: An After Effects user’s review

By Zach Shukan

If you’re using Adobe After Effects to do compositing and you’re not using Mocha, then you’re holding yourself back. If you’re using Mettle Skybox, you need to check out Mocha VR, the VR-enhanced edition of Mocha Pro.

Mocha Pro and Mocha VR are both standalone programs where you work entirely within the Mocha environment and then export your tracks, shapes or renders to another program to do the rest of the compositing work. There are plugins for Maxon Cinema 4D, The Foundry's Nuke, HitFilm, and After Effects that allow you to do more with the Mocha data within your chosen 3D or compositing program. Limited-feature versions of Mocha (Mocha AE and Mocha HitFilm) come installed with the Creative Cloud versions of After Effects and HitFilm 4 Pro, and every update of these plugins gets closer to a full version of Mocha running inside the effects panel.

Maybe I’m old school, or maybe I just try to get the maximum performance from my workstation, but I always choose to run Mocha VR by itself and only open After Effects when I’m ready to export. In my experience, all the features of Mocha run more smoothly in the standalone than when they’re launched and run inside of After Effects.**

How does Mocha VR compare to Mocha Pro? If you’re not doing VR, stick with Mocha Pro. However, if you are working with VR footage, you won’t have to bend over backwards to keep using Mocha.

Last year was the year of VR, when all my clients wanted to do something with VR. It was a crazy push to be the first to make something, and I rode the wave all year. The thing is, there really weren't many tools specifically designed to work with 360 video. Now this year, the post tools for working with VR are catching up.

Before the VR version existed, I forced earlier releases of Mocha to work with 360 footage, but now that Mocha has VR-specific features, stabilizing a 360 camera is a piece of cake compared to the kludgy workaround required with the industry-standard After Effects 360 plugin, SkyBox. I've also used Mocha to track objects in 360 before the addition of an equirectangular* camera, and it was super-complicated because I had to splice together a whole bunch of tracks to compensate for the 360 camera distortion. Now it's possible to create a single track that follows objects as they travel around the camera. Read the footnote for an explanation of equirectangular, a fancy word you need to know if you're working in VR.

Now let’s talk about the rest of Mocha’s features…

Rotoscoping
I used to rotoscope by tracing every few frames and then refining the frames in between, until I found out about the Mocha way to rotoscope. Because Mocha combines rotoscoping with the tracking of arbitrary shapes, all you have to do is draw a shape and then use the tracker to follow and deform it all the way through the shot. It's way smarter and, more importantly, faster. Also, with the Uberkey feature, you can adjust your shapes on multiple frames at once. If you're still rotoscoping with After Effects alone, you're doing it the hard way.

Planar Tracking
When I first learned about Mocha it was all about the planar tracker, and that really is still the heart of the program. Mocha is basically my go-to when nothing else works. Recently, I was working on a shot where a woman had her dress tucked into her pantyhose, and I pretty much had to recreate a leg of the dress that swayed and flowed along with her as she walked. If it weren't for Mocha's planar tracker I wouldn't have been able to make a locked-on track of the soft-focus (solid color and nearly without detail) side of the dress. After Effects couldn't make a track because there weren't enough contrast-y details.

GPU Acceleration
I never thought Mocha’s planar tracking was slow, even though it is slower than point tracking, but then they added GPU acceleration a version or two ago and now it flies through shots. It has to be at least five times as fast now that it’s using my Nvidia Titan X (Pascal), and it’s not like my CPU was a slouch (an 8-core i7-5960X).

Object Removal
I’d be content using Mocha just to track difficult shots and for rotoscoping, but their object-removal feature has saved me hours of cloning/tracking work in After Effects, especially when I’ve used it to remove camera rigs or puppet rigs from shots.

Mocha’s remove module is the closest thing out there to automated object removal***. It’s as simple as 1) create a mask around the object you want to remove, 2) track the background that your object passes in front of, and then 3) render. Okay, there’s a little more to it, but compared to the cloning and tracking and cloning and tracking and cloning and tracking method, it’s pretty great. Also, a huge reason to get the VR edition of Mocha is that the remove module will work with a 360 camera.

Here I used Mocha object removal to remove ropes that pulled a go-cart in a spot for Advil.

VR Outside of After Effects?
I've spent most of this article talking about Mocha with After Effects because it's what I know best, but there is one VR pipeline that can match nearly all of Mocha VR's capabilities: the Nuke plugin Cara VR. That workflow comes at a cost, though. More on this shortly.

Where you will hit the limit of Mocha VR (and After Effects in general) is if you are doing 3D compositing with CGI and real-world camera depth positioning. Mocha’s 3D Camera Solve module is not optimized for 360 and the After Effects 3D workspace can be limited for true 3D compositing, compared to software like Nuke or Fusion.

While After Effects sort of tacked on its 3D features to its established 2D workflow, Nuke is a true 3D environment as robust as Autodesk Maya or any of the high-end 3D software. This probably sounds great, but you should also know that Cara VR is $4,300 vs. $1,000 for Mocha VR (the standalone + Adobe plugin version) and Nuke starts at $4,300/year vs. $240/year for After Effects.

Conclusion
I think of Mocha as an essential companion to compositing in After Effects, because it makes routine work much faster and it does some things you just can’t do with After Effects alone. Mocha VR is a major release because VR has so much buzz these days, but in reality it’s pretty much just a version of Mocha Pro with the ability to also work with 360 footage.

*Equirectangular is a clever way of unwrapping a 360 spherical projection, a.k.a. the view we see in VR, by flattening it out into a rectangle. It's a great way to see the whole 360 view in an editing program, but A: it's very distorted, so it can cause problems for tracking, and B: anything that is moving up or down in the equirectangular frame will wrap around to the opposite side (a bit like Pac-Man when he exits the screen), and non-VR tracking programs will stop tracking when something exits the frame on one side.
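For readers who want to see the math behind that distortion, here is a minimal sketch of how a 3D viewing direction maps to a pixel in an equirectangular frame; the function name, axis conventions and 4096x2048 frame size are illustrative assumptions, not anything specific to Mocha or SkyBox.

import math

def direction_to_equirect(x, y, z, width=4096, height=2048):
    # Map a unit 3D direction to (u, v) pixel coordinates in an equirectangular
    # frame: longitude becomes the horizontal axis, latitude the vertical axis.
    lon = math.atan2(x, z)                    # -pi..pi, 0 = straight ahead
    lat = math.asin(max(-1.0, min(1.0, y)))   # -pi/2..pi/2, +pi/2 = straight up
    u = (lon / (2 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v

# An object crossing lon = +/-pi jumps from one edge of the frame to the other,
# and motion over a pole shifts 180 degrees in longitude -- the wrap-around
# behavior described above that trips up conventional trackers.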

**Note: According to the developer, one of the main advantages of running Mocha as a plug-in (inside AE, Premiere, Nuke, etc.) for 360 video work is that you are using the host program's render engine and proxy workflow. Having the ability to do all your tracking, masking and object removal at proxy resolutions is a huge benefit when working with large 360 formats that can be as big as 8K stereoscopic. Additionally, the Mocha modules that render, such as Reorient for horizon stabilization or the Remove module, will render inside the plug-in, making for a streamlined workflow.

***FayOut was a "coming soon" product that promised an even more automated method for object removal, but as of this article's publication it appears they are no longer "coming soon" and may have folded, or perhaps their technology was purchased and will be included in a future product. We shall see…
________________________________________
Zach Shukan is the VFX specialist at SilVR and is constantly trying his hand at the latest technologies in the video post production world.

Red’s Hydrogen One: new 3D-enabled smartphone

In their always subtle way, Red has stated that “the future of personal communication, information gathering, holographic multi-view, 2D, 3D, AR/VR/MR and image capture just changed forever” with the introduction of Hydrogen One, a pocket-sized, glasses-free “holographic media machine.”

Hydrogen One is a standalone, full-featured, unlocked multi-band smartphone, operating on Android OS, that promises "look around depth in the palm of your hand" without the need for separate glasses or headsets. The device features a 5.7-inch professional Hydrogen holographic display that switches between traditional 2D content, holographic multi-view content, 3D content and interactive games, and it supports both landscape and portrait modes. Red has also embedded a proprietary H30 algorithm in the OS that will convert stereo sound into multi-dimensional audio.

The Hydrogen system incorporates a high-speed data bus to enable a comprehensive and expandable modular component system, including future attachments for shooting high-quality motion, still and holographic images. It will also integrate into the professional Red camera program, working together with Scarlet, Epic and Weapon as a user interface and monitor.

Future users are already talking about this "nifty smartphone with glasses-free 3D," and one has gone so far as to describe the announcement as "the day 360-video became Betamax, and AR won the race." Others are more tempered in their enthusiasm, viewing this as a really expensive smartphone with a holographic screen that may or may not kill 360 video. Time will tell.

Initially priced between $1,195 and $1,595, the Hydrogen One is targeted to ship in Q1 of 2018.


Adobe acquires Mettle’s SkyBox tools for 360/VR editing, VFX

Adobe has acquired all SkyBox technology from Mettle, a developer of 360-degree and virtual reality software. As more media and entertainment companies embrace 360/VR, there is a need for seamless, end-to-end workflows for this new and immersive medium.

The SkyBox toolset is designed exclusively for post production in Adobe Premiere Pro CC and Adobe After Effects CC and complements Adobe Creative Cloud's existing 360/VR cinematic production technology. Adobe will integrate SkyBox plugin functionality natively into future releases of Premiere Pro and After Effects.

To further strengthen Adobe’s leadership in 360-degree and virtual reality, Mettle co-founder Chris Bobotis will join Adobe, bringing more than 25 years of production experience to his new role.

“We believe making virtual reality content should be as easy as possible for creators. The acquisition of Mettle SkyBox technology allows us to deliver a more highly integrated VR editing and effects experience to the film and video community,” says Steven Warner, VP of digital video and audio at Adobe. “Editing in 360/VR requires specialized technology, and as such, this is a critical area of investment for Adobe, and we’re thrilled Chris Bobotis has joined us to help lead the charge forward.”

“Our relationship started with Adobe in 2010 when we created FreeForm for After Effects, and has been evolving ever since. This is the next big step in our partnership,” says Bobotis, now director, professional video at Adobe. “I’ve always believed in developing software for artists, by artists, and I’m looking forward to bringing new technology and integration that will empower creators with the digital tools they need to bring their creative vision to life.”

Introduced in April 2015, SkyBox was the first plugin to leverage Mettle’s proprietary 3DNAE technology, and its success quickly led to additional development of 360/VR plugins for Premiere Pro and After Effects.

Today, Mettle’s plugins have been adopted by companies such as The New York Times, CNN, HBO, Google, YouTube, Discovery VR, DreamWorks TV, National Geographic, Washington Post, Apple and Facebook, as well as independent filmmakers and YouTubers.


SGO’s Mistika VR is now available

 

SGO's Mistika VR software app is now available. The solution is built on the company's established Mistika technology and offers advanced realtime stitching combined with a new, intuitive interface and fast raw format support.

Using Mistika Optical Flow Technology (our main image), the new VR solution takes camera position information and image sequences, then stitches the images together using extensive and intelligent presets. Its stitching algorithms address many of the challenges facing post teams while allowing for the highest image quality.

Mistika VR was developed to encompass and work with as many existing VR camera formats as possible, and SGO is creating custom presets for productions where teams are building the rigs themselves.

The Mistika VR solution is part of SGO’s new natively integrated workflow concept. SGO has been dissecting its current turnkey offering “Mistika Ultima” to develop advanced workflow applications aimed at specific tasks.

Mistika VR runs on Mac and Windows and is available as a Personal or Professional (with SGO customer support) edition license. License costs are:

– 30-day license (with no automatic renewals): Evaluation Version, free; Personal Edition, $78; Professional Edition, $110

– Monthly subscription: Personal Edition, $55/month; Professional Edition, $78/month

– Annual subscription: Personal Edition, $556/year; Professional Edition, $779/year


VR Audio — Differences between A Format and B Format

By Claudio Santos

A Format and B Format. What is the difference between them after all? Since things can get pretty confusing, especially with such non-descriptive nomenclature, we thought we’d offer a quick reminder of what each is in the spatial audio world.

A Format and B Format are two analog audio standards that are part of the ambisonics workflow.

A Format is the raw recording of the four individual cardioid capsules in an ambisonics microphone. Since each microphone model places its capsules at slightly different positions and distances, the A Format is somewhat specific to the microphone model.

B Format is the standardized format derived from the A Format. The first channel carries the amplitude information of the signal, while the other channels determine the directionality through phase relationships between each other. Once you get your sound into B Format you can use a variety of ambisonic tools to mix and alter it.
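To make that derivation concrete, here is a minimal sketch of the classic first-order conversion for a tetrahedral microphone; the capsule names (FLU, FRD, BLD, BRU) follow the textbook convention, and real converters also apply microphone-specific filters and calibration that are omitted here.

def a_to_b(flu, frd, bld, bru):
    # Convert the four A Format capsule signals (front-left-up, front-right-down,
    # back-left-down, back-right-up) into first-order B Format (W, X, Y, Z).
    # Sketch only: real converters add mic-specific EQ and calibration.
    w = flu + frd + bld + bru   # omnidirectional (amplitude) component
    x = flu + frd - bld - bru   # front-back
    y = flu - frd + bld - bru   # left-right
    z = flu - frd - bld + bru   # up-down
    return w, x, y, z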

It’s worth remembering that the B Format also has a few variations on the standard itself; the most important to understand are Channel Order and Normalization standards.

Ambisonics in B Format consists of four channels of audio — one channel carries the amplitude signal while the others represent the directionality in a sphere through phase relationships. Since this can only be achieved by the combination between the channels, it is important that:

– The channels follow a known order
– The relative level between the amplitude channel and the others is known, so they can be combined properly

Each of these characteristics has a few variations, with the most notable ones being:

– Channel Order
  – Furse-Malham standard
  – ACN standard

– Normalization (level)
  – maxN standard
  – SN3D standard

The combination of these variations results in two different B Format standards:
– Furse-Malham – Older standard that is still supported by a variety of plug-ins and other ambisonic processing tools
– AmbiX – Modern standard that has been widely adopted by distribution platforms such as YouTube

Regardless of the format you will deliver your ambisonics file in, it is vital to keep track of the standards you are using in your chain and make the necessary conversions when appropriate. Otherwise rotations and mirrors will end up in the wrong direction and the whole soundsphere will break down into a mess.
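As a concrete example of what such a conversion involves, here is a minimal first-order FuMa-to-ambiX sketch: FuMa orders the channels W, X, Y, Z with W attenuated by 1/√2 (maxN), while ambiX uses ACN order (W, Y, Z, X) with SN3D normalization. Higher orders need a fuller conversion matrix, so treat this as an illustration only.

import math

def fuma_to_ambix(w, x, y, z):
    # Convert first-order FuMa (channel order W, X, Y, Z; W carried at -3 dB)
    # to ambiX (ACN channel order W, Y, Z, X; SN3D normalization).
    # First order only -- higher orders require a full conversion matrix.
    return (w * math.sqrt(2.0),  # restore full-scale W (ACN 0)
            y,                   # ACN 1
            z,                   # ACN 2
            x)                   # ACN 3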


Claudio Santos is a sound editor and spatial audio mixer at Silver Sound. Slightly too interested in technology and workflow hacks, he spends most of his waking hours tweaking, fiddling and tinkering away on his computer.


VR audio terms: Gaze Activation v. Focus

By Claudio Santos

Virtual reality brings a lot of new terminology to the post process, and we’re all having a hard time agreeing on the meaning of everything. It’s tricky because clients and technicians sometimes have different understandings of the same term, which is a guaranteed recipe for headaches in post.

Two terms that I've seen being confused a few times in the spatial audio realm are Gaze Activation and Focus. They are similar enough to be put in the same category, but at the same time different enough that most of the time you have to choose completely different tools and distribution platforms depending on which technology you want to use.

Field of view

Focus
Focus is what the Facebook Spatial Workstation calls this technology, but it is a tricky one to name. As you may know, ambisonics represents a full sphere of audio around the listener. Players like YouTube and Facebook (which uses ambisonics inside its own proprietary .tbe format) can dynamically rotate this sphere so the relative positions of the audio elements stay consistent with the direction the audience is looking. But the sounds don't change noticeably in level depending on where you are looking.
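Under the hood, that rotation is just a small matrix applied to the B Format channels as the head moves. Here is a rough sketch of a first-order yaw (head-turn) rotation; sign conventions differ between tools, so the direction of rotation here is an assumption.

import math

def rotate_yaw(w, x, y, z, yaw_radians):
    # Rotate first-order B Format signals (W, X, Y, Z) around the vertical axis.
    # W (omni) and Z (up-down) are unaffected; X and Y mix with the head angle.
    # Note the levels never change -- only the apparent direction does.
    c, s = math.cos(yaw_radians), math.sin(yaw_radians)
    return w, x * c - y * s, x * s + y * c, z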

If we take a step back and think about "surround sound" in the real world, it actually makes perfect sense. A hair clipper isn't particularly louder when it's in front of our eyes as opposed to when it's trimming the back of our head. Nor can we ignore the annoying person who is loudly talking on their phone on the bus simply by looking away.

But for narrative construction, it can be very effective to emphasize what your audience is looking at. That opens up possibilities, such as presenting the viewer with simultaneous yet completely unrelated situations and letting them choose which one to pay attention to simply by looking in the direction of the chosen event. Keep in mind that in this case, all events are happening simultaneously and will carry on even if the viewer never looks at them.

This technology is not currently supported by YouTube, but it is possible in the Facebook Spatial Workstation with the use of high Focus Values.

Gaze Activation
When we talk about focus, the key thing to keep in mind is that all the events happen regardless of the viewer looking at them or not. If instead you want a certain sound to only happen when the viewer looks at a certain prop, regardless of the time, then you are looking for Gaze Activation.

This concept is much more akin to game audio than to film sound because of the interactivity element it presents. Essentially, you are using the direction of the gaze, and potentially the length of the gaze (if you want your viewer to look in a direction for x amount of seconds before something happens), as a trigger for sound/video playback.
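A minimal sketch of that trigger logic, as it might run in a game engine's per-frame update, is below; the cone angle, hold time and function name are illustrative assumptions, and a real project would implement this inside Unity, Unreal or a tool like InstaVR.

import math

def gaze_dwell_trigger(gaze_dir, prop_dir, dwell, dt,
                       cone_deg=15.0, hold_seconds=2.0):
    # Accumulate dwell time while the viewer's gaze stays within a cone around
    # the prop; report the trigger once the hold time is reached.
    # gaze_dir and prop_dir are unit vectors; the thresholds are assumptions.
    dot = sum(g * p for g, p in zip(gaze_dir, prop_dir))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    dwell = dwell + dt if angle <= cone_deg else 0.0
    return dwell, dwell >= hold_seconds

# Called once per frame: dwell, fired = gaze_dwell_trigger(cam_forward, prop_dir, dwell, dt)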

This is very useful if you want to make it impossible for your audience to miss something because they were looking in the "wrong" direction. Think of a jump scare in a horror experience. It's not very scary if you're looking in the opposite direction, is it?

This is currently only supported if you build your experience in a game engine or as an independent app with tools such as InstaVR.

Both concepts are very closely related and I expect many implementations will make use of both. We should all keep an eye on the VR content distribution platforms to see how these tools will be supported and make the best use of them in order to make 360 videos even more immersive.


Claudio Santos is a sound editor and spatial audio mixer at Silver Sound. Slightly too interested in technology and workflow hacks, he spends most of his waking hours tweaking, fiddling and tinkering away on his computer.


John Hughes, Helena Packer, Kevin Donovan open post collective

Three industry vets have combined to launch PHD, a Los Angeles-based full-service post collective. Led by John Hughes (founder of Rhythm & Hues), Helena Packer (VFX supervisor/producer) and Kevin Donovan (film/TV/commercials director), PHD works across VR/AR, independent films, documentaries, TV (including limited series) and commercials. In addition to post production services such as color grading, offline and online editorial, visual effects and final delivery, the collective offers live-action production. Beyond Los Angeles, PHD has locations in India, Malaysia and South Africa.

Hughes was the co-founder of the legendary VFX shop Rhythm & Hues (R&H) and led that studio for 26 years, earning three Academy Awards for “Best Visual Effects” (Babe, The Golden Compass, Life of Pi) as well as four scientific and engineering Academy Awards.

Packer was inducted into the Academy of Motion Picture Arts and Sciences (AMPAS) in 2008 for her creative contributions to filmmaking as an accomplished VFX artist, supervisor and producer. Her expertise extends beyond feature films to episodic TV, stereoscopic 3D and animation. Packer has been the VFX supervisor and Flame artist for hundreds of commercials and over 20 films, including 21 Jump Street and Charlie Wilson’s War.

Director Kevin Donovan is particularly well-versed in action and visual effects. He directed the feature film The Tuxedo and is currently producing the TV series What Would Trejo Do? He has shot over 700 commercials during the course of his career and is the winner of six Cannes Lions.

Since the company's launch, PHD has worked on a number of projects — two PSAs for the climate change organization 5 To Do Today featuring Arnold Schwarzenegger and James Cameron, called Don't Buy It and Precipice; a PSA for the international animal advocacy group WildAid, shot in Tanzania and Oregon, called Talking Elephant; another for WildAid, shot in Cape Town, South Africa, called Talking Rhino; and two additional WildAid PSAs featuring actor Josh Duhamel, called Souvenir and Situation.

“In a sense, our new company is a reconfigured version of R&H, but now we are much smarter, much more nimble and much more results driven,” says Hughes about PHD. “We have very little overhead to deal with. Our team has worked on hundreds of award-winning films and commercials…”

Main Photo: L-R:  John Hughes, Helena Packer and Kevin Donovan.


Liron Ashkenazi-Eldar joins The Artery as design director  

Creative studio The Artery has brought on Liron Ashkenazi-Eldar as lead design director. In her new role, she will spearhead the formation of a department focused on design and branding. Ashkenazi-Eldar and her team are also developing in-house design capabilities to support the company's VFX, experiential and VR/AR content, as well as website development, motion graphics, print and social campaigns.

“While we’ve been well established for many years in the areas of production and VFX, our design team can now bring a new dimension to our company,” says Ashkenazi-Eldar, who is based in The Artery’s NYC office. “We are seeking brand clients with strong identities so that we can offer them exciting, new and even weird creative solutions that are not part of the traditional branding process. We will be taking a completely new approach to branding — providing imagery that is more emotional and more personal, instead of just following an existing protocol. Our goal is to provide a highly immersive experience for our new brand clients.”

Originally from Israel, the 27-year-old Ashkenazi-Eldar is a recent graduate of New York’s School of Visual Arts with a BFA degree in Design. She is the winner of a 2017 ADC Silver Cube Award from The One Club, in the category 2017 Design: Typography, for her contributions to a project titled Asa Wife Zine. She led the Creative Team that submitted the project via the School of Visual Arts.

 


Recording live musicians in 360

By Luke Allen

I’ve had the opportunity to record live musicians in a couple of different in-the-field scenarios for 360 video content. In some situations — such as the ubiquitous 360 rock concert video — simply having access to the board feed is all one needs to create a pretty decent spatial mix (although the finer points of that type of mix would probably fill up a whole different article).

But what if you’re shooting in an acoustically interesting space where intimacy and immersion are the goal? What if you’re in the field in the middle of a rainstorm without access to AC power? It’s clear that in most cases, some combination of ambisonic capture and close micing is the right approach.

What I’ve found is that in all but a few elaborate set-ups, a mobile ambisonic recording rig (in my case, built around the Zaxcom Nomad and Soundfield SPS-200) — in addition to three to four omni-directional lavs for close micing — is more than sufficient to achieve excellent results. Last year, I had the pleasure of recording a four-piece country ensemble in a few different locations around Ireland.

Micing a Pub
For this particular job, I had the SPS and four lavs. For most of the day I had planted one Sanken COS-11 on the guitar, one on the mandolin, one on the lead singer and a DPA 4061 inside the upright bass (which sounded great!). Then, for the final song, the band wanted to add a fiddle to the mix — yet I was out of mics to cover everything. We had moved into the partially enclosed porch area of a pub with the musicians perched in a corner about six feet from the camera. I decided to roll the dice and trust the SPS to pick up the fiddle, which I figured would be loud enough in the small space that a lav wouldn't be used much in the mix anyway. In post, the gamble paid off.

I was glad to have kept the quieter instruments mic’d up (especially the singer and the bass) while the fiddle lead parts sounded fantastic on the ambisonic recordings alone. This is one huge reason why it’s worth it to use higher-end Ambisonic mics, as you can trust them to provide fidelity for more than just ambient recordings.

An Orchestra
In another recent job, I was mixing for a 360 video of an orchestra. During production we moved the camera/sound rig around to different locations in a large rehearsal stage in London. Luckily, on this job we were able to also run small condensers into a board for each orchestra section, providing flexibility in the mix. Still, in post, the director wanted the spatial effect to be very perceptible and dynamic as we jump around the room during the lively performance. The SPS came in handy once again; not only does it offer good first-order spatial fidelity, but also a wide enough dynamic range and frequency response to be relied on heavily in the mix in situations where the close-mic recordings sounded flat. It was amazing opening up those recordings and listening to the SPS alone through a decent HRTF — it definitely exceeded my expectations.

It’s always good to be as prepared as possible when going into the field, but you don’t always have the budget or space for tons of equipment. In my experience, one high-quality and reliable ambisonic mic, along with some auxiliary lavs and maybe a long shotgun, are a good starting point for any field recording project for 360 video involving musicians.


Sound designer and composer Luke Allen is a veteran spatial audio designer and engineer, and a principal at SilVR in New York City. He can be reached at luke@silversound.us.

What was new at GTC 2017

By Mike McCarthy

I, once again, had the opportunity to attend Nvidia’s GPU Technology Conference (GTC) in San Jose last week. The event has become much more focused on AI supercomputing and deep learning as those industries mature, but there was also a concentration on VR for those of us from the visual world.

The big news was that Nvidia released the details of its next-generation GPU architecture, code-named Volta. The flagship chip will be the Tesla V100, with 5,120 CUDA cores and 15 teraflops of computing power. It is a huge 815mm² chip, created with a 12nm manufacturing process for better energy efficiency. Most of its unique architectural improvements are focused on AI and deep learning, with specialized execution units for Tensor calculations, which are foundational to those processes.

Tesla V100

Similar to last year’s GP100, the new Volta chip will initially be available in Nvidia’s SXM2 form factor for dedicated GPU servers like their DGX1, which uses the NVLink bus, now running at 300GB/s. The new GPUs will be a direct swap-in replacement for the current Pascal based GP100 chips. There will also be a 150W version of the chip on a PCIe card similar to their existing Tesla lineup, but only requiring a single half-length slot.

Assuming that Nvidia puts similar processing cores into their next generation of graphics cards, we should be looking at a 33% increase in maximum performance at the top end. The intermediate stages are more difficult to predict, since that depends on how they choose to tier their cards. But the increased efficiency should allow more significant increases in performance for laptops, within existing thermal limitations.
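The back-of-the-envelope math behind those figures is straightforward; this sketch assumes the published core counts (3,840 on the Pascal GP100, 5,120 on the V100) and a roughly unchanged boost clock of about 1.45GHz, which is the implicit assumption above.

# Rough arithmetic behind the ~15 TFLOPS figure and the ~33% projection.
# Core counts are published specs; the boost clock is an assumption.
v100_cores, gp100_cores = 5120, 3840
boost_clock_ghz = 1.45

fp32_tflops = v100_cores * 2 * boost_clock_ghz / 1000   # 2 FLOPs per core per clock
core_scaling = v100_cores / gp100_cores

print(f"~{fp32_tflops:.1f} TFLOPS FP32, ~{(core_scaling - 1) * 100:.0f}% more cores")
# -> ~14.8 TFLOPS FP32, ~33% more cores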

Nvidia is continuing its pursuit of GPU-enabled autonomous cars with its DrivePX2 and Xavier systems for vehicles. The newest version will have a 512 Core Volta GPU and a dedicated deep learning accelerator chip that they are going to open source for other devices. They are targeting larger vehicles now, specifically in the trucking industry this year, with an AI-enabled semi-truck in their booth.

They also had a tractor showing off Blue River’s AI-enabled spraying rig, targeting individual plants for fertilizer or herbicide. It seems like farm equipment would be an optimal place to implement autonomous driving, allowing perfectly straight rows and smooth grades, all in a flat controlled environment with few pedestrians or other dynamic obstructions to be concerned about (think Interstellar). But I didn’t see any reference to them looking in that direction, even with a giant tractor in their AI booth.

On the software and application front, software company SAP showed an interesting implementation of deep learning that analyzes broadcast footage and other content looking to identify logos and branding, in order to provide quantifiable measurements of the effectiveness of various forms of brand advertising. I expect we will continue to see more machine learning implementations of video analysis, for things like automated captioning and descriptive video tracks, as AI becomes more mature.

Nvidia also released an "AI-enabled" version of Iray that uses image prediction to increase the speed of interactive ray tracing renders. I am hopeful that similar technology could be used to effectively increase the resolution of video footage as well. Basically, a computer sees a low-res image of a car and says, "I know what that car should look like," and fills in the rest of the visual data. The possibilities are pretty incredible, especially in regard to VFX.

Iray AI

On the VR front, Nvidia announced a new SDK that allows live GPU-accelerated image stitching for stereoscopic VR processing and streaming. It scales from HD to 5K output, splitting the workload across one to four GPUs. The stereoscopic version is doing much more than basic stitching, processing for depth information and using that to filter the output to remove visual anomalies and improve the perception of depth. The output was much cleaner than any other live solution I have seen.

I also got to try my first VR experience recorded with a light field camera. This not only gives the user 360 stereo look-around capability, but also the ability to move their head to shift perspective within a limited range (based on the size of the recording array). The project they were using to demo the technology didn't highlight the amazing results until the very end of the piece, but when it did, it was the most impressive VR implementation I have had the opportunity to experience yet.
———-
Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been working on new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.