Category Archives: Cameras

Review: GoPro Fusion 360 camera

By Mike McCarthy

I finally got the opportunity to try out the GoPro Fusion camera I have had my eye on since the company first revealed it in April. The $700 camera uses two offset fish-eye lenses to shoot 360 video and stills, while recording ambisonic audio from four microphones in the waterproof unit. It can shoot a 5K video sphere at 30fps, or a 3K sphere at 60fps for higher-motion content at reduced resolution. It records dual 190-degree fish-eye perspectives encoded in H.264 to separate MicroSD cards, with four tracks of audio. The rest of the magic comes in the form of GoPro’s newest application, Fusion Studio.

Internally, the unit is recording dual 45Mb H.264 files to two separate MicroSD cards, with accompanying audio and metadata assets. This would be a logistical challenge to deal with manually: copying the cards into folders, sorting and syncing them, stitching them together and dealing with the audio. But with GoPro’s new Fusion Studio app, most of this is taken care of for you. Simply plug in the camera and it will automatically access the footage, and let you preview and select what parts of which clips you want processed into stitched 360 footage or flattened video files.
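
For anyone curious what the manual alternative looks like, here is a minimal sketch of pairing the front and back recordings from the two cards by clip number before syncing and stitching. The folder layout and filename pattern are hypothetical placeholders, not GoPro’s actual naming convention.

```python
import re
from pathlib import Path

# Hypothetical card mount points and folder names; the real Fusion layout may differ.
FRONT_CARD = Path("/media/fusion_front/DCIM/100GFRNT")
BACK_CARD = Path("/media/fusion_back/DCIM/100GBACK")

def index_clips(card_dir):
    """Map a four-digit clip number found in the filename to its MP4 file."""
    clips = {}
    for clip in card_dir.glob("*.MP4"):
        match = re.search(r"(\d{4})", clip.stem)
        if match:
            clips[match.group(1)] = clip
    return clips

front = index_clips(FRONT_CARD)
back = index_clips(BACK_CARD)

# Clips present on both cards are the pairs that would be stitched together.
for clip_id in sorted(front.keys() & back.keys()):
    print(f"Clip {clip_id}: {front[clip_id].name} + {back[clip_id].name}")
```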

It also processes the multi-channel audio into ambisonic B-Format tracks, or standard stereo if desired. The app is a bit limited in user-control functionality, but what it does do it does very well. My main complaint is that I can’t find a way to manually set the output filename, but I can rename the exports in Windows once they have been rendered. Trying to process the same source file into multiple outputs is challenging for the same reason.

Setting   Recorded Resolution (Per Lens)   Processed Resolution (Equirectangular)
5Kp30     2704×2624                        4992×2496
3Kp60     1568×1504                        2880×1440
Stills    3104×3000                        5760×2880

With the Samsung Gear 360, I researched five different ways to stitch the footage, because I wasn’t satisfied with the included app. Most of those will also work with Fusion footage, and you can read about those options here, but they aren’t really necessary when you have Fusion Studio.

You can choose between H.264, Cineform or ProRes, your equirectangular output resolution and ambisonic or stereo audio. That gives you pretty much every option you should need to process your footage. There is also a “Beta” option to stabilize your footage, which, once I got used to it, I really liked. It should be thought of more as a “remove rotation” option since it’s not for stabilizing out sharp motions — which still leave motion blur — but for maintaining the viewer’s perspective even if the camera rotates in unexpected ways. Processing was about 6x run-time on my Lenovo ThinkPad P71 laptop, so a 10-minute clip would take an hour to stitch to 360.
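
To see why “remove rotation” is comparatively cheap on stitched footage, consider the yaw-only case: in an equirectangular frame, rotating the camera around its vertical axis just shifts the image horizontally, so counter-rotating is a circular pixel shift. Below is a simplified sketch of that single-axis case; it is not GoPro’s actual algorithm, which also has to handle pitch and roll (a full spherical remap).

```python
import numpy as np

def remove_yaw(equirect_frame, yaw_degrees):
    """Counter-rotate an equirectangular frame (H x W x 3) around the vertical
    axis by circularly shifting its columns.

    yaw_degrees is the camera's measured yaw for this frame; shifting by the
    opposite amount keeps the viewer's heading constant (sign convention assumed)."""
    width = equirect_frame.shape[1]
    shift_px = int(round(-yaw_degrees / 360.0 * width))
    return np.roll(equirect_frame, shift_px, axis=1)

# Example: a 1440x2880 frame where the camera has rotated 90 degrees.
frame = np.zeros((1440, 2880, 3), dtype=np.uint8)
stabilized = remove_yaw(frame, 90.0)
```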

The footage itself looks good, higher quality than my Gear 360, and the 60p stuff is much smoother, which is to be expected. While good VR experiences require 90fps to be rendered to the display to avoid motion sickness, that does not necessarily mean that 30fps content is a problem. When rendering the viewer’s perspective, the same frame can be sampled three times, shifting the image as they move their head, even from a single source frame. That said, 60p source content does give smoother results than the 30p footage I am used to watching in VR, but 60p did give me more issues during editorial. I had to disable CUDA acceleration in Adobe Premiere Pro to get Transmit to work with the WMR headset.
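
As a rough illustration of how one source frame can be resampled as the viewer turns their head: each view direction maps to a longitude/latitude pair, which indexes directly into the equirectangular frame. This sketch only shows the lookup for a single ray, with an assumed axis convention (y up, -z forward), and ignores the lens projection and interpolation a real renderer would apply.

```python
import math

def equirect_pixel(direction, width, height):
    """Map a 3D view direction (x, y, z), assuming y is up and -z is forward,
    to (column, row) in an equirectangular frame of size width x height."""
    x, y, z = direction
    lon = math.atan2(x, -z)                              # -pi..pi around the vertical axis
    lat = math.asin(y / math.sqrt(x*x + y*y + z*z))      # -pi/2..pi/2
    col = int((lon / (2 * math.pi) + 0.5) * (width - 1))
    row = int((0.5 - lat / math.pi) * (height - 1))
    return col, row

# Looking straight ahead lands in the middle of a 4992x2496 frame.
print(equirect_pixel((0.0, 0.0, -1.0), 4992, 2496))      # -> (2495, 1247)
```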

Once you have your footage processed in Fusion Studio, it can be edited in Premiere Pro — like any other 360 footage — but the audio can be handled a bit differently. Exporting as stereo will follow the usual workflow, but selecting ambisonic will give you a special spatially aware audio file. Premiere can use this in a 4-track multi-channel sequence to line up the spatial audio with the direction you are looking in VR, and if exported correctly, YouTube can do the same thing for your viewers.

In the Trees
Most GoPro products are intended for capturing action moments and unusual situations in extreme environments (which is why they are waterproof and fairly resilient), so I wanted to study the camera in its “native habitat.” The most extreme thing I do these days is work on ropes courses, high up in trees or telephone poles. So I took the camera out to a ropes course that I help out with, curious to see how recording at height would translate into the 360 video experience.

Ropes courses are usually challenging to photograph because of the scale involved. When you are zoomed out far enough to see the entire element, you can’t see any detail, and if you are zoomed in close enough to see faces, you have no good concept of how high up they are. 360 photography is helpful in that it is designed to be panned through when viewed flat. This allows you to give the viewer a better sense of the scale, and they can still see the details of the individual elements or people climbing. And in VR, you should have a better feel for the height involved.

I had the Fusion camera and Fusion Grip extendable tripod handle, as well as my Hero6 kit, which included an adhesive helmet mount. Since I was going to be working at heights and didn’t want to drop the camera, the first thing I did was rig up a tether system. A short piece of 2mm cord fit through a slot in the bottom of the center post, and a triple fisherman’s knot made a secure loop. The cord ran out the bottom of the tripod when it was closed, allowing me to connect it to a shock-absorbing lanyard, which was clipped to my harness. This also allowed me to dangle the camera from a cord for a free-floating perspective. I also stuck the quick-release base to my climbing helmet, and was ready to go.

I shot segments in both 30p and 60p, depending on how I had the camera mounted, using higher frame rates for the more dynamic shots. I was worried that the helmet mount would be too close, since GoPro recommends keeping the Fusion at least 20cm away from what it is filming, but the helmet wasn’t too bad. Another inch or two would shrink it significantly from the camera’s perspective, similar to my tripod issue with the Gear 360.

I always climbed up with the camera mounted on my helmet and then switched it to the Fusion Grip to record the guy climbing up behind me and my rappel. Hanging the camera from a cord, even 30 feet below me, worked much better than I expected. It put GoPro’s stabilization feature to the test, but it worked fantastically. With the camera rotating freely, the perspective is static, although you can see the seam lines constantly rotating around you. When I am holding the Fusion Grip, the extended pole is completely invisible to the camera, giving you what GoPro has dubbed “Angel View.” It is as if the viewer is floating freely next to the subject, especially when viewed in VR.

Because I have ways to view 360 video in VR, and because I don’t mind panning around on a flat screen view, I am personally less excited about GoPro’s OverCapture functionality, but I recognize it is a useful feature that will greatly extend the use cases for this 360 camera. It is designed for people using the Fusion as a more flexible camera to produce flat content, rather than to produce VR content. I edited together a couple of OverCapture shots intercut with footage from my regular Hero6 to demonstrate how that would work.

Ambisonic Audio
The other new option that Fusion brings to the table is ambisonic audio. Editing ambisonics works in Premiere Pro using a 4-track multi-channel sequence. The main workflow kink here is that you have to manually override the audio settings every time you import a new clip with ambisonic audio, setting the audio channels to Adaptive with a single timeline clip. Turn on Monitor Ambisonics by right-clicking in the monitor panel, then match the Pan, Tilt and Roll in the Panner-Ambisonics effect to the values in your VR Rotate Sphere effect (note that they are listed in a different order), and your audio should match the video perspective.

When exporting an MP4 in the audio panel, set Channels to 4.0 and check the Audio is Ambisonics box. From what I can see, the Fusion Studio conversion process compensates for changes in perspective, including “stabilization” when processing the raw recorded audio for Ambisonic exports, so you only have to match changes you make in your Premiere sequence.
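
For anyone wondering what matching the rotation means under the hood, here is a rough sketch of rotating a first-order ambisonic (B-format) signal around the vertical axis. It assumes ambiX channel ordering (W, Y, Z, X) and a yaw-only rotation; this is just the underlying math, not Premiere’s or Fusion Studio’s implementation.

```python
import numpy as np

def rotate_ambisonics_yaw(bformat, yaw_radians):
    """Rotate a first-order ambisonic signal around the vertical axis.

    bformat: array of shape (4, num_samples) in assumed ambiX order (W, Y, Z, X).
    A positive yaw here rotates the sound field; flip the sign to rotate the
    listener instead (conventions vary between tools)."""
    w, y, z, x = bformat
    cos_t, sin_t = np.cos(yaw_radians), np.sin(yaw_radians)
    x_rot = cos_t * x - sin_t * y
    y_rot = sin_t * x + cos_t * y
    return np.stack([w, y_rot, z, x_rot])

# Example: rotate a one-second block of 4-channel audio by 90 degrees.
audio = np.random.randn(4, 48000)
rotated = rotate_ambisonics_yaw(audio, np.pi / 2)
```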

While I could have intercut the footage at both settings together into a 5Kp60 timeline, I ended up creating two separate 360 videos. This also makes it clear to the viewer which shots were recorded at 5Kp30 and which at 3Kp60. They are both available on YouTube, and I recommend watching them in VR for the full effect. But be warned that they were recorded at heights of up to 80 feet, so they may be uncomfortable for some people to watch.

Summing Up
GoPro’s Fusion camera is not the first 360 camera on the market, but it brings more pixels and higher frame rates than most of its direct competitors, and more importantly it has the software package to assist users in the transition to processing 360 video footage. It also supports ambisonic audio and offers the OverCapture functionality for generating more traditional flat GoPro content.

I found it to be easier to mount and shoot with than my earlier 360 camera experiences, and it is far easier to get the footage ready to edit and view using GoPro’s Fusion Studio program. The Stabilize feature totally changes how I shoot 360 videos, giving me much more flexibility in rotating the camera during movements. And most importantly, I am much happier with the resulting footage that I get when shooting with it.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Mercy Christmas director offers advice for indie filmmakers

By Ryan Nelson

After graduating from film school at The University of North Carolina School of the Arts, I was punched in the gut. I had driven into Los Angeles mere hours after the last day of school ready to set Hollywood on fire with my thesis film. But Hollywood didn’t seem to know I’d arrived. A few months later, Hollywood still wasn’t knocking on my door. Desperate to work on film sets and learn the tools of the trade, I took a job as a grip. In hindsight, it was a lucky accident. I spent the next few years watching some of the industry’s most successful filmmakers from just a few feet away.

Like a sponge, I soaked in every aspect of filmmaking that I could from my time on the sets of Avengers, Real Steel, Spider Man 3, Bad Boys 2, Seven Psychopaths, Smokin’ Aces and a slew of Adam Sandler comedies. I spent hours working, watching, learning and judging. How are they blocking the actors in this scene? What sort of cameras are they using? Why did they use that light? When do you move the camera? When is it static? When I saw the finished films in theaters, I ultimately asked myself, did it all work?

During that same time, I wrote and directed a slew of my own short films. I tried many of the same techniques I’d seen on set. Some of those attempts succeeded and some failed.

Recently, the stars finally aligned and I directed my first feature-length film, Mercy Christmas, from a script I co-wrote with my wife Beth Levy Nelson. After five years of writing, fundraising, production and post production, the movie is finished. We made the movie outside the Hollywood system, using crowd funding, generous friends and loving family members to compile enough cash to make the ultra-low-budget version of the Mercy Christmas screenplay.

I say low budget because financially it was, but thanks to my time on set, years of practice and much trial and error, the finished film looks and feels like much more than it cost.

Mercy Christmas, by the way, follows Michael Briskett, who meets the perfect woman; his ideal Christmas dream comes true when she invites him to her family’s holiday celebration. Michael’s dream shatters, however, when he realizes that he will be the Christmas dinner. The film is currently on iTunes.

My experience working professionally in the film business while I struggled to get my shot at directing taught me many things. I learned over those years that a mastery of the techniques and equipment used to tell stories for film was imperative.

The stories I gravitate towards tend to have higher concept set pieces. I really enjoy combining action and character. At this point in my career, the budgets are more limited. However, I can’t allow financial restrictions to hold me back from the stories I want to tell. I must always find a way to use the tools available in their best way.

Ryan Nelson with camera on set.

Two Cameras
I remember an early meeting with a possible producer for Mercy Christmas. I told him I was planning to shoot two cameras. The producer chided me, saying it would be a waste of money. Right then, I knew I didn’t want to work with that producer, and I didn’t.

Every project I do now and in the future will be two cameras. And the reason is simple: It would be a waste of money not to use two cameras. On a limited budget, two cameras offer twice the coverage. Yes, understanding how to shoot two cameras is key, but it’s also simple to master. Cross coverage is not conducive to lower-budget lighting, so stacking the cameras on a single piece of coverage gives you a medium shot and a close shot at the same time. Or, for instance, when shooting the wide master shot, you can also get a medium master shot to give the editor another option to break away to while building a scene.

In Mercy Christmas, we have a fight scene that consists of seven minutes of screen time. It’s a raucous fight that covers three individual fights happening simultaneously. We scheduled three days to shoot the fight. Without two cameras it would have taken more days to shoot, and we definitely didn’t have more days in the budget.

Of course, two camera rentals and camera crews are budget concerns, so the key is to find a lower-budget but high-quality camera. For Mercy Christmas, we chose the Canon C300 Mark II. We found the image to be fantastic. I was very happy with the final result. You can also save money by only renting one lens package to use for both cameras.

Editing
Good camera coverage doesn’t mean much without an excellent editor. Our editor for Mercy Christmas, Matt Evans, is a very good friend and also very experienced in post. Like me, Matt started at the bottom and worked his way up. Along the way, he worked on many studio films as apprentice editor, first assistant editor and finally editor. Matt’s preferred tool is Avid Media Composer. He’s incredibly fast and understands every aspect of the system.

Matt’s technical grasp is superb, but his story sense is the real key. Matt’s technique is a fun thing to witness. He approaches a scene by letting the footage tell him what to do on a first pass. Soaking in the performances with each take, Matt finds the story that the images want to tell. It’s almost as if he’s reading a new script based on the images. I am delighted each time I can watch Matt’s first pass on a scene. I always expect to see something I hadn’t anticipated. And it’s a thrill.

Color Grading
Another aspect that should be budgeted into an independent film is professional color grading. No, your editor doing color does not count. A professional post house with a professional color grader is what you need. I know this seems exorbitant for a small-budget indie film, but I’d highly recommend planning for it from the beginning. We budgeted color grading for Mercy Christmas because we knew it would take the look to professional levels.

Color grading is not only a tool for the cinematographer, it’s a godsend for the director as well. First and foremost, it can save a shot, making a preferred take that has an inferior look actually become a usable take. Second, I believe strongly that color is another tool for storytelling. An audience can be as moved by color as by music. Every detail coming to the audience is information they’ll process to understand the story. I learned very early in my career how shots I saw created on set were accentuated in post by color grading. We used Framework post house in Los Angeles on Mercy Christmas. The colorist was David Sims, who did the color and conform in DaVinci Resolve 12.

In the end, my struggle over the years did gain me one of my best tools: experience. I’ve taken the time to absorb all the filmmaking I’ve been surrounded by. Watching movies. Working on sets. Making my own.

After all that time chasing my dream, I kept learning, refining my skills and honing my technique. For me, filmmaking is a passion, a dream and a job. All of those elements made me the storyteller I am today and I wouldn’t change a thing.


On Hold: Making an indie web series

By John Parenteau

On Hold is an eight-episode web series, created and co-written by myself and Craig Kuehne, about a couple of guys working at a satellite company for an India-based technology firm. They have little going for themselves except each other, and that’s not saying much. Season 1 is available now, and we are in prepro on Season 2.

While I personally identify as a filmmaker, I’ve worn a wide range of hats in the entertainment industry since graduating from USC School of Cinematic Arts in the late ‘80s. As a visual effects supervisor, I’ve been involved in projects as diverse as Star Trek: Voyager and Hunger Games. I have also filled management roles at companies such as Amblin Entertainment, Ascent Media, Pixomondo and Shade VFX.

That’s me in the chair, conferring on setup.

It was with my filmmaker hat on that I recently partnered with Craig, a long-time veteran of visual effects, whose credits include Westworld and Game of Thrones. We thought it might be interesting to share our experiences as we ventured into live-action production.

It’s not unique that Craig and I want to be filmmakers. I think most industry professionals, who are not already working as directors or producers, strive to eventually reach that goal. It’s usually the reason people like us get into the business in the first place, and what many of us continue to pursue. Often we’ve become successful in another aspect of entertainment and found it difficult to break out of those “golden handcuffs.” I know Craig and I have both felt that way for years, despite having led fairly successful lives as visual effects pros.

But regardless of our successes in other roles, we still identify ourselves as filmmakers, and at some point, you just have to make the big push or let the dream go. I decided to live by my own mantra that “filmmakers make film.” Thus, On Hold was born.

Why the web series format, you might ask? With so many streaming and online platforms focused on episodic material, doing a series would show we are comfortable with the format, even if ours was a micro-version of a full series. We had, for years, talked about doing a feature film, but that type of project takes so many resources and so much coordination. It just seemed daunting in a no-budget scenario. The web series concept allows us to produce something that resembles a marketable project, essentially on little or no budget. In addition, the format is easily recreated for an equally low budget, so we knew we could do a second season of the show once we had done the first.

This is Craig, pondering a shot.

The Story
We have been friends for years, and the idea for the series came from both our friendship and our own lives. Who hasn’t felt, as they were getting older, that maybe some of the life choices they made might not have been the best? That can be a serious topic, but we took a comedic angle, looking for the extremes. Our main characters, Jeff (Jimmy Blakeney) and Larry (Paul Vaillancourt), are subtle reflections of us (Craig is Jeff, the somewhat over-thinking, obsessive nerd, and I’m Larry, a bit of a curmudgeon, who can take himself way too seriously), but they quickly took on a life of their own, as did the rest of the cast. We added in Katy (Brittney Bertier), their over-energetic intern, Connie (Kelly Keaton), Jeff’s bigger-than-life sister, and Brandon (Scott Rognlien), the creepy and not-very-bright boss. The chemistry just clicked. They say casting is key, and we certainly discovered that on this project. We were very lucky to find the actors we did, and they played off of each other perfectly.

So what does it take to do a web series? First off, writing was key. We spent a few months working out the overall storyline of the first season and then homed in on the basic outlines of each episode. We actually worked out a rough overall arc of the show itself, deciding on a four-season project, which gave us a target to aim for. It was just some basic imagery for an ultimate ending of the show, but it helped keep us focused and helped drive the structure of the early episodes. We split up writing duties, each working on alternate episodes and then sharing scripts with each other. We tried to be brutally honest; it was important that the show reflect both of our views. We spent many nights arguing over certain moments in each episode, both very passionate about the storyline.

In the end we could see we had something good; we just needed to add our talented actors to make it great.

On Hold

The Production
We shot on a Blackmagic Cinema camera, which was fairly new at that point. I wanted the flexibility of different lenses but a high-resolution and high-quality picture. I had never been thrilled with standard DSLR cameras, so I thought the Blackmagic camera would be a good option. To top it off, I could get one for free — always a deciding factor at our budget level. We ended up shooting with a single Canon zoom lens that Craig had, and for the most part it worked fine. I can’t tell you how important the “glass” you shoot with can be. If we had the budget I would have rented some nice Zeiss lenses or something equally professional, and the quality of the image reflects the lack of budget. But the beauty of the Blackmagic Cinema Camera is that it shoots such a nice image already, and at such a high resolution, that we knew we would have some flexibility in post. We recorded in Apple ProRes.

As a DP, I have shot everything from PBS documentaries to music videos, commercials and EPKs (a.k.a. behind-the-scenes projects), and have worked with everything from a load of gear to a single light. At USC Film School, my alma mater, you learn to work with what you have, so I learned early to adapt my style to the gear on hand. I ended up using a single lighting kit (a Lowel DP three-head kit), which worked fine. Shooting comedy is always more about static angles and higher-key lighting, and my limited kit made that easily accessible. I would usually lift the ambience in the room by bouncing a light off a wall or ceiling area off camera, then use bounce cards on C-stands to give some source light from the top/side, complementing but not competing with the existing fluorescents in the office. The bigger challenges were when we shot toward the windows. The bright sunlight outside, even with the blinds closed, was a challenge, but we creatively scheduled those shots for early or late in the day.

Low-budget projects are always an exercise in inventiveness and flexibility, mostly by the crew. We had a few people helping off and on, but ultimately it came down to the two of us wearing most of the hats and our associate producer, Maggie Jones, filling in the gaps. She handled the SAG paperwork, some AD tasks, ordered lunch and even operated the boom microphone. That left me shooting all but one episode, while we alternated directing episodes. We shot an episode a day, using a friend’s office on the weekends for free. We made sure we created shot lists ahead of time, so I could see what Craig had in mind when I shot his episodes, but also so he could act as a backup check on my list when I was directing.

The Blackmagic camera at work.

One thing about SAG — we decided to go with the guild’s new media contract for our actors. Most of them were already SAG, and while they most likely would have been fine shooting such a small project non-union, we wanted them to be comfortable with the work. We also wanted to respect the guild. Many people complain that working under SAG, especially at this level, is a hassle, but we found it to be exactly the opposite. The key is keeping up with the paperwork each day you shoot. Unless you are working incredibly long hours, or plan to abuse your talent (not a good idea regardless), it’s fairly easy to remain compliant. Maggie managed the daily paperwork and ensured we broke for lunch as per the requirements. Other than that, it was a non-issue.

The Post
Much like our writing and directing, Craig and I split editorial tasks. We both cut on Apple Final Cut Pro X (he with pleasure, me begrudgingly), and shared edits with each other. It was interesting to note differences in style. I tended to cut long, letting scenes breathe. Craig, a much better editor than I, had snappier cuts that moved quicker. This isn’t to say my way didn’t work at times, but it was a nice balance as we made comments on each other’s work. You can tell my episodes are a bit longer than his, but I learned from the experience and managed to shorten my episodes significantly.

I did learn another lesson, one called “killing your darlings.” In one episode, we had a scene where Jeff enjoyed a box of donuts, fishing through them to find the fruit-filled one he craved. The process of him licking each one and putting them back, or biting into a few and spitting out pieces, was hilarious on set, but in editorial I soon learned that too much of a good thing can be bad. Craig persuaded me to trim the scene, and I realized quickly that having one strong beat is just as good as several.

We had a variety of issues with other areas of post, but with no budget we could do little about them. Our “mix” consisted of adjusting levels in our timeline. Our DI amounted to a little color correction. While we were happy with the end result, we realized quickly that we want to make season two even better.

On Hold

The Lessons
A few things pop out as areas needing improvement. First of all, shooting a comedy series with a great group of improv comedians mandates at least two cameras. Both Craig and I, as directors, would do improv takes with the actors after getting the “scripted version,” but some of it was not usable since cutting between different improv takes from a single camera shoot is nearly impossible. We also realized the importance of a real sound mixer on set. Our single mic, mono tracks, run by our unprofessional hands, definitely needed some serious fixing in post. Simply having more experienced hands would have made our day more efficient as well.

For post, I certainly wanted to use newer tools, and we called in some favors for finishing. A confident color correction really makes the image cohesive, and even a rudimentary audio mix can remove many sound issues.

All in all, we are very proud of our first season of On Hold. Despite the technical issues and challenges, what really came together was the performances, and, ultimately, that is what people are watching. We’ve already started development on Season 2, which we will start shooting in January 2018, and we couldn’t be more excited.

The ultimate lesson we’ve learned is that producing a project like On Hold is not as hard as you might think. Sure it has its challenges, but what part of entertainment isn’t a challenge? As Tom Hanks says in A League of Their Own, “It’s supposed to be hard. If it wasn’t hard everyone would do it.” Well, this time, the hard work was worth it, and has inspired us to continue on. Ultimately, isn’t that the point of it all? Whether making films for millions of dollars, or no-budget web series, the point is making stuff. That’s what makes us filmmakers.



Timecode Systems intros SyncBac Pro for GoPro Hero6

Not long after GoPro introduced its latest offering, Timecode Systems released a customized SyncBac Pro for GoPro Hero6 Black cameras, a timecode-sync solution for the newest generation of action cameras.

By allowing the Hero6 to generate its own frame-accurate timecode, the SyncBac Pro makes it possible to timecode-sync multiple GoPro cameras wirelessly over long-range RF. If GoPro cameras are being used as part of a wider multicamera shoot, SyncBac Pro also allows GoPro cameras to timecode-sync with pro cameras and audio devices. At the end of a shoot, the edit team receives SD cards with frame-accurate timecode embedded into the MP4 file. According to Timecode Systems, using SyncBac Pro for timecode saves around 85 percent of the time in post.
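
The time savings come from the fact that frame-accurate timecode lets the edit team (or a script) compute clip offsets arithmetically instead of syncing by eye or by waveform. A minimal sketch of that arithmetic, assuming non-drop-frame timecode at a whole-number frame rate:

```python
def timecode_to_frames(tc, fps):
    """Convert HH:MM:SS:FF non-drop-frame timecode to an absolute frame count."""
    hours, minutes, seconds, frames = (int(part) for part in tc.split(":"))
    return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

def sync_offset_frames(clip_a_start, clip_b_start, fps):
    """How many frames clip B starts after clip A (negative if it starts before)."""
    return timecode_to_frames(clip_b_start, fps) - timecode_to_frames(clip_a_start, fps)

# Two cameras locked to the same timecode source at 25fps.
print(sync_offset_frames("10:15:30:00", "10:15:32:12", 25))  # -> 62
```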

“With the Hero6, GoPro has added features that advance camera performance and image quality, which increases the appeal of using GoPro cameras for professional filming for television and film,” says Ashok Savdharia, CTO at Timecode Systems. “SyncBac Pro further enhances the camera’s compatibility with professional production methods by adding the ability to integrate footage into a multicamera film and broadcast workflow in the same way as larger-scale professional cameras.”

The new SyncBac Pro for GoPro Hero6 Black will start shipping this winter, and it is now available for preorder.


Color plays big role in director Sean Baker’s The Florida Project

Director Sean Baker is drawing wide praise for his realistic portrait of life on the fringe in America in his new film The Florida Project. Baker applies a light touch to the story of a precocious six-year-old girl living in the shadow of Disney World, giving it the feel of a slice-of-life documentary. That quality is carried through in the film’s natural look. Where Baker shot his previous film, Tangerine, entirely with an iPhone, The Florida Project was recorded almost wholly on anamorphic 35mm film by cinematographer Alexis Zabe.

Sam Daley

Post finishing for the film was completed at Technicolor PostWorks New York, which called on a traditional digital intermediate workflow to accommodate Baker’s vision. The work began with scanning the 35mm negative to 2K digital files for dailies and editorial. It ended months later with rescanning at 4K and 6K resolution, editorial conforming and color grading in the facility’s 4K DI theater. Senior colorist Sam Daley applied the final grade in Blackmagic DaVinci Resolve 12.5.

Shooting on film was a perfect choice, according to Daley, as it allowed Baker and Zabe to capture the stark contrasts of life in Central Florida. “I lived in Florida for six years, so I’m familiar with the intensity of light and how it affects color,” says Daley. “Pastels are prominent in the Florida color palette because of the way the sun bleaches paint.”

He adds that Zabe used Kodak Vision3 50D and 250D stock for daylight scenes shot in the hot Florida sun, noting, “The slower stock provided a rich color canvas, so much so that at times we de-emphasized the greenery so it didn’t feel hyper-real.”

The film’s principal location is a rundown motel, ironically named the Magic Castle. It does not share the sun-bleached look of other businesses and housing complexes in the area as it has been freshly painted a garish shade of purple.

Baker asked Daley to highlight such contrasts in the grade, but to do so subtly. “There are many colorful locations in the movie,” Daley says. “The tourist traps you see along the highway in Kissimmee are brightly colored. Blue skies and beautiful sunsets appear throughout the film. But it was imperative not to allow the bright colors in the background to distract from the characters in the foreground. The very first instruction that I got from Sean was to make it look real, then dial it up a notch.”

Mixing Film and Digital for Night Shots
To make use of available light, nighttime scenes were not shot on film, but rather were captured digitally on an Arri Alexa. Working in concert with color scientists from Technicolor PostWorks New York and Technicolor Hollywood, Daley helmed a novel workflow to make the digital material blend with scenes that were film-original. He first “pre-graded” the digital shots and then sent them to Technicolor Hollywood where they were recorded out to film. After processing at FotoKem, the film outs were returned to Technicolor Hollywood and scanned to 4K digital files. Those files were rushed back to New York via Technicolor’s Production Network where Daley then dropped them into his timeline for final color grading. The result of the complex process was to give the digitally acquired material a natural film color and grain structure.

“It would have been simpler to fly the digitally captured scenes into my timeline and put on a film LUT and grain FX,” explains Daley, “but Sean wanted everything to have a film element. So, we had to rethink the workflow and come up with a different way to make digital material integrate with beautifully shot film. The process involved several steps, but it allowed us to meet Sean’s desire for a complete film DI.”

Calling on iPhone for One Scene
A scene near the end of the film was, for narrative reasons, captured with an iPhone. Daley explains that, although intended to stand out from the rest of the film, the sequence couldn’t appear so different that it shocked the audience. “The switch from 4K scanned film material to iPhone footage happens via a hard cut,” he explains. “But it needed to feel like it was part of the same movie. That was a challenge because the characteristics of Kodak motion picture stock are quite different from an iPhone.”

The iPhone material was put through the same process as the Alexa footage; it was pre-graded, recorded out to film and scanned back to digital. “The grain helps tie it to the rest of the movie,” reports Daley. “And the grain that you see is real; it’s from the negative that the scene was recorded out to. There are no artificial looks and nothing gimmicky about any of the looks in this film.”

The apparent lack of artifice is, in fact, one of the film’s great strengths. Daley notes that even a rainbow that appears in a key moment was captured naturally. “It’s a beautiful movie,” says Daley. “It’s wonderfully directed, photographed and edited. I was very fortunate to be able to add my touch to the imagery that Sean and Alexis captured so beautifully.”


A Closer Look: VR solutions for production and post

By Alexandre Regeffe

Back in September, I traveled to Amsterdam to check out new tools relating to VR and 360 production and post. As a producer based in Paris, France, I have been working in the virtual reality part of the business for over two years. While IBC took place in September, the information I have to share is still quite relevant.

KanDao

I saw some very cool technology at the show regarding VR and 360 video, especially within the cinematic VR niche. And niche is the perfect word — I see the market slightly narrowing after the wave of hype that happened a couple of years ago. Personally, I don’t think the public has been reached yet, but pardon my French pessimism. Let’s take a look…

Cameras
One new range of products I found amazing was the Obsidian line from manufacturer KanDao. This Chinese brand has a smart product line with their 3D/360 cameras. Starting with the Obsidian Go, they reach pro cinematic levels with the Obsidian R (for Resolution, which is 8K per eye) and the Obsidian S (for Speed, which can capture at 120fps). The cameras offer a small radial form factor with only six eyes to produce very smooth stereoscopy, and a very high resolution per eye, which is one of the keys to reaching a good feeling of immersion in an HMD.

KanDao’s features are promising, including handling 6DoF with depth map generation. To me, this is the future of cinematic VR production — you will be able to have more freedom as the viewer, slightly translating your point of view to see behind objects with natural parallax distortion in realtime! Let me call it “extended” stereoscopic 360.

I can’t speak about professional 360 cameras without also mentioning the Ozo from Nokia. Considered by users to be the first pro VR camera, the Ozo got an upgrade this year: the Ozo+ launched with a new ISP and offers astonishing new features, especially when you transfer your shots into the Ozo Creator tool, which is in version 2.1.

Nokia Ozo+

Powerful tools, like highlight and shadow recovery, haze removal, auto stabilization and better denoising, are there to improve the overall image quality. Another big thing at the Nokia booth was version 2.0 of the Ozo Live system. Yes, you can now webcast your live event in stereoscopic 360 with a 4K-per-eye resolution! And you can simply use a (boosted) laptop to do it! All the VR tools from Nokia are part of what they call Ozo Reality, an integrated ecosystem where you can create, deliver and experience cinematic VR.

VR Post
When you talk about VR post you have to talk about stitching — assembling all sources to obtain a 360 image. As a French-educated man, you know I have to complain somehow: I hate stitching. And I often yell at these guys who shoot at wrong camera positions. Spending hours (and money) dealing with seam lines is not my tasse de thé.

A few months before IBC, I found my saving grace: Mistika VR from SGO. Well known for their color grading tool Mistika Ultima (which is one of the finest in stereoscopic), SGO launched a stitching tool for 360 video. Fantastic results. Fantastic development team.

In this very intuitive tool, you can stitch sources from almost all existing cameras and rigs available on the market now, from the Samsung Gear 360 to Jaunt. With amazing optical flow algorithms, seam line fine adjustments, color matching and many other features, it is to me by far the best tool for outputting a clean, seamless equirectangular image. And the upcoming Mistika VR 3D for stitching stereoscopic sources is very promising. You know what? Thanks to Mistika VR, the stitching process could be fun. Even for me.

In general, optical flow is a huge improvement for stitching, and we can find this parameter in the Kandao Studio stitching tool (designed only for Obsidian cameras), for instance. When you’re happy with your stitch, you can then edit, color grade and maybe add VFX and interactivity in order to bring a really good experience to viewers.

Immersive video within Adobe Premiere.

Today, Adobe CC takes the lead in the editing scene with its specific 360 tools, such as the contextual viewer. But the big hit was when Adobe acquired the Skybox plugins suite from Mettle, which will be integrated natively in the next Adobe CC version (for Premiere and After Effects).

With this set of tools you can easily manipulate your equirectangular sources, do tripod removal, sky replacements and all the invisible effects that were tricky to do without Skybox. You can then add contextual 360 effects like text, blur, transitions, greenscreen and much more, in monoscopic and even stereoscopic mode. All this while viewing your timeline directly in your Oculus Rift and in realtime! And, incredibly, it works — I use these tools all day long.

So let’s talk about the Mettle team. Created by two artists back in 1992, they joined the VR movement three years ago with the Skybox suite. They understood they had to bring tech to creative people. As a result they made smart tools with very well-designed GUIs. For instance, look at Mettle’s new Mantra creative toolset for After Effects and Premiere. It is incredible to work with because you get the power to create very artistic designs in 360 in Adobe CC. And if you’re a solid VFX tech, wait for their Volumatrix depth-related VR FX software tools. Working in collaboration with Facebook, Mettle will launch the next big tool for doing VFX in 3D/360 environments using camera-generated depth maps. It will open awesome new possibilities for content creators.

You know, the current main issue in cinematic 360 is image quality. Of course, we could talk about resolution or pixels per eye, but I think we should focus on color grading. This task is very creative — bringing emotions to the viewers. For me, the best 360 color grading tool to achieve these goals with uncompromised quality is Scratch VR from Assimilate. Beautiful. Formidable. Scratch is a very powerful color grading system, always on top in terms of technology. Now that they’ve added VR capabilities, you can color grade your stereoscopic equirectangular sources as easily as with normal sources. My favorite is the mask repeater function, which lets you naturally handle masks even across the back seam, something that is almost impossible in other color grading tools. And you can also view your results directly in your HMD.

Scratch VR and ZCam collaboration.

At NAB 2017, they introduced Scratch VR Z, an integrated workflow in collaboration with ZCam, the manufacturer of the S1 and S1 Pro. In this workflow you can, for instance, stitch sources directly into Scratch and do super high-quality color grading with realtime live streaming, along with logo insertion, greenscreen capabilities, layouts, etc. Crazy. For finishing, the Scratch VR output module is also very useful, enabling you to render your result in ProRes even on Windows, in 10-bit H.264 and many other formats.

Finishing and Distribution
So your cinematic VR experience is finished (you’ll notice I’ve skipped the sound part of the process, but since it’s not the part I work on I will not speak about this essential stage). But maybe you want to add some interactivity for a better user experience?

I visited IBC’s Future Zone to talk with the Liquid Cinema team. What is it? Simply, it’s a set of tools enabling you to enhance your cinematic VR experience. One important word is storytelling — with Liquid Cinema you can add an interactive layer to your story. The first tool needed is the authoring application, where you drop your sources, which can be movies, stills, 360 and 2D stuff. Then create and enjoy.

For example, you can add graphic layers and enable the viewer’s gaze function, create multibranching scenarios based on intelligent timelines, and play with forced perspective features so your viewer never misses an important thing… you must try it.

The second part of the suite is about VR distribution. As a content creator you want your experience to be on all existing platforms, HMDs, channels … not an easy feat, but with Liquid Cinema it’s possible. Their player is compatible with Samsung Gear VR, Oculus Rift, HTC Vive, iOS, Android, Daydream and more. It’s coming to Apple TV soon.

IglooVision

The third part of the suite is the management of your content. Liquid Cinema has a CMS tool, which is very simple and allows changes, like geoblocking, to be made easily, and provides useful analytics tools like heat maps. And you can use your Vimeo Pro account as a CDN if needed. Perfect.

Also in the Future Zone was the igloo from IglooVision. This is one of the best “social” ways to experience cinematic VR that I have ever seen. Enter this room with your friends and you can watch 360 all around and finish your drink (try that with an HMD). Comfortable, isn’t it? You can also use it as a “shared VR production suite” by connecting Adobe Premiere or your favorite tool directly to the system. Boom. You now have an immersive 360-degree monitor around you and your post production team.

So that was my journey into the VR stuff of IBC 2017. Of course, this is a non-exhaustive list of tools, with nothing about sound (which is very important in VR), but it’s my personal choice. Period.

One last thing: VR people. I have met a lot of enthusiastic, smart, interesting and happy women and men, helping content producers like me to push their creative limits. So thanks to all of them and see ya.


Paris-based Alexandre Regeffe is a 25-year veteran of TV and film. He is currently VR post production manager at Neotopy, a VR studio, as well as a VR effects specialist working on After Effects and the entire Adobe suite. His specialty is cinematic VR post workflows.


Winners: IBC2017 Impact Awards

postPerspective has announced the winners of our postPerspective Impact Awards from IBC2017. All winning products reflect the latest version of the product, as shown at IBC.

The postPerspective Impact Award winners from IBC2017 are:

• Adobe for Creative Cloud
• Avid for Avid Nexis Pro
• Colorfront for Transkoder 2017
• Sony Electronics for Venice CineAlta camera

Seeking to recognize debut products and key upgrades with real-world applications, the postPerspective Impact Awards are determined by an anonymous judging body made up of industry pros. The awards honor innovative products and technologies for the post production and production industries that will influence the way people work.

“All four of these technologies are very worthy recipients of our first postPerspective Impact Awards from IBC,” said Randi Altman, postPerspective’s founder and editor-in-chief. “These awards celebrate companies that push the boundaries of technology to produce tools that actually make users’ working lives easier and projects better, and our winners certainly fall into that category. You’ll notice that our awards from IBC span the entire pro pipeline, from acquisition to on-set dailies to editing/compositing to storage.

“As IBC falls later in the year, we are able to see where companies are driving refinements to really elevate workflow and enhance production. So we’ve tapped real-world users to vote for the Impact Awards, and they have determined what could be most impactful to their day-to-day work. We’re very proud of that fact, and it makes our awards quite special.”

IBC2017 took place September 15-19 in Amsterdam. postPerspective Impact Awards are next scheduled to celebrate innovative product and technology launches at the 2018 NAB Show.


Red intros Monstro 8K VV, a full-frame sensor

Red Digital Cinema has a new cinematic full-frame sensor for its Weapon cameras called the Monstro 8K VV. Monstro evolves beyond the Dragon 8K VV sensor with improvements in image quality including dynamic range and shadow detail.

This newest camera and sensor combination, Weapon 8K VV, offers full-frame lens coverage, captures 8K full-format motion at up to 60fps, produces ultra-detailed 35.4 megapixel stills and delivers incredibly fast data speeds — up to 300MB/s. And like all of Red’s DSMC2 cameras, Weapon shoots simultaneous RedCode RAW and Apple ProRes or Avid DNxHD/HR recording. It also adheres to the company’s Obsolescence Obsolete — its operating principle that allows current Red owners to upgrade their technology as innovations are unveiled and move between camera systems without having to purchase all new gear.

The new Weapon is priced at $79,500 (for the camera brain) with upgrades for carbon fiber Weapon customers available for $29,500. Monstro 8K VV will replace the Dragon 8K VV in Red’s line-up, and customers that had previously placed an order for a Dragon 8K VV sensor will be offered this new sensor beginning now. New orders will start being fulfilled in early 2018.

Red has also introduced a service offering for all carbon fiber Weapon owners called Red Armor-W. Red Armor-W offers enhanced and extended protection beyond Red Armor, and also includes one sensor swap each year.

According to Red president Jarred Land, “We put ourselves in the shoes of our customers and see how we can improve how we can support them. Red Armor-W builds upon the foundation of our original extended warranty program and includes giving customers the ability to move between sensors based upon their shooting needs.”

Additionally, Red has made its enhanced image processing pipeline (IPP2) available in-camera with the company’s latest firmware release (V.7.0) for all cameras with Helium and Monstro sensors. IPP2 offers a completely overhauled workflow experience, featuring enhancements such as smoother highlight roll-off, better management of challenging colors, an improved demosaicing algorithm and more.


GoPro intros Hero6 and its first integrated 360 solution, Fusion

By Mike McCarthy

Last week, I traveled to San Francisco to attend GoPro’s launch event for its new Hero6 and Fusion cameras. The Hero6 is the next logical step in the company’s iteration of action cameras, increasing the supported frame rates to 4Kp60 and 1080p240, as well as adding integrated image stabilization. The Fusion on the other hand is a totally new product for them, an action-cam for 360-degree video. GoPro has developed a variety of other 360-degree video capture solutions in the past, based on rigs using many of their existing Hero cameras, but Fusion is their first integrated 360-video solution.

While the Hero6 is available immediately for $499, the Fusion is expected to ship in November for $699. While we got to see the Fusion and its footage, most of the hands-on aspects of the launch event revolved around the Hero6. Each of the attendees was provided a Hero6 kit to record the rest of the day’s events. My group was provided a ride on the RocketBoat through the San Francisco Bay. This adventure took advantage of a number of features of the camera, including the waterproofing, the slow motion and the image stabilization.

The Hero6

The big change within the Hero6 is the inclusion of GoPro’s new custom-designed GP1 image processing chip. This allows them to process and encode higher frame rates, and allows for image stabilization at many frame-rate settings. The camera itself is physically similar to the previous generations, so all of your existing mounts and rigs will still work with it. It is an easy swap out to upgrade the Karma drone with the new camera, which also got a few software improvements. It can now automatically track the controller with the camera to keep the user in the frame while the drone is following or stationary. It can also fly a circuit of 10 waypoints for repeatable shots, and overcoming a limitation I didn’t know existed, it can now look “up.”

There were fewer precise details about the Fusion. It is stated to be able to record a 5.2K video sphere at 30fps and a 3K sphere at 60fps. This is presumably the circumference of the sphere in pixels, and therefore the width of an equirectangular output. That would lead us to conclude that the individual fish-eye recording is about 2,600 pixels wide, plus a little overlap for the stitch. (In this article, GoPro’s David Newman details how the company arrives at 5.2K.)
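
As a quick sanity check on that math, assuming the quoted “5.2K” is the width of the equirectangular output and that each lens covers half the sphere plus some stitch overlap (the overlap figure below is an assumption, not a GoPro spec):

```python
# Back-of-the-envelope check of the per-lens recording width.
sphere_width = 5200           # assumed equirectangular output width in pixels
overlap = 0.05                # assumed 5% stitch overlap per lens
per_lens_width = sphere_width / 2 * (1 + overlap)
print(round(per_lens_width))  # -> 2730, i.e. roughly 2,600 px plus overlap
```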

GoPro Fusion for 360

The sensors are slightly laterally offset from one another, allowing the camera to be thinner and decreasing the parallax shift at the side seams, but adding a slight offset at the top and bottom seams. If the camera is oriented upright, those seams are the least important areas in most shots. They also appear to have a good solution for hiding the camera support pole within the stitch, based on the demo footage they were showing. It will be interesting to see what effect the Fusion camera has on the “culture” of 360 video. It is not the first affordable 360-degree camera, but it will definitely bring 360 capture to new places.

A big part of the equation for 360 video is the supporting software and the need to get the footage from the camera to the viewer in a usable way. GoPro already acquired Kolor’s Autopano Video Pro a few years ago to support image stitching for their larger 360 video camera rigs, so certain pieces of the underlying software ecosystem to support 360-video workflow are already in place. The desktop solution for processing the 360 footage will be called Fusion Studio, and is listed as coming soon on their website.

They have a pretty slick demonstration of flat image extraction from the video sphere, which they are marketing as “OverCapture.” This allows a cellphone to pan around the 360 sphere, which is pretty standard these days, but by recording that viewing in realtime they can output standard flat videos from the 360 sphere. This is a much simpler and more intuitive approach to virtual cinematography than trying to control the view with angles and keyframes in a desktop app.

This workflow should result in a very fish-eye-looking flat video, similar to the more traditional GoPro shots, due to the similar lens characteristics. There are a variety of possible approaches to handling the fish-eye look. GoPro’s David Newman was explaining to me some of the solutions he has been working on to re-project GoPro footage into a sphere, to reframe or alter the field of view in a virtual environment. Based on their demo reel, it looks like they also have some interesting tools coming for using the unique functionality that 360 makes available to content creators, using various 360 projections for creative purposes within a flat video.

GoPro Software
On the software front, GoPro has also been developing tools to help its camera users process and share their footage. One of the inherent issues of action-camera footage is that there is basically no trigger discipline. You hit record long before anything happens, and then get back to the camera after the event in question is over. I used to get one-hour roll-outs that had 10 seconds of usable footage within them. The same is true when recording many attempts to do something before one of them succeeds.

Remote control of the recording process has helped with this a bit, but regardless you end up with tons of extra footage that you don’t need. GoPro is working on software tools that use AI and machine learning to sort through your footage and find the best parts automatically. The next logical step is to start cutting together the best shots, which is what Quikstories in their mobile app is beginning to do. As someone who edits video for a living, and is fairly particular and precise, I have a bit of trouble with the idea of using something like that for my videos, but for someone to whom the idea of “video editing” is intimidating, this could be a good place to start. And once the tools get to a point where their output can be trusted, automatically sorting footage could make even very serious editing a bit easier when there is a lot of potential material to get through. In the meantime though, I find their desktop tool Quik to be too limiting for my needs and will continue to use Premiere to edit my GoPro footage, which is the response I believe they expect of any professional user.

There are also a variety of new camera mount options available, including small extendable tripod handles in two lengths, as well as a unique “Bite Mount” for POV shots. It includes a colorful padded float in case it pops out of your mouth while shooting in the water. The tripods are extra important for the forthcoming Fusion, to support the camera with minimal obstruction of the shot. And I wouldn’t recommend using the Fusion on the Bite Mount, unless you want a lot of head in the shot.

Ease of Use
Ironically, as someone who has processed and edited hundreds of hours of GoPro footage, and even worked for GoPro for a week on paper (as an NAB demo artist for Cineform during their acquisition), I don’t think I had ever actually used a GoPro camera. The fact that at this event we were all handed new cameras with zero instructions and expected to go out and shoot is a testament to how confident GoPro is that their products are easy to use. I didn’t have any difficulty with it, but the engineer within me wanted to know the details of the settings I was adjusting. Bouncing around with water hitting you in the face is not the best environment for learning how to do new things, but I was able to use pretty much every feature the camera had to offer during that ride with no prior experience. (Obviously I have extensive experience with video, just not with GoPro usage.) And I was pretty happy with the results. Now I want to take it sailing, skiing and other such places, just like a “normal” GoPro user.

I have pieced together a quick highlight video of the various features of the Hero6:


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Making the jump to 360 Video (Part 1)

By Mike McCarthy

VR headsets have been available for over a year now, and more content is constantly being developed for them. We should expect that rate to increase as new headset models are being released from established technology companies, prompted in part by the new VR features expected in Microsoft’s next update to Windows 10. As the potential customer base increases, the software continues to mature, and the content offerings broaden. And with the advances in graphics processing technology, we are finally getting to a point where it is feasible to edit videos in VR, on a laptop.

While a full VR experience requires true 3D content in order to render a custom perspective based on the position of the viewer’s head, there is a “video” version of VR, which is called 360 Video. The difference between “Full VR” and “360 Video” is that while both allow you to look around in every direction, 360 Video is pre-recorded from a particular point, and you are limited to the view from that spot. You can’t move your head to see around behind something, like you can in true VR. But 360 video can still offer a very immersive experience, and arguably better visuals, since the images aren’t being rendered on the fly. 360 video can be recorded in stereoscopic or flat, depending on the capabilities of the cameras used.

Stereoscopic footage is obviously more immersive, feeling less like video projected on the inside of a dome, and it is inherently supported by the nature of VR HMDs (Head Mounted Displays). I expect that stereoscopic content will be much more popular in 360 Video than it ever was for flat screen content. Basically, the viewer is already wearing the 3D glasses, so there is no downside, besides needing twice as much source imagery to work with, similar to flat screen stereoscopic production.

There are a variety of options for recording 360 video, from a single ultra-wide fisheye lens on the Fly360 to dual 180-degree lens options like the Gear 360, Nikon KeyMission and Garmin Virb. GoPro is releasing the Fusion, which will fall into this category as well. The next step up is more lenses, with cameras like the Orah4i or the Insta360 Pro. Beyond that, you are stepping into the much more expensive rigs with lots of lenses and lots of stitching, but usually much higher final image quality, like the GoPro Omni or the Nokia Ozo. There are also countless rigs that use an array of standard cameras to capture 360 degrees, but these solutions are much less integrated than the all-in-one products that are now entering the market. Regardless of the camera you use, you are going to be recording one or more files in a pixel format fairly unique to that camera, and those files will need to be processed before they can be used in the later stages of the post workflow.

Affordable cameras

The simplest and cheapest 360 camera option I have found is the Samsung Gear 360. There are two totally different models with the same name, usually differentiated by the year of their release. I am using the older 2016 model, which has a higher resolution sensor, but records UHD instead of the slightly larger full 4K video of the newer 2017 model.

The Gear 360 records two fisheye views that are just over 180 degrees, from cameras situated back to back in a 2.5-inch sphere. Both captured image circles are recorded onto a single frame, side by side, resulting in files with a 2:1 aspect ratio. These are encoded as JPEG (7776×3888 stills) or HEVC (3840×1920 video) at 30Mb and saved onto a MicroSD card. The camera is remarkably simple to use, with only three buttons and a tiny UI screen to select recording mode and resolution. If you have a Samsung Galaxy phone, there are a variety of other functions available, like remote control and streaming the output to the phone as a viewfinder. Even without a Galaxy phone, the camera did everything I needed to generate 360 footage to stitch and edit with, but it was cool to have a remote viewfinder for the driving shots.
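
To make that side-by-side layout concrete, here is a minimal Python sketch, using the Pillow imaging library with placeholder file names, that splits one of those dual-fisheye stills into its two lens images. Each half of the 2:1 frame holds one just-over-180-degree image circle.

from PIL import Image

# Placeholder file name; a real Gear 360 still is a 7776x3888 JPEG with both
# image circles recorded side by side on one frame.
frame = Image.open("gear360_still.jpg")
width, height = frame.size
half = width // 2

front = frame.crop((0, 0, half, height))      # left lens image circle
back = frame.crop((half, 0, width, height))   # right lens image circle

front.save("lens_front.jpg")
back.save("lens_back.jpg")

Stitching software does far more than this, of course, since it has to blend and warp the two circles into a seamless sphere, but the crop shows exactly what the camera hands off to that process.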

Pricier cameras

One of the big challenges of shooting with any 360 camera is how to avoid getting gear and rigging in the shot, since the camera records everything around it. Even the tiny integrated tripod on the Gear 360 is visible in the shots, and putting it on the plate of my regular DSLR tripod fills the bottom of the footage. My solution was to use the thinnest support I could to keep the rest of the rigging as far from the camera as possible, and therefore smaller from its perspective. I created a couple of options to shoot with that are pictured below, and they are much less intrusive in the recorded images. Obviously, besides the camera support, there is the issue of everything else in the shot, including the operator. Since most 360 videos are locked off, an operator may not be needed, but there is no “behind the camera” for hiding gear or anything else. Your set needs to be considered in every direction, since it will all be visible to your viewer. If you can see the camera, it can see you.

There are many different approaches to storing 360 images, which are inherently spherical, as a video file, which is inherently flat. This is the same issue that cartographers have faced for hundreds of years — creating flat paper maps of a planet that is inherently curved. While there are sphere map, cube map and pyramid projection options (among others) based on the way VR headsets work, the equirectangular format has emerged as the standard for editing and distribution encoding, while other projections are occasionally used for certain effects processing or other playback options.
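
As a rough illustration of how that flattening works, here is a short Python sketch, not tied to any particular tool (the function name and numbers are just illustrative), that maps a viewing direction, given as yaw and pitch, to a pixel in an equirectangular frame. Longitude spreads across the full width and latitude across the height, which is why the format ends up with a 2:1 aspect ratio.

def direction_to_equirect_pixel(yaw_deg, pitch_deg, width, height):
    # Yaw runs -180..180 degrees around the horizon; pitch runs -90 (down) to 90 (up).
    u = (yaw_deg + 180.0) / 360.0        # 0..1 across the frame width
    v = (90.0 - pitch_deg) / 180.0       # 0..1 from the top of the frame (up) to the bottom (down)
    x = min(int(u * width), width - 1)   # clamp so +180 degrees doesn't fall off the edge
    y = min(int(v * height), height - 1)
    return x, y

# Looking straight ahead lands in the center of a 3840x1920 frame
print(direction_to_equirect_pixel(0, 0, 3840, 1920))   # (1920, 960)

The poles get stretched across the entire top and bottom rows of the frame, which is the same distortion you see on a world map, and it is why straight-down tripod removal is usually the messiest part of the image.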

Usually the objective of the stitching process is to get the images from all of your lenses combined into a single frame with the least amount of distortion and the fewest visible seams. There are a number of software solutions that do this, from After Effects plugins to dedicated stitching applications like Kolor AVP and Orah VideoStitch-Studio, to unique utilities for certain cameras. Once you have your 360 video footage in the equirectangular format, most of the other steps of the workflow are similar to their flat counterparts, besides VFX. You can cut, fade, title and mix your footage in an NLE and then encode it in the standard H.264 or H.265 formats with a few changes to the metadata.
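
As an example of what those metadata changes look like in practice, the sketch below shells out to Google’s open-source Spatial Media Metadata Injector (github.com/google/spatial-media) to tag an equirectangular H.264 export as spherical video, so that YouTube and other 360-aware players wrap it around the viewer instead of showing the flat projection. Treat this as a sketch under assumptions: the file names are placeholders, and the exact invocation and flags can vary by version of the tool, so check its documentation before relying on it.

import subprocess

# Assumes the spatialmedia package from Google's spatial-media repo is
# available on the Python path; "-i" asks the tool to inject spherical
# (360) metadata into a copy of the file.
subprocess.run(
    [
        "python", "-m", "spatialmedia",
        "-i",
        "my_360_edit.mp4",            # equirectangular export from the NLE
        "my_360_edit_injected.mp4",   # tagged copy to upload or play back
    ],
    check=True,
)

The injected file plays exactly like the original in a normal player; the added metadata simply tells spherical-aware players how to interpret the frame.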

Technically, the only thing you need to add to an existing 4K editing workflow in order to make the jump to 360 video is a 360 camera. Everything else could be done in software, but the other thing you will want is a VR headset or HMD. It is possible to edit 360 video without an HMD, but it is a lot like grading a film using scopes but no monitor. The data and tools you need are all right there, but without being able to see the results, you can’t be confident of what the final product will be like. You can scroll around the 360 video in the view window, or see the whole projected image all distorted, but it won’t have the same feel as experiencing it in a VR headset.

360 Video is not as processing intensive as true 3D VR, but it still requires a substantial amount of power to provide a good editing experience. I am using a Thinkpad P71 with an Nvidia Quadro P5000 GPU to get smooth performance during all these tests.

Stay tuned for Part 2 where we focus on editing 360 Video.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been working on new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.