
Review: Dell’s 8K LCD monitor

By Mike McCarthy

At CES 2017, Dell introduced its UP3218K LCD 32-inch monitor, which was the first commercially available 8K display. It runs 7680×4320 pixels at 60fps, driven by two DisplayPort 1.4 cables. That is over 33 million pixels per frame, and nearly 2 billion per second, which requires a lot of GPU power to generate. Available since March, not long ago I was offered one to review as part of a wider exploration of 8K video production workflows, and there will be more articles about that larger story in the near future.
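
For a sense of scale, here is the arithmetic behind those numbers (a minimal Python sketch; the 24-bit figure assumes 8-bit RGB and is my assumption, not a published spec for this panel):

```python
# Rough arithmetic for an 8K/60 desktop signal.
width, height, fps = 7680, 4320, 60

pixels_per_frame = width * height              # ~33.2 million
pixels_per_second = pixels_per_frame * fps     # ~2.0 billion

# Assuming 8-bit RGB (24 bits per pixel) for the raw signal.
bits_per_pixel = 24
gbps = pixels_per_second * bits_per_pixel / 1e9

print(f"{pixels_per_frame:,} pixels per frame")      # 33,177,600
print(f"{pixels_per_second:,} pixels per second")    # 1,990,656,000
print(f"~{gbps:.1f} Gbps of raw pixel data, before blanking or overhead")
```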

For this review, I will be focusing on only this product and its uses.

The UP3218K showed up in a well-designed box that was easy to unpack, and getting the monitor onto the stand was just as easy. I plugged it into my Nvidia Quadro P6000 card with the included DisplayPort cables, and it came up as soon as I turned it on… at full 60Hz, without any issues or settings to change. Devices with only one DisplayPort 1.4 connector will drive the display at just 30Hz, since a full 60Hz connection saturates the bandwidth of two DP 1.4 cables. The display does require a DisplayPort 1.4 connection, and it will not fall back to a lower resolution when connected to a 1.2 port. This limits the devices that can drive it to Pascal-based GPUs on the Nvidia side, or top-end Vega GPUs on the AMD side. I have a laptop with a P5000 in it, so I was disappointed to discover that its DisplayPort connector is still only 1.2, making it incompatible with this 8K monitor.

Dell’s top Precision laptops (7720 and 7520) support DP1.4, while HP and Lenovo’s mobile workstations do not yet. This is a list of every device I am aware of that explicitly claims to support 8K output:
1. Quadro P6000, P5000, P4000, P2000 workstation GPU cards
2. TitanX and Geforce10 Series graphics cards
3. RadeonPro SSG, WX9100 & WX7100 workstation GPU cards
4. RX Vega 64 and 56 graphics cards
5. Dell Precision 7520 and 7720 mobile workstations
6. Comment if you know of other laptops with DP1.4

So once you have a system that can drive the monitor, what can you do with it? Most people reading this article will probably be using this display as a dedicated full-screen monitor for their 8K footage. But smooth 8K editing and playback is still a ways away for most people. The other option is to use it as your main UI monitor to control your computer and its applications. In either case, color can be as important as resolution when it comes to professional content creation, and Dell has brought everything it has to the table in this regard as well.

The display supports Dell’s PremierColor toolset, which is loosely similar to the functionality that HP offers under its DreamColor branding. PremierColor means a couple of things, including that the display has the internal processing power to correctly emulate different color spaces; it can also be calibrated with an X-Rite i1Display Pro independent of the system driving it. It also interfaces with a few software tools that Dell has developed for its professional users.

The most significant functionality within that feature set is the factory-calibrated options for emulating AdobeRGB, sRGB, Rec.709 and DCI-P3. Dell tests each display individually after manufacturing to ensure that it is color accurate. These are great features, but they are not unique to this monitor, and many users have been using them on other display models for the last few years. While color accuracy is important, the main selling point of this particular model is resolution, and lots of it. That is what I spent the majority of my time analyzing.

Resolution
The main issue here is the pixel density. Ten years ago, 24-inch displays were 1920×1200, and 30-inch displays had 2560×1600 pixels. This was around 100 pixels per inch, and most software was hard coded to look correct at that size. When UHD displays were released, the 32-inch version had a DPI of 140. That resulted in applications looking quite small and hard to read on the vast canvas of pixels, but this trend increased pressure on software companies to scale their interfaces better for high DPI displays. Windows 7 was able to scale things up an extra 50%, but a lot of applications ignored that setting or were not optimized for it. Windows 10 now allows scaling beyond 300%, which effectively triples the size of the text and icons. We have gotten to the point where even 15-inch laptops have UHD screens, resulting in 280 DPI, which is unreadable to most people without interface scaling.
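
Those pixel-density figures fall straight out of the panel geometry; here is a quick sketch of the math (diagonal sizes are nominal, and the laptop example assumes a 15.6-inch screen):

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch along the diagonal of a panel."""
    diag_px = math.hypot(width_px, height_px)
    return diag_px / diagonal_in

print(round(ppi(1920, 1200, 24)))    # ~94  -> "around 100" on an older 24-inch panel
print(round(ppi(2560, 1600, 30)))    # ~101 -> the 30-inch 2560x1600 generation
print(round(ppi(3840, 2160, 32)))    # ~138 -> the ~140 DPI 32-inch UHD display
print(round(ppi(3840, 2160, 15.6)))  # ~282 -> a UHD 15-inch laptop screen
print(round(ppi(7680, 4320, 32)))    # ~275 -> the UP3218K
```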

Premiere Pro

With 8K resolution, this monitor has 280 DPI, twice that of a 4K display of similar size. This is on par with a 15-inch UHD laptop screen, but laptops are usually viewed from a much closer range. Since I am still using Windows 7 on my primary workstation, I was expecting 280 DPI to be unusable for effective work. And while everything is undoubtedly small, it is incredibly crisp, and once I enabled Windows scaling at 150%, it was totally usable (although I am used to small fonts and lots of screen real estate). The applications I use, especially Adobe CC, scale much more smoothly than they used to, so everything looks great, even with Windows 7, as long as I sit fairly close to the monitor.

I can edit 6K footage in Premiere Pro at full resolution for the first time, with space left over for my timeline and tool panels. In After Effects, I can work on 4K shots in full resolution and still have 70 layers of data visible in my composition. In Photoshop, setting the UI to 200% makes the interface behave similarly to a standard 4K 32-inch display, but with your image having four times the detail. I can edit my 5.6K DSLR files in full resolution, with nearly every palette open, and still work smoothly through my various tools.

This display replaces my 34-inch curved U3415W as my new favorite monitor for Adobe apps, although I would still prefer the extra-wide 34-inch display for gaming and other general usability. But for editing or VFX work, the 8K panel is a dream come true. Every tool is available at the same time, and all of your imagery is available at HiDPI quality.

Age of Empires II

When gaming, the resolution doesn’t typically affect the field of view of 3D applications, but for older 2D games, you can see the entire map at once. Age of Empires II HD offers an expansive view of really small units, but there is a texture issue with the background of the bottom quarter of the screen. I think I used to see this at 4K as well, and it got fixed in an update, so maybe the same thing will happen with this one, once 8K becomes more common.

I had a similar UI artifact issue in the RedCine player when I full-screened the window on the 8K display, which was disappointing since that was one of the few ways to smoothly play 8K footage on the monitor at full resolution. Using it as a dedicated output monitor works as well, but I did run into some limitations. I did eventually get it to work with RedCine-X Pro, after initially experiencing some aspect ratio issues. It would play back cached frames smoothly, but only for 15 seconds at a time before running out of decoded frames, even with a Rocket-X accelerator card.

When configured as a secondary display for dedicated full-screen output, it is accessible via Mercury Transmit in the Adobe apps. This is where it gets interesting, because the main feature that this monitor brings to the table is increased resolution. While that is easy to leverage in Photoshop, it is very difficult to drive that many pixels in real-time for video work, and decreasing the playback resolution negates the benefit of having an 8K display. At this point, effectively using the monitor becomes more an issue of workflow.

After Effects

I was going to use 8K Red footage for my test, but that wouldn’t play smoothly in Premiere, even on my 20-core workstation, so I converted it to a variety of other files to test with. I created 8K test assets that matched the monitor resolution in DNxHR, Cineform, JPEG2000, OpenEXR and HEVC. DNxHR was the only format that offered full-resolution playback at 8K, and even that dropped frames on a regular basis. Being able to view 8K video is still pretty impressive, and it probably forever shifts my subjective sense of “sharp,” but we are still waiting for hardware processing power to catch up before 8K video editing is an effective reality for users.
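
For anyone building similar test assets, the transcodes can be scripted. Here is a minimal sketch using ffmpeg via Python (the filenames are hypothetical, it assumes an ffmpeg build with DNxHR and x265 support, and the profile and quality settings are my choices rather than the exact ones used in this test):

```python
import subprocess

SOURCE = "8K_source.mov"  # hypothetical 7680x4320 master

jobs = [
    # DNxHR HQ: the only codec that managed full-res 8K playback in this test.
    ["ffmpeg", "-i", SOURCE, "-c:v", "dnxhd", "-profile:v", "dnxhr_hq",
     "-pix_fmt", "yuv422p", "8K_dnxhr_hq.mov"],
    # HEVC via x265, as a long-GOP comparison point.
    ["ffmpeg", "-i", SOURCE, "-c:v", "libx265", "-crf", "18",
     "-pix_fmt", "yuv420p", "8K_hevc.mp4"],
]

for cmd in jobs:
    subprocess.run(cmd, check=True)
```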

Summing Up
The UP3218K is the ultimate monitor for content creators and artists looking for a large digital canvas, regardless of whether that is measured in inches or pixels. All those pixels come at a price — it is currently available from Dell for $3,900. Is it worth it? That will depend on what your needs and your budget are. Is a Mercedes Benz worth the increased price over a Honda? Some people obviously think so.

There is no question that this display and the hardware to drive it effectively would be a luxury to the average user. But for people who deal with high-resolution content on a regular basis, the increased functionality it offers can’t be measured in the same way, and reading an article and seeing pictures online can’t compare to actually using the physical item. The screenshots here are all scaled to 25% to be a reasonable size for the web; I am just trying to communicate a sense of the scope of the desktop real estate available on an 8K screen. So yes, it is expensive, but at the moment it is the highest-resolution monitor that money can buy, and the nearest alternative (5K screens) is not really in the same league.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

 

Review: GoPro Fusion 360 camera

By Mike McCarthy

I finally got the opportunity to try out the GoPro Fusion camera I have had my eye on since the company first revealed it in April. The $700 camera uses two offset fish-eye lenses to shoot 360 video and stills, while recording ambisonic audio from four microphones in the waterproof unit. It can shoot a 5K video sphere at 30fps, or a 3K sphere at 60fps for higher motion content at reduced resolution. It records dual 190-degree fish-eye perspectives encoded in H.264 to separate MicroSD cards, with four tracks of audio. The rest of the magic comes in the form of GoPro’s newest application Fusion Studio.

Internally, the unit is recording dual 45Mb H.264 files to two separate MicroSD cards, with accompanying audio and metadata assets. This would be a logistical challenge to deal with manually: copying the cards into folders, sorting and syncing them, stitching them together and dealing with the audio. But with GoPro’s new Fusion Studio app, most of this is taken care of for you. Simply plug in the camera and it will automatically access the footage, letting you preview and select which parts of which clips you want processed into stitched 360 footage or flattened video files.

It also processes the multi-channel audio into ambisonic B-Format tracks, or standard stereo if desired. The app is a bit limited in user-control functionality, but what it does do it does very well. My main complaint is that I can’t find a way to manually set the output filename, but I can rename the exports in Windows once they have been rendered. Trying to process the same source file into multiple outputs is challenging for the same reason.

Setting | Recorded Resolution (Per Lens) | Processed Resolution (Equirectangular)
5Kp30   | 2704×2624                      | 4992×2496
3Kp60   | 1568×1504                      | 2880×1440
Stills  | 3104×3000                      | 5760×2880

With the Samsung Gear 360, I researched five different ways to stitch the footage, because I wasn’t satisfied with the included app. Most of those will also work with Fusion footage, and you can read about those options here, but they aren’t really necessary when you have Fusion Studio.

You can choose between H.264, Cineform or ProRes, your equirectangular output resolution and ambisonic or stereo audio. That gives you pretty much every option you should need to process your footage. There is also a “Beta” option to stabilize your footage, which once I got used to it, I really liked. It should be thought of more as a “remove rotation” option since it’s not for stabilizing out sharp motions — which still leave motion blur — but for maintaining the viewer’s perspective even if the camera rotates in unexpected ways. Processing was about 6x run-time on my Lenovo Thinkpad P71 laptop, so a 10-minute clip would take an hour to stitch to 360.
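
That roughly 6x-realtime figure makes it easy to budget stitching time before a shoot; a trivial estimate, assuming the ratio holds on comparable hardware:

```python
def stitch_hours(clip_minutes, ratio=6.0):
    """Estimated Fusion Studio processing time, assuming ~6x realtime."""
    return clip_minutes * ratio / 60

print(stitch_hours(10))   # 1.0 hour for a 10-minute clip
print(stitch_hours(45))   # 4.5 hours for 45 minutes of selects
```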

The footage itself looks good, higher quality than my Gear 360, and the 60p material is much smoother, which is to be expected. While good VR experiences require 90fps to be rendered to the display to avoid motion sickness, that does not necessarily mean that 30fps content is a problem. When rendering the viewer’s perspective, the same frame can be sampled three times, shifting the image as they move their head, even from a single source frame. That said, 60p source content does give smoother results than the 30p footage I am used to watching in VR, but 60p did give me more issues during editorial. I had to disable CUDA acceleration in Adobe Premiere Pro to get Transmit to work with the WMR headset.
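
The reason 30fps source can still work in a headset is that the HMD re-renders the viewer’s perspective at the display rate regardless of the video frame rate. A simplified simulation of that timing (the numbers are just illustrative):

```python
display_hz = 90      # headset refresh rate
source_fps = 30      # 360 video frame rate

for refresh in range(9):                            # first 0.1 second of playback
    src_frame = refresh * source_fps // display_hz  # latest decoded video frame
    # A real player would apply the current head orientation here, so the view
    # updates every refresh even though src_frame only advances every third one.
    print(f"refresh {refresh}: source frame {src_frame}")
```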

Once you have your footage processed in Fusion Studio, it can be edited in Premiere Pro — like any other 360 footage — but the audio can be handled a bit differently. Exporting as stereo will follow the usual workflow, but selecting ambisonic will give you a special spatially aware audio file. Premiere can use this in a 4-track multi-channel sequence to line up the spatial audio with the direction you are looking in VR, and if exported correctly, YouTube can do the same thing for your viewers.

In the Trees
Most GoPro products are intended for use capturing action moments and unusual situations in extreme environments (which is why they are waterproof and fairly resilient), so I wanted to study the camera in its “native habitat.” The most extreme thing I do these days is work on ropes courses, high up in trees or telephone poles. So I took the camera out to a ropes course that I help out with, curious to see how the recording at height would translate into the 360 video experience.

Ropes courses are usually challenging to photograph because of the scale involved. When you are zoomed out far enough to see an entire element, you can’t see any detail, and if you are zoomed in close enough to see faces, you have no good concept of how high up they are. 360 photography helps here because it is designed to be panned through when viewed flat. This allows you to give the viewer a better sense of the scale, while they can still see the details of the individual elements or people climbing. And in VR, you get a better feel for the height involved.

I had the Fusion camera and Fusion Grip extendable tripod handle, as well as my Hero6 kit, which included an adhesive helmet mount. Since I was going to be working at heights and didn’t want to drop the camera, the first thing I did was rig up a tether system. A short piece of 2mm cord fit through a slot in the bottom of the center post and a triple fisherman knot made a secure loop. The cord fit out the bottom of the tripod when it was closed, allowing me to connect it to a shock-absorbing lanyard, which was clipped to my harness. This also allowed me to dangle the camera from a cord for a free-floating perspective. I also stuck the quick release base to my climbing helmet, and was ready to go.

I shot segments in both 30p and 60p, depending on how I had the camera mounted, using higher frame rates for the more dynamic shots. I was worried that the helmet mount would be too close, since GoPro recommends keeping the Fusion at least 20cm away from what it is filming, but the helmet wasn’t too bad. Another inch or two would shrink it significantly from the camera’s perspective, similar to my tripod issue with the Gear 360.

I always climbed up with the camera mounted on my helmet and then switched it to the Fusion Grip to record the guy climbing up behind me and my rappel. Hanging the camera from a cord, even 30 feet below me, worked much better than I expected. It put GoPro’s stabilization feature to the test, but it worked fantastically. With the camera rotating freely, the perspective is static, although you can see the seam lines constantly rotating around you. When I am holding the Fusion Grip, the extended pole is completely invisible to the camera, giving you what GoPro has dubbed “Angel View.” It is as if the viewer is floating freely next to the subject, especially when viewed in VR.

Because I have ways to view 360 video in VR, and because I don’t mind panning around on a flat screen view, I am personally less excited about GoPro’s OverCapture functionality, but I recognize it is a useful feature that will greatly extend the use cases for this 360 camera. It is designed for people using the Fusion as a more flexible camera to produce flat content, instead of to produce VR content. I edited together a couple of OverCapture shots intercut with footage from my regular Hero6 to demonstrate how that would work.

Ambisonic Audio
The other new option that Fusion brings to the table is ambisonic audio. Editing ambisonics works in Premiere Pro using a 4-track multi-channel sequence. The main workflow kink is that every time you import a new clip with ambisonic audio, you have to manually override the audio settings to map the channels as Adaptive, so it comes in as a single timeline clip. Turn on Monitor Ambisonics by right-clicking in the monitor panel, then match the Pan, Tilt and Roll values in the Panner-Ambisonics effect to the values in your VR Rotate Sphere effect (note that they are listed in a different order), and your audio should match the video perspective.
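
Under the hood, that Pan control is just a rotation of the B-format sound field; for first-order ambisonics, a yaw rotation only mixes the X and Y channels. Here is a minimal numpy sketch of the idea (channel ordering and rotation direction vary between tools, so treat both as assumptions):

```python
import numpy as np

def rotate_ambisonics_yaw(wxyz, degrees):
    """Rotate a first-order B-format signal (rows: W, X, Y, Z) about the vertical axis."""
    # Note: AmbiX/ACN files order the channels W, Y, Z, X; reorder before calling if needed.
    theta = np.radians(degrees)
    w, x, y, z = wxyz
    x_rot = x * np.cos(theta) - y * np.sin(theta)
    y_rot = x * np.sin(theta) + y * np.cos(theta)
    return np.stack([w, x_rot, y_rot, z])

# Example: 4 channels x 48000 samples of placeholder audio, panned 90 degrees.
audio = np.zeros((4, 48000))
rotated = rotate_ambisonics_yaw(audio, 90)
print(rotated.shape)  # (4, 48000)
```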

When exporting an MP4 in the audio panel, set Channels to 4.0 and check the Audio is Ambisonics box. From what I can see, the Fusion Studio conversion process compensates for changes in perspective, including “stabilization” when processing the raw recorded audio for Ambisonic exports, so you only have to match changes you make in your Premiere sequence.

While I could have intercut the footage at both settings together into a 5Kp60 timeline, I ended up creating two separate 360 videos. This also makes it clear to the viewer which shots were recorded at 5Kp30 and which were at 3Kp60. They are both available on YouTube, and I recommend watching them in VR for the full effect. But be warned that they were recorded at heights of up to 80 feet, so they may be uncomfortable for some people to watch.

Summing Up
GoPro’s Fusion camera is not the first 360 camera on the market, but it brings more pixels and higher frame rates than most of its direct competitors, and more importantly it has the software package to assist users in the transition to processing 360 video footage. It also supports ambisonic audio and offers the OverCapture functionality for generating more traditional flat GoPro content.

I found it to be easier to mount and shoot with than my earlier 360 camera experiences, and it is far easier to get the footage ready to edit and view using GoPro’s Fusion Studio program. The Stabilize feature totally changes how I shoot 360 videos, giving me much more flexibility in rotating the camera during movements. And most importantly, I am much happier with the resulting footage that I get when shooting with it.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Making 6 Below for Barco Escape

By Mike McCarthy

There is a new movie coming out this week that is fairly unique. Telling the true story of Eric LeMarque surviving eight days lost in a blizzard, 6 Below: Miracle on the Mountain is the first film shot and edited in its entirety for the new Barco Escape theatrical format. If you don’t know what Barco Escape is, you are about to find out.

This article is meant to answer just about every question you might have about the format and how we made the film, on which I was post supervisor, production engineer and finishing editor.

What is Barco Escape?
Barco Escape is a wraparound visual experience. It consists of three projection screens filling the width of the viewer’s vision, for a total aspect ratio of 7.16:1. The exact field of view will vary depending on where you are sitting in the auditorium, but it is usually 120-180 degrees. Similar to IMAX, the idea is not to fill the entire screen with your main subject, but to leave that in front of the audience and let the rest of the image surround them and fill their peripheral vision for a more immersive experience. Three separate 2K scope theatrical images play at once, resulting in 6144×858 pixels of imagery to fill the room.
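
The math behind that canvas is straightforward; a quick check:

```python
screens = 3
scope_2k = (2048, 858)          # one 2K scope theatrical image

total_width = screens * scope_2k[0]      # 6144
height = scope_2k[1]
aspect = total_width / height            # ~7.16

print(total_width, height, round(aspect, 2))   # 6144 858 7.16
```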

Is this the first Barco Escape movie?
Technically, four other films have screened in Barco Escape theaters, the most popular one being last year’s release of Star Trek Beyond. But none of these films used the entire canvas offered by Escape throughout the movie. They had up to 20 minutes of content on the side screens, but the rest of the film was limited to the center screen that viewers are used to. Every shot in 6 Below was framed with the surround format in mind, and every pixel of the incredibly wide canvas is filled with imagery.

How are movies created for viewing in Escape?
There are two approaches that can be used to fill the screen with content. One is to place different shots on each screen in the process of telling the story. The other is to shoot a wide enough field of view and high enough resolution to stretch a single image across the screens. For 6 Below, director Scott Waugh wanted to shoot everything at 6K, with the intention of filling all the screens with main image. “I wanted to immerse the viewer in Eric’s predicament, alone on the mountain.”

Cinematographer Michael Svitak shot with the Red Epic Dragon. He says, “After testing both spherical and anamorphic lens options, I chose to shoot Panavision Primo 70 prime lenses because of their pristine quality of the entire imaging frame.” He recorded in 6K-WS (2.37:1 aspect ratio at 6144×2592), framing with both 7:1 Barco Escape and a 2.76:1 4K extraction in mind. Red does have an 8:1 option and a 4:1 option that could work if Escape was your only deliverable. But since there are very few Escape theaters at the moment, you would literally be painting yourself into a corner. Having more vertical resolution available in the source footage opens up all sorts of workflow possibilities.
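
Shooting 6K-WS left room for both framings; a rough sketch of how the extractions work out from the 6144×2592 frame (the assumption here is that both extractions use the full 6144-pixel width):

```python
source_w, source_h = 6144, 2592   # Red 6K-WS, 2.37:1

def extraction_height(width, aspect):
    return round(width / aspect)

escape_h = extraction_height(source_w, 7.16)   # ~858 lines for the Escape canvas
wide_h   = extraction_height(source_w, 2.76)   # ~2226 lines for the 2.76:1 framing

print(escape_h, wide_h)
print(f"Escape uses about {escape_h / source_h:.0%} of the sensor height")
```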

This still left a few challenges in post: to adjust the framing for the most comfortable viewing and to create alternate framing options for other deliverables that couldn’t use the extreme 7:1 aspect ratio. Other projects have usually treated the three screens separately throughout the conform process, but we treated the entire canvas as a single unit until the very last step, breaking out three 2K streams for the DCP encode.

What extra challenges did Barco Escape delivery pose for 6 Below’s post workflow?
Vashi Nedomansky edited the original 6K R3D files in Adobe Premiere Pro, without making proxies, on some maxed-out Dell workstations. We did the initial edit with curved ultra-wide monitors and 4K TVs. “Once Mike McCarthy optimized the Dell systems, I was free to edit the source 6K Red RAW files and not worry about transcodes or proxies,” he explains. “With such a quick turnaround every day, and so much footage coming in, it was critical that I could jump on the footage, cut my scenes, see if they were playing well and report back to the director that same day if we needed additional shots. This would not have been possible time-wise if we were transcoding and waiting for footage to cut. I kept pushing the hardware and software, but it never broke or let me down. My first cut was 2 hours and 49 minutes long, and we played it back on one Premiere Pro timeline in realtime. It was crazy!”

All of the visual effects were done at the full shooting resolution of 6144×2592, as was the color grade. Once Vashi had the basic cut in place, there was no real online conform, just some cleanup work to do before sending it to color as an 8TB stack of 6K frames. At that point, we started examining it from the three-screen perspective, using three TVs to preview it in realtime courtesy of the Mosaic functionality built into Nvidia’s Quadro GPU cards. Shots were realigned to avoid having important imagery in the seams, and some areas were stretched to compensate for the angle of the side screens from the audience’s perspective.

DP Michael Svitak and director Scott Waugh

Once we had the final color grade completed (via Mike Sowa at Technicolor using Autodesk Lustre), we spent a day in an Escape theater analyzing the effect of reflections between the screens and its effect on the contrast. We made a lot of adjustments to keep the luminance of the side screens from washing out the darks on the center screen, which you can’t simulate on TVs in the edit bay. “It was great to be able to make the final adjustments to the film in realtime in that environment. We could see the results immediately on all three screens and how they impacted the room,” says Waugh.

Once we added the 7.1 mix, we were ready to export assets for our delivery in many different formats and aspect ratios. Making the three streams for Escape playback was as simple as using the crop tool in Adobe Media Encoder to trim the sides in 2K increments.
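
The same three-way split could also be scripted outside of Media Encoder; a minimal sketch using ffmpeg (hypothetical filenames, assuming a finished 6144×858 master, with the intermediate codec being my choice):

```python
import subprocess

MASTER = "6below_6144x858.mov"   # hypothetical full-width master

# Left, center and right screens, each a 2048x858 crop.
for name, x_offset in (("left", 0), ("center", 2048), ("right", 4096)):
    subprocess.run([
        "ffmpeg", "-i", MASTER,
        "-filter:v", f"crop=2048:858:{x_offset}:0",
        "-c:v", "prores_ks", "-profile:v", "3",   # ProRes 422 HQ intermediate
        f"escape_{name}.mov",
    ], check=True)
```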

How can you see movies in the Barco Escape format?
Barco maintains a list of theaters that have Escape screens installed, which can be found at ready2escape.com. But for readers in the LA area, the only opportunity to see a film in Barco Escape in the foreseeable future is to attend one of the Thursday night screenings of 6 Below at the Regal LA Live Stadium or the Cinemark XD at Howard Hughes Center. There are other locations available to see the film in standard theatrical format, but as a new technology, Barco Escape is only available in a limited number of locations. Hopefully, we will see more Escape films and locations to watch them in the future.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

GoPro intros Hero6 and its first integrated 360 solution, Fusion

By Mike McCarthy

Last week, I traveled to San Francisco to attend GoPro’s launch event for its new Hero6 and Fusion cameras. The Hero6 is the next logical step in the company’s iteration of action cameras, increasing the supported frame rates to 4Kp60 and 1080p240, as well as adding integrated image stabilization. The Fusion on the other hand is a totally new product for them, an action-cam for 360-degree video. GoPro has developed a variety of other 360-degree video capture solutions in the past, based on rigs using many of their existing Hero cameras, but Fusion is their first integrated 360-video solution.

While the Hero6 is available immediately for $499, the Fusion is expected to ship in November for $699. While we got to see the Fusion and its footage, most of the hands-on aspects of the launch event revolved around the Hero6. Each of the attendees was provided a Hero6 kit to record the rest of the day’s events. My group was given a ride on the RocketBoat through the San Francisco Bay. This adventure took advantage of a number of the camera’s features, including the waterproofing, the slow motion and the image stabilization.

The Hero6

The big change within the Hero6 is the inclusion of GoPro’s new custom-designed GP1 image processing chip. This allows them to process and encode higher frame rates, and allows for image stabilization at many frame-rate settings. The camera itself is physically similar to the previous generations, so all of your existing mounts and rigs will still work with it. It is an easy swap out to upgrade the Karma drone with the new camera, which also got a few software improvements. It can now automatically track the controller with the camera to keep the user in the frame while the drone is following or stationary. It can also fly a circuit of 10 waypoints for repeatable shots, and overcoming a limitation I didn’t know existed, it can now look “up.”

There were fewer precise details about the Fusion. It is stated to be able to record a 5.2K video sphere at 30fps and a 3K sphere at 60fps. This is presumably the circumference of the sphere in pixels, and therefore the width of an equirectangular output. That would lead us to conclude that the individual fish-eye recording is about 2,600 pixels wide, plus a little overlap for the stitch. (In this article, GoPro’s David Newman details how the company arrives at 5.2K.)
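
Working backwards from that figure gives a rough sense of the per-lens recording (approximate numbers, and the overlap estimate is mine, assuming roughly 190-degree lens coverage):

```python
equirect_width = 5200              # "5.2K" sphere circumference, approximate
per_lens_view = equirect_width / 2 # each fisheye covers half the sphere: ~2600 px

# Each lens sees about 190 degrees of its 180-degree half, so the recorded
# image needs a little extra width on each side of the seam for the stitch.
overlap_degrees = 190 - 180
overlap_px = per_lens_view * (overlap_degrees / 180)

print(per_lens_view, round(overlap_px))   # ~2600 and ~144 pixels of overlap
```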

GoPro Fusion for 360

The sensors are slightly laterally offset from one another, allowing the camera to be thinner and decreasing the parallax shift at the side seams, but adding a slight offset at the top and bottom seams. If the camera is oriented upright, those seams are the least important areas in most shots. They also appear to have a good solution for hiding the camera support pole within the stitch, based on the demo footage they were showing. It will be interesting to see what effect the Fusion camera has on the “culture” of 360 video. It is not the first affordable 360-degree camera, but it will definitely bring 360 capture to new places.

A big part of the equation for 360 video is the supporting software and the need to get the footage from the camera to the viewer in a usable way. GoPro acquired Kolor and its Autopano Video Pro stitching software a few years ago to support its larger 360 video camera rigs, so certain pieces of the underlying software ecosystem for a 360-video workflow are already in place. The desktop solution for processing the Fusion’s 360 footage will be called Fusion Studio, and it is listed as coming soon on their website.

They have a pretty slick demonstration of flat image extraction from the video sphere, which they are marketing as “OverCapture.” This allows a cellphone to pan around the 360 sphere, which is pretty standard these days, but by recording that viewing in realtime they can output standard flat videos from the 360 sphere. This is a much simpler and more intuitive approach to virtual cinematography than trying to control the view with angles and keyframes in a desktop app.

This workflow should result in a very fish-eye-looking flat video, similar to more traditional GoPro shots, due to the similar lens characteristics. There are a variety of possible approaches to handling the fish-eye look. GoPro’s David Newman was explaining to me some of the solutions he has been working on to re-project GoPro footage into a sphere, in order to reframe or alter the field of view in a virtual environment. Based on their demo reel, it looks like they also have some interesting tools coming that use various 360 projections for creative purposes within a flat video, taking advantage of the unique possibilities 360 capture offers content creators.

GoPro Software
On the software front, GoPro has also been developing tools to help its camera users process and share their footage. One of the inherent issues of action-camera footage is that there is basically no trigger discipline. You hit record long before anything happens, and then get back to the camera after the event in question is over. I used to get one-hour roll-outs that had 10 seconds of usable footage within them. The same is true when recording many attempts to do something before one of them succeeds.

Remote control of the recording process has helped with this a bit, but regardless you end up with tons of extra footage that you don’t need. GoPro is working on software tools that use AI and machine learning to sort through your footage and find the best parts automatically. The next logical step is to start cutting together the best shots, which is what Quikstories in their mobile app is beginning to do. As someone who edits video for a living, and is fairly particular and precise, I have a bit of trouble with the idea of using something like that for my videos, but for someone to whom the idea of “video editing” is intimidating, this could be a good place to start. And once the tools get to a point where their output can be trusted, automatically sorting footage could make even very serious editing a bit easier when there is a lot of potential material to get through. In the meantime though, I find their desktop tool Quik to be too limiting for my needs and will continue to use Premiere to edit my GoPro footage, which is the response I believe they expect of any professional user.

There are also a variety of new camera mount options available, including small extendable tripod handles in two lengths, as well as a unique “Bite Mount” (pictured, left) for POV shots. It includes a colorful padded float in case it pops out of your mouth while shooting in the water. The tripods are extra important for the forthcoming Fusion, to support the camera with minimal obstruction of the shot. And I wouldn’t recommend using the Fusion on the Bite Mount, unless you want a lot of head in the shot.

Ease of Use
Ironically, as someone who has processed and edited hundreds of hours of GoPro footage, and even worked for GoPro for a week on paper (as an NAB demo artist for Cineform during their acquisition), I don’t think I had ever actually used a GoPro camera. The fact that at this event we were all handed new cameras with zero instructions and expected to go out and shoot is a testament to how confident GoPro is that their products are easy to use. I didn’t have any difficulty with it, but the engineer within me wanted to know the details of the settings I was adjusting. Bouncing around with water hitting you in the face is not the best environment for learning how to do new things, but I was able to use pretty much every feature the camera had to offer during that ride with no prior experience. (Obviously I have extensive experience with video, just not with GoPro usage.) And I was pretty happy with the results. Now I want to take it sailing, skiing and other such places, just like a “normal” GoPro user.

I have pieced together a quick highlight video of the various features of the Hero6:


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Making the jump to 360 Video (Part 1)

By Mike McCarthy

VR headsets have been available for over a year now, and more content is constantly being developed for them. We should expect that rate to increase as new headset models are being released from established technology companies, prompted in part by the new VR features expected in Microsoft’s next update to Windows 10. As the potential customer base increases, the software continues to mature, and the content offerings broaden. And with the advances in graphics processing technology, we are finally getting to a point where it is feasible to edit videos in VR, on a laptop.

While a full VR experience requires true 3D content in order to render a custom perspective based on the position of the viewer’s head, there is a “video” version of VR called 360 Video. The difference between “Full VR” and “360 Video” is that while both allow you to look around in every direction, 360 Video is pre-recorded from a particular point, and you are limited to the view from that spot. You can’t move your head to see around behind something, like you can in true VR. But 360 video can still offer a very immersive experience, and arguably better visuals, since the imagery isn’t being rendered on the fly. 360 video can be recorded in stereoscopic or flat, depending on the capabilities of the cameras used.

Stereoscopic is obviously more immersive, less of a video dome and inherently supported by the nature of VR HMDs (Head Mounted Displays). I expect that stereoscopic content will be much more popular in 360 Video than it ever was for flat screen content. Basically the viewer is already wearing the 3D glasses, so there is no downside, besides needing twice as much source imagery to work with, similar to flat screen stereoscopic.

There are a variety of options for recording 360 video, from a single ultra-wide fisheye lens on the Fly360, to dual 180-degree lens options like the Gear 360, Nikon KeyMission and Garmin Virb. GoPro is releasing the Fusion, which will fall into this category as well. The next step up is more lenses, with cameras like the Orah 4i or the Insta360 Pro. Beyond that, you are stepping into the much more expensive rigs with lots of lenses and lots of stitching, but usually much higher final image quality, like the GoPro Omni or the Nokia Ozo. There are also countless rigs that use an array of standard cameras to capture 360 degrees, but these solutions are much less integrated than the all-in-one products that are now entering the market. Regardless of the camera you use, you are going to be recording one or more files in a format fairly unique to that camera, and that footage will need to be processed before it can be used in the later stages of the post workflow.

Affordable cameras

The simplest and cheapest 360 camera option I have found is the Samsung Gear 360. There are two totally different models with the same name, usually differentiated by the year of their release. I am using the older 2016 model, which has a higher resolution sensor, but records UHD instead of the slightly larger full 4K video of the newer 2017 model.

The Gear 360 records two fisheye views that are just over 180 degrees, from cameras situated back to back in a 2.5-inch sphere. Both captured image circles are recorded onto a single frame, side by side, resulting in a 2:1 aspect ratio file. These are encoded into JPEG (7776×3888 stills) or HEVC (3840×1920 video) at 30Mb and saved onto a MicroSD card. The camera is remarkably simple to use, with only three buttons and a tiny UI screen to select recording mode and resolution. If you have a Samsung Galaxy phone, there are a variety of other functions it allows, like remote control and streaming the output to the phone as a viewfinder. Even without a Galaxy phone, the camera did everything I needed to generate 360 footage to stitch and edit with, but it was cool to have a remote viewfinder for the driving shots.

Pricier cameras

One of the big challenges of shooting with any 360 camera is avoiding getting gear and rigging in the shot, since the camera records everything around it. Even the tiny integrated tripod on the Gear 360 is visible in the shots, and putting it on the plate of my regular DSLR tripod fills the bottom of the footage. My solution was to use the thinnest support I could to keep the rest of the rigging as far from the camera as possible, and therefore smaller from its perspective. I created a couple of options to shoot with, which are pictured below, and they are much less intrusive in the recorded images. Besides the camera support, there is the issue of everything else in the shot, including the operator. Since most 360 videos are locked off, an operator may not be needed, but there is no “behind the camera” for hiding gear or anything else. Your set needs to be considered in every direction, since it will all be visible to your viewer. If you can see the camera, it can see you.

There are many different approaches to storing 360 images, which are inherently spherical, as a video file, which is inherently flat. This is the same issue that cartographers have faced for hundreds of years — creating flat paper maps of a planet that is inherently curved. While there are sphere map, cube map and pyramid projection options (among others) based on the way VR headsets work, the equirectangular format has emerged as the standard for editing and distribution encoding, while other projections are occasionally used for certain effects processing or other playback options.
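
Equirectangular is just longitude and latitude laid out on a rectangle, which is why a full 360 frame is always 2:1. A minimal sketch of the direction-to-pixel math (the choice of where longitude zero lands is my own convention):

```python
import numpy as np

def direction_to_equirect(x, y, z, width, height):
    """Map a 3D view direction to (column, row) in a 2:1 equirectangular frame."""
    lon = np.arctan2(x, z)                             # -pi..pi, 0 at the forward axis
    lat = np.arcsin(y / np.sqrt(x*x + y*y + z*z))      # -pi/2..pi/2
    col = (lon / (2 * np.pi) + 0.5) * width
    row = (0.5 - lat / np.pi) * height
    return col, row

# Looking straight ahead lands in the middle of a 3840x1920 frame.
print(direction_to_equirect(0.0, 0.0, 1.0, 3840, 1920))   # (1920.0, 960.0)
```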

Usually the objective of the stitching process is to get the images from all of your lenses combined into a single frame with the least amount of distortion and the fewest visible seams. There are a number of software solutions that do this, from After Effects plugins, to dedicated stitching applications like Kolor AVP and Orah VideoStitch-Studio to unique utilities for certain cameras. Once you have your 360 video footage in the equirectangular format, most of the other steps of the workflow are similar to their flat counterparts, besides VFX. You can cut, fade, title and mix your footage in an NLE and then encode it in the standard H.264 or H.265 formats with a few changes to the metadata.

Technically, the only thing you need to add to an existing 4K editing workflow in order to make the jump to 360 video is a 360 camera. Everything else could be done in software, but the other thing you will want is a VR headset or HMD. It is possible to edit 360 video without an HMD, but it is a lot like grading a film using scopes but no monitor. The data and tools you need are all right there, but without being able to see the results, you can’t be confident of what the final product will be like. You can scroll around the 360 video in the view window, or see the whole projected image all distorted, but it won’t have the same feel as experiencing it in a VR headset.

360 Video is not as processing intensive as true 3D VR, but it still requires a substantial amount of power to provide a good editing experience. I am using a Thinkpad P71 with an Nvidia Quadro P5000 GPU to get smooth performance during all these tests.

Stay tuned for Part 2 where we focus on editing 360 Video.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been working on new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Review: Lenovo’s ThinkPad P71 mobile workstation

By Mike McCarthy

Lenovo was nice enough to send me their newest VR-enabled mobile workstation to test out on a VR workflow project I am doing. The new ThinkPad P71 is a beast with a 17-inch UHD IPS screen. The model they sent to me was equipped with the fastest available processor, a Xeon E5-1535M v6 with four cores processing eight threads at an official speed of 3.1GHz. It has 32GB of DDR4-2400 ECC RAM, with two more slots allowing that to be doubled to 64GB if desired.

The system’s headline feature is the Nvidia Quadro P5000 mobile GPU, with 2,048 CUDA cores fed by another 16GB of dedicated GDDR5 memory. The storage configuration is a single 1TB NVMe SSD populating one of the two available M.2 slots. This configuration is currently available for $5,279, discounted to $4,223.20 on Lenovo.com right now. So while it is not cheap, it is one of the most powerful mobile workstations you can buy right now.

Connectivity wise, it has dual Thunderbolt 3 ports, which can also be used for USB 3.1 Type C devices. It has four more USB 3.1 Type A ports and a Gigabit Ethernet port. You have a number of options for display connectivity. Besides the Thunderbolt ports, there is a MiniDP 1.2 port and an HDMI 1.4 port (1.4 based on Intel graphics limitations). It has an SDXC slot, an ExpressCard34 slot, and a single 1/8-inch headphone mic combo jack. The system also has a docking connector and a rectangular port for the included 230W power adaptor.

It has the look and feel of a traditional ThinkPad, which goes back to the days when they were made by IBM. It has the customary TrackPoint as well as a touchpad. Both have three mouse buttons, which I like in theory, but I constantly find myself trying to click with the center button to no avail. I would either have to get used to it, or set the center action to click as well, defeating the purpose of the third button. The Fn key in the bottom corner will take some getting used to as well, as I keep hitting that instead of CTRL, but I adapted to a similar configuration on my current laptop.

I didn’t like the combo jack at first, because it required a cheap adapter, but now that I have gotten one, I see why it is the future, once all the peripherals support it. I had plugged my mic and headphones in backwards as recently as last week, so unlabeled ports are a real issue, and the combo jack solves that once and for all. It is the same style of jack as most cell phones, and you only need an adapter for the mic functionality; regular headphones work by default.

The system doesn’t weigh as much as I expected, probably due to the lack of spinning disks or an optical drive, which can be added if desired. It came relatively clean, with Windows 10 Pro installed and without too many other applications or utilities pre-installed. It had all of the needed drivers and a simple utility for operating the integrated X-Rite Pantone color calibrator for the screen. There was a utility for adding any other applications that would normally be included, which I used to download the Lenovo Performance Tuner. I use the Performance Tuner more for monitoring usage than adjusting settings, but it can be nice to have everything in one place, especially in Windows 10.

The system boots up in about 10 seconds and shuts down even faster. Hibernating takes twice as long, which is to be expected with that much RAM to be cached to disk, even with an NVMe SSD. But that may be worth the extra time to keep your applications open. My initial tests of the SSD showed a 1700MB/s write speed with 2500MB/s reads. Longer endurance tests resulted in write speeds decreasing to 1200MB/s, but the read speeds remained consistently above 2500MB/s. That should be more than enough throughput for most media work; it even allowed me to play back uncompressed 6K content, and it should allow 4K uncompressed media capture if you connect an I/O device to the Thunderbolt bus.
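
To see why that throughput supports uncompressed playback and capture, compare it to the raw data rates involved (a rough sketch; the 10-bit 4:2:2 pixel format and frame rates are my assumptions):

```python
def data_rate_mb_s(width, height, fps, bits_per_pixel):
    """Raw video data rate in MB/s."""
    return width * height * fps * bits_per_pixel / 8 / 1e6

# Uncompressed 10-bit 4:2:2 works out to 20 bits per pixel.
print(round(data_rate_mb_s(6144, 2592, 24, 20)))   # ~956 MB/s for 6K playback
print(round(data_rate_mb_s(4096, 2160, 30, 20)))   # ~664 MB/s for 4K capture

# Both sit comfortably under the ~2500 MB/s reads and ~1200-1700 MB/s writes measured above.
```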

The main application I use on a daily basis is Adobe Premiere Pro, so most of my performance evaluation revolves around that program, although I used a few others as well. I was able to load a full feature film off of a USB3 drive with no issues. The 6K Cineform and DNxHR media played back at ½ res without issue. The 6K R3D files played at ¼ res without dropping frames, which is comparable to my big tower.

My next playback test was fairly unique to my workflow, but a good benchmark of what is possible. I was able to connect three 1080p televisions to the MiniDP port, using an MST (Multi-Stream Transport) hub with three HDMI ports. Using the Nvidia Mosaic functionality offered by the Quadro P5000 card, I can span them into a single display, which Premiere can send output to via Adobe’s Mercury Playback engine. This configuration allows me to play back 6K DNxHR 444 files to all three screens, directly off the timeline, at half res, without dropping frames. My 6K H.265 files play back at full res outside of Premiere. That is a pretty impressive display for a laptop. Once I had maxed out the possibilities for playback, I measured a few encodes. In general, the P71 takes about twice as long to encode things in Adobe Media Encoder as my 20-core desktop workstation, but it is twice as fast as my existing quad-core i7 4860 laptop.

The other application I have been taxing my system with recently is DCP-O-Matic. It takes 30 hours to render my current movie to a 4K DCP on my desktop, which is 18x the runtime, but I know most of my system’s 20 cores are sitting idle based on the software threading. Doing a similar encode on the Lenovo system took 12.5x the run time, so that means my 100-minute film should take 21 hours. The higher base frequency of the quad core CPU definitely makes a difference in this instance.
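
The encode-time figures work out like this (simple arithmetic, using the 100-minute runtime mentioned above):

```python
runtime_min = 100

desktop_hours = runtime_min * 18.0 / 60    # 30 hours on the 20-core desktop
laptop_hours  = runtime_min * 12.5 / 60    # ~21 hours on the P71

print(desktop_hours, round(laptop_hours, 1))   # 30.0 and 20.8
```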

The next step was to try my HMD headset with it to test out the VR capability. My Oculus Rift installed without issues, which is saying something, based on the peculiarities of Oculus’ software. Maybe there is something to that “VR-ready” program, but I did frequently have issues booting up the system with the Rift connected, so I recommend plugging it in after you have your system up and running. Everything VR-related ran great, except for the one thing I actually wanted to do, which was edit 360 video in Premiere, with the HMD. There was some incompatibility between the drivers for the laptop and the software. (Update: Setting the graphics system to Discrete instead of Hybrid in the BIOS solves this problem. This solution works with both PPro11’s Skybox Player, and PPro12’s new SteamVR based approach.)

There are a variety of ways to test battery life, but since this is a VR-ready system, that seemed to be the best approach: how long would it support using a VR headset before needing to plug in? I got just short of an hour of heavy 3D VR usage before I started getting low battery warnings. I was hoping to be able to close the display to save power, since I am not looking at it while using the headset. (I usually set the Close Lid action to Do Nothing on all my systems because I want to be able to walk into the other room to show someone something on my timeline without affecting the application. If I want to sleep the system, I can press the button.) But whenever the Rift is active, closing the lid puts the machine to sleep immediately, regardless of the settings. So you have to run the display and the HMD anytime you are working in VR. And don’t plan on doing extensive work without plugging in.

Now to be fair, setting up to use VR involves preparing the environment and configuring sensors, so adding power to that mix is a reasonable requirement and very similar to 3D gaming. Portable doesn’t always mean untethered. But for browsing the Internet, downloading project files and editing articles, I would expect about four hours of battery life from the system before needing to recharge. It is really hard to accurately estimate run time when the system’s performance and power needs scale so much depending on the user’s activities. The GPU alone scales from 5 watts to 100 watts depending on what is being processed, but the run time is not out of line with what is to be expected from products in this class of performance.

Summing Up
All in all, the P71 is an impressive piece of equipment, and one of only a few ways you can currently get a portable professional VR solution. I recognize that most of my applications aren’t using all of the power I would be carrying around in a P71, so for my own work, I would probably hope to find a smaller and lighter-weight system at the expense of some of that processing power. But for people who have uncompromising needs for the fastest system they can possibly get, the Lenovo P71 fits the bill. It is a solid performer that can do an impressive amount of processing, while still being able to come with you wherever you need to go.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been working on new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.