
Making 6 Below for Barco Escape

By Mike McCarthy

There is a new movie coming out this week that is fairly unique. Telling the true story of Eric LeMarque, who survived eight days lost in a blizzard, 6 Below: Miracle on the Mountain is the first film shot and edited in its entirety for the new Barco Escape theatrical format. If you don’t know what Barco Escape is, you are about to find out.

This article is meant to answer just about every question you might have about the format and how we made the film, on which I was post supervisor, production engineer and finishing editor.

What is Barco Escape?
Barco Escape is a wraparound visual experience — it consists of three projection screens filling the width of the viewer’s vision, with a total aspect ratio of 7.16:1. The exact field of view varies depending on where you are sitting in the auditorium, but it usually covers 120-180 degrees. Similar to IMAX, the goal is not to fill the entire canvas with your main subject, but to keep that in front of the audience and let the rest of the image surround them, filling their peripheral vision for a more immersive experience. Three separate 2K scope theatrical images play at once, resulting in 6144×858 pixels of imagery to fill the room.

Is this the first Barco Escape movie?
Technically, four other films have screened in Barco Escape theaters, the most popular one being last year’s release of Star Trek Beyond. But none of these films used the entire canvas offered by Escape throughout the movie. They had up to 20 minutes of content on the side screens, but the rest of the film was limited to the center screen that viewers are used to. Every shot in 6 Below was framed with the surround format in mind, and every pixel of the incredibly wide canvas is filled with imagery.

How are movies created for viewing in Escape?
There are two approaches that can be used to fill the screen with content. One is to place different shots on each screen in the process of telling the story. The other is to shoot a wide enough field of view, at a high enough resolution, to stretch a single image across all the screens. For 6 Below, director Scott Waugh wanted to shoot everything at 6K, with the intention of filling all three screens with the main image. “I wanted to immerse the viewer in Eric’s predicament, alone on the mountain.”

Cinematographer Michael Svitak shot with the Red Epic Dragon. He says, “After testing both spherical and anamorphic lens options, I chose to shoot Panavision Primo 70 prime lenses because of their pristine quality across the entire imaging frame.” He recorded in 6K-WS (2.37:1 aspect ratio at 6144×2592), framing with both 7:1 Barco Escape and a 2.76:1 4K extraction in mind. Red does have an 8:1 option and a 4:1 option that could work if Escape were your only deliverable, but since there are very few Escape theaters at the moment, you would effectively be painting yourself into a corner. Having more vertical resolution available in the source footage opens up all sorts of workflow possibilities.
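To put actual numbers on that headroom, here is a quick bit of arithmetic (my own illustration, not part of the production pipeline) showing how much of the 6144×2592 frame each framing consumes:

```python
# Rough framing math for the 6144x2592 (6K-WS) source frame.
# My own illustration of the headroom, not part of the production pipeline.
SRC_W, SRC_H = 6144, 2592
ESCAPE_W, ESCAPE_H = 3 * 2048, 858      # three 2K scope screens side by side

def full_width_crop_height(aspect, width=SRC_W):
    """How many source lines a full-width crop at a given aspect ratio consumes."""
    return round(width / aspect)

escape_lines = full_width_crop_height(ESCAPE_W / ESCAPE_H)   # ~7.16:1 Escape canvas
flat_lines = full_width_crop_height(2.76)                    # 2.76:1 extraction

print(f"Escape framing uses {escape_lines} of {SRC_H} lines "
      f"({SRC_H - escape_lines} lines of headroom for reframing)")
print(f"2.76:1 framing uses {flat_lines} of {SRC_H} lines")
```

The Escape canvas only needs 858 of the 2592 recorded lines, which is exactly why the taller 6K-WS recording format left room to reposition shots vertically and to pull alternate framings for other deliverables.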

This still left a few challenges in post: to adjust the framing for the most comfortable viewing and to create alternate framing options for other deliverables that couldn’t use the extreme 7:1 aspect ratio. Other projects have usually treated the three screens separately throughout the conform process, but we treated the entire canvas as a single unit until the very last step, breaking out three 2K streams for the DCP encode.

What extra challenges did Barco Escape delivery pose for 6 Below’s post workflow?
Vashi Nedomansky edited the original 6K R3D files in Adobe Premiere Pro, without making proxies, on some maxed-out Dell workstations. We did the initial edit with curved ultra-wide monitors and 4K TVs. “Once Mike McCarthy optimized the Dell systems, I was free to edit the source 6K Red RAW files and not worry about transcodes or proxies,” he explains. “With such a quick turnaround every day, and so much footage coming in, it was critical that I could jump on the footage, cut my scenes, see if they were playing well and report back to the director that same day if we needed additional shots. This would not have been possible time-wise if we were transcoding and waiting for footage to cut. I kept pushing the hardware and software, but it never broke or let me down. My first cut was 2 hours and 49 minutes long, and we played it back on one Premiere Pro timeline in realtime. It was crazy!”

All of the visual effects were done at the full shooting resolution of 6144×2592, as was the color grade. Once Vashi had the basic cut in place, there was no real online conform, just some cleanup work to do before sending it to color as an 8TB stack of 6K frames. At that point, we started examining it from the three-screen perspective, using three TVs to preview it in realtime, courtesy of the Mosaic functionality built into Nvidia’s Quadro GPU cards. Shots were realigned to avoid having important imagery in the seams, and some areas were stretched to compensate for the angle of the side screens from the audience’s perspective.

DP Michael Svitak and director Scott Waugh

Once we had the final color grade completed (via Mike Sowa at Technicolor using Autodesk Lustre), we spent a day in an Escape theater analyzing the reflections between the screens and their effect on contrast. We made a lot of adjustments to keep the luminance of the side screens from washing out the darks on the center screen, which you can’t simulate on TVs in the edit bay. “It was great to be able to make the final adjustments to the film in realtime in that environment. We could see the results immediately on all three screens and how they impacted the room,” says Waugh.

Once we added the 7.1 mix, we were ready to export assets for our delivery in many different formats and aspect ratios. Making the three streams for Escape playback was as simple as using the crop tool in Adobe Media Encoder to trim the sides in 2K increments.
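For anyone who wants to experiment with a similar breakout without Media Encoder, the same split can be scripted. The sketch below shells out to ffmpeg; the filenames and the ProRes mezzanine codec are placeholders, not our actual delivery settings.

```python
# Hypothetical ffmpeg equivalent of the Media Encoder crop step: split the
# 6144x858 master into left/center/right 2048x858 streams.
import subprocess

MASTER = "escape_master_6144x858.mov"   # placeholder path
for name, x in [("left", 0), ("center", 2048), ("right", 4096)]:
    subprocess.run([
        "ffmpeg", "-i", MASTER,
        "-filter:v", f"crop=2048:858:{x}:0",   # crop=width:height:x:y
        "-c:v", "prores_ks",                   # stand-in mezzanine codec
        f"{name}_2048x858.mov",
    ], check=True)
```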

How can you see movies in the Barco Escape format?
Barco maintains a list of theaters that have Escape screens installed, which can be found at ready2escape.com. But for readers in the LA area, the only opportunity to see a film in Barco Escape in the foreseeable future is to attend one of the Thursday night screenings of 6 Below at the Regal LA Live Stadium or the Cinemark XD at Howard Hughes Center. There are other locations showing the film in standard theatrical format, but as a new technology, Barco Escape is only available in a limited number of locations. Hopefully, we will see more Escape films, and more places to watch them, in the future.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

GoPro intros Hero6 and its first integrated 360 solution, Fusion

By Mike McCarthy

Last week, I traveled to San Francisco to attend GoPro’s launch event for its new Hero6 and Fusion cameras. The Hero6 is the next logical step in the company’s iteration of action cameras, increasing the supported frame rates to 4Kp60 and 1080p240, as well as adding integrated image stabilization. The Fusion on the other hand is a totally new product for them, an action-cam for 360-degree video. GoPro has developed a variety of other 360-degree video capture solutions in the past, based on rigs using many of their existing Hero cameras, but Fusion is their first integrated 360-video solution.

While the Hero6 is available immediately for $499, the Fusion is expected to ship in November for $699. While we got to see the Fusion and its footage, most of the hands-on aspects of the launch event revolved around the Hero6. Each of the attendees was provided a Hero6 kit to record the rest of the day’s events. My group was given a ride on the RocketBoat through San Francisco Bay. This adventure took advantage of a number of features of the camera, including the waterproofing, the slow motion and the image stabilization.

The Hero6

The big change within the Hero6 is the inclusion of GoPro’s new custom-designed GP1 image processing chip. This allows them to process and encode higher frame rates, and allows for image stabilization at many frame-rate settings. The camera itself is physically similar to the previous generations, so all of your existing mounts and rigs will still work with it. It is an easy swap out to upgrade the Karma drone with the new camera, which also got a few software improvements. It can now automatically track the controller with the camera to keep the user in the frame while the drone is following or stationary. It can also fly a circuit of 10 waypoints for repeatable shots, and overcoming a limitation I didn’t know existed, it can now look “up.”

There were fewer precise details about the Fusion. It is stated to be able to record a 5.2K video sphere at 30fps and a 3K sphere at 60fps. This is presumably the circumference of the sphere in pixels, and therefore the width of an equirectangular output. That would lead us to conclude that each individual fisheye recording is about 2,600 pixels wide, plus a little overlap for the stitch. (In this article, GoPro’s David Newman details how the company arrives at 5.2K.)
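If that interpretation is right, the back-of-the-envelope math is simple. The sketch below treats 5.2K and 3K as the width of the stitched equirectangular output, which is my assumption rather than a published spec:

```python
# Back-of-the-envelope math on the Fusion's stated resolutions, treating
# "5.2K" and "3K" as the width of the stitched equirectangular output
# (my interpretation, not a published spec).
def per_lens_width(sphere_width_px, overlap_px=0):
    """Each of the two fisheyes covers roughly half the sphere, plus stitch overlap."""
    return sphere_width_px // 2 + overlap_px

print(per_lens_width(5200))   # ~2600 px per fisheye for the 30fps mode
print(per_lens_width(3000))   # ~1500 px per fisheye for the 60fps mode
```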

GoPro Fusion for 360

The sensors are slightly laterally offset from one another, allowing the camera to be thinner and decreasing the parallax shift at the side seams, but adding a slight offset at the top and bottom seams. If the camera is oriented upright, those seams are the least important areas in most shots. They also appear to have a good solution for hiding the camera support pole within the stitch, based on the demo footage they were showing. It will be interesting to see what effect the Fusion camera has on the “culture” of 360 video. It is not the first affordable 360-degree camera, but it will definitely bring 360 capture to new places.

A big part of the equation for 360 video is the supporting software and the need to get the footage from the camera to the viewer in a usable way. GoPro acquired Kolor, maker of Autopano Video Pro, a few years ago to support image stitching for its larger 360 video camera rigs, so certain pieces of the underlying software ecosystem for a 360-video workflow are already in place. The desktop solution for processing the 360 footage will be called Fusion Studio, and it is listed as coming soon on their website.

They have a pretty slick demonstration of flat image extraction from the video sphere, which they are marketing as “OverCapture.” This allows a cellphone to pan around the 360 sphere, which is pretty standard these days, but by recording that view in realtime they can output standard flat videos from the 360 sphere. This is a much simpler and more intuitive approach to virtual cinematography than trying to control the view with angles and keyframes in a desktop app.
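For the technically curious, the projection math behind that kind of extraction is fairly simple, even if GoPro’s actual OverCapture implementation is surely more refined than this. The sketch below pulls a flat rectilinear view out of an equirectangular frame by casting a virtual camera ray for each output pixel:

```python
# Minimal sketch of extracting a flat (rectilinear) view from an equirectangular
# frame -- the general idea behind this kind of extraction, not GoPro's code.
import numpy as np

def extract_view(equi, out_w, out_h, fov_deg, yaw_deg, pitch_deg):
    """Nearest-neighbor sample of a virtual pinhole camera from an equirect image."""
    H, W = equi.shape[:2]
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)   # focal length in pixels

    # Build a ray for every output pixel (camera looks down +z).
    x = np.arange(out_w) - out_w / 2
    y = np.arange(out_h) - out_h / 2
    xs, ys = np.meshgrid(x, y)
    rays = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
    rays = rays / np.linalg.norm(rays, axis=-1, keepdims=True)

    # Rotate the rays by the requested pitch (around x), then yaw (around y).
    p, w = np.radians(pitch_deg), np.radians(yaw_deg)
    rot_x = np.array([[1, 0, 0],
                      [0, np.cos(p), -np.sin(p)],
                      [0, np.sin(p), np.cos(p)]])
    rot_y = np.array([[np.cos(w), 0, np.sin(w)],
                      [0, 1, 0],
                      [-np.sin(w), 0, np.cos(w)]])
    rays = rays @ (rot_y @ rot_x).T

    # Convert rays to longitude/latitude, then to equirectangular pixel coords.
    lon = np.arctan2(rays[..., 0], rays[..., 2])
    lat = np.arcsin(np.clip(rays[..., 1], -1, 1))
    u = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    v = np.clip(((lat / np.pi + 0.5) * H).astype(int), 0, H - 1)
    return equi[v, u]
```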

This OverCapture workflow should result in a very fisheye-looking flat video, similar to more traditional GoPro shots, due to the similar lens characteristics. There are a variety of possible approaches to handling the fisheye look. GoPro’s David Newman was explaining to me some of the solutions he has been working on to re-project GoPro footage into a sphere, to reframe or alter the field of view in a virtual environment. Based on their demo reel, it also looks like they have some interesting tools coming that take advantage of the unique functionality 360 makes available to content creators, using various 360 projections for creative purposes within a flat video.

GoPro Software
On the software front, GoPro has also been developing tools to help its camera users process and share their footage. One of the inherent issues of action-camera footage is that there is basically no trigger discipline. You hit record long before anything happens, and then get back to the camera after the event in question is over. I used to get one-hour roll-outs that had 10 seconds of usable footage within them. The same is true when recording many attempts to do something before one of them succeeds.

Remote control of the recording process has helped with this a bit, but regardless you end up with tons of extra footage that you don’t need. GoPro is working on software tools that use AI and machine learning to sort through your footage and find the best parts automatically. The next logical step is to start cutting together the best shots, which is what Quikstories in their mobile app is beginning to do. As someone who edits video for a living, and is fairly particular and precise, I have a bit of trouble with the idea of using something like that for my videos, but for someone to whom the idea of “video editing” is intimidating, this could be a good place to start. And once the tools get to a point where their output can be trusted, automatically sorting footage could make even very serious editing a bit easier when there is a lot of potential material to get through. In the meantime though, I find their desktop tool Quik to be too limiting for my needs and will continue to use Premiere to edit my GoPro footage, which is the response I believe they expect of any professional user.

There are also a variety of new camera mount options available, including small extendable tripod handles in two lengths, as well as a unique “Bite Mount” for POV shots. It includes a colorful padded float in case it pops out of your mouth while shooting in the water. The tripods are extra important for the forthcoming Fusion, to support the camera with minimal obstruction of the shot. And I wouldn’t recommend using the Fusion on the Bite Mount, unless you want a lot of head in the shot.

Ease of Use
Ironically, as someone who has processed and edited hundreds of hours of GoPro footage, and even worked for GoPro for a week on paper (as an NAB demo artist for Cineform during their acquisition), I don’t think I had ever actually used a GoPro camera. The fact that at this event we were all handed new cameras with zero instructions and expected to go out and shoot is a testament to how confident GoPro is that their products are easy to use. I didn’t have any difficulty with it, but the engineer within me wanted to know the details of the settings I was adjusting. Bouncing around with water hitting you in the face is not the best environment for learning how to do new things, but I was able to use pretty much every feature the camera had to offer during that ride with no prior experience. (Obviously I have extensive experience with video, just not with GoPro usage.) And I was pretty happy with the results. Now I want to take it sailing, skiing and other such places, just like a “normal” GoPro user.

I have pieced together a quick highlight video of the various features of the Hero6:


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Making the jump to 360 Video (Part 1)

By Mike McCarthy

VR headsets have been available for over a year now, and more content is constantly being developed for them. We should expect that rate to increase as new headset models are released by established technology companies, prompted in part by the new VR features expected in Microsoft’s next update to Windows 10. As the potential customer base increases, the software continues to mature and the content offerings broaden. And with the advances in graphics processing technology, we are finally getting to a point where it is feasible to edit video in VR on a laptop.

While a full VR experience requires true 3D content in order to render a custom perspective based on the position of the viewer’s head, there is a “video” version of VR, called 360 Video. The difference between “Full VR” and “360 Video” is that while both allow you to look around in every direction, 360 Video is pre-recorded from a particular point, and you are limited to the view from that spot. You can’t move your head to see around behind something, like you can in true VR. But 360 video can still offer a very immersive experience, and arguably better visuals, since they aren’t being rendered on the fly. 360 video can be recorded in stereoscopic or flat, depending on the capabilities of the cameras used.

Stereoscopic is obviously more immersive, less of a video dome and inherently supported by the nature of VR HMDs (Head Mounted Displays). I expect that stereoscopic content will be much more popular in 360 Video than it ever was for flat screen content. Basically the viewer is already wearing the 3D glasses, so there is no downside, besides needing twice as much source imagery to work with, similar to flat screen stereoscopic.

There are a variety of options for recording 360 video, from a single ultra-wide fisheye lens on the Fly360, to dual 180-degree lens options like the Gear 360, Nikon KeyMission and Garmin Virb. GoPro is releasing the Fusion, which will fall into this category as well. The next step up is more lenses, with cameras like the Orah 4i or the Insta360 Pro. Beyond that, you are stepping into much more expensive rigs with lots of lenses and lots of stitching, but usually much higher final image quality, like the GoPro Omni or the Nokia Ozo. There are also countless rigs that use an array of standard cameras to capture 360 degrees, but these solutions are much less integrated than the all-in-one products that are now entering the market. Regardless of the camera you use, you are going to be recording one or more files in a pixel format fairly unique to that camera, which will need to be processed before it can be used in the later stages of the post workflow.

Affordable cameras

The simplest and cheapest 360 camera option I have found is the Samsung Gear 360. There are two totally different models with the same name, usually differentiated by the year of their release. I am using the older 2016 model, which has a higher resolution sensor, but records UHD instead of the slightly larger full 4K video of the newer 2017 model.

The Gear 360 records two fisheye views that are just over 180 degrees each, from cameras situated back to back in a 2.5-inch sphere. Both captured image circles are recorded onto a single frame, side by side, resulting in 2:1 aspect ratio files. These are encoded as JPEG (7776×3888 stills) or HEVC (3840×1920 video at 30Mb/s) and saved onto a MicroSD card. The camera is remarkably simple to use, with only three buttons and a tiny UI screen to select recording mode and resolution. If you have a Samsung Galaxy phone, a variety of other functions become available, like remote control and streaming the output to the phone as a viewfinder. Even without a Galaxy phone, the camera did everything I needed to generate 360 footage to stitch and edit with, but it was cool to have a remote viewfinder for the driving shots.
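Since both image circles land side by side in one 2:1 frame, separating the two lens views before stitching (or just to inspect them) is a trivial split. A quick sketch using OpenCV, with a placeholder filename:

```python
# The Gear 360 stores both fisheye image circles side by side in one 2:1 frame,
# so each lens view can be pulled out with a simple split.
import cv2

frame = cv2.imread("gear360_still_7776x3888.jpg")       # placeholder dual-fisheye still
h, w = frame.shape[:2]
front, back = frame[:, : w // 2], frame[:, w // 2 :]    # two roughly square fisheyes
cv2.imwrite("front_fisheye.jpg", front)
cv2.imwrite("back_fisheye.jpg", back)
```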

Pricier cameras

One of the big challenges of shooting with any 360 camera is how to avoid getting gear and rigging in the shot, since the camera records everything around it. Even the tiny integrated tripod on the Gear 360 is visible in the shots, and putting it on the plate of my regular DSLR tripod fills the bottom of the footage. My solution was to use the thinnest support I could, to keep the rest of the rigging as far from the camera as possible, and therefore smaller from its perspective. I created a couple of options to shoot with, which are pictured below, and the rigging is much less intrusive in the resulting images. Obviously, besides the camera support, there is the issue of everything else in the shot, including the operator. Since most 360 videos are locked off, an operator may not be needed, but there is no “behind the camera” for hiding gear or anything else. Your set needs to be considered in every direction, since it will all be visible to your viewer. If you can see the camera, it can see you.

There are many different approaches to storing 360 images, which are inherently spherical, as a video file, which is inherently flat. This is the same issue that cartographers have faced for hundreds of years — creating flat paper maps of a planet that is inherently curved. While there are sphere map, cube map and pyramid projection options (among others) based on the way VR headsets work, the equirectangular format has emerged as the standard for editing and distribution encoding, while other projections are occasionally used for certain effects processing or other playback options.
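The equirectangular mapping itself is as simple as projections get: longitude maps linearly to x and latitude to y. A minimal sketch of that pixel-to-angle relationship, assuming a 3840×1920 frame like the Gear 360’s video output:

```python
# The equirectangular mapping: longitude maps linearly to x, latitude to y.
# Frame size here matches the Gear 360's 3840x1920 video, but any 2:1 frame works.
W, H = 3840, 1920

def angles_to_pixel(lon_deg, lat_deg):
    """Longitude -180..180 and latitude -90..90 degrees -> pixel coordinates."""
    x = (lon_deg + 180.0) / 360.0 * W
    y = (90.0 - lat_deg) / 180.0 * H
    return x, y

def pixel_to_angles(x, y):
    lon = x / W * 360.0 - 180.0
    lat = 90.0 - y / H * 180.0
    return lon, lat

print(angles_to_pixel(0, 0))    # straight ahead lands at the center of the frame: (1920, 960)
print(pixel_to_angles(0, 0))    # the top-left corner maps to (-180, 90): straight up, at the seam
```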

Usually the objective of the stitching process is to get the images from all of your lenses combined into a single frame with the least amount of distortion and the fewest visible seams. There are a number of software solutions that do this, from After Effects plugins, to dedicated stitching applications like Kolor AVP and Orah VideoStitch-Studio, to unique utilities for certain cameras. Once you have your 360 video footage in the equirectangular format, most of the other steps of the workflow are similar to their flat counterparts, besides VFX. You can cut, fade, title and mix your footage in an NLE and then encode it in the standard H.264 or H.265 formats with a few changes to the metadata.
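Those metadata changes are essentially a flag telling players to treat the file as a spherical, equirectangular video. As a rough sketch of the delivery step (the filenames and encode settings are placeholders, and the final spherical tagging is usually done with a tool like Google’s Spatial Media Metadata Injector rather than by the encoder itself):

```python
# Generic delivery encode for an equirectangular cut. Filenames and settings are
# placeholders, not a recommendation. After this step, the spherical metadata
# still needs to be injected -- Google's Spatial Media Metadata Injector is the
# tool most players and platforms expect to have been used.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "360_edit_equirect.mov",
    "-c:v", "libx264", "-crf", "18",   # swap in libx265 for an H.265 deliverable
    "-pix_fmt", "yuv420p",
    "-c:a", "aac",
    "360_edit_delivery.mp4",
], check=True)
```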

Technically, the only thing you need to add to an existing 4K editing workflow in order to make the jump to 360 video is a 360 camera. Everything else could be done in software, but the other thing you will want is a VR headset or HMD. It is possible to edit 360 video without an HMD, but it is a lot like grading a film using scopes but no monitor. The data and tools you need are all right there, but without being able to see the results, you can’t be confident of what the final product will be like. You can scroll around the 360 video in the view window, or see the whole projected image all distorted, but it won’t have the same feel as experiencing it in a VR headset.

360 Video is not as processing intensive as true 3D VR, but it still requires a substantial amount of power to provide a good editing experience. I am using a Thinkpad P71 with an Nvidia Quadro P5000 GPU to get smooth performance during all these tests.

Stay tuned for Part 2 where we focus on editing 360 Video.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been working on new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Review: Lenovo’s ThinkPad P71 mobile workstation

By Mike McCarthy

Lenovo was nice enough to send me their newest VR-enabled mobile workstation to test out on a VR workflow project I am doing. The new ThinkPad P71 is a beast with a 17-inch UHD IPS screen. The model they sent me was equipped with the fastest available processor, a Xeon E3-1535M v6, with four cores processing eight threads at an official speed of 3.1GHz. It has 32GB of DDR4-2400 ECC RAM, with two more slots allowing that to be doubled to 64GB if desired.

The system’s headline feature is the Nvidia Quadro P5000 mobile GPU, with 2,048 CUDA cores, fed by another 16GB of dedicated GDDR5 memory. The storage configuration is a single 1TB NVMe SSD populating one of two available M.2 slots. This configuration lists for $5,279, currently discounted to $4,223.20 on Lenovo.com. So while it is not cheap, it is one of the most powerful mobile workstations you can buy right now.

Connectivity-wise, it has dual Thunderbolt 3 ports, which can also be used for USB 3.1 Type C devices. It has four more USB 3.1 Type A ports and a Gigabit Ethernet port. You have a number of options for display connectivity. Besides the Thunderbolt ports, there is a MiniDP 1.2 port and an HDMI 1.4 port (1.4 due to Intel graphics limitations). It has an SDXC slot, an ExpressCard/34 slot and a single 1/8-inch headphone/mic combo jack. The system also has a docking connector and a rectangular port for the included 230W power adapter.

It has the look and feel of a traditional ThinkPad, which goes back to the days when they were made by IBM. It has the customary TrackPoint as well as a touchpad. Both have three mouse buttons, which I like in theory, but I constantly find myself trying to click with the center button to no avail. I would either have to get used to it, or set the center action to click as well, defeating the purpose of the third button. The Fn key in the bottom corner will take some getting used to as well, as I keep hitting that instead of CTRL, but I adapted to a similar configuration on my current laptop.

I didn’t like the combo jack at first, because it required a cheap adapter, but now that I have gotten one, I see why it is the future, once all the peripherals support it. I had plugged my mic and headphones in backwards as recently as last week; unlabeled ports are a real issue, and the combo jack solves that once and for all. It is the same type of jack found on most cell phones, and you only need the adapter for the mic functionality; regular headphones work by default.

The system doesn’t weigh as much as I expected, probably due to the lack of spinning disks or an optical drive, which can be added if desired. It came relatively clean, with Windows 10 Pro and not too many other applications or utilities pre-installed. It had all of the needed drivers and a simple utility for operating the integrated X-Rite Pantone color calibrator for the screen. There was a utility for adding any other applications that would normally be included, which I used to download the Lenovo Performance Tuner. I use the Performance Tuner more for monitoring usage than adjusting settings, but it can be nice to have everything in one place, especially in Windows 10.

The system boots up in about 10 seconds, and shuts down even faster. Hibernating takes twice as long, which is to be expected with that much RAM to be cached to disk, even with an NVMe SSD. But that may be worth the extra time to keep your applications open. My initial tests of the SSD showed a 1700MB/s write speed with 2500MB/s reads. Longer endurance tests resulted in write speeds decreasing to 1200MB/s, but the read speeds remained consistently above 2500MB/s. That should be more than enough throughput for most media work, even allowing me to play back uncompressed 6K content, and it should allow 4K uncompressed media capture if you connect an I/O device to the Thunderbolt bus.
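As a sanity check on the uncompressed playback claim, the data-rate math works out like this (the bit depth and frame rates here are my assumptions for illustration; actual uncompressed formats vary):

```python
# Rough data-rate math behind the uncompressed playback claims.
# Bit depths and frame rates are assumptions for illustration.
def data_rate_mb_s(width, height, bits_per_channel, channels, fps):
    bytes_per_frame = width * height * channels * bits_per_channel / 8
    return bytes_per_frame * fps / 1e6   # decimal MB/s, like drive spec sheets

print(data_rate_mb_s(6144, 2592, 10, 3, 24))   # ~1433 MB/s: 10-bit RGB 6K at 24fps
print(data_rate_mb_s(3840, 2160, 10, 3, 30))   # ~933 MB/s: 10-bit RGB UHD at 30fps
```

Both figures sit comfortably under the sustained 2500MB/s reads, and the UHD capture rate stays under even the worst-case 1200MB/s writes.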

The main application I use on a daily basis is Adobe Premiere Pro, so most of my performance evaluation revolves around that program, although I used a few others as well. I was able to load a full feature film off of a USB3 drive with no issues. The 6K Cineform and DNxHR media played back at ½ res without issue. The 6K R3D files played at ¼ res without dropping frames, which is comparable to my big tower.

My next playback test was fairly unique to my workflow, but a good benchmark of what is possible. I was able to connect three 1080p televisions to the MiniDP port, using an MST (Multi-Stream Transport) hub with three HDMI ports. Using the Nvidia Mosaic functionality offered by the Quadro P5000 card, I can span them into a single display, which Premiere can send output to via Adobe’s Mercury Playback engine. This configuration allows me to play back 6K DNxHR 444 files to all three screens, directly off the timeline, at half res, without dropping frames. My 6K H.265 files play back at full res outside Premiere. That is a pretty impressive display for a laptop. Once I had maxed out the possibilities for playback, I measured a few encodes. In general, the P71 takes about twice as long to encode things in Adobe Media Encoder as my 20-core desktop workstation, but is twice as fast as my existing quad-core i7 4860 laptop.

The other application I have been taxing my system with recently is DCP-O-Matic. It takes 30 hours to render my current movie to a 4K DCP on my desktop, which is 18x the runtime, but I know most of my system’s 20 cores are sitting idle based on the software threading. Doing a similar encode on the Lenovo system took 12.5x the run time, so that means my 100-minute film should take 21 hours. The higher base frequency of the quad core CPU definitely makes a difference in this instance.
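The comparison boils down to a runtime multiplier, which a couple of lines make explicit:

```python
# The DCP render comparison boils down to a runtime multiplier.
runtime_min = 100                         # approximate length of the film

desktop_hours = runtime_min * 18 / 60     # the 30-hour desktop render
laptop_hours = runtime_min * 12.5 / 60    # measured multiplier on the P71

print(desktop_hours, laptop_hours)        # 30.0 vs roughly 21 hours
```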

The next step was to try my HMD headset with it to test out the VR capability. My Oculus Rift installed without issues, which is saying something, based on the peculiarities of Oculus’ software. Maybe there is something to that “VR-ready” program, but I did frequently have issues booting up the system with the Rift connected, so I recommend plugging it in after you have your system up and running. Everything VR-related ran great, except for the one thing I actually wanted to do, which was edit 360 video in Premiere with the HMD. There was some incompatibility between the laptop’s drivers and the software. (Update: Setting the graphics system to Discrete instead of Hybrid in the BIOS solves this problem. This solution works with both PPro11’s Skybox Player and PPro12’s new SteamVR-based approach.)

There are a variety of ways to test battery life, but since this is a VR-ready system, that seemed to be the best approach. How long would it support using a VR headset before needing to plug in? I got just short of an hour of heavy 3D VR usage before I started getting low battery warnings. I was hoping to be able to close the display to save power, since I am not looking at it while using the headset. (I usually set the Close Lid action to Do Nothing on all my systems, because I want to be able to walk into the other room to show someone something on my timeline without affecting the application. If I want to sleep the system, I can press the button.) But whenever the Rift is active, closing the lid puts the machine to sleep immediately, regardless of the settings. So you have to run the display and the HMD anytime you are working in VR. And don’t plan on doing extensive work without plugging in.

Now to be fair, setting up to use VR involves preparing the environment and configuring sensors, so adding power to that mix is a reasonable requirement and very similar to 3D gaming. Portable doesn’t always mean untethered. But for browsing the Internet, downloading project files and editing articles, I would expect about four hours of battery life from the system before needing to recharge. It is really hard to accurately estimate run time when the system’s performance and power needs scale so much depending on the user’s activities. The GPU alone scales from 5 watts to 100 watts depending on what is being processed, but the run time is not out of line with what is to be expected from products in this class of performance.

Summing Up
All in all, the P71 is an impressive piece of equipment, and one of only a few ways you can currently get a portable professional VR solution. I recognize that most of my applications aren’t using all of the power I would be carrying around in a P71, so for my own work, I would probably hope to find a smaller and lighter-weight system at the expense of some of that processing power. But for people who have uncompromising needs for the fastest system they can possibly get, the Lenovo P71 fits the bill. It is a solid performer that can do an impressive amount of processing, while still being able to come with you wherever you need to go.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been working on new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.