Review: Dell’s Precision T5820 workstation

By Brady Betzel

Multimedia creators are looking for faster, more robust computer systems, and they're seeing computing power increase across all brands and products. Whether it's an iMac Pro with a built-in 5K screen or a Windows-based, Nvidia-powered PC workstation, there are many options to consider. Many of today's content creation apps are operating-system-agnostic, but that's not necessarily true of hardware — mainly GPUs. So for those looking to purchase a new system, I am going to run through one of Dell's Windows-based offerings: the Dell Precision T5820 workstation.

The most important distinction between a “standard” computer system and a workstation is the enterprise-level quality and durability of internal parts. While you might build or order a custom-built system for less money, you will most likely not get the same back-end assurances that “workstations” bring to the party. Workstations aren’t always the fastest, but they are built with zero downtime and hardware/software functionality in mind. So while non-workstations might use high-quality components, like an Nvidia RTX 2080 Ti (a phenomenal graphics card), they aren’t necessarily meant to run 24 hours a day, 365 days a year. On the other hand, the Nvidia Quadro series GPUs are enterprise-level graphics cards that are meant to run constantly with low failure rates. This is just one example, but I think you get the point: Workstations run constantly and are warrantied against breakdowns — typically.

Dell Precision T5820
Dell has a long track record of building everyday computer systems that work. Even more impressive are its next-level workstation computers, which not only stand up to constant use and abuse but are also certified with independent software vendors (ISVs). ISV certification means that Dell has not only tested but also supports the end-user's primary software choices. For instance, in the nonlinear editing software space, I found that Dell has tested the Precision T5820 workstation with Adobe Premiere Pro 13.x in Windows 10 and has certified the AMD Radeon Pro WX 2100 and 3100 GPUs with 18.Q3.1 drivers.

You can see for yourself here. Dell also has driver suggestions for some recent versions of Avid Media Composer, as well as other software packages. In short, Dell not only tests but also supports these hardware configurations in the approved software apps.

Beyond the ISV certifications and the included three-year hardware warranty with on-site/in-home service after remote diagnostics, how does the Dell Precision T5820 perform? Well, it’s fast and well-built.

The specs are as follows:
– Intel Xeon W-2155 3.3GHz, 4.5GHz Turbo, 10-core, 13.75MB cache with hyperthreading
– Windows 10 Pro for Workstations (four cores plus) — this is an additional cost
– Precision 5820 Tower with 950W chassis
– Nvidia Quadro P4000, 8GB, four DisplayPorts (5820T)
– 64GB (8x8GB) 2666MHz DDR4 RDIMM ECC memory
– Intel vPro technology enabled
– Dell Ultra-Speed Drive Duo PCIe SSD x8 Card, 1 M.2 512GB PCIe NVMe class 50 Solid State Drive (boot drive)
– 3.5-inch 2TB 7200rpm SATA hard drive (secondary drive)
– Wireless keyboard and mouse
– 1Gb network interface card
– USB 3.1 Gen 2 PCIe card (two Type C ports, one DisplayPort)
– Three years hardware warranty with onsite/in-home service after remote diagnosis

All of this costs around $5,200 without tax or shipping and not including any sale prices.

The Dell Precision T5820 is the mid-level workstation offering from Dell that finds the balance between affordability, performance and reliability — kind of the “better, cheaper, faster” concept. It is one of the quietest Dell workstations I have tested. Besides the spinning hard drive that was included on the model I was sent, there aren't many loud cards or fans that distract me when I turn on the system. Dell is touting the new multichannel thermal design for advanced cooling and acoustics.

The actual 5820 case is about the size of a mid-sized tower system but feels much slimmer. I even cracked open the case to tinker around with the internal components. The inside fans and multichannel cooling are sturdy and even a little hard to remove without some force — not necessarily a bad thing. You can tell that Dell made it so that when something fails, it is a relatively simple replacement. The insides are very modular. The front of the 5820 has an optical drive, some USB ports (including two USB-C ports) and an audio port. If you get fancy, you can order the systems with what Dell calls “Flex Bays” in the front. You can potentially add up to six 2.5-inch or five 3.5-inch drives and front-accessible storage of up to four M.2 or U.2 PCIe NVMe SSDs. The best part about the front Flex Bays is that, if you choose to use M.2 or U.2 media, they are hot-swappable. This is great for editing projects that you want to archive to an M.2 or save to your Blackmagic DaVinci Resolve cache and remove later.

In the back of the workstation, you get audio in/out, one serial port, PS/2, Ethernet and six USB 3.1 Gen 1 Type A ports. This particular system was outfitted with an optional USB 3.1 Gen 2 10Gb/s Type C card with one DisplayPort passthrough. This is used for the Dell UltraSharp 32-inch 4K (UHD) USB-C monitor that I received along with the T5820.

The large Dell UltraSharp 32-inch monitor (U3219Q) offers a slim footprint and a USB-C connection that is very intriguing, but Dell isn't giving them away: it costs $879.99 if ordered through Dell.com. With the ultra-minimal Infinity Edge bezel, 400 nits of brightness for HDR content, up to UHD (3840×2160) resolution, 60Hz refresh rate and multiple input/output connections, you can see all of your work in one large IPS panel. For those of you who want to run two computers off one monitor, this Dell UltraSharp has a built-in KVM switch function. Anyone with a MacBook Pro featuring USB-C/Thunderbolt 3 ports can in theory use one USB-C cable to connect and charge. I say “in theory” only because I don't have a new MacBook Pro to test it on. But for PCs, you can still use the USB-C as a hub.

The monitor comes equipped with a DisplayPort 1.4, HDMI, four USB 3.0 Type A ports and a USB-C port. Because I use my workstation mainly for video and photo editing, I am always concerned with proper calibration. The U3219Q is purported by Dell to be 99% sRGB-, 95% DCI-P3- and 99% Rec. 709-accurate, so if you are using Resolve and outputting through a DeckLink, you will be able to get some decent accuracy and even use it for HDR. Over the years, I have really fallen in love with Dell monitors. They don't break the bank, and they deliver crisp and accurate images, so there is a lot to love. Check out more of this monitor here.

Performance
Working in media creation, I jump around between a bunch of apps and plugins, from Media Composer to Blackmagic's DaVinci Resolve and from Adobe After Effects to Maxon's Cinema 4D. So I need a system that can handle not only CPU-focused apps like After Effects but also GPU-weighted apps like Resolve. With the Intel Xeon and Nvidia Quadro components, this system should work just fine. I ran some tests in Premiere Pro, After Effects and Resolve. In fact, I used Puget Systems' benchmarking tools with Premiere and After Effects projects. You can find the one for Premiere here. In addition, I used the classic 3D benchmark Cinebench R20 from Maxon and even did some of my own benchmarks.

In Premiere, I was able to play 4K H.264 (50Mb and 100Mb 10-bit) and ProRes files (HQ and 4444) in realtime at full resolution. Red raw 4K played back at full-quality debayer. But as the Puget Systems Premiere benchmark shows, 8K (as well as heavily effected clips) started to bog the system down. With 4K, the addition of Lumetri color correction slowed down playback and export a little bit — just a few frames under realtime. It was close, though. At half quality I was essentially playing in realtime. According to the Puget Systems benchmark, the overall CPU score was much higher than the GPU score. Adobe relies heavily on single-core processing; while certain effects, like resizes and blurs, will open up the GPU pipes, I saw the CPU (single-core) doing the heavy lifting here.

In the Premiere Pro tests, the T5820 really shined when working with mezzanine codec-based media like ProRes (HQ and 4444) and even with Red 4K raw media. The T5820 seemed to slow down when multiple layers of effects, such as color correction and blurs, were stacked on top of each other.

In After Effects, I again used Puget Systems' benchmark — this time the After Effects-specific version. Overall, the After Effects score was a B or B-, which isn't terrible considering it was up against the prosumer powerhouse Nvidia RTX 2080 (Puget Systems uses the 2080 system as its 100% reference score). Tracking on the Dell T5820 scored around 90%, while render and preview scores were around 80%. While this is just what it says — a benchmark — it's a great way to compare machines against the benchmark's reference system: an Intel i9, an RTX 2080 GPU, 64GB of memory and more.

In Resolve 16 Beta 7, I ran multiple tests on the same 4K (UHD), 29.97fps Red Raw media that Puget Systems used in its benchmarks. I created four 10-minute sequences:
Sequence 1: no effects or LUTs
Sequence 2: three layers of Resolve OpenFX Gaussian blurs on adjustment layers in the Edit tab
Sequence 3: five serial nodes of Blur Radius (at 1.0) created in the Color tab
Sequence 4: in the Color tab, spatial noise reduction was set at 25 radius to medium, blur set to 1.0 and sharpening in the Blur tab set to zero (it starts at 0.5).

Sequence 1, without any effects, would play at full debayer quality in real time and export at a few frames above real time, averaging about 33fps. Sequence 2, with Resolve’s OpenFX Gaussian blur applied three times to the entire frame via adjustment layers in the Edit tab, would play back in real time and export at between 21.5fps and 22.5fps. Sequence 3, with five serial nodes of blur radius set at 1.0 in the Blur tab in the Color tab, would play realtime and export at about 23fps. Once I added a sixth serial blur node, the system would no longer lock onto realtime playback. Sequence 4 — with spatial noise reduction set at 25 radius to medium, blur set to 1.0 and sharpening in the Blur tab set to zero in the Color tab — would play back at 1fps to 2fps and export at 6.5fps.

All of these exports were QuickTime-based H.264s exported using the Nvidia encoder (the native encoder slowed things down by 10 frames or so). The settings were UHD resolution; “automatic — best” quality; frame reordering disabled; force sizing to highest quality; force debayer to highest quality; and no audio. Once I stacked two layers of Red 4K raw media, I started to drop below realtime playback, even without color correction or effects. I even tried playing back some 8K media: I got about 14fps at full-resolution premium debayer, 14 to 16fps at half-res premium, 25fps at half-res good, and 29.97fps (realtime) at quarter-res good.
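To put those export rates in perspective, here is a minimal sketch (Python) converting frames per second into wall-clock export time, assuming the 10-minute, 29.97fps timelines described above and the approximate averages I measured — illustrative numbers, not an official benchmark.

```python
# Rough sketch: convert the export frame rates above into wall-clock export
# times for a 10-minute, 29.97fps timeline. The fps values are the approximate
# averages reported in this review.

TIMELINE_FPS = 29.97
TIMELINE_MINUTES = 10
total_frames = TIMELINE_FPS * 60 * TIMELINE_MINUTES  # ~17,982 frames

export_fps = {
    "Sequence 1 (no effects)": 33.0,
    "Sequence 2 (3x Gaussian blur)": 22.0,
    "Sequence 3 (5 serial blur nodes)": 23.0,
    "Sequence 4 (spatial noise reduction)": 6.5,
}

for name, fps in export_fps.items():
    minutes = total_frames / fps / 60
    print(f"{name}: ~{minutes:.1f} min export ({fps / TIMELINE_FPS:.2f}x realtime)")
```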

Using the recently upgraded Maxon Cinebench R20 benchmark, I found the workstation performing adequately, around the fourth-place spot. Keep in mind, there are thousands of possible results depending on CPU, GPU, memory and more; these are only sample numbers for 3D artists to verify against their own. The Cinebench R20 results were CPU: 4682, CPU (single-core): 436, and MP ratio: 10.73x. If you Google or check out some threads of Cinebench R20 result comparisons, you will find plenty of results to compare mine against. I'd call my results a B to B+. A much higher-end Intel Xeon or i9, or an AMD Threadripper processor, would really punch this system up a weight class.
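For anyone unfamiliar with the Cinebench readout, the MP (multiprocessor) ratio is simply the multi-core score divided by the single-core score. A quick sanity check against the numbers above:

```python
# Cinebench's MP ratio = multi-core score / single-core score.
cpu_multi = 4682
cpu_single = 436
print(f"MP ratio: {cpu_multi / cpu_single:.2f}x")
# Prints ~10.74x; the reported 10.73x differs only because the published scores are rounded.
```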

Summing Up
The Dell Precision T5820 workstation comes with a lot of enterprise-level benefits that simply don't come with your average consumer system. The components are meant to be run constantly, and Dell has tested its systems with current industry applications to identify the best optimizations and driver packages through its ISV program. Should anything fail, Dell's three-year warranty (which can be upgraded) will get you up and running fast. Before taxes and shipping, the Dell T5820 I was sent for review would retail for just under $5,200 (maybe even a little more with the DVD drive, recovery USB drive, keyboard and mouse). This is definitely not the system to look at if you are a DIYer or an everyday user who doesn't need to be running 24 hours a day, seven days a week.

But in a corporate environment, where time is money and no one wants to be searching for answers, the Dell T5820 workstation with accompanying three-year ProSupport with next-day on-site service will be worth the $5,200. Furthermore, it’s invaluable that optimization with applications such as the Adobe Creative Suite is built-in, and Dell’s ProSupport team has direct experience working in those professional apps.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

 

Review: The Loupedeck+ editing console for stills and video

By Brady Betzel

As an online editor, I am often tasked with wearing multiple job hats, including VFX artist, compositor, offline editor, audio editor and colorist, which requires me to use specialized color correction panel hardware. I really love photography and cinematography but have never been able to use the color correction hardware I'm used to in Adobe's Photoshop or Lightroom, so for the most part I've only done basic photo color correction.

You could call it a hobby, although this knowledge definitely helps many aspects of my job. I’ve known Photoshop for years and use it for things like building clean plates to use in apps like Boris FX Mocha Pro and After Effects, but I had never really mastered Lightroom. However, that changed when I saw the Loupedeck. I was really intrigued with its unique layout but soon dismissed it since it didn’t work on video… until now. I’m happy to say the new Loupedeck+ works with both photo and video apps.

Much like the Tangent Element and Wave or Blackmagic Micro and Mini panels, the Loupedeck+ is made to adjust parameters like contrast, exposure, saturation, highlights, shadows and individual colors. But unlike Tangent or Blackmagic products, the Loupedeck+ functions not only in Adobe Premiere and Apple Final Cut Pro X but also in image editing apps like Lightroom 6, Photoshop CC and Skylum Aurora HDR; the audio editing app Adobe Audition; and the VFX app Adobe After Effects. There's also beta integration with Capture One.

It works via a USB 2.0 connection on Windows 10 and macOS 10.12 or later. In order to use the panel and adjust its keys, you must also download the Loupedeck software, which you can find here. The Loupedeck+ costs just $249, which is significantly less than many of the other color correction panels on the market that offer this many functions.

Digging In
In this review, I am going to focus on Loupedeck+’s functionality with Premiere, but keep in mind that half of what makes this panel interesting is that you can jump into Lightroom Classic or Photoshop and have the same, if not more, functionality. Once you install the Loupedeck software, you should restart your system. When I installed the software I had some weird issues until I restarted.

When inside Premiere, you need to tell the app that you are using this specific control panel: go to Edit > Preferences > Control Surface, click “Add” and select Loupedeck 2. This is for a PC, but macOS works in a similar way. From there you are ready to use the Loupedeck+. If you have customized keyboard shortcuts (like I do), I suggest resetting them to the defaults for the time being, since custom mappings might cause the Loupedeck+ to trigger different keypresses than you intended.

Once I got inside of Premiere, I immediately opened up the Lumetri color panels and began adjusting contrast, exposure and saturation, which are all clearly labeled on the Loupedeck+. Easy enough, but what if you want to use the Loupedeck+ as an editing panel as well as a basic color correction console? That’s when you will want to print out pages six through nine of the Premiere Pro Loupedeck+ manual, which you can find here. (If you like to read on a tablet you could pull that up there, but I like paper for some reason… sorry trees.) In these pages, you will see that there are four layers of controls built into the Loupedeck+.

Shortcuts
Not only can you advance frames using the arrow keypad, jump to different edit points with the jog dial, change LUTs, add keyframes and extend edits, but you also have three more layers of shortcuts. To get to the second layer, press the “Fn” button toward the lower left, and the Fn layer will appear. Here you can do things like adjust the shadows and midtones on the X and Y axes, access the Type tool or add edits to all tracks. To go even further, you can access the “Custom” mode, which has defaults but can be remapped to whichever keypresses and functions the Loupedeck software allows.

Finally, while in the Custom mode, you can press the Fn button again and enter “Custom Fn” mode — the fourth and final layer of shortcuts. Man, that is a lot of customizable buttons. Do I need all those buttons? Probably not, but still, they are there — and it's better to have too much than not enough, right?

Beyond the hundreds of shortcuts on the Loupedeck+ console, you have eight color-specific scroll wheels for adjustments. In Lightroom Classic, these tools are self-explanatory: each one adjusts its color's intensity.

In Premiere they work a little differently. To the left of the color scroll wheels are three buttons: hue, saturation and luminance (Hue, Sat and Lum, respectively). In the standard mode, they each map to a different color wheel: Hue = highlights, Sat = midtones and Lum = shadows. The scroll wheel above red adjusts the up/down (y-axis) movement of the selected color wheel, orange adjusts the left/right (x-axis) movement, and yellow adjusts the intensity (or luminance) of the color wheel.

Controlling the Panel
In traditional color correction panels, color correction is controlled by roller balls surrounded by a literal wheel to control intensity. It's another way to skin a cat. I personally love the feel of the Tangent Element Tk panel, which simply has three roller balls and rings to adjust the hue, but some people might like the ability to precisely control the color wheels along the x- and y-axes.

To solve my issue, I used both. In the preferences, I enabled both Tangent and Loupedeck options. It worked perfectly (once I restarted)! I just couldn’t get past the lack of hue balls and rings in the Loupedeck, but I really love the rest of the knobs and buttons. So in a weird hodge-podge, you can combine a couple of panels to get a more “affordable” set of correction panels. I say affordable in quotes because, as of this review, the Tangent Element Tk panels are over $1,100 for one panel, while the entire set is over $3,000.

So if you already have the Tangent Element Tk panel, but want a more natural button and knob layout, the Loupedeck+ is a phenomenal addition as long as you are staying within the Adobe or FCP X world. And while I clearly like the Tangent Element panels, I think the overall layout and design of the Loupedeck+ is more efficient and more modern.

Summing Up
In the end, I really like the Loupedeck+. I love being able to jump back and forth between photo and video apps seamlessly with one panel. What I think I love the most is the “Export” button in the upper right corner of the Loupedeck+. I wish that button existed on all panels.

When using the Loupedeck+, you can really get your creative juices flowing by hitting the “Full Screen” button and color correcting away, even using multiple adjustments at once to achieve your desired look — similar to how a lot of people use other color correction panels. And at $249, the Loupedeck+ might be the overall best value for the functionality of any editing/color correction panel currently out there.

Can I see using it when editing? I can, but I am such a diehard keyboard and Wacom tablet user that I have a hard time using a panel for editing functions like trimming and three-point edits. I did try the trimming functionality and it was great, not only on a higher-end Intel Xeon-based system but on an even older Windows laptop. The responsiveness was pretty impressive and I am a sucker for adjustments using dials, sliders and roller balls.

If you want to color correct using panels, I think the Loupedeck+ is going to fit the bill for you if you work in Adobe Creative Suite or FCP X. If you are a seasoned colorist, you will probably start to freak out at the lack of rollerballs to adjust hues of shadows, midtones and highlights. But if you are a power user who stays inside the Adobe Creative Cloud ecosystem, there really isn’t a better panel for you. Just print up the shortcut pages of the manual and tape them to the wall by your monitor for constant reference.

As with anything, you will only get faster with repetition. Not only did I test out color correcting footage for this review, I also used the Loupedeck+ in Adobe Lightroom Classic to correct my images!


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Editing Roundtable

By Randi Altman

The world of the editor has changed over the years as a result of new technology, the types of projects they are being asked to cut (looking at you, social media) and the various deliverables they must create. Are deadlines still getting tighter and are budgets still getting smaller? The answer is yes, but some editors are adapting to the trends, and companies that make products for editors are helping by making the tools more flexible and efficient so pros can get to where they need to be.

We posed questions to various editors working in TV, short form and indies, who do a variety of jobs, as well as to those making the tools they use on a daily basis. Enjoy.

Cut+Run Editor/Partner Pete Koob

What trends do you see in commercial editing? Good or bad?
I remember 10 years ago a “colleague,” who was an interactive producer at the time, told me rather haughtily that I’d be out of work in a few years when all advertising became interactive and lived online. Nothing could have been further from the truth, of course, and I think editors everywhere have found that the viewer migration from TV to online has yielded an even greater need for content.

The 30-second spot still exists, both online and on TV, but the opportunities for brands to tell more in-depth stories across a wide range of media platforms mean that there’s a much more diverse breadth of work for editors, both in terms of format and style.

For better or worse, we’ve also seen every human being with a phone become their own personal brand manager with a highly cultivated and highly saturated digital presence. I think this development has had a big impact on the types of stories we’re telling in advertising and how we’re telling them. The genre of “docu-style” editing is evolving in a very exciting way as more and more companies are looking to find real people whose personal journeys embody their brands. Some of the most impressive editorial work I see these days is a fusion of styles — music video, fashion, documentary — all being brought to bear on telling these real stories, but doing it in a way that elevates them above the noise of the daily social media feed.

Selecting the subjects in a way that feels authentic — and not just like a brand co-opting someone’s personal struggle — is essential, but when done well, there are some incredibly inspirational and emotional stories to be told. And as a father of a young girl, it’s been great to show my daughter all the empowering stories of women being told right now, especially when they’re done with such a fresh and exciting visual language.

What is it about commercial editing that attracted you and keeps attracting you?
Probably the thing that keeps me most engaged with commercial editing is the variety and volume of projects throughout the year. Cutting commercials means you’re on to the next one before you’ve really finished the last.

The work feels fresh when I’m constantly collaborating with different people every few weeks on a diverse range of projects. Even if I’m cutting with the same directors, agencies or clients, the cast of characters always rotates to some degree, and that keeps me on my toes. Every project has its own unique challenges, and that compels me to constantly find new ways to tell stories. It’s hard for me to get bored with my work when the work is always changing.

Conoco’s Picnic spot

Can you talk about challenges specific to short-form editing?
I think the most obvious challenge for the commercial editor is time. Being able to tell a story efficiently and poignantly in a 60-, 30-, 15- or even six-second window reveals the spot editor’s unique talent. Sometimes that time limit can be a blessing, but more often than not, the idea on the page warrants a bigger canvas than the few seconds allotted.

It’s always satisfying to feel as if I’ve found an elegant editorial solution to telling the story in a concise manner, even if that means re-imagining the concept slightly. It’s a true testament to the power of editing and one that is specific to editing commercials.

How have social media campaigns changed the way you edit, if at all?
Social media hasn’t changed the way I edit, but it has certainly changed my involvement in the campaign as a whole. At its worst, the social media component is an afterthought, where editors are asked to just slap together a quick six-second cutdown or reformat a spot to fit into a square framing for Instagram. At its best, the editor is brought into the brainstorming process and has a hand in determining how the footage can be used inventively to disperse the creative into different media slots. One of the biggest assets of an editor on any project is his or her knowledge of the material, and being able to leverage that knowledge to shape the campaign across all platforms is incredibly rewarding.

Phillips 76 “Jean and Gene”

What system do you edit on, and what else other than editing are you asked to supply?
We edit primarily on Avid Media Composer. I still believe that nothing else can compete when it comes to project sharing, and as a company it allows for the smoothest means of collaboration between offices around the world. That being said, clients continue to expect more and more polish from the offline process, and we are always pushing our capabilities in motion graphics and visual effects in After Effects and color finessing in Blackmagic DaVinci Resolve.

What projects have you worked on recently?
I’ve been working on some bigger campaigns that consist of a larger number of spots. Two campaigns that come to mind are a seven-spot TV campaign for Phillips 76 gas stations and 13 short online films for Subaru. It’s fun to step back and look at how they all fit together, and sometimes you make different decisions about an individual spot based on how it sits in the larger group.

The “Jean and Gene” spots for 76 were particularly fun because it’s the same two characters who you follow across several stories, and it almost feels like a mini TV series exploring their life.

Earlier in the year I worked on a Conoco campaign, featuring the spots Picnic, First Contact and River, via Carmichael Lynch.

Red Digital Cinema Post and Workflow Specialist Dan Duran

How do you see the line between production and post blurring?
Both post and on-set production are evolving together. There has always been a fine line between them, but as tech grows and becomes more affordable, you're seeing tools that previously would have been used only in post bleed onto the set.

One of my favorite trends is seeing color-managed workflows on location. With full color-control pipelines and calibrated SDR and HDR monitors, you get a much more accurate representation of what the final image will look like. I've also seen growth in virtual productions, where you're able to see realtime CGI and environments on set directly through the camera while shooting.

What are the biggest trends you’ve been facing in product development?
Everyone is always looking for the highest image quality at the best price point. As sensor technology advances, we’re seeing users ask for more and more out of the camera. Higher sensitivity, faster frame rates, more dynamic range and a digital RAW that allows them to effortlessly shape the images into a very specific creative look that they’re trying to achieve for their show. 8K provides a huge canvas to work with, offering flexibility in what they are trying to capture.

Smaller cameras can easily adapt to a whole new array of support accessories to achieve shots in ways that weren't always possible. Along with the camera/sensor revolution, Red has seen a lot of new cinema lenses emerge, each adding its own character to the image as it hits the photosites.

What trends do you see from editors these days? What enables their success?
I've seen post production really take advantage of modern tech to help improve and innovate new workflows. Being able to view higher resolutions, process footage faster and play back off a laptop shows how far hardware has come.

We have been working more with partners to help give pros the post tools they need to be more efficient. As an example, Red recently teamed up with Nvidia to not only get realtime full resolution 8K playback on laptops, but also allow for accelerated renders and transcode times much faster than before. Companies collaborating to take advantage of new tech will enable creative success.

AlphaDogs Owner/Editor Terence Curren

What trends do you see in editing? Good or bad?
There is a lot of content being created across a wide range of outlets and formats, from theatrical blockbusters and high-end TV shows all the way down to one-minute videos for Instagram. That's positive for people who want to use their editing skills to do a lot of storytelling. The flip side is that with so much content being created, the dollars to pay editors get stretched much thinner. Outside of high-end content creation, the overall pay rates for editors have been going down.

The cost of content capture is a tiny fraction of what it was back in the film days. The good part of that is there is a greater likelihood that the shot you need was actually captured. The downside is that without the extreme expense of shooting associated with film, we’ve lost the disciplines of rehearsing scenes thoroughly, only shooting while the scene is being performed, only printing circled takes, etc. That, combined with reduced post schedules, means for the most part editors just don’t have the time to screen all the footage captured.

The commoditization of the toolsets (some editing systems are actually free), combined with the plethora of training materials readily available on the internet and in most schools, means that video storytelling is now a skill available to everyone. This means that the next great editors won't face the barriers to entry that past generations experienced, but it also means that there's a much larger field of editors to choose from. The rules of supply and demand tell us that increased availability of and competition for a service reduce its cost. Traditionally, many editors have been able to make upper-middle-class livings in our industry, and I don't see as much of that going forward.

To sum it up, it’s a great time to become an editor, as there’s plenty of work and therefore lots of opportunity. But along with that, the days of making a higher-end living as an editor are waning.

What is it about editing that attracted you and keeps attracting you?
I am a storyteller at heart. The editor, in my opinion, shares responsibility with the director and writer for the structural part of telling the story. The writer has to invent the actual story out of whole cloth. The director has to play traffic cop with a cornucopia of moving pieces under a very tight schedule while trying to maintain the vision and capture the pieces of the story necessary to deliver the final product. The editor takes all those pieces and gives the story its final rewrite for the audience to hopefully enjoy.

Night Walk

As with writing, there are plenty of rules to guide an editor through the process. Those rules, combined with experience, make the basic job almost mechanical much of the time. But there is a magic thing that happens when the muse strikes and I am inspired to piece shots together in some way that just perfectly speaks to the audience. Being such an important part of the storytelling process is uniquely rewarding for a storyteller like me.

Can you talk about challenges specific to short-form editing versus long-form?
Long-form editing is a test of your ability to maintain a fresh perspective of your story to keep the pacing correct. If you’ve been editing a project for weeks or months at a time, you know the story and all the pieces inside out. That can make it difficult to realize you might be giving too much information or not enough to the audience. Probably the most important skill for long form is the ability to watch a cut you’ve been working on for a long time and see it as a first-time viewer. I don’t know how others handle it, but for me there is a mental process that just blanks out the past when I want to take a critical fresh viewing.

Short form brings the challenge of being ruthless. You need to eliminate every frame of unnecessary material without sacrificing the message. While the editors don’t need to keep their focus for weeks or months, they have the challenge of getting as much information into that short time as possible without overwhelming the audience. It’s a lot like sprinting versus running a marathon. It exercises a different creative muscle that also enjoys an immediate reward.

Lafayette Escadrille

I can’t say I prefer either one over the other, but I would be bored if I didn’t get to do both over time, as they bring different disciplines and rewards.

How have social media campaigns changed the way you edit, if at all? Can you talk about the variety of deliverables and how that affects things?
Well, there is the horrible vertical framing trend, but that appears to be waning, thankfully. Seriously, though, the Instagram “one minute” limit forces us all to become commercial editors. Trying to tell the story in as short a timeframe as possible, knowing it will probably be viewed on a phone in a bright and noisy environment, is a new challenge for seasoned editors.

There is a big difference between having a captive audience in a theater or at home in front of the TV and having a scattered audience whose attention you are trying to hold exclusively amid all the distractions. This seems to require more overt attention-grabbing tricks, and it’s unfortunate that storytelling has come to this point.

As for deliverables, they are constantly evolving, which means each project can bring all new requirements. We really have to work backward from the deliverables now. In other words, one of our first questions now is, “Where is this going?” That way we can plan the appropriate workflows from the start.

What system do you edit on and what else other than editing are you asked to supply?
I primarily edit on Media Composer, as it's the industry standard in my world. As an editor, I can learn to use any tool. I have cut with Premiere and FCP. Knowing where to make the edit is far more important than knowing how to make it.

When I started editing in the film days, we just cut picture and dialogue. There were other editors for sound beyond the basic location-recorded sound. There were labs from which you ordered something as simple as a dissolve or a fade to black. There were color timers at the film lab who handled the look of the film. There were negative cutters that conformed the final master. There were VFX houses that handled anything that wasn’t actually shot.

Now, every editor has all the tools at hand to do all those tasks themselves. While this is helpful in keeping costs down and not slowing the process, it requires editors to be a jack-of-all-trades. However, what typically follows that term is “and master of none.”

Night Walk

One of the main advantages of separate people handling different parts of the process is that they could become really good at their particular art. Experience is the best teacher, and you learn more doing the same thing every day than occasionally doing it. I've met a few editors over the years who truly are masters of multiple skills, but they are few and far between.

Using myself as an example, if the client wants some creatively designed show open, I am not the best person for that. Can I create something? Yes. Can I use After Effects? Yes, to a minor degree. Am I the best person for that job? No. It is not what I have trained myself to do over my career. There is a different skill set involved in deciding where to make a cut versus how to create a heavily layered, graphically designed show open. If that is what I had dedicated my career to doing, then I would probably be really good at it, but I wouldn’t be as good at knowing where to make the edit.

What projects have gone through the studio recently?
We work on a lot of projects at AlphaDogs. The bulk of our work is on modest-budget features, documentaries and unscripted TV shows. Recent examples include a documentary on World War I fighter pilots called The Lafayette Escadrille and an action-thriller starring Eric Roberts and Mickey Rourke called Night Walk.

Unfortunately for me I have become so focused on running the company that I haven’t been personally working on the creative side as much as I would like. While keeping a post house running in the current business climate is its own challenge, I don’t particularly find it as rewarding as “being in the chair.”

That feeling is offset by looking back at all the careers I have helped launch through our internship program and by offering entry-level employment. I’ve also tried hard to help editors over the years through venues like online user groups and, of course, our own Editors’ Lounge events and videos. So I guess that even running a post house can be rewarding in its own way.

Luma Touch Co-Founder/Lead Designer Terri Morgan

Have there been any talks among NLE providers about an open timeline? Being able to go between Avid, Resolve or Adobe with one file like an AAF or XML?
Because every edit system uses its own editing paradigms (think Premiere versus FCP X), creating an open exchange is challenging. However, there is an interesting effort by Pixar (https://github.com/PixarAnimationStudios/OpenTimelineIO) that includes adapters to bridge the structural differences among editing systems. There are also efforts toward standards for effects and color correction. The core editing functionality in LumaFusion is built to allow easy conversion in and out of different formats, so adapting to new standards will not be challenging in most cases.
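As a rough illustration of the kind of interchange described above, here is a minimal OpenTimelineIO sketch in Python. The timeline, track and clip names, the 24fps timing and the output filename are made-up placeholders; the library's bundled adapters (such as fcp_xml and cmx_3600) are what let the same structure be translated for different NLEs.

```python
# Minimal OpenTimelineIO sketch: build a one-clip timeline and save it as a
# neutral .otio interchange file. Names, frame rate and ranges are placeholders.
import opentimelineio as otio

timeline = otio.schema.Timeline(name="open_timeline_demo")
track = otio.schema.Track(name="V1")
timeline.tracks.append(track)

# One 48-frame clip that starts at frame 100 of its source media (24fps).
clip = otio.schema.Clip(
    name="shot_010",
    source_range=otio.opentime.TimeRange(
        start_time=otio.opentime.RationalTime(100, 24),
        duration=otio.opentime.RationalTime(48, 24),
    ),
)
track.append(clip)

# Write the native interchange file; OTIO's adapters (e.g. "fcp_xml",
# "cmx_3600") can then translate the same timeline for specific NLEs.
otio.adapters.write_to_file(timeline, "open_timeline_demo.otio")
```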

With AI becoming a popular idea and term, at what point does it stop? Is there a line where AI won’t go?
Looking at AI strictly as it relates to video editing, we can see that its power is incrementally increasing, and automatically generated movies are getting better. But while a neural network might be able to put together a coherent story, and even mimic a series of edits to match a professional style, it will still be cookie-cutter in nature, rather than being an artistic individual endeavor.

What we understand from our customers — and from our own experience — is that people get profound joy from being the storyteller or the moviemaker. And we understand that automatic editing does not provide the creative/ownership satisfaction that you get from crafting your own movie. You only have to make one automatic movie to learn this fact.

It is also clear that movie viewers feel a lack of connection or even annoyance when watching an automatically generated movie. You get the same feeling when you pay for parking at an automated machine, and the machine says, “Thank you, have a nice day.”

Here is a question from one of our readers: There are many advancements in technology coming in NLEs. Are those updates coming too fast and at an undesirable cost?
It is a constant challenge to maintain quality while improving a product. We use software practices like Agile, engage in usability tests and employ the most robust testing possible to minimize the effects of any changes in LumaFusion.

In the case of LumaFusion, we are consistently adding new features that support more powerful mobile video editing and features that support the growing and changing world around us. In fact, if we stopped developing so rapidly, the app would simply stop working with the latest operating system or wouldn’t be able to deliver solutions for the latest trends and workflows.

To put it all in perspective, I like to remind myself of the amount of effort it took to edit video 20 years ago compared to how much more efficient and fun it is to edit a video now. It gives me reason to forgive the constant changes in technology and software, and reason to embrace new workflows and methodologies.

Will we ever be at a point where an offline/online workflow will be completely gone?
Years ago, the difference in image quality provided a clear separation between offline and online. But today, online is differentiated by the ability to edit with dozens of tracks, specialized workflows, specific codecs, high-end effects and color. Even more importantly, online editing typically uses the specialized skills that a professional editor brings to a project.

Since you can now edit a complex timeline with six tracks of 4K video with audio and another six tracks of audio, basic color correction and multilayered titles straight from an iPad, for many projects you might find it unnecessary to move to an online situation. But there will always be times that you need more advanced features or the skills of a professional editor. Since not everybody wants to understand the complex world of post production, it is our challenge at Luma Touch to make more of these high-end features available without greatly limiting who can successfully use the product.

What are the trends you’re seeing in customer base from high-end post facility vs. independent editor/contractor?
High-end post facilities tend to have stationary workstations that employ skilled editor/operators. The professionals that find LumaFusion to be a valuable tool in their bag are often those who are responsible for the entire production and post production, including independent producers, journalists and high-end professionals who want the flexibility of starting to edit while on location or while traveling.

What are the biggest trends you’ve been seeing in product development?
In general, moving away from lengthy periods of development without user feedback. Moving toward getting feedback from users early and often is an Agile-based practice that really makes a difference in product development and greatly increases the joy that our team gets from developing LumaFusion. There’s nothing more satisfying than talking to real users and responding to their needs.

New development tools, languages and technologies are always welcome. At WWDC this year, Apple announced it would make it easier for third-party developers to port their iOS apps over to the desktop with Project Catalyst. This will likely be a viable option for LumaFusion.

You come from a high-end editing background, with deep experience editing at the workstation level. When you decided to branch off and do something on your own, why did you choose mobile?
Mobile offered a solution to some of the longest running wishes in professional video editing: to be liberated from the confines of an edit suite, to be able to start editing on location, to have a closer relationship to the production of the story in order to avoid the “fix it in post” mentality, and to take your editing suite with you anywhere.

It was only after starting to develop for mobile that we fully understood one of the most appealing benefits. Editing on an iPad or iPhone encourages experimentation, not only because you have your system with you when you have a good idea, but also because you experience a more direct relationship to your media when using the touch interface; it feels more natural and immersive. And experimentation equals creativity. From my own experience I know that the more you edit, the better you get at it. These are benefits that everyone can enjoy whether they are a professional or a novice.

Hecho Studios Editor Grant Lewis

What trends do you see in commercial editing? Good or bad?
Commercials are trending away from traditional, large-budget cinematic pieces to smaller, faster, budget-conscious ones. You’re starting to see it now more and more as big brands shy away from big commercial spectacles and pivot toward a more direct reflection of the culture itself.

Last year's #CODNation work for the latest installment of the Call of Duty franchise exemplifies this by forgoing a traditional live-action cinematic trailer in favor of a larger number of game-capture, meme-like films. This pivot away from more dialogue-driven narrative structures is changing what we think of as a commercial. For better or worse, I see commercial editing leaning more into the fast-paced, campy nature of meme culture.

What is it about commercial editing that attracted you and keeps attracting you?
What excites me most about commercial editing is that it runs the gamut of the editorial genre. Sometimes commercials are a music video; sometimes they are dramatic anthems; other times they are simple comedy sketches. Commercials have the flexibility to exist as a multitude of narrative genres, and that’s what keeps me attracted to commercial editing.

Can you talk about challenges specific to short form versus long form?
The most challenging thing about short-form editing is finding time for breath. In a 30-second piece, where do you find a moment of pause? There’s always so much information being packed into smaller timeframes; the real challenge is editing at a sprint, but still having it feel dynamic and articulate.

How have social media campaigns changed the way you edit, if at all? Can you talk about the variety of deliverables and how that affects things?
All campaigns will either live on social media or have specific social components now. I think the biggest thing that has changed is being tasked with telling a compelling narrative in 10 or even five or six seconds. Now, the 60-second and 90-second anthem film has to be able to work in six seconds as well. It is challenging to boil concepts down to just a few seconds and still maintain a sense of story.

#CODNation

All the deliverable aspect ratios editors are asked to produce now are also a growing challenge. Unless a campaign is shot strictly for social, the DP probably shot for a traditional 16×9 framing. That means the editor is tasked with reframing all social content to work in all the different deliverable formats. This makes the editor act almost as the DP for social in the post process. Shorter deliverables and a multitude of aspect ratios have become another layer of editing and demand a whole new editorial lens through which to view and process the project.
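To make that reframing work concrete, here is a small sketch of the underlying math, assuming a 16×9 UHD master and a few common social delivery ratios; the numbers are illustrative, not any agency's actual delivery spec.

```python
# Sketch of social reframing math: the largest centered crop you can take from
# a 16x9 UHD master for a few common delivery ratios. All values are
# illustrative assumptions.

SRC_W, SRC_H = 3840, 2160  # 16x9 UHD master

targets = {"1:1 feed": (1, 1), "4:5 portrait": (4, 5), "9:16 story": (9, 16)}

for label, (aw, ah) in targets.items():
    if aw / ah < SRC_W / SRC_H:
        crop_h = SRC_H                   # narrower than the master: keep full height
        crop_w = round(SRC_H * aw / ah)  # crop the sides
    else:
        crop_w = SRC_W                   # wider than the master: keep full width
        crop_h = round(SRC_W * ah / aw)  # crop top and bottom
    x_off, y_off = (SRC_W - crop_w) // 2, (SRC_H - crop_h) // 2
    print(f"{label}: {crop_w}x{crop_h} crop at offset ({x_off}, {y_off})")
```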

What system do you edit on and what else other than editing are you asked to supply?
I currently cut in Adobe Premiere Pro. I'm often asked to supply graphics and motion graphics elements for offline cuts as well. That means being comfortable with the whole Adobe suite of tools, including Photoshop and After Effects. From typesetting to motion tracking, editors are now asked to be well-versed in all tangential aspects of editorial.

What projects have you worked on recently?
I cut the launch film for Razer’s new Respawn energy drink. I also cut Toms Shoes’ most recent campaign, “Stand For Tomorrow.”

EditShare Head of Marketing Lee Griffin

What are the biggest trends you’ve been seeing in product development?
We see the need to produce more video content — and produce it faster than ever before — for social media channels. This means producing video in non-broadcast standards/formats and, more specifically, producing square video. To accommodate, editing tools need to offer user-defined options for manipulating size and aspect ratio.

What changes have you seen in terms of the way editors work and use your tools?
There are two distinct changes: One, productions are working with editors regardless of their location. Two, there is a wider level of participation in the content creation process.

In the past, the editor was physically located at the facility and was responsible for assembling, editing and finishing projects. However, with the growing demand for content production, directors and producers need options to tap into a much larger pool of talent, regardless of their location.

EditShare AirFlow and Flow Story enable editors to work remotely from any location. So today, we frequently see editors who use our Flow editorial tools working in different states and even on different continents.

With AI becoming a popular idea and term, at what point does it stop?
I think AI is quite exciting for the industry, and we do see its potential to significantly advance productions. However, AI is still in its infancy with regards to the content creation market. So from our point of view, the road to AI and its limits are yet to be defined. But we do have our own roadmap strategy for AI and will showcase some offerings integrated within our collaborative solutions at IBC 2019.

Will we ever be at a point where an offline/online workflow will be completely gone?
It depends on the production. Offline/online workflows are here to stay in the higher-end production environment. However, for fast turnaround productions, such as news, sports and programs (for example, soap operas and reality TV), there is no need for offline/online workflows.

What are the trends you're seeing in customer base, from high-end post facility vs. independent editor? How is that informing your decisions on products and pricing?
With the increase in the number of productions thanks to OTTs, high-end post facilities are tapping into independent editors more and more to manage the workload. Often the independent editor is remote, requiring the facility to have a media management foundation that can facilitate collaboration beyond the facility walls.

So we are seeing a fundamental shift in how facilities are structuring their media operations to support remote collaborations. The ability to expand and contract — with the same level of security they have within the facility — is paramount in architecting their “next-generation” infrastructure.

What do you see as untapped potential customer bases that didn't exist 10 to 20 years ago, and how do you plan on attracting and nurturing them? What new markets are you seeing?
We are seeing major growth beyond the borders of the media and entertainment industry in many markets. From banks to real estate agencies to insurance companies, video has become one of the main ways for them to communicate to their media-savvy clientele.

While EditShare solutions were initially designed to support traditional broadcast deliverables, we have evolved them to accommodate these new customers. And today, these customers want simplicity coupled with speed. Our development methodology puts this at the forefront of our core products.

Puget Systems Senior Labs Technician Matt Bach

Have there been any talks between NLE providers about an open timeline? Essentially being able to go between Avid, Resolve or Adobe with one file, like an AAF or XML?
I have not heard anything on this topic from any developers, so keep in mind that this is pure conjecture, but the pessimistic side of me doesn’t see an “open timeline” being something that will happen anytime soon.

If you look at what many of the NLE developers are doing, they are moving more and more toward a pipeline that is completely contained within their ecosystem. Adobe has been pushing Dynamic Link in recent years in order to make it easier to move between Premiere Pro and After Effects. Blackmagic is going even a step further by integrating editing, color, VFX and audio all within DaVinci Resolve.

These examples are both great advancements that can really improve your workflow efficiency, but they are being done in order to keep the user within their specific ecosystem. As great as an open timeline would be, it seems to be counter to what Adobe, Blackmagic, and others are actively pursuing. We can still hold out hope, however!

With AI becoming a popular idea and term, at what point does it stop?
There are definitely limitations to what AI is capable of, but that line is moving year by year. For the foreseeable future, AI is going to take on a lot of the tedious tasks like tagging of footage, content-aware fill, shot matching, image enhancement and other similar tasks. These are all perfect use cases for artificial intelligence, and many (like content-aware fill) are already being implemented in the software we have available right now.

The creative side is where AI is going to take the longest time to become useful. I’m not sure if there is a point where AI will stop from a technical standpoint, but I personally believe that even if AI was perfect, there is value in the fact that an actual person made something. That may mean that the masses of videos that get published will be made by AI (or perhaps simply AI-assisted), but just like furniture, food, or even workstations, there will always be a market for high-quality items crafted by human hands.

I think the main thing to keep in mind with AI is that it is just a tool. Moving from black and white to color, or from film to digital, was something that, at the time, people thought was going to destroy the industry. In reality, however, those shifts ended up being a huge boon. Yes, AI will change how some jobs are approached — and may even eliminate some job roles entirely — but in the end, a computer is never going to be as creative and inventive as a real person.

There are many advancements in technology coming to NLEs seemingly daily. Are those updates coming too fast and at an undesirable cost?
I agree that this is a problem right now, but it isn’t limited to just NLEs. We see the same thing all the time in other industries, and it even occurs on the hardware side where a new product will be launched simply because they could, not because there is an actual need for it.

The best thing you can do as an end-user is to provide feedback to the companies about what you actually want. Don’t just sit on those bugs, report them! Want a feature? Most companies have a feature request forum that you can post on.

In the end, these companies are doing what they believe will bring them the most users. If they think a flashy new feature will do it, that is what they will spend money on. But if they see a demand for less flashy, but more useful, improvements, they will make that a priority.

Will we ever be at a point where an offline/online workflow will be completely gone?
Unless we hit some point where camera technology stops advancing, I don’t think offline editing is ever going to fully go away. It is amazing what modern workstations can handle from a pure processing standpoint, but even if the systems themselves could handle online editing, you also need to have the storage infrastructure that can keep up. With the move from HD to 4K, and now to 8K, that is a lot of moving parts that need to come together in order to eliminate offline editing entirely.
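As a back-of-the-envelope illustration of that storage math, here is a quick sketch of the sustained data rates uncompressed video would demand at each resolution jump. The bit depth, chroma subsampling and frame rate are assumptions chosen for the example; real acquisition codecs compress well below these figures.

```python
# Rough data-rate math for uncompressed 10-bit 4:2:2 video at 24fps.
# Illustrative assumptions only; actual camera codecs are compressed.

BITS_PER_PIXEL = 20   # 10-bit 4:2:2 averages about 20 bits per pixel
FPS = 24

formats = {
    "HD (1920x1080)": (1920, 1080),
    "4K UHD (3840x2160)": (3840, 2160),
    "8K UHD (7680x4320)": (7680, 4320),
}

for name, (w, h) in formats.items():
    mb_per_sec = w * h * BITS_PER_PIXEL * FPS / 8 / 1e6  # megabytes per second
    print(f"{name}: ~{mb_per_sec:,.0f} MB/s sustained")
```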

With that said, I do feel like offline editing is going to be used less and less. We are starting to hit the point that people feel their footage is higher quality than they need without having to be on the bleeding edge. We can edit 4K ProRes or even Red RAW footage pretty easily with the technology that is currently available, and for most people that is more than enough for what they are going to need for the foreseeable future.

What trends are you seeing in your customer base, from high-end post facilities to independent editors, and how is that informing your decisions on products and pricing?
From a workstation side, there really is not too much of a difference beyond the fact that high-end post facilities tend to have larger budgets that allow them to get higher-end machines. Technology is becoming so accessible that even hobbyist YouTubers often end up getting workstations from us that are very similar to what high-end professionals use.

The biggest differences typically revolve not around the pure power or performance of the system itself, but rather how it interfaces with the other tools the editor is using. Things like whether the system has 10GbE (or fiber) networking, or whether they need a video monitoring card in order to connect to a color-calibrated display, are often what set them apart.

What are the biggest trends you’ve been seeing in product development?
In general, the two big things that have come up over and over in recent years are GPU acceleration and artificial intelligence. GPU acceleration is a pretty straightforward advancement that lets software developers get a lot more performance out of a system for work like color correction, noise reduction and other jobs that are very well suited to running on a GPU.
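
To make that point concrete, here is a toy sketch (my own illustration, not any vendor’s actual implementation) of a lift/gamma/gain grade applied per pixel. Every pixel gets the same math with no dependency on its neighbors, which is exactly the kind of data-parallel work that maps well onto thousands of GPU cores.

```python
import numpy as np

# Toy lift/gamma/gain grade applied to every pixel independently. Each pixel's
# math is identical and has no dependency on its neighbors -- the data-parallel
# pattern that GPU-accelerated color tools exploit across thousands of cores.
def lift_gamma_gain(image, lift=0.0, gamma=1.0, gain=1.0):
    """image: float array in [0, 1] with shape (height, width, 3)."""
    graded = (image * gain + lift) ** (1.0 / gamma)
    return np.clip(graded, 0.0, 1.0)

frame = np.random.rand(1080, 1920, 3)                  # a synthetic HD frame
graded = lift_gamma_gain(frame, lift=0.02, gamma=1.1, gain=1.05)
print(graded.shape, graded.min(), graded.max())
```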

Artificial intelligence is a completely different beast. We do quite a bit of work with people who are at the forefront of AI and machine learning, and it is going to have a large impact on post production in the near future. It has been a topic at conferences like NAB for several years, but with platforms like Adobe Sensei starting to take off, it is going to become more important.

However, I do feel that AI is going to be more of an enabling technology rather than one that replaces jobs. Yes, people are using AI to do crazy things like cut trailers without any human input, but I don’t think that is going to be its primary use anytime in the near future. Things like assisting with shot matching, tagging footage, noise reduction and image enhancement are where it is going to be truly useful.

What do you see as untapped potential customer bases that didn’t exist 10-20 years ago, and how do you plan on attracting and nurturing them? What new markets are you seeing?
I don’t know if there are any customer bases that are completely untapped, but I do believe that there is going to be more overlap between industries in the next few years. One example is how much realtime raytracing has improved recently, which is spurring the use of video game engines in film. This has been done for previsualization for quite a while, but the quality is getting so good that there are some films already out that include footage straight from the game engine.

For us on the workstation side, we regularly work with customers doing post and customers who are game developers, so we already have the skills and technical knowledge to make this work. The biggest challenge is really on the communication side. Both groups have their own set of jargon and general language, so we often find ourselves having to be the “translator” when a post house is looking at integrating realtime visualization in their workflow.

This exact scenario is likely to play out with VR/AR as well.

Lucky Post Editor Marc Stone

What trends do you see in commercial editing?
I’m seeing an increase in client awareness of the mobility of editing. It’s freeing knowing you can take the craft with you as needed, and for clients, it can save the ever-precious commodity of time. Mobility means we can be an even greater resource to our clients with a flexible approach.

I love editing at Lucky Post, but I’m happy to edit anywhere I am needed — be it on set or on location. I especially welcome it if it means face-to-face interaction with the agency team or the project’s director.

What is it about commercial editing that attracted you and keeps attracting you?
The fact that I can work on many projects throughout the year, with a variety of genres, is really appealing. Cars, comedy, emotional PSAs — each has a unique creative challenge, and I welcome the opportunity to experience different styles and creative teams. I also love putting visuals together with music, and that’s a big part of what I do in a 30- or 60-second spot… or even in a two-minute branded piece. That just wouldn’t be possible, to the same extent, in features or television.

Can you talk about challenges specific to short-form editing?
The biggest challenge is telling a story in 30 seconds. To communicate emotion and a sense of character and get people to care, all within a very short period of time. People outside of our industry are often surprised to hear that editors take hours and hours of footage and hone it down to a minute or less. The key is to make each moment count and to help make the piece something special.

Ram’s The Promise spot

How have social media campaigns changed the way you edit, if at all?
It hasn’t changed the way I edit, but it does allow some flexibility. Length isn’t constrained in the same way as broadcast, and you can conceive of things in a different way in part because of the engagement approach and goals. Social campaigns allow agencies to be more experimental with ideas, which can lead to some bold and exciting projects.

What system do you edit on, and what else other than editing are you asked to supply?
For years I worked on Avid Media Composer, and at Lucky Post I work in Adobe Premiere. As part of my editing process, I often weave sound design and music into the offline so I can feel if the edit is truly working. What I also like to do, when the opportunity presents, is to be able to meet with the agency creatives before the shoot to discuss style and mood ahead of time.

What projects have you worked on recently?
Over the last six months, I have worked on projects for Tazo, Ram and GameStop, and I am about to start a PSA for the Salvation Army. It gets back to the variety I spoke about earlier and the opportunity to work on interesting projects with great people.

Billboard Video Post Supervisor/Editor Zack Wolder

What trends do you see in editing? Good or bad.
I’m noticing a lot of glitch transitions and RGB splits being used. Much flashier edits, probably for social content to quickly grab the viewer’s attention.

Can you talk about challenges specific to short-form editing versus long-form?
With short-form editing, the main goal is to squeeze the most useful information into a short period of time without overloading the viewer. How do you fit an hour-long conversation into a three-minute clip while hitting all the important talking points? With long-form editing, the goal is to keep viewers’ attention over a long period of time while always surprising them with new and exciting info.

What is it about editing that attracted you and keeps attracting you?
I loved the fact that I could manipulate time. That hooked me right away. The fact that I could take a moment that lasts only a few seconds and drag it out for a few minutes was incredible.

Can you talk about the variety of deliverables for social media and how that affects things?
Social media formats have made me think differently about framing a shot or designing logos. Almost all the videos I create start in the standard 16×9 framing but will eventually be delivered as a vertical. All graphics and transitions I build need to easily work in a vertical frame. Working in a 4K space and shooting in 4K helps tremendously.
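
For a rough sense of what that vertical delivery step looks like mechanically, here is a minimal sketch that center-crops a 16×9 UHD master down to 9×16 with ffmpeg (assumed to be installed; the filenames are hypothetical). A real vertical version would be reframed shot by shot and have its graphics rebuilt, as Wolder describes, rather than blindly center-cropped.

```python
import subprocess

# Center-crop a 16x9 UHD master to a 9x16 vertical deliverable with ffmpeg
# (assumed to be installed; the filenames here are hypothetical).
cmd = [
    "ffmpeg", "-i", "master_3840x2160.mov",
    # crop a window that is ih*9/16 wide and full height (centered by default),
    # then scale it to a standard 1080x1920 vertical frame
    "-vf", "crop=ih*9/16:ih,scale=1080:1920",
    "-c:a", "copy",              # keep the original audio untouched
    "vertical_1080x1920.mov",
]
subprocess.run(cmd, check=True)
```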

Rainn Wilson and Billie Eilish

What system do you edit on, and what else other than editing are you asked to supply?
I edit in Adobe Premiere Pro. I’m constantly asked to supply design ideas and mockups for logos and branding and then to animate those ideas.

What projects have you worked on recently?
Recently, I edited a video that featured Rainn Wilson — who played Dwight Schrute on The Office — quizzing singer Billie Eilish, who is a big-time fan of the show.

Main Image: AlphaDogs editor Herrianne Catolos


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Arvato to launch VPMS MediaEditor NLE at NAB

First seen as a technology preview at IBC 2018, Arvato’s MediaEditor is a browser-based desktop editor aimed at journalistic editing and content preparation workflows. MediaEditor projects can be easily exported and published in various formats, including square and vertical video, or can be opened in Adobe Premiere with VPMS EditMate for craft editing.

MediaEditor, which features a familiar editing interface, offers simple drag-and-drop transitions and effects, as well as basic color correction. Users can also record voiceovers directly into a sequence, and the system enables automatic mixing of audio tracks for quicker turnaround. Arvato will add motion graphics for captioning and pre-generated graphics in an upcoming version of MediaEditor.

MediaEditor is a part of Arvato Systems’ Video Production Management Suite (VPMS) enterprise MAM solution. Like other products in the suite, it can be independently deployed and scaled, or combined with other products for workflows across the media enterprise. MediaEditor can also be used with Vidispine-based systems, and VPMS and Vidispine clients can access their material through MediaEditor whether on-premises or via the cloud. MediaEditor takes advantage of advanced VPMS streaming technology, allowing users to work anywhere with high-quality, responsive video playback, even on lower-speed connections.

InSync intros frame rate converter plug-in for Mac-based Premiere users

InSync Technology’s FrameFormer motion-compensated frame rate converter is now available as a plug-in for Adobe Premiere Pro users working on Macs. Simplifying and accelerating deployment through automated settings, FrameFormer provides conversion for all types of content, from sub-QCIF up to 8K and beyond.

“Frame rate conversion is an essential requirement for monetizing content domestically and internationally, as well as for integrating mixed frame rate footage into a production,” says Paola Hobson, managing director of InSync Technology. “A high-quality motion compensated standards converter is the only solution for these applications, and we’re adding to our solutions for Mac users with our new FrameFormer plug-in for Adobe Premiere Pro for macOS.”

The FrameFormer Adobe Premiere Pro Mac plug-in complements InSync’s plug-ins for Final Cut Pro (Mac) and Adobe Premiere Pro (Windows), quickly and conveniently meeting any frame rate and format conversion requirements. Integrated seamlessly into Adobe Premiere Pro, the plug-in offers a simple user interface that allows users to select the required conversion and to preview in-progress results via on-screen thumbnails.

“In repurposing different frame rate material for integration into your media projects, attention to detail makes all the difference,” added Hobson. “Picture quality must be preserved at every step because even the smallest error introduced early in the process will propagate, resulting in highly visible defects down the line. Now our family of FrameFormer plug-ins gives Adobe Premiere Pro users working on both Mac and Windows systems confidence in the results of their frame rate conversion processes.”

FrameFormer is available in a standard edition that provides conversions for content up to HD resolution, with presets for common conversions, and in a professional edition that provides conversions for content up to UHD and beyond.

Review: Boris FX’s Continuum and Mocha Pro 2019

By Brady Betzel

I realize I might sound like a broken record, but if you are looking for the best plugin to help with object removals or masking, you should seriously consider the Mocha Pro plugin. And if you work inside of Avid Media Composer, you should also seriously consider Boris Continuum and/or Sapphire, which can use the power of Mocha.

As an online editor, I consistently use Continuum along with Mocha for tight blur and mask tracking. If you use After Effects, there is even a whittled-down version of Mocha built in for free. For those pros who don’t want to deal with Mocha inside of an app, it also comes as a standalone software solution where you can copy and paste tracking data between apps or even export the masks, object removals or insertions as self-contained files.

The latest releases of Continuum and Mocha Pro 2019 continue the evolution of Boris FX’s role in post production image restoration, keying and general VFX plugins, at least inside of NLEs like Media Composer and Adobe Premiere.

Mocha Pro

As an online editor, I am always calling on Continuum for its great Chroma Key Studio, Flicker Fixer and blurring. Because Mocha is built into Continuum, I am able to quickly track (backwards and forwards) difficult shapes and even erase objects in ways that the built-in Media Composer tools simply can’t. But if you are lucky enough to own Mocha Pro, you also get access to some amazing tools that go beyond planar tracking — such as automated object removal, object insertion, stabilizing and much more.

Boris FX’s latest updates to Boris Continuum and Mocha Pro go even further than what I’ve already mentioned and have resulted in a new version-naming scheme; this round we are at 2019 (think of it as version 12). They have also created the new Application Manager, which makes it a little easier to find the latest downloads. You can find them here. This really helps when you are jumping between machines and need to quickly activate and deactivate licenses.

Boris Continuum 2019
I often get offline edits with effects from a variety of plugins — lens flares, random edits, light flashes, whip transitions and many more — so I need Continuum to be compatible with offline clients. I also need to use it for image repair and compositing.

In this latest version of Continuum, Boris FX has not only kept plugins like Primatte Studio, but has also brought back Particle Illusion and updated Mocha and Title Studio. Overall, Continuum and Mocha Pro 2019 feel a lot snappier when applying and rendering effects, probably because of the overall GPU-acceleration improvements.

Particle Illusion has been brought back from the brink of death in Continuum 2019 as a 64-bit, keyframe-able particle emitter system that can even be tracked and masked with Mocha. In this revamp of Particle Illusion there is an updated interface, realtime GPU-based particle generation, an expanded and improved emitter library (complete with motion-blur-enabled particle systems) and even a standalone app that can design systems to be used in the host app — you cannot render systems inside of the standalone app.

While Particle Illusion is a part of the entire Continuum toolset that works with OFX apps like Blackmagic’s DaVinci Resolve, Media Composer, After Effects and Premiere, it seems to work best in applications like After Effects, which can handle composites simply and naturally. Inside the Particle Illusion interface you can find all of the pre-built emitters. If you only have a handful, make sure you download the additional emitters, which you can find in the Boris FX App Manager.

Particle Illusion: Before and After

I had a hard time seeing my footage in a Media Composer timeline inside of Particle Illusion, but I could still pick my emitter, change specs like life and opacity, exit out and apply it to my footage. I used Mocha to track some fire from Particle Illusion to a dumpster I had filmed. Once I dialed in the emitter, I launched Mocha and tracked the dumpster.

The first time I went into Mocha I didn’t see the preset tracks for the emitter or the world in which the emitter lives. The second time I launched Mocha, I saw track points. From there you can track the area where you want your emitter to be placed. Once you are done and happy with your track, jump back to your timeline, where it should be reflected. In Media Composer I noticed that I had to go to the Mocha options and change the setting from Mocha Shape to no shape. Essentially, the Mocha shape will act like a matte and cut off anything outside the matte.

If you are inside of After Effects, most parameters can now be keyframed and parented (aka pick-whipped) natively in the timeline. The Particle Illusion plugin is a quick, easy and good-looking tool to add sparks, Milky Way-like star trails or even fireworks to any scene. Check out @SurfacedStudio’s tutorial on Particle Illusion to get a good sense of how it works in Adobe Premiere Pro.

Continuum Title Studio
When inside of Media Composer (prior to the latest release, 2018.12), there were very few ways to create titles at a higher resolution than HD (1920×1080) — the New Blue Titler was the only other option if you wanted to stay within Media Composer.

Title Studio within Media Composer

At first, the Continuum Title Studio interface appeared to be a mildly updated Boris Red interface — and I am allergic to the Boris Red interface. Some of the icons for keyframing and the way properties are adjusted look similar, and that threw me off. I tried really hard to jump into Title Studio and love it, but I never got comfortable with it.

On the flip side, there are hundreds of presets that could help build quick titles that render a lot faster than New Blue Titler did. In some of the presets I noticed the text was placed outside of 16×9 Title Safety, which is odd since that is kind of a long-standing rule in television. In the author’s defense, they are within Action Safety, but still.

If you need a quick way to make 4K titles, Title Studio might be what you want. The updated Title Studio includes realtime playback using the GPU instead of the CPU, new materials, new shaders and external monitoring support using Blackmagic hardware (AJA support will be coming at some point). There are some great presets, including pre-built slates, lower thirds, kinetic text and even progress bars.

If you don’t have Mocha Pro, Continuum can still access and use Mocha to track shapes and masks. Almost every plugin can access Mocha and can track objects quickly and easily.
That brings me to the newly updated Mocha, which has some extremely helpful new features, including a Magnetic Spline tool, prebuilt geometric shapes and more.

Mocha Pro 2019
If you loved the previous version of Mocha, you are really going to love Mocha Pro 2019. Not only do you get the Magnetic Spline tool, pre-built geometric shapes, the Essentials interface and high-resolution display support, but Boris FX has rewritten the Remove Module code to use GPU video hardware. This increases render speeds about four to five times. In addition, there is no longer a separate Mocha VR software suite. All of the VR tools are included inside of Mocha Pro 2019.

If you are unfamiliar with what Mocha is, then I have a treat for you. Mocha is a standalone planar tracking app as well as a native plugin that works with Media Composer, Premiere and After Effects, or through OFX in Blackmagic’s Fusion, Foundry’s Nuke, Vegas Pro and Hitfilm.

Mocha tracking

In addition (and unofficially) it will work with Blackmagic DaVinci Resolve by way of importing the Mocha masks through Fusion. While I prefer to use After Effects for my work, importing Mocha masks is relatively painless. You can watch colorist Dan Harvey run through the process of importing Mocha masks to Resolve through Fusion, here.

But really, Mocha is a planar tracker, which means it tracks multiple points in a defined area. It works best on flat surfaces, or at least segmented surfaces — like the side of a face, ear, nose, mouth and forehead tracked separately instead of all at once. From blurs to mattes, Mocha tracks objects like glue and can be a great asset for an online editor or colorist.
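
If you are curious what planar tracking means under the hood, here is a minimal sketch of the general idea using OpenCV. It is my own illustration of the concept, not Boris FX’s actual algorithm, and the frame files and region coordinates are hypothetical.

```python
import cv2
import numpy as np

# Follow features inside a region on a roughly flat surface from one frame to
# the next, then solve for the single perspective transform (homography) that
# best explains their motion.
prev = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_0002.png", cv2.IMREAD_GRAYSCALE)

mask = np.zeros_like(prev)
mask[400:700, 900:1400] = 255        # region covering the planar surface

pts_prev = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01,
                                   minDistance=7, mask=mask)
pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts_prev, None)

good_prev = pts_prev[status.flatten() == 1]
good_curr = pts_curr[status.flatten() == 1]

# One 3x3 matrix describes how the whole plane moved (translation, scale,
# rotation, shear and perspective at once), so a mask or insert warped by it
# "sticks" to the surface.
H, _ = cv2.findHomography(good_prev, good_curr, cv2.RANSAC, 3.0)
print(H)
```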

If you have read any of my plugin reviews you probably are sick of me spouting off about Mocha, saying how it is probably the best plugin ever made. But really, it is amazing — especially when incorporated with plugins like Continuum and Sapphire. Also, thanks to the latest Media Composer with Symphony option you can incorporate the new Color Correction shapes with Mocha Pro to increase the effectiveness of your secondary color corrections.

Mocha Pro Remove module

So how fast is Mocha Pro 2019’s Remove Module these days? Well, it used to be a very slow process, taking lots of time to calculate an object’s removal. With the latest Mocha Pro 2019 release, including improved GPU support, the render time has been cut down tremendously. In my estimation, I would say three to four times the speed (and that’s on the safe side). In Mocha Pro 2019, removal jobs that take under 30 seconds would have taken four to five minutes in previous versions. It’s quite a big improvement in render times.

There are a few changes in the new Mocha Pro, including interface changes and some amazing tool additions. There is a new drop-down tab that offers different workflow views once you are inside of Mocha: Essentials, Classic, Big Picture and Roto. I really wish the Essentials view was out when I first started using Mocha, because it gives you the basic tools you need to get a roto job done and nothing more.

For instance, just giving access to the track motion objects (Translation, Scale, Rotate, Skew and Perspective) with big shiny buttons helps eliminate my need to watch YouTube videos on how to navigate the Mocha interface. However, if, like me, you are more than just a beginner, the Classic interface is still available and is the one I reach for most often — it’s literally the old interface. Big Picture hides the tools and gives you the most screen real estate for your roto work. My favorite after Classic is Roto. The Roto interface shows just the project window and the classic top toolbar. It’s the best of both worlds.

Mocha Pro 2019 Essentials Interface

Beyond the interface changes are some additional tools that will speed up any roto work. This has been one of the longest-running user requests. I imagine the most requested feature Boris FX gets for Mocha is the addition of basic shapes, such as rectangles and circles. In my work, I am often drawing rectangles around license plates or circles around faces with X-splines, so why not eliminate a few clicks and have that done already? Answering my need, Mocha now has elliptical and rectangular shapes ready to go in both X-splines and B-splines with one click.

I use Continuum and Mocha hand in hand. Inside of Media Composer I will use tools like Gaussian Blur or Remover, which typically need tracking and roto shapes created. Once I apply the Continuum effect, I launch Mocha from the Effect Editor and bam, I am inside Mocha. From here I track the objects I want to affect, as well as any objects I don’t want to affect (think of it like an erase track).

Summing Up
I can save tons of time and also improve the effectiveness of my work exponentially when working in Continuum 2019 and Mocha Pro 2019. It’s amazing how much more intuitive Mocha is to track with instead of the built-in Media Composer and Symphony trackers.

In the end, I can’t say enough great things about Continuum and especially Mocha Pro. Mocha saves me tons of time in my VFX and image restoration work. From removing camera people behind the main cast in the wilderness to blurring faces and license plates, using Mocha in tandem with Continuum is a match made in post production heaven.

Rendering in Continuum and Mocha Pro 2019 is a lot faster than in previous versions, really giving me a leg up on efficiency. Time is money, right? On top of that, using Mocha Pro’s Remove Module and other tools takes my image restoration work to the next level, separating me from other online editors who use standard paint and tracking tools.

In Continuum, Primatte Studio gives me a leg up on greenscreen keys with its exceptional ability to auto-analyze a scene and perform 80% of the keying work before I dial in the details. Whenever anyone asks me what tools I couldn’t live without, I without a doubt always say Mocha.
If you want a real Mocha Pro education you need to watch all of Mary Poplin’s tutorials. You can find them on YouTube. Check out this one on how to track and replace a logo using Mocha Pro 2019 in Adobe After Effects. You can also find great videos at Borisfx.com.

Mocha point parameter tracking

I always feel like there are tons of tools inside of the Mocha Pro toolset that go unused simply because I don’t know about them. One I recently learned about in a Surfaced Studio tutorial was the Quick Stabilize function. It essentially stabilizes the video around the object you are tracking, allowing you to more easily rotoscope your object while it sits still instead of moving all over the screen. It’s an amazing feature that I just didn’t know about.

As I was finishing up this review I saw that Boris FX came out with a training series, which I will be checking out. One thing I always wanted was a top-down set of tutorials like the ones on Mocha’s YouTube page but organized and sent along with practical footage to practice with.

You can check out Curious Turtle’s “More Than The Essentials: Mocha in After Effects” on their website where I found more Mocha training. There is even a great search parameter called Getting Started on BorisFX.com. Definitely check them out. You can never learn enough Mocha!


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Review: Picture Instruments’ plugin and app, Color Cone 2

By Brady Betzel

There are a lot of different ways to color correct an image. Typically, colorists will start by adjusting contrast and saturation, followed by adjusting the lift, gamma and gain (a.k.a. shadows, midtones and highlights). For video, waveforms and vectorscopes are great ways of measuring color values and are about the only way to get objective measurements of the colors you are manipulating.

Whether you are in Blackmagic Resolve, Avid Media Composer, Adobe Premiere Pro, Apple FCP X or any other nonlinear editor or color correction app, you usually have similar color correction tools across apps — whether you color based on curves, wheels, sliders or even interactively on screen. So when I heard about the way that Picture Instruments Color Cone 2 color corrects — via a Cone (or really a bicone) — I was immediately intrigued.

Color Cone 2 is a standalone app but also, more importantly, a plugin for Adobe After Effects, Adobe Premiere Pro and FCP X. In this review I am focusing on the Premiere Pro plugin, but keep in mind that the standalone version works on still images and allows you to export 3dl or cube LUTs — a great way for a client to see what type of result you can get quickly from just a still image.

When used as a plugin for Adobe Premiere, Color Cone 2 is purely a color corrector. There are no contrast and saturation adjustments, just the ability to select a color and transform it. For instance, you can select a blue sky and adjust the hue, chrominance (saturation) and/or luminance of the resulting color inside of the Color Cone plugin.

To get started you apply the Color Cone 2 plugin to your clip — the plugin is located under Picture Instruments in the Effects tab. Then you click the little square icon in the effect editor panel to open up the Color Cone 2 interface. The interface contains the bicone image representation of the color correction, presets to set up a split-tone color map or a three-point color correct, and the radius slider to adjust the effect your correction has on surrounding color.

Once you are set on a look you can jump out of the Color Cone interface and back into the effect editor inside of Premiere. There you can keyframe all of the parameters you adjusted in the Color Cone interface. This allows for a nice and easy way to transition from no color correction to color correction.

The Cone
The Cone itself is the most interesting part of this plugin. Think of the bicone as the 3D side view of a vectorscope. In other words, if the vectorscope view from a traditional scope is the top view, the bicone in Color Cone would be a side view. Moving your target color from the top cone to the bottom cone will adjust your lightness to darkness (or luminance). The intersection of the cones is the saturation (or chrominance) plane, and moving from the center outward increases saturation. When a color is selected using the eye dropper you will see a square, which represents the source color selection, a circle representing the target color, and an “x” with a line for reference on the middle section.

Additionally, there is a black circle on the saturation section in the middle that shows the boundaries of how far you can push your chrominance. There is a light circle that represents the radius of how surrounding colors are affected. Each video clip can have effects layered on it, and one instance of the plugin can handle five colors. If you need more than five, you can add another instance of the plugin to the same clip.
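
For readers who want a concrete feel for the bicone idea, here is a small sketch that maps an RGB value onto hue, chroma and luma coordinates of an HSL-style double cone. This is my own illustration of the geometry, not Picture Instruments’ actual color model.

```python
import colorsys

def bicone_coordinates(r, g, b):
    """Map an RGB value (0-1 floats) to rough bicone coordinates: hue is the
    angle around the cone, chroma the distance from the central axis, and
    luma the height between the black tip (bottom) and the white tip (top)."""
    h, l, _ = colorsys.rgb_to_hls(r, g, b)
    chroma = max(r, g, b) - min(r, g, b)   # zero on the axis, largest at the equator
    return h * 360.0, chroma, l

print(bicone_coordinates(0.2, 0.4, 0.8))   # a blue: hue around 220 degrees
```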

If you are looking to export 3dl and Cube LUTs of your work you will need to use the standalone Color Cone 2 app. The one caveat to using the standalone app is that you can only apply color to still images. Once you do that you can export the LUT to be used in any modern NLE/color correction app.
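
As a rough picture of what an exported cube LUT actually contains, here is a minimal sketch that loads a .cube file and looks up a color in it. It ignores optional keywords like DOMAIN_MIN/MAX and uses a nearest-neighbor lookup, whereas real apps interpolate between table entries; the file path is hypothetical.

```python
import numpy as np

def load_cube_lut(path):
    """Parse a .cube 3D LUT: a 'LUT_3D_SIZE N' line followed by N*N*N rows of
    'r g b' values, with the red index varying fastest."""
    size, rows = None, []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.upper().startswith("LUT_3D_SIZE"):
                size = int(line.split()[1])
            elif line and (line[0].isdigit() or line[0] in "+-."):
                rows.append([float(v) for v in line.split()])
    return size, np.array(rows).reshape(size, size, size, 3)   # indexed [b][g][r]

def apply_lut_nearest(rgb, size, table):
    """Nearest-neighbor lookup for one float RGB triple in [0, 1]; real apps
    interpolate (trilinear or tetrahedral) between entries for smooth results."""
    r, g, b = (int(round(c * (size - 1))) for c in rgb)
    return table[b, g, r]

size, table = load_cube_lut("client_look.cube")        # hypothetical exported LUT
print(apply_lut_nearest((0.5, 0.4, 0.3), size, table))
```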

Summing Up
To be honest, working in Color Cone 2 was a little weird for me. It’s not your usual color correction workflow, so I would need to sit with the plugin for a while to get used to its setup. That being said, it has some interesting components that I wish other color correction apps would use, such as the Cone view. The bicone is a phenomenal way to visualize color correction in realtime.

In my opinion, if Picture Instruments would sell just the Cone as a color measurement tool to work in conjunction with Lumetri, they would have another solid tool. Color Cone 2 has a unique and interesting way to color correct in Premiere that acts as an advanced secondary color correction tool alongside the Lumetri color tools.

The Color Cone 2 standalone app and plugin costs $139 when purchased together, or $88 individually. In my opinion, video people should probably just stick to the plugin version. Check out Picture Instrument’s website for more info on Color Cone 2 as well as their other products. And check them out on Twitter @Pic_instruments.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Review: Puget Systems Genesis I custom workstation

By Brady Betzel

With so many companies building custom Windows-based PCs these days, what really makes for a great build? What would make me want to pay someone to build me a PC versus building it myself? In this review, I will be going through a custom-built PC sent to me by Puget Systems. In my opinion, besides the physical components, Puget Systems is the cream of the crop of custom-built PCs. Over the next few paragraphs I will focus on how Puget Systems identified the right custom-built PC solution for me (specifically for post), what my experience was like before, during and after receiving the system and, finally, the specs and benchmarks of the system itself.

While quality components are definitely a high priority when building a new workstation, the big thing that sets Puget Systems apart from the rest of the custom-built PC pack is the personal and highly thorough support. I usually don’t get the full customer experience when reviewing custom builds. Typically, I am sent a workstation and maybe a one-sheet to accompany the system. To Puget Systems’ credit, they went from top to tail when helping me put together the system I would test. Not only did I receive a completely newly built and tested system, but I talked to a customer service rep, Jeff Stubbers, who followed up with me along the way.

First, I spoke with Jeff over the phone. We talked about my price range and what I was looking to do with the system. I usually get told what I should buy — by the way, I am not a person who likes to be told what I want. I have a lot of experience not only working on high-end workstations but also building and supporting them, essentially my entire life. I actively research the latest and greatest technology. Jeff from Puget Systems definitely took the correct approach; he started by asking which apps I use and how I use them. When using After Effects, am I doing more 3D work or simple lower thirds and titles? Do I use, and do I plan to continue using, Avid Media Composer, Adobe Premiere Pro or Blackmagic’s DaVinci Resolve the most?

Essentially, my answers were that I use After Effects sparingly, but I do use it. I use Avid Media Composer professionally more than Premiere, but I see more and more Premiere projects coming my way. However, I think Resolve is the future, so I would love to tailor my system toward that. Oh and I dabble in Maxon Cinema 4D as well. So in theory, I need a system that does everything, which is kind of a tall order.

I told Jeff that I would love to stay below $10,000 but needed the system to last a few years. Essentially, I was taking the angle of a freelance editor/colorist buying an above-mid-range system. After we configured the system, Jeff went on to detail the benchmarks that Puget Systems performs on an ongoing basis, why two GTX 1080 Ti cards are going to benefit me instead of just one, and why an Intel i9 processor would specifically benefit my work in Resolve.

After we finished on the phone, I received an email from Jeff that contained a link to a webpage that would continually update me on how my workstation was being built — complete with pictures of my actual system. There are also links to some very interesting articles and benchmarks on the Puget Systems website. They perform more pertinent benchmarks for post production pros than I have seen from any other company. Usually you see a few generic Premiere or Resolve benchmarks, but nothing like Puget Systems’. Even if you don’t buy a system from them, you should read their benchmarks.

While my system went through the build and ship process, I saw pictures and comments about who did what in the process over at Puget Systems. Beth was my installer. She finished and sent the system to Kyle, who ran benchmarks. Kyle then sent it to Josh for quality control. Josh discovered the second GTX 1080 Ti was installed in a reduced-bandwidth PCIe slot, so the system was sent back to Beth for correction. I love seeing this transparency! It not only gives me the feeling that Puget Systems is telling me the truth, but that they have nothing to hide. This really goes a long way with me. Once my system was run through a second quality control pass, it was shipped to me in four days. From start to finish, I received my system in 12 days. Not a short amount of time, but for what Puget Systems put the system through, it was worth it.

Opening the Box
I received the Genesis I workstation in a double box. A nice large box with sturdy foam corners encasing the Fractal Design case box. There was also an accessories box. Within the accessories box were a few cables and an awesome three-ring binder filled with details of my system, the same pictures of my system, including thermal imaging pictures from the website, all of the benchmarks performed on my system (real-world benchmarks like Cinebench and even processing in Adobe Premiere) and a recovery USB 3.0 drive. Something I really appreciated was that I wasn’t given all of the third-party manuals and cables I didn’t need, only what I needed. I’ve received other custom-built PCs where the company just threw all of the manuals and cables into a Ziploc and called it a day.

I immediately hooked the system up and turned it on… it was silent. Incredibly silent. The Fractal Design Define R5 Titanium case was lined with a sound-deadening material that took whatever little sound was there and made it zero.

Here are the specs of the Puget Systems Genesis I I was sent:
– Gigabyte X299 Designare EX motherboard
– Intel Core i9 7940X 3.1GHz 14 Core 19.25MB 165W CPU
– Eight Crucial DDR4-2666 16GB RAM
– EVGA GeForce GTX 1080 TI 11GB gaming video card
– Onboard sound card
– Integrated WiFi+Bluetooth networking
– Samsung 860 Pro 512GB SATA3 2.5-inch SSD hard drive — primary drive
– Samsung 970 Pro 1TB M.2 SSD hard drive — secondary drive.
– Asus 24x DVD-RW SATA (Black) CD / DVD-ROM
– Fractal Design Define R5 titanium case
– EVGA SuperNova 1200W P2 power supply
– Noctua NH-U12DX i4 CPU cooling
– Arctic Cooling MX-2 thermal compound
– Windows 10 Pro 64-bit operating system
– Warranty: Lifetime labor and tech support, one-year parts warranty
– LibreOffice software: courtesy install
– Chrome software: courtesy install
– Adobe Creative Cloud Desktop App software: courtesy Install
– Resolve 1-3 GPU

System subtotal: $8,358.38. The price is right, in my opinion, and combined with the support and build detail, it’s a bargain.

System Performance
I ran some system benchmarks and tests that I find helpful as a video editor and colorist who uses plugins and other tools on a daily basis. I am becoming a big fan of Resolve, so I knew I needed to test this system inside of Blackmagic’s Resolve 15. I used a similar sequence between Adobe Premiere and Resolve 15: a 10-minute, 23.98fps, UHD/3840×2160 sequence with mixed format footage from 4K and 8K Red, ARRI Raw UHD and ProRes4444. I added some Temporal Noise Reduction to half of the clips, including the 8K Red footage, resizes to all clips, all on top of a simple base grade.

First, I did a simple user cache test by enabling the User Cache at DNxHR HQX 10-bit to the secondary Samsung 1TB drive. It took about four minutes and 34 seconds. From there I tried to play back the media un-cached, and I was able to play everything except the 8K media in realtime. I was able to play the 8K Red media at Quarter Res Good (Half Res would hover between 18fps and 20fps). The sequence played back well. I also wanted to test export speeds. The first test was an H.264 export without cache on the same sequence. I set the H.264 output in Resolve to 23.98fps, UHD, auto quality, no frame reordering, force highest-quality debayer/resizes and encoding profile: main. The export took 11 minutes and 57 seconds. The second test was a DNxHR HQX 10-bit QuickTime of the same sequence; it took seven minutes and 44 seconds.

To compare these numbers: I recently ran a similar test on an Intel i9-based MacBook Pro with the Blackmagic eGPU (Radeon Pro 580) attached, and the H.264 export took 16 minutes and 21 seconds, while a ProRes4444 export took 22 minutes and 57 seconds. While not an apples-to-apples comparison, it still gives a good sense of the speed increase you can get with a desktop system and a pair of Nvidia GTX 1080 Ti graphics cards. With the impending release of the Nvidia RTX 2080 cards, you may want to consider getting those instead.

While in Premiere I ran similar tests with a very similar sequence. To export an H.264 (23.98fps, UHD, no cache used during export, VBR 10Mb/s target rate, no frame reordering) took nine minutes and 15 seconds. Going a step further, it took 47 minutes to export an H.265. Similarly, a DNxHR HQX 10-bit QuickTime export took 24 minutes.

I also ran the AJA System Test on the 1TB secondary drive (UHD, 16GB test file size, ProRes HQ). The read speed was 2951MB/sec and the write speed was 2569MB/sec. Those are some very respectable drive speeds, especially for a cache or project drive. If possible, you would probably want to add another drive for exports or for storing your raw media in order to maximize input/output speeds.
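
If you do not have AJA System Test handy, a very rough stand-in is simply timing a large sequential write and read on the drive in question. The sketch below uses a hypothetical path and size; note that the OS cache can inflate the read number, which is why purpose-built tools report more trustworthy figures.

```python
import os
import time

# Time a large sequential write and read on the drive under test (hypothetical
# path and size). The OS cache can flatter the read figure, which is why
# purpose-built tools like AJA System Test are more trustworthy.
path = "D:/scratch/speedtest.bin"
size_mb = 4096
chunk = os.urandom(16 * 1024 * 1024)        # 16MB of incompressible data

start = time.perf_counter()
with open(path, "wb") as f:
    for _ in range(size_mb // 16):
        f.write(chunk)
    f.flush()
    os.fsync(f.fileno())
print(f"write ~{size_mb / (time.perf_counter() - start):.0f} MB/sec")

start = time.perf_counter()
with open(path, "rb") as f:
    while f.read(16 * 1024 * 1024):
        pass
print(f"read ~{size_mb / (time.perf_counter() - start):.0f} MB/sec")
os.remove(path)
```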

Up next was Cinebench R15: OpenGL — 153.02fps, Ref. Match 99.6%, CPU — 2905cb, CPU (single core) — 193cb and MP Ratio 15.03x. Lastly, I ran a test that I recently stumbled upon: the Superposition Benchmark from Unigine. While it is more of a gaming benchmark, a lot of people use it and might glean some useful information from it. The overall score was 7653 (fps: min 45.58, avg 57.24, max 72.11; GPU degrees Celsius: min 36, max 85; GPU use: max 98%).

Summing Up
In the end, I am very skeptical of custom-build PC shops. Typically, I don’t see the value in the premium they charge when you can probably build it yourself with parts you choose from PCpartpicker.com. However, Puget Systems is the exception — their support and build quality are top notch. From the initial phone conversation, to the up-to-the-minute images and custom-build updates online, to the final delivery and even follow-up conversations, Puget Systems is by far the most thorough and worthwhile custom-build PC maker I have encountered.

Check out their high-end custom build PCs and tons of benchmark testing and recommendations on their website.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Review: Blackmagic’s eGPU and Intel i9 MacBook Pro 2018

By Brady Betzel

Blackmagic’s eGPU is worth the $699 price tag. You can buy it from Apple’s website, where it is being sold exclusively for the time being. Wait? What? You wanted some actual evidence as to why you should buy the BMD eGPU?

Ok, here you go…

MacBook Pro With Intel i9
First, I want to go over the latest Apple MacBook Pro, which was released (or really just updated) this past July. With some controversial fanfare, the 2018 MacBook Pro can now be purchased with the blazingly fast Intel i9, 2.6GHz (Turbo Boost up to 4.3GHz) six-core processor. In addition, you can add up to 32GB of 2400MHz DDR4 onboard memory, a Radeon Pro 560x GPU with 4GB of GDDR5 memory and even a 4TB SSD storage drive. It has four Thunderbolt 3 ports and, for some reason, a headphone jack. Apple is also touting its improved butterfly keyboard switches as well as its True Tone display technology. If you want to read more about that glossy info, head over to Apple’s site.

The 2018 MacBook Pro is a beast. I am a big advocate for the ability to upgrade and repair computers, so Apple’s venture to create what is essentially a leased computer ecosystem that needs to be upgraded every year or two usually puts a bad taste in my mouth.

However, the latest MacBook Pros are really amazing… and really expensive. The top-of-the-line MacBook Pro I was provided with for this review would cost $6,699! Yikes! If I was serious, I would purchase everything but the $2,000 upgrade from the 2TB SSD drive to the 4TB, and it would still cost $4,699. But I suppose that’s not a terrible price for such an intense processor (albeit not technically workstation-class).

Overall, the MacBook Pro is a workhorse that I put through its video editing and color correcting paces using three of the top four professional nonlinear editors: Adobe Premiere, Apple FCP X and Blackmagic’s Resolve 15 (the official release). More on those results in a bit, but for now, I’ll just say a few things: I love how light and thin it is. I don’t like how hot it can get. I love how fast it charges. I don’t like how fast it loses charge when doing things like transcoding or exporting clips. A 15-minute export can drain the battery over 40%, while playing Spotify for eight hours will hardly drain it at all (maybe 20%).

Blackmagic’s eGPU with Radeon Pro 580 GPU
One of the more surprising releases from Blackmagic has been this eGPU offering. I would never have guessed they would have gone into this area, and certainly would never have guessed they would have gone with a Radeon card, but here we are.

Once you step back from the initial “Why in the hell wouldn’t they let it be user-replaceable and also not brand dependent?” shock, it actually makes sense. If you are a macOS user, you can probably already do a lot in terms of external GPU power. When you buy a new iMac, iMac Pro or MacBook Pro, you are expecting it to work, full stop.

However, if you are a DIT or colorist that is more mobile than that sweet million-dollar color bay you dream of, you need more. This is where the BMD eGPU falls nicely into place. You plug it in and instantly see it populate in the menu bar. In addition, the eGPU acts as a dock with four USB 3 ports, two Thunderbolt 3 ports and an HDMI port. The MacBook Pro will charge off of the eGPU as well, which eliminates the need for your charger at your docking point.

On the go, the most decked-out MacBook Pro can hold its own. So it’s no surprise that FCP X runs remarkably fast… faster than everything else. However, you have to be invested in an FCP X workflow and paradigm — and while I’m not there yet, maybe the future will prove me wrong. Recently, I saw someone on Twitter who developed an online collaboration workflow around it, so people are excited about it.

Anyway, many of the nonlinear editors I work with can also handle playback on the MacBook Pro, even with 4K Red, ARRI and, especially, ProRes footage. Keep in mind, though, that with 2K, 4K or whatever-K footage, you will need to set the debayer to around “half good” if you want a fluid timeline. Even with the 4GB Radeon 560x, I couldn’t quite play 4K footage in realtime without some sort of compromise in quality.

But with the Blackmagic eGPU, I significantly improved my playback capabilities — and not just in Resolve 15. I did try plugging the eGPU into a Windows 10 PC I was reviewing at the same time, and it was recognized, but I couldn’t get all the drivers sorted out. So it’s possible it will work in Windows, but I couldn’t get it there.

Before I get to the Resolve testing, I did some benchmarking. First I ran Cinebench R15 without the eGPU attached and got the following scores: OpenGL — 99.21fps, reference match 99.5%, CPU — 947cb, CPU (single core) — 190cb and MP ratio of 5.00x. With the eGPU attached: OpenGL — 60.26fps, reference match 99.5%, CPU — 1057cb, CPU (single core) — 186cb and MP ratio of 5.69x. Then I ran Unigine’s Valley Benchmark 1.0 without the eGPU, which got 21.3fps and a score of 890 (minimum 12.4fps/maximum 36.2fps). With the eGPU it got 25.6fps and a score of 1073 (minimum 19.2fps/max 37.1fps).

Resolve 15 Test
I based all of my tests on a similar (although not exact for the different editing applications) 10-minute timeline, 23.98fps, 3840×2160, 4K and 8K RAW Red footage (R3D files) and Alexa (.ari and ProRes444XQ) UHD footage, all with edit page resizes, simple color correction and intermittent sharpening and temporal noise reduction (three frames, better, medium, 10, 10 and 5).

Playback: Without the eGPU I couldn’t play 23.98fps 4K Red R3D footage without being set to half res. With the eGPU I could play back at full res in realtime (this is what I was talking about in sentence one of this review). The ARRI footage would play at full res, but only at between 1fps and 7fps. The 8K Red footage would play in realtime when set to quarter res.

One of the most reassuring things I noticed when watching my Activity Monitor’s GPU history readout was that Resolve uses both GPUs at once. Not all of the apps did.

Resolve 15 Export Tests
In the following tests, I disabled all cache or optimized media options, including Performance Mode.

Test 1: H.264 (at 23.98fps, UHD, auto-quality, no frame reordering, force highest-quality debayer/resizes and encoding profile Main)
a. Without eGPU (Radeon Pro 560x): 22 minutes, 16 seconds
b. With BMD eGPU (Radeon Pro 580): 16 minutes and 21 seconds

Test 2: H.265 (10-bit, 23.98/UHD, auto quality, no frame reordering, force highest-quality debayer/resizes)
a. Without eGPU: stopped rendering after 10 frames
b. With BMD eGPU: same result

Test 3:
ProRes4444 at 23.98/UHD
a. Without eGPU: 27 minutes and 29 seconds
b. With BMD eGPU: 22 minutes and 57 seconds

Test 4:
Edit page cache: enabled Smart User Cache at ProRes HQ
a. Without eGPU: 17 minutes and 28 seconds
b. With BMD eGPU: 12 minutes and 22 seconds

Adobe Premiere Pro v.12.1.2
I performed similar testing in Adobe Premiere Pro using a 10-minute timeline at 23.98fps, 3840×2160, 4K and 8K RAW Red footage (R3D files) and Alexa (DNxHR SQ 8-bit) UHD footage, all with Effect Control tab resizes and simple Lumetri color correction, including sharpening and intermittent denoise (16) under the HSL Secondary tab in Lumetri applied to shadows only.

In order to ensure your eGPU will be used inside of Adobe Premiere, you must use Metal as your renderer. To enable it, go to File > Project Settings > General and change the renderer to Mercury Playback Engine GPU Acceleration (Metal) — OpenCL will only use the internal GPU for processing.

Premiere did not handle the high-resolution media as aptly as Resolve had, but it did help a little. However, I really wanted to test the export power with the added eGPU horsepower. I almost always send my Premiere sequences to Adobe Media Encoder to do the processing, so that is where my exports were processed.

Adobe Media Encoder
Test 1: H.264 (No render used during exports: 23.98/UHD, 80Mb/s, software encoding doesn’t allow for profile setup)
a. OpenCL with no eGPU: about 140 minutes (sorry, I had to chase the kids around and couldn’t watch this snail crawl)
b. Metal with no eGPU: about 137 minutes (chased the kids around again and couldn’t watch this snail crawl, either)
c. OpenCL with eGPU: won’t work; Metal only
d. Metal with eGPU: one hour

Test 2: H.265
a. Without eGPU: failed (interesting result)
b. With eGPU: 40 minutes

Test 3: ProRes4444
a. Without eGPU: three hours
b. With eGPU: one hour and 14 minutes

FCP X
FCP X is an interesting editing app, and it is blazing fast at handling ProRes media. As I mentioned earlier, it hasn’t been in my world too much, but that isn’t because I don’t like it. It’s because professionally I haven’t run into it. I love the idea of roles and would really love to see that play out in other NLEs. However, my results speak for themselves.

One caveat to using the eGPU in FCP X is that you must force it to work inside of the NLE. At first, I couldn’t get it to work. The Activity Monitor would show no activity on the eGPU. However, thanks to a Twitter post, James Wells (@9voltDC) sent me to this, which allows you to force FCP X to use the eGPU. It took a few tries, but I did get it to work, and funnily enough I saw times when all three GPUs were being used inside of FCP X, which was pretty good to see. This is one of those use-at-your-own-risk things, but it worked for me and is pretty slick… if you are OK with using Terminal commands. This also allows you to force the eGPU onto other apps like Cinebench.

Anyway, here are my results with the BMD eGPU exporting from FCP X:

Test 1: H.264
a. Without eGPU: eight minutes
b. With eGPU: eight minutes and 30 seconds

Test 2: H.265: Not an option

Test 3: ProRes4444
a. Without eGPU: nine minutes
b. With eGPU: six minutes and 30 seconds

Summing Up
In the end, the Blackmagic eGPU with Radeon Pro 580 GPU is a must buy if you use your MacBook Pro with Resolve 15. There are other options out there though, like the Razer Core v2 or the Akitio Node Pro.

From this review I can tell you that the Blackmagic eGPU is silent even when processing 8K Red RAW footage (even when the MacBook Pro fans are going at full speed), and it just works. Plug it in and you are running, no settings, no drivers, no cards to install… it just runs. And sometimes when I have three little boys running around my house, I just want that peace of mind and I want things to just work like the Blackmagic eGPU.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Adobe updates Creative Cloud

By Brady Betzel

You know it’s almost fall when pumpkin spice lattes are back and Adobe announces its annual updates. At this year’s IBC, Adobe had a variety of updates to its Creative Cloud line of apps. From more info on its new editing platform Project Rush to the addition of Characterizer to Character Animator — there are a lot of updates, so I’m going to focus on a select few that I think really stand out.

Project Rush

I use Adobe Premiere quite a lot these days; it’s quick and relatively easy to use and will work with pretty much every codec in the universe. In addition, the Dynamic Link between Adobe Premiere Pro and Adobe After Effects is an indispensable feature in my world.

With the 2018 fall updates, Adobe Premiere gets closer to a color tool like Blackmagic’s Resolve with the addition of new hue saturation curves in the Lumetri Color toolset. In Resolve these are some of the most important aspects of the color corrector, and I think that will be the same for Premiere. From Hue vs. Sat, which can help isolate a specific color and desaturate it, to Hue vs. Luma, which can add or subtract brightness from specific hues and hue ranges, these new tools further Premiere’s venture into true professional color correction. These new curves will also be available inside of After Effects.
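
To make the Hue vs. Sat idea concrete, here is a toy sketch of the concept: scale a pixel’s saturation by a factor that depends only on its hue. It is my own illustration, not Lumetri’s actual math, and the hue center, width and amount values are made up.

```python
import colorsys
import math

# Scale a pixel's saturation by a factor that depends only on its hue: a smooth
# dip centered near 200 degrees pulls saturation out of blues and leaves other
# hues alone. Illustrative only; the parameter values are made up.
def hue_vs_sat(r, g, b, center_deg=200.0, width_deg=40.0, amount=0.8):
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    delta = abs(h * 360.0 - center_deg)
    delta = min(delta, 360.0 - delta)                  # wrap around the hue circle
    falloff = math.exp(-(delta / width_deg) ** 2)      # 1 at the center, ~0 far away
    return colorsys.hsv_to_rgb(h, s * (1.0 - amount * falloff), v)

print(hue_vs_sat(0.2, 0.4, 0.8))   # a blue-sky pixel, noticeably desaturated
print(hue_vs_sat(0.8, 0.3, 0.2))   # a warm tone, essentially untouched
```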

After Effects features many updates, but my favorites are the ability to access depth matte data of 3D elements and the addition of the new JavaScript engine for building expressions.

There is one update that runs across both Premiere and After Effects that seems to be a sleeper update. The improvements to motion graphics templates, if implemented correctly, could be a time and creativity saver for both artists and editors.

AI
Adobe, like many other companies, seems to be diving heavily into the “AI” pool, which is amazing, but… with great power comes great responsibility. While I feel this way and realize others might not, sometimes I don’t want all the work done for me. With new features like Auto Lip Sync and Color Match, editors and creators of all kinds should not lose the forest for the trees. I’m not telling people to ignore these features, but asking that they put a few minutes into discovering how the color of a shot was matched, so they can fix it if something goes wrong. You don’t want to be the editor who says, “Premiere did it,” and not have a great solution when something does go wrong.

What Else?
I would love to see Adobe take a stab at digging up the bones of SpeedGrade and integrating that into the Premiere Pro world as a new tab. Call it Lumetri Grade, or whatever? A page with a more traditional colorist layout and clip organization would go a long way.

In the end, there are plenty of other updates to Adobe’s 2018 Creative Cloud apps, and you can read their blog to find out about other updates.

LACPUG hosting FCP and Premiere creator Randy Ubillos

The Los Angeles Creative Pro User Group (LACPUG) is celebrating its 18th anniversary on June 27 by presenting the official debut of Bradley Olsen’s Off the Tracks, a documentary about Final Cut Pro X. Also on the night’s agenda is a trip down memory lane with Randy Ubillos, the creator of Final Cut Pro, Adobe Premiere, Aperture, iMovie 08 and Final Cut Pro X.

The event will take place at the Gallery Theater in Hollywood. Start time is 6:45pm. Scheduled to be in the audience and perhaps on stage, depending on availability, will be members of the original FCP team: Michael Wohl, Tim Serda and Paul Saccone. Also on hand will be Ramy Katrib of DigitalFilm Tree and editor and digital consultant Dan Fort. “Many other invites to the ‘superstars’ of the digital revolution and FCP have been sent out,” says Michael Horton, founder and head of LACPUG.

The night will also include food and drinks, time for questions and the group’s “World Famous Raffle.”
Tickets are on sale now on the LACPUG website for $10 each, plus a ticket fee of $2.24.

The Los Angeles Creative Pro User Group, formerly the LA Final Cut Pro User Group, was established in June of 2000 and hosts a membership of over 6,000 worldwide.

Review: The PNY PrevailPro mobile workstation

By Mike McCarthy

PNY, a company best known in the media and entertainment industry as the manufacturer of Nvidia’s Quadro line of professional graphics cards, is now offering a powerful mobile workstation. While PNY makes a variety of other products, mostly centered around memory and graphics cards, the PrevailPro is their first move into offering complete systems.

Let's take a look at what's inside. The PrevailPro is based on Intel's 7th-generation Core i7-7700HQ quad-core hyperthreaded CPU, running at 2.8-3.8GHz, paired with the HM175 chipset and 32GB of dual-channel DDR4 RAM. At less than ¾-inch thick and 4.8 pounds, it still packs in an SD card slot, fingerprint reader, five USB ports, Gigabit Ethernet, Intel 8265 WiFi and audio I/O. It might not be the lightest 15-inch laptop, but it is one of the most powerful. At 107 cubic inches, it has half the volume of my 17-inch Lenovo P71.

The model I am reviewing is the top option, with a 512GB NVMe SSD as well as a 2TB HDD for storage. The display is a 15.6-inch UHD panel, driven by the headline feature: a Quadro P4000 GPU in Max-Q configuration. With 1,792 CUDA cores and 8GB of GDDR memory, the GPU retains 80% of the power of the desktop P4000, at 4.4 TFLOPS. Someone I showed the system to joked that it was a PNY Quadro graphics card with a screen attached, which isn't necessarily inaccurate. The Nvidia Pascal-based Quadro P4000 Max-Q GPU is the product's key unique feature; it is the only 15-inch workstation I am aware of with that much graphics horsepower.

Display Connectivity
This top-end PrevailPro system is ProVR certified by Nvidia and comes with a full complement of ports, offering more display options than any other system its size. It can drive three external 4K displays plus its attached UHD panel, an 8K monitor at 60Hz or anything in between. I originally requested to review this unit when it was announced last fall because I was working on a number of Barco Escape three-screen cinema projects. The system’s set of display outputs would allow me to natively drive the three TVs or projectors required for live editing and playback at a theater, without having to lug my full-sized workstation to the site. This is less of an issue now that the Escape format has been discontinued, but there are many other applications that involve multi-screen content creation, usually related to advertising as opposed to cinema.

I had also been looking for a more portable device to drive my 8K monitor. I wanted to do some on-set tests, reviewing footage from 8K cameras, without dragging my 50-pound workstation around with me, and even my 17-inch P71 didn't support it: its DisplayPort connection is limited to Version 1.2, since it is attached to the Intel side of the hybrid graphics system. Dell's Precision mobile workstations can drive the 8K display at 30Hz, but none of the other major manufacturers have implemented DisplayPort 1.3, favoring the power savings of using Intel's 1.2 port in the chipset. The PrevailPro, by comparison, has dual mini-DisplayPort 1.3 ports connected directly to the Nvidia GPU, which can be used together to drive an 8K monitor at 60Hz for the ultimate high-res viewing experience. It also has an HDMI 2.0 port supporting 4Kp60 with HDCP to connect your 4K TV.

It can connect three external displays, or a fourth via MST if you turn off the integrated panel. The one feature that is missing is Thunderbolt, which may be related to the DisplayPort choice (Thunderbolt 3 was officially limited to DisplayPort 1.2). This doesn't affect me personally, and USB 3.1 covers much of the same functionality, but the omission limits the system's flexibility and will be an issue for many users in the M&E space.

User Experience
The integrated display is a UHD LCD panel with a matte finish. It seems middle of the road: there is nothing wrong with it, and it appears to be accurate, but it doesn't really pop the way some nicer displays do, possibly because the blacks aren't as dark as they could be.

The audio performance is not too impressive either. The speakers located at the top of the keyboard aren't very loud, even at maximum volume, and they occasionally crackle a bit. This is probably the system's most serious deficiency, although a decent pair of headphones improves the experience significantly. The keyboard is well laid out and feels natural to use, and the trackpad worked great for me. Switching between laptops frequently, I sometimes have difficulty adjusting to changes in the function and arrow key positioning, but here everything was where my fingers expected it to be.

Performance-wise, I am not comparing it to other 15-inch laptops, because I don't have any to test it against, and that is not the point of this article. The users who need this kind of performance have previously been limited to 17-inch systems, and this one might allow them to lighten their load without sacrificing much performance. I will be comparing it to my 17-inch and 13-inch laptops for context, as well as to my 20-core Dell workstation.

Storage Performance
First off, with synthetic benchmarks, the SSD reports 1,400MB/s write and 2,000MB/s read performance, but the write speed throttles to half of that over sustained periods. This is slower than some newer SSDs, but probably sufficient, because without Thunderbolt there is no way to feed the system data any faster than that. (USB 3.1 tops out around 800MB/s in the real world.)
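If you want a rough sanity check of that sustained-write behavior on your own drive, a simple sequential test will show the throttling as a falling MB/s figure over time. This is a minimal Python sketch of my own (not the benchmark used above), with the file path and sizes as adjustable assumptions:

```python
import os
import time

# Sequential-write check: write 8GB in 256MB chunks and print running throughput,
# so sustained throttling shows up as a drop in MB/s as the test progresses.
TARGET = "throughput_test.bin"  # hypothetical test file on the drive under test
CHUNK_MB = 256
TOTAL_MB = 8192

chunk = os.urandom(CHUNK_MB * 1024 * 1024)
written = 0
start = time.time()
with open(TARGET, "wb", buffering=0) as f:
    while written < TOTAL_MB:
        f.write(chunk)
        os.fsync(f.fileno())  # force data to disk so the OS cache doesn't hide throttling
        written += CHUNK_MB
        elapsed = time.time() - start
        print(f"{written} MB written, average {written / elapsed:.0f} MB/s")
os.remove(TARGET)
```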

The read speed allowed me to play back 6K DPX files in Adobe Premiere, and that is nothing to scoff at. The HDD tops out at 125MB/s, as should be expected for a 2.5-inch SATA drive, so it will perform just like any other system's. The spinning disk seems out of place in a device like this, where a second M.2 slot would have allowed the same capacity at higher speeds, with size and power savings.

Here are its Cinebench scores, compared to my other systems:
System                      OpenGL    CPU
PNY PrevailPro (P4000)      109.94    738
Lenovo P71 (P5000)          153.34    859
Dell 7910 Desktop (P6000)   179.98    3060
Aorus X3 Plus (GF870)       47.00     520

The P4000 is a VR-certified solution, so I hooked up my Lenovo Explorer HMD and tried editing some 360 video in Premiere Pro 12.1. Everything works as expected, and I was able to get my GoPro Fusion footage to play back 3Kp60 at full resolution, and 5Kp30 at half resolution. Playing back exported clips in WMR worked in full resolution, even at 5K.

8K Playback
One of the unique features of this system is its support for an 8K display. That makes for an awfully nice UI monitor, but most people buying it to drive an 8K display will probably want to view 8K content on it. To that end, 8K playback was one of the first things I tested. Within Premiere, DNxHR-LB files were the only ones I could get to play without dropping frames at full resolution, and even then only when they were scope aspect ratio; the fewer pixels to process due to the letterboxing works in its favor. None of the other options would play back at full resolution, which defeats the purpose of an 8K display. The Windows 10 media player did play back 8K HEVC files at full resolution without issue, thanks to the hardware decoder on the Quadro GPU, which explicitly supports 8K decode. So that is probably the best way to experience 8K media on a system like this.

Now obviously 8K is pushing our luck with a laptop in the first place. My 6K Red files play back at quarter res, and most of my other 4K and 6K test assets play smoothly. I rendered a complex 5K comp in Adobe After Effects, and at 28 minutes, it was four minutes slower than my larger 17-inch system, and twice as fast as my 13-inch gaming notebook. Encoding a 10-minute file in DCP-O-Matic took 47 minutes in 2K, and 189 minutes in 4K, which is 15% slower than my 17-inch laptop.

Conclusion
The new 15-inch PrevailPro is not as fast as my huge 17-inch P71, as is to be expected, but it is close in most tests, and many users would never notice the difference. It supports 8K monitors and takes up half the space in my bag. It blows my 13-inch gaming notebook out of the water and does many media tasks just as fast as my desktop workstation. It seems like an ideal choice for a power user who needs strong graphics performance but doesn't want to lug around a 17-inch monster of a system.

The steps to improve it would be the addition of Thunderbolt support, better speakers and an upgrade to Intel's new 8th-generation CPUs. If I were still working on multi-screen theatrical projects, this would be the perfect system for taking my projects with me; the same goes if I were working in VR more. I believe the configuration I tested has an MSRP of $4,500, but I found it online for around $4,100. So it is clearly not the cheap option, but it is one of the most powerful 15-inch laptops available, especially if your processing needs are GPU-intensive. It is a well-balanced solution for demanding users who need performance but want to limit size and weight.

Update: September 27, 2018
I have had the opportunity to use the PrevailPro as my primary workstation while on the road for the last three months, and I have been very happy with the performance. The Wi-Fi range and battery life are significantly better than my previous system, although I wouldn’t bank on more than two hours of serious media editing work before needing to plug in.

I was able to process 7K R3D test shoot files for my next project in Adobe Media Encoder, and it converts them in full quality at around a quarter of realtime, so four minutes to convert one minute of footage, which is fast enough for my mobile needs. (So it could theoretically export six hours of dailies per day, but I wouldn’t usually recommend using a laptop for that kind of processing.) It renders my edited 5K project assets to H.264 faster than realtime, and the UHD screen has been great for all of my Photoshop work.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

NAB: Adobe’s spring updates for Creative Cloud

By Brady Betzel

Adobe has had a tradition of releasing Creative Cloud updates prior to NAB, and this year is no different. The company has been focused on improving existing workflows and adding new features, some based on Adobe’s Sensei technology, as well as improved VR enhancements.

In this release, Adobe has announced a handful of Premiere Pro CC updates. While I personally don’t think that they are game changing, many users will appreciate the direction Adobe is going. If you are color correcting, Adobe has added the Shot Match function that allows you to match color between two shots. Powered by Adobe’s Sensei technology, Shot Match analyzes one image and tries to apply the same look to another image. Included in this update is the long-requested split screen to compare before and after color corrections.

Motion graphic templates have been improved with new adjustments like 2D position, rotation and scale. Automatic audio ducking has been included in this release as well. You can find this feature in the Essential Sound panel, and once applied it will essentially dip the music in your scene based on dialogue waveforms that you identify.

Still inside of Adobe Premiere Pro CC, but also applicable to After Effects, is Adobe's enhanced Immersive Environment. This update is for people who use VR headsets to edit and/or process VFX. Team Project workflows have also been updated with better version tracking and realtime indicators of who is using bins and sequences.

New Timecode Panel
Overall, while these updates are helpful, none are barn burners. The one that does have me excited is the new Timecode panel, which is the biggest addition to the Premiere Pro CC app in this release. For years now, editors have been clamoring for more than just one timecode view. You can view sequence timecode, source media timecode from the clips on the different video layers in your timeline, and even the same sequence timecode expressed in a different frame rate (great for editing those 23.98 shows to a 29.97/59.94 clock!). And one of my unexpected favorites is the clip name display in the timecode window.
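To illustrate what that cross-rate readout involves, here is a minimal Python sketch of the arithmetic, my own illustration rather than Adobe's implementation, and it ignores drop-frame counting entirely: the same elapsed real time is simply re-expressed as a frame count against a different clock.

```python
def frames_to_timecode(frame_count: int, base: int) -> str:
    """Format a frame count as non-drop HH:MM:SS:FF against an integer timecode base."""
    ff = frame_count % base
    total_seconds = frame_count // base
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def on_other_clock(frame_count: int, src_fps: float, dst_fps: float, dst_base: int) -> str:
    """Re-express a frame count shot at src_fps as a timecode on a dst_fps clock.
    Elapsed real time is the same in both cases, so we convert through seconds."""
    seconds = frame_count / src_fps
    return frames_to_timecode(round(seconds * dst_fps), dst_base)

# One hour of 23.976 material (86,400 frames) read against a 29.97 non-drop clock:
print(on_other_clock(86400, src_fps=24000 / 1001, dst_fps=30000 / 1001, dst_base=30))
# -> 01:00:00:00
```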

I was testing this feature in a pre-release version of Premiere Pro, and it was a little wonky. First, I couldn't dock the timecode window. While I could add lines and access the different menus, my changes wouldn't apply to the row I had selected. In addition, I could only right-click to change the first row of contents, but it would choose a random row to change. I am assuming the final release has all of this fixed. If the wonkiness does get worked out, this is a phenomenal (and necessary) addition to Premiere Pro.

Codecs, Master Property, Puppet Tool, more
There have been some codec compatibility updates as well, specifically raw Sony X-OCN (Venice), Canon Cinema Raw Light (C200) and Red IPP2.

After Effects CC has also been updated with Master Property controls. Adobe said it best during their announcement: “Add layer properties, such as position, color or text, in the Essential Graphics panel and control them in the parent composition’s timeline. Use Master Property to push individual values to all versions of the composition or pull selected changes back to the master.”

The Puppet Tool has been given some love with a new Advanced Puppet Engine, offering improved mesh and starch workflows for animating static objects. Beyond the Add Grain, Remove Grain and Match Grain effects becoming multi-threaded, enhanced disk caching and project management improvements have been added.

My favorite update for After Effects CC is the addition of data-driven graphics. You can drop a CSV or JSON data file and pick-whip data to layer properties to control them. In addition, you can drag and drop data right onto your comp to use the actual numerical value. Data-driven graphics is a definite game changer for After Effects.
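As a trivial illustration of the kind of data file involved, here is a Python sketch that writes a small JSON file which could then be dropped into a project; the field names are hypothetical placeholders of my own, since what a given template expects depends entirely on how its properties were built.

```python
import json

# Hypothetical standings data to drive a data-driven graphic;
# the keys ("team", "wins", "color") are placeholders, not an Adobe schema.
standings = [
    {"team": "North", "wins": 12, "color": "#1f77b4"},
    {"team": "South", "wins": 9,  "color": "#d62728"},
    {"team": "West",  "wins": 7,  "color": "#2ca02c"},
]

with open("standings.json", "w") as f:
    json.dump(standings, f, indent=2)
```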

Audition
While Adobe Audition is an audio mixing application, it has some updates that will directly help anyone looking to mix their edit in Audition. In the past, to get audio to a mixing program like Audition, Pro Tools or Fairlight, you would have to export an AAF (or, if you are old like me, possibly an OMF). In the latest Audition update you can simply open your Premiere Pro project directly in Audition, re-link video and audio and begin mixing.

I asked Adobe whether you could go back and forth between Audition and Premiere, but it seems like it is a one-way trip; they must be expecting you to export individual audio stems once you are done in Audition for final output. In the future, I would love to see round-tripping between apps like Premiere Pro and Audition, much like the Fairlight tab in Blackmagic's Resolve. There are some other additions, like larger track counts and under-the-hood improvements, which you can read more about at https://theblog.adobe.com/creative-cloud/.

Adobe Character Animator has some cool additions as well, like overall character-building improvements, but I am not too involved with Character Animator, so you should definitely read about things like the Trigger improvements on Adobe's blog.

Summing Up
In the end, it is great to see Adobe moving forward on updates to its Creative Cloud video offerings. Data-driven animation inside of After Effects is a game-changer. Shot color matching in Premiere Pro is a nice step toward a professional color correction application. Importing Premiere Pro projects directly into Audition is definitely a workflow improvement.

I do have a wishlist though: I would love for Premiere Pro to concentrate on tried-and-true solutions before adding fancy updates like audio ducking. For example, I often hear people complain about how hard it is to export a QuickTime out of Premiere with either stereo or mono/discrete tracks. You need to set up the sequence correctly from the jump, adjust the pan on the tracks, as well as adjust the audio settings and export settings. Doesn’t sound streamlined to me.

In addition, while shot color matching is great, let’s get an Adobe SpeedGrade-style view tab into Premiere Pro so it works like a professional color correction app… maybe Lumetri Pro? I know if the color correction setup was improved I would be way more apt to stay inside of Premiere Pro to finish something instead of going to an app like Resolve.

Finally, consolidating and transcoding used clips with handles is hit or miss inside of Premiere Pro. Can we get a rock-solid consolidate and transcode feature inside of Premiere Pro? Regardless of some of the few negatives, Premiere Pro is an industry staple and it works very well.

Check out Adobe’s NAB 2018 update video playlist for details on each and every update.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Review: Digital Anarchy’s Transcriptive plugin for Adobe Premiere

By Brady Betzel

One of the most time-consuming parts of editing can be dealing with the pre-post work, including organizing scripts and transcriptions of interviews. In the past, I have used and loved Avid's ScriptSync and PhraseFind. These days, with people becoming more comfortable with other NLEs, such as Adobe Premiere Pro, Apple FCP X and Blackmagic Resolve, there is a need for similar technology inside those apps, and that is where Digital Anarchy's Transcriptive plugin comes in.

Transcriptive is a Windows- and Mac OS-compatible plugin for Premiere Pro CC 2015.3 and above. For a per-minute fee, it allows the editor to have a sequence or multiple clips transcribed in the cloud by either IBM Watson or Speechmatics, with the resulting script downloaded to your system and kept in sync with the clips and sequences. From there you can search for specific words, sort by speaker (including labeling each one) or just follow along with an interview via the transcript.

Avid’s ScriptSync is an invaluable plugin, in my opinion, when working on shows with interviews, especially when combining multiple responses into one cohesive answer being covered by b-roll — often referred to as a Frankenbite. Transcriptive comes close to Avid’s ScriptSync within Premiere Pro, but has a few differences, and is priced at $299, plus the per-minute cost of transcription.

A Deeper Look
Within Premiere, Transcriptive lives under the Window menu > Extensions > Transcriptive. To get access to the online AI transcription services you will obviously need an Internet connection, as well as an account with Speechmatics and/or IBM's Watson. You'll really want to follow along with the manual, which can be found here. It walks you step by step through setting up the Transcriptive plugin.

It is a little convoluted to get it all set up, but once you do, you are ready to upload a clip and start transcribing. IBM's Watson gets you going with 1,000 free minutes of transcription a month, and beyond that it runs from $.02/minute down to $.01/minute, depending on how much you need transcribed; additional languages are up-charged at $.03/minute. Speechmatics is another transcription service that runs roughly $.08 a minute (I say roughly because the price is in pounds and has fluctuated in the past), and the rate goes down if you do more than 1,000 minutes a month.
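To put those rates in perspective, here is a quick Python sketch that estimates a monthly bill for a batch of interviews at the prices quoted above (which are approximate, subject to change, and in Speechmatics' case affected by exchange rates):

```python
def watson_cost(minutes: float, free_minutes: int = 1000, rate: float = 0.02) -> float:
    """Rough IBM Watson estimate: first 1,000 minutes per month free, then a per-minute rate."""
    return max(0.0, minutes - free_minutes) * rate

def speechmatics_cost(minutes: float, rate: float = 0.08) -> float:
    """Rough Speechmatics estimate at ~$0.08/minute (billed in pounds, so approximate)."""
    return minutes * rate

interview_minutes = 12 * 90  # e.g., twelve 90-minute interviews in a month
print(f"Watson:       ${watson_cost(interview_minutes):,.2f}")
print(f"Speechmatics: ${speechmatics_cost(interview_minutes):,.2f}")
```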

Your first question should be why the disparity in price, and in this instance you get what you pay for. If you aren't strict about accuracy, then Watson is for you; it doesn't quite get everything correct and can sometimes fail to notice when a new person is talking, even on a very clear recording. Speechmatics was faster in my testing and more accurate. If free is a good price for you, then Watson might do the job, and you should try it first. But in my opinion, Speechmatics is where you need to be.

When editing interviews, accuracy is extremely important, especially when searching specific key words, and this is where Speechmatics came through. Neither service has complete accuracy, and if something is wrong you can’t kick it back like you could a traditional, human-based transcription service.

The Test
To test Transcriptive I downloaded a CNN interview between Anderson Cooper and Hillary Clinton, which in theory should have perfect audio. Even with that "perfect audio," Watson had some trouble when one person would talk over the other. Speechmatics labeled each person correctly when they spoke, and I would guess it missed only about 5% of the words, so roughly 95% accurate; Watson seemed to be about 70% accurate.
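Those percentages are eyeball estimates. If you ever want to quantify them, word error rate against a reference transcript is the standard measure; here is a minimal Python sketch of my own (not part of Transcriptive or either service):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Levenshtein distance over words (substitutions + insertions + deletions),
    divided by the number of reference words."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[len(ref)][len(hyp)] / max(1, len(ref))

ref = "i think we should take a closer look at the data"
hyp = "i think we should take a close look at that data"
print(f"WER: {word_error_rate(ref, hyp):.0%}")  # two substitutions over 11 words, about 18%
```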

To get your file to these services, you send your media from a sequence, multiple clips or a folder of clips. I tend to favor a specific folder of clips to transcribe, as it forces some organization and my OCD assistant-editor brain feels a little more at home.

As alluded to earlier, Transcriptive is an extension inside of Premiere Pro. Inside Premiere, you have to have the Transcriptive window active when making edits or simply playing down a clip; otherwise you will be affecting the timeline (meaning if you hit undo you will be undoing your timeline work, so be careful). Transcriptions also load differently for clips and sequences. If you transcribe individual clips using the Batch Files command, the transcription is loaded into the infamous Speech Analysis field of the file's metadata. In that case you can search in the metadata field instead of the Transcriptive window.

One feature I really like is the ability to export a transcript as markers to be placed on the timeline. In addition, you can export many different closed captioning file types such as SMPTE-TT (XML file), which can be used inside of Premiere with its built-in caption integration. SRT and VTT are captioning file types to be uploaded alongside your video to services like YouTube, and JSON files allow you to send transcripts to other machines using the Transcriptive plugin. Besides searching inside of Transcriptive for any lines or speakers you want, you can also edit the transcript. This can be extremely useful if two speakers are combined or if there are some missed words that need to be corrected.
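If you ever need to repurpose one of those caption exports outside Premiere, the formats are simple enough to parse yourself. Here is a small Python sketch of my own (not part of Transcriptive) that reads a basic SRT file and writes a CSV of start times, end times and text, which could serve as a marker list for other tools:

```python
import csv
import re

# Parse a simple, well-formed SRT file into (start, end, text) cues; real caption
# files can carry richer formatting, so treat this as a starting point.
TIME = re.compile(r"(\d{2}:\d{2}:\d{2}[,.]\d{3}) --> (\d{2}:\d{2}:\d{2}[,.]\d{3})")

def parse_srt(path):
    cues = []
    with open(path, encoding="utf-8") as f:
        blocks = f.read().strip().split("\n\n")
    for block in blocks:
        lines = block.strip().splitlines()
        for i, line in enumerate(lines):
            m = TIME.search(line)
            if m:
                text = " ".join(lines[i + 1:]).strip()
                cues.append((m.group(1), m.group(2), text))
                break
    return cues

with open("markers.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["start", "end", "text"])
    writer.writerows(parse_srt("interview.srt"))  # hypothetical caption export
```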

To really explain how Transcriptive works, it is easiest to compare it to Avid’s ScriptSync. If you have used Avid’s ScriptSync and then gave Transcriptive a try, you likely noticed some features that Transcriptive desperately needs in order to be the powerhouse that ScriptSync is — but Transcriptive has the added ability to upload your files and process them in the cloud.

ScriptSync allows the editor or assistant editor to take a bunch of transcriptions, line them up and then, for example, have every clip from a particular person in one transcription file that can be searched and edited from. In addition, there is a physical representation of the transcriptions that can be organized in bins and accessed separately from the clips. These functions would be a huge upgrade to Transcriptive in the future, especially for editors who work on unscripted or documentary projects with multiple interviews from the same people. If you have an external transcription file and want to align it with clips already in the system, you must use (and pay for) Speechmatics, which will align the two files at a lower per-minute price.

Updates Are Coming
After I had finished my initial review, Jim Tierney, president of Digital Anarchy, was kind enough to email me about some updates that were coming to Transcriptive as well as a really handy transcription workflow that I had missed my first time around.

He mentioned that they are working on a Power Search function that will allow for a search of all clips and transcripts inside the project. A window will then show all the search results and can be clicked on to open the corresponding clips in the source window or sequence in the record window. Once that update rolls in, Transcriptive will be much more powerful and easier to use.

The one thing that will still be hard to differentiate is multiple interviews from multiple people, for instance if I wanted to limit a search to only my interviews and a specific phrase. In the future, a way to Power Search a selected folder of clips or sequences would make it easier to search isolated material, or at least easier than searching all clips and sequences.

The other tidbit Jim mentioned was using YouTube’s built-in transcriptions in your own videos. Before you watch the tutorial keep in mind that this process isn’t flawless. While you can upload your video to YouTube in private mode, the uploading part may still turn away a few people who have security concerns. In addition, you will need to export a low-res proxy version of your clip to transcode, which can take time.

If you have the time, or have an assistant editor with time, this process through YouTube might be your saving grace. My two cents: with some upfront bookkeeping like tape naming, and some corrections after transcribing, this could be one of the best solutions if you aren't worried about security.

Regardless, check out the tutorial if you want a way to get supposedly very accurate transcriptions via YouTube's transcriber. In the end it will produce a VTT transcription file that you import back into Transcriptive, where you will need to either leave it alone or spend time adjusting it, since VTT files do not allow for punctuation. The main benefit of the VTT file from YouTube is that the timecode is carried back into Transcriptive, which lets you click on each word and have the video line up to it.

Summing Up
All in all, there are only a few options when working with transcriptions inside of Premiere. Transcriptive did a good job at what it did: uploading my file to one of the transcription services, acquiring the transcript and aligning the clip to the timecoded transcript with identifying markers for speakers that can be changed if needed. Once the Power Search gets ironed out and put into a proper release, Transcriptive will get even closer to being the transcription powerhouse you need for Premiere editing.

If you work with tons of interviews or just want clips transcribed for easy search you should definitely download Digital Anarchy’s Transcriptive demo and give it a whirl.

You can also find a ton of good video tutorials on their site. Keep in mind that the Transcriptive plugin runs $299, and while you have some free transcription minutes available through IBM's Watson, you will need to pay for Speechmatics if you want very accurate transcriptions; alternatively, you can try YouTube's built-in transcription service, which charges nothing.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Editing 360 Video in VR (Part 2)

By Mike McCarthy

In the last article I wrote on this topic, I looked at the options for shooting 360-degree video footage, and what it takes to get footage recorded on a Gear 360 ready to review and edit on a VR-enabled system. The remaining steps in the workflow will be similar regardless of which camera you are using.

Previewing your work is important, so if you have a VR headset you will want to make sure it is installed and functioning with your editing software. I will be basing this article on using an Oculus Rift to view my work in Adobe Premiere Pro 11.1.2 on a Thinkpad P71 with an Nvidia Quadro P5000 GPU. Premiere requires an extra set of plugins to interface with the Rift headset. Adobe acquired Mettle's Skybox VR Player plugin back in June and has made it available to Creative Cloud users upon request, which you can do here.

Skybox VR player

Skybox can project the Adobe UI to the Rift, as well as the output, so you could leave the headset on when making adjustments, but I have not found that to be as useful as I had hoped. Another option is to use the GoPro VR Player plugin to send the Adobe Transmit output to the Rift, which can be downloaded for free here (use the 3.0 version or above). I found this to have slightly better playback performance, but fewer options (no UI projection, for example). Adobe is expected to integrate much of this functionality into the next release of Premiere, which should remove the need for most of the current plugins and increase the overall functionality.

Once our VR editing system is ready to go, we need to look at the footage we have. In the case of the Gear 360, the dual spherical image file recorded by the camera is not directly usable in most applications and needs to be processed to generate a single equirectangular projection, stitching the images from both cameras into a single continuous view.

There are a number of ways to do this. One option is to use the application Samsung packages with the camera: Action Director 360. You can download the original version here, but you will need the activation code that came with the camera in order to use it. Upon import, the software automatically processes the original stills and video into equirectangular 2:1 H.264 files. Instead of exporting from that application, I pull the temp files it generates on media import and use them in Premiere. By default, they should be located in C:\Users\[Username]\Documents\CyberLink\ActionDirector\1.0\360. While this is the simplest solution for PC users, it introduces an extra transcoding step to H.264 (after the initial H.265 recording), and I frequently encountered an issue where there was a black hexagon in the middle of the stitched image.
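If you adopt that workflow, a tiny script can save some digging. Here is a hedged Python sketch that copies the newest stitched files from that default folder into a project media directory; the folder path comes from above, while the destination and the assumption that the temp files are .mp4 are mine:

```python
import glob
import os
import shutil

# Default Action Director temp folder (per the path above) and a destination of your choosing.
SRC = os.path.expandvars(r"%USERPROFILE%\Documents\CyberLink\ActionDirector\1.0\360")
DST = r"D:\Projects\Gear360\stitched"  # hypothetical project media folder

os.makedirs(DST, exist_ok=True)
# Assumes the stitched temp files are .mp4; adjust the pattern if yours differ.
for clip in sorted(glob.glob(os.path.join(SRC, "*.mp4")), key=os.path.getmtime):
    target = os.path.join(DST, os.path.basename(clip))
    if not os.path.exists(target):
        shutil.copy2(clip, target)
        print("copied", os.path.basename(clip))
```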

Action Director

Activating Automatic Angle Compensation in the Preferences->Editing panel gets around this bug, while trying to stabilize your footage to some degree. I later discovered that Samsung had released a separate Version 2 of Action Director available for Windows or Mac, which solves this issue. But I couldn’t get the stitched files to work directly in the Adobe apps, so I had to export them, which was yet another layer of video compression. You will need a Samsung activation code that came with the Gear 360 to use any of the versions, and both versions took twice as long to stitch a clip as its run time on my P71 laptop.

An option that gives you more control over the stitching process is to do it in After Effects. Adobe’s recent acquisition of Mettle’s SkyBox VR toolset makes this much easier, but it is still a process. Currently you have to manually request and install your copy of the plugins as a Creative Cloud subscriber. There are three separate installers, and while this stitching process only requires Skybox Suite AE, I would install both the AE and Premiere Pro versions for use in later steps, as well as the Skybox VR player if you have an HMD to preview with. Once you have them installed, you can use the Skybox Converter effect in After Effects to convert from the Gear 360’s fisheye files to the equirectangular assets that Premiere requires for editing VR.

Unfortunately, Samsung's format is not one of the default conversions supported by the effect, so it requires a little more creativity. The two sensor images have to be cropped into separate comps, with the plugin applied to each of them. Setting the input to fisheye and the output to equirectangular for each image gives the desired distortion. A feathered mask is applied to the circle to adjust the seam, and the overlap can be adjusted with the FOV and re-orient camera values.

Since this can be challenging to set up, I have posted an AE template that is already configured for footage from the Gear 360. The included directions should be easy to follow, and the projection, overlap and stitch can be further tweaked by adjusting the position, rotation and mask settings in the sub-comps, and the re-orientation values in the Skybox Converter effects. Hopefully, once you find the correct adjustments for your individual camera, they should remain the same for all of your footage, unless you want to mask around an object crossing the stitch boundary. More info on those types of fixes can be found here. It took me five minutes to export 60 seconds of 360 video using this approach, and there is no stabilization or other automatic image analysis.
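For anyone curious what the converter is doing under the hood, the core of a fisheye-to-equirectangular remap is just spherical trigonometry. This NumPy sketch is a bare-bones illustration of my own, assuming an idealized, centered equidistant lens (the roughly 190-degree field of view is an adjustable guess), not Mettle's algorithm; it maps a single fisheye hemisphere into an equirectangular frame and leaves uncovered pixels black:

```python
import numpy as np

def fisheye_to_equirect(fisheye: np.ndarray, out_w: int = 2048, fov_deg: float = 190.0) -> np.ndarray:
    """Remap one circular fisheye image (centered, equidistant projection, looking
    down +z) into an out_w x out_w//2 equirectangular frame, nearest-neighbor."""
    out_h = out_w // 2
    h, w = fisheye.shape[:2]
    cx, cy, radius = w / 2.0, h / 2.0, min(w, h) / 2.0

    # Longitude/latitude for every output pixel.
    lon = (np.arange(out_w) / out_w) * 2 * np.pi - np.pi      # -pi .. pi
    lat = np.pi / 2 - (np.arange(out_h) / out_h) * np.pi      # pi/2 .. -pi/2
    lon, lat = np.meshgrid(lon, lat)

    # Unit view direction for each pixel (z is the lens axis).
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)

    # Equidistant fisheye: distance from image center is proportional to the
    # angle away from the optical axis, up to half the lens field of view.
    theta = np.arccos(np.clip(z, -1.0, 1.0))
    phi = np.arctan2(y, x)
    r = theta / np.radians(fov_deg / 2.0)

    u = (cx + r * radius * np.cos(phi)).round().astype(int)
    v = (cy + r * radius * np.sin(phi)).round().astype(int)

    valid = (r <= 1.0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out = np.zeros((out_h, out_w) + fisheye.shape[2:], dtype=fisheye.dtype)
    out[valid] = fisheye[v[valid], u[valid]]
    return out
```

The seam feathering and overlap blending between the two lenses are exactly the parts the dedicated tools handle for you, which is why the template approach is still the practical route.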

Video Stitch Studio

Orah makes Video-Stitch Studio, a similar product with a slightly different feature set and approach. One limitation I couldn't find a way around is that the program expects the various fisheye source images to be in separate files, and unlike AVP I couldn't get the source cropping tool to work without first rendering the dual fisheye image into separate square video files. There should be a way to avoid that step, but I couldn't find one. (You can use the crop effect to remove 1920 pixels from one side or the other to make the conversions in Media Encoder relatively quickly.) Splitting the source file and rendering separate fisheye spheres adds a workflow step and render time, and my one-minute clip took 11 minutes to export. This is a slower option, which might be significant if you have hours of footage to process instead of minutes.
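The same split can also be scripted instead of run through Media Encoder. This is a hedged Python sketch that calls FFmpeg, assuming FFmpeg is installed and the source is a 3840x1920 side-by-side dual-fisheye file (adjust the dimensions to whatever your camera actually records); it writes each lens out as its own square clip:

```python
import subprocess

SRC = "dual_fisheye.mp4"  # hypothetical source clip; change name and crop sizes to match your footage

# Left and right 1920x1920 halves of a 3840x1920 side-by-side dual-fisheye frame.
crops = {
    "lens_front.mp4": "crop=1920:1920:0:0",
    "lens_back.mp4":  "crop=1920:1920:1920:0",
}

for out_name, crop_filter in crops.items():
    # -vf applies the crop; -an drops audio since only the picture is needed for stitching.
    subprocess.run(
        ["ffmpeg", "-y", "-i", SRC, "-vf", crop_filter, "-an", out_name],
        check=True,
    )
```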

Clearly, there are a variety of ways to get your raw footage stitched for editing. The results vary greatly between the different programs, so I made a video to compare the different stitching options on the same source clip. My first attempt was a locked-off shot in the park, but that shot was too simple to show the differences, and it didn't allow for comparison of the stabilization options available in some of the programs. So I shot some footage from a moving vehicle to see how well the motion and shake would be handled by the various programs. The result is now available on YouTube, fading between each of the five labeled options over the course of the minute-long clip. I would categorize this as testing how well the various applications handle non-ideal source footage, which happens a lot in the real world.

I didn’t feel that any of the stitching options were perfect solutions, so hopefully we will see further developments in that regard in the future. You may want to explore them yourself to determine which one best meets your needs. Once your footage is correctly mapped to equirectangular projection, ideally in a 2:1 aspect ratio, and the projects are rendered and exported (I recommend Cineform or DNxHR), you are ready to edit your processed footage.

Launch Premiere Pro and import your footage as you normally would. If you are using the Skybox Player plugin, turn on Adobe Transmit with the HMD selected as the only dedicated output (in the Skybox VR configuration window, I recommend setting the hot corner to top left, to avoid accidentally hitting the start menu, desktop hide or application close buttons during preview). In the playback monitor, you may want to right click the wrench icon and select Enable VR to preview a pan-able perspective of the video, instead of the entire distorted equirectangular source frame. You can cut, trim and stack your footage as usual, and apply color corrections and other non-geometry-based effects.

In version 11.1.2 of Premiere, there is basically one VR effect (VR Projection), which allows you to rotate the video sphere along all three axes. If you have the Skybox Suite for Premiere installed, you will have some extra VR effects. The Skybox Rotate Sphere effect is basically the same. You can add titles and graphics and use the Skybox Project 2D effect to project them into the sphere where you want. Skybox also includes other effects for blurring and sharpening the spherical video, as well as denoise and glow. If you have Kolor AVP installed, that adds two more effects. GoPro VR Horizon is similar to the other sphere-rotation effects, but it allows you to drag the image around in the monitor window to rotate it, instead of manually adjusting the axis values, so it is faster and more intuitive. The GoPro VR Reframe effect is applied to equirectangular footage to extract a flat perspective from within it; the field of view can be adjusted and rotated around all three axes.

Most of the effects are pretty easy to figure out, but Skybox Project 2D may require some experimentation to get the desired results. Avoid placing objects near the edges of the 2D frame that you apply it to, to keep them facing toward the viewer. The rotate projection values control where the object is placed relative to the viewer. The rotate source values rotate the object at the location it is projected to. Personally, I think they should be placed in the reverse order in the effects panel.

Encoding the final output is not difficult; just send it to Adobe Media Encoder using either the H.264 or H.265 format. Make sure the "Video is VR" box is checked at the bottom of the Video Settings pane, and in this case that the frame layout is set to monoscopic. There are presets for some of the common frame sizes, but I would recommend lowering the bitrates, at least if you are using Gear 360 footage. Also, if you have ambisonic audio, set channels to 4.0 in the audio pane.

Once the video is encoded, you can upload it directly to Facebook. If you want to upload to YouTube, exports from AME with the VR box checked should work fine, but for videos from other sources you will need to modify the metadata with this app here.  Once your video is uploaded to YouTube, you can embed it on any webpage that supports 2D web videos. And YouTube videos can be streamed directly to your Rift headset using the free DeoVR video player.

That should give you a 360-video production workflow from start to finish. I will post more updated articles as new software tools are developed, and as I get new 360 cameras with which to test and experiment.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Michael Kammes’ 5 Things – Video editing software

By Randi Altman

Technologist Michael Kammes is back with a new episode of 5 Things, which focuses on simplifying film, TV and media technology. The web series answers, according to Kammes, the “five burning tech questions” people might have about technologies and workflows in the media creation space. This episode tackles professional video editing software being used (or not used) in Hollywood.

Why is now the time to address this segment of the industry? “The market for NLEs is now more crowded than it has been in over 20 years,” explains Kammes. “Not since the dawn of modern NLEs have there been this many questions over what tools should be used. In addition, the massive price drop of NLEs, coupled with the pricing shift (monthly/yearly, as opposed to outright) has created more confusion in the market.”

In his video, Kammes focuses on Avid Media Composer, Adobe Premiere, Apple Final Cut Pro, Lightworks, Blackmagic Resolve and others.

Considering its history and use on some major motion pictures, (such as The Wolf of Wall Street), why hasn’t Lightworks made more strides in the Hollywood community? “I think Lightworks has had massive product development and marketing issues,” shares Kammes. “I rarely see the product pushed online, at user groups or in forums.  EditShare, the parent company of LightWorks, also deals heavily in storage, so one can only assume the marketing dollars are being spent on larger ticket items like professional and enterprise storage over a desktop application.”

What about Resolve, considering its updated NLE tools and the acquisition of audio company Fairlight? Should we expect to see more Resolve being used as a traditional NLE? “I think in Hollywood, adoption will be very, very slow for creative editorial, and unless something drastic happens to Avid and Adobe, Resolve will remain in the minority. For dailies, transcodes or grading, I can see it only getting bigger, but I don’t see larger facilities adopting Resolve for creative editorial. Outside of Hollywood, I see it gaining more traction. Those outlets have more flexibility to pivot and try different tools without the locked-in TV and feature film machine in Hollywood.”

Check it out:

Jimmy Helm upped to editor at The Colonie

The Colonie, the Chicago-based editorial, visual effects and motion graphics shop, has promoted Jimmy Helm to editor. Helm has honed his craft over the past seven years, working with The Colonie’s senior editors on a wide range of projects. Most recently, he has been managing ongoing social media work with Facebook and conceptualizing and editing short format ads. Some clients he has collaborated with include Lyft, Dos Equis, Capital One, Heineken and Microsoft. He works on both Avid Media Composer and Adobe Premiere.

A filmmaking major at Columbia College Chicago, Helm applied for an internship at The Colonie in 2010. Six months later he was offered a full-time position as an assistant editor, working alongside veteran cutter Tom Pastorelle on commercials for McDonald’s, Kellogg’s, Quaker and Wrangler. During this time, Helm edited numerous projects on his own, including broadcast commercials for Centrum and Kay Jewelers.

“Tom is incredible to work with,” says Helm. “Not only is he a great editor but a great person. He shared his editorial methods and taught me the importance of bringing your instinctual creativity to the process. I feel fortunate to have had him as a mentor.”

In 2014, Helm was promoted to senior assistant editor and continued to hone his editing skills while taking on a leadership role.

“My passion for visual storytelling began when I was young,” says Helm. “Growing up in Memphis, I spent a great deal of time watching classic films by great directors. I realize now that I was doing more than watching — I was studying their techniques and, particularly, their editing styles. When you’re editing a scene, there’s something addictive about the rhythm you create and the drama you build. I love that I get to do it every day.”

Helm joins The Colonie’s editorial team, comprised of Joe Clear, Keith Kristinat, Pastorelle and Brian Salazar, along with editors and partners Bob Ackerman and Brian Sepanik.

 

 

Quick Chat: Lucky Post’s Sai Selvarajan on editing Don’t Fear The Fin

Costa, maker of polarized sunglasses, has teamed up with Ocearch, a group of explorers and scientists dedicated to generating data on the movement, biology and health of sharks, to educate people on how saving the sharks will save our oceans. In a 2.5-minute video, three shark attack survivors (Mike Coots, Paul de Gelder and Lisa Mondy) explain why they are now on a quest to help save the very thing that attacked them, took their limbs and almost took their lives.

The video, edited by Lucky Post’s Sai Selvarajan for agency McGarrah Jessee and Rabbit Food Studios, tells the viewer that the number of sharks killed by long-lining, illegal fishing and the shark finning trade exceeds human shark attacks by millions. And as go the sharks, so go our oceans.

For editor Selvarajan, the goal was to strike a balance between the intimate stories and the global message, built from striking footage filmed in Hawaii's surf mecca, the North Shore. "Stories inside stories," is how Selvarajan describes it; he reveres the subjects' dedication to saving the misunderstood creatures despite their life-changing encounters.

We spoke with the Texas-based editor to find out more about this project.

How early on did you become involved in the project?
I got a call when the project was greenlit, and Jeff Bednarz, the creative head at Rabbit Foot, walked me through the concept. He wanted to showcase the whole teamwork aspect of Costa, Ocearch and the shark survivors all coming together and using their skills to save sharks.

Did working on Don’t Fear The Fin change your perception of sharks?
Yes, it did. Before working on the project I had no idea that sharks were in trouble. After working on Don't Fear the Fin, I'm totally for shark conservation, and I admire anyone who is out there fighting for the species.

What equipment did you use for the edit?
Adobe Premiere on a Mac tower.

What were the biggest creative challenges?
The biggest creative challenge was how to tell the shark survivors’ stories and then the shark’s story, and then Ocearch/Costa’s mission story. It was stories inside stories, which made it very dense and challenging to cut into a three-minute story. I had to do justice to all the stories and weave them into each other. The footage was gorgeous, but there had to be a sense of gravity to it all, so I used pacing and score to give us that gravity.

What do you think of the fact that sharks are not shown much in the film?
We made a conscious effort to show sharks and people in the same shot. The biggest misconception is that sharks are these big man-eating monsters. Seeing people diving with the sharks tied them to our story and the mission of the project.

What’s your biggest fear, and how would/can you overcome it?
Snakes are my biggest fear. I’m not sure how I’ll ever overcome it. I respect snakes and keep a safe distance. Living in Texas, I’ve read up on which ones are poisonous, so I know which ones to stay away from. But if I came across a rat snake in the wild, I’m sure to jump 20 feet in the air.

Check out the full video below…

 

Adobe acquires Mettle’s SkyBox tools for 360/VR editing, VFX

Adobe has acquired all SkyBox technology from Mettle, a developer of 360-degree and virtual reality software. As more media and entertainment companies embrace 360/VR, there is a need for seamless, end-to-end workflows for this new and immersive medium.

The Skybox toolset is designed exclusively for post production in Adobe Premiere Pro CC and Adobe After Effects CC and complements Adobe Creative Cloud’s existing 360/VR cinematic production technology. Adobe will integrate SkyBox plugin functionality natively into future releases of Premiere Pro and After Effects.

To further strengthen Adobe’s leadership in 360-degree and virtual reality, Mettle co-founder Chris Bobotis will join Adobe, bringing more than 25 years of production experience to his new role.

“We believe making virtual reality content should be as easy as possible for creators. The acquisition of Mettle SkyBox technology allows us to deliver a more highly integrated VR editing and effects experience to the film and video community,” says Steven Warner, VP of digital video and audio at Adobe. “Editing in 360/VR requires specialized technology, and as such, this is a critical area of investment for Adobe, and we’re thrilled Chris Bobotis has joined us to help lead the charge forward.”

“Our relationship started with Adobe in 2010 when we created FreeForm for After Effects, and has been evolving ever since. This is the next big step in our partnership,” says Bobotis, now director, professional video at Adobe. “I’ve always believed in developing software for artists, by artists, and I’m looking forward to bringing new technology and integration that will empower creators with the digital tools they need to bring their creative vision to life.”

Introduced in April 2015, SkyBox was the first plugin to leverage Mettle’s proprietary 3DNAE technology, and its success quickly led to additional development of 360/VR plugins for Premiere Pro and After Effects.

Today, Mettle’s plugins have been adopted by companies such as The New York Times, CNN, HBO, Google, YouTube, Discovery VR, DreamWorks TV, National Geographic, Washington Post, Apple and Facebook, as well as independent filmmakers and YouTubers.

Comprimato plug-in manages Ultra HD, VR files within Premiere

Comprimato, makers of GPU-accelerated storage compression and video transcoding solutions, has launched Comprimato UltraPix. This video plug-in offers proxy-free, auto-setup workflows for Ultra HD, VR and more on hardware running Adobe Premiere Pro CC.

The challenge for post facilities finishing in 4K or 8K Ultra HD, or working on immersive 360­ VR projects, is managing the massive amount of data. The files are large, requiring a lot of expensive storage, which can be slow and cumbersome to load, and achieving realtime editing performance is difficult.

Comprimato UltraPix addresses this by building on JPEG2000, a compression format that offers high image quality (including a mathematically lossless mode) and generates smaller versions of each frame as an inherent part of the compression process. Comprimato UltraPix then delivers the file at a size the user's hardware can accommodate.

Once Comprimato UltraPix is loaded on any hardware, it configures itself with auto-setup, requiring no specialist knowledge from the editor who continues to work in Premiere Pro CC exactly as normal. Any workflow can be boosted by Comprimato UltraPix, and the larger the files the greater the benefit.

Comprimato UltraPix is a multi-platform video processing software for instant video resolution in realtime. It is a lightweight, downloadable video plug-in for OS X, Windows and Linux systems. Editors can switch between 4K, 8K, full HD, HD or lower resolutions without proxy-file rendering or transcoding.
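That resolution switching falls naturally out of how JPEG2000's wavelet decomposition works: each decomposition level halves the frame in both dimensions, so a decoder can stop early and get a clean lower-resolution proxy without any separate proxy files. Here is a minimal Python sketch of that reasoning, my own illustration rather than Comprimato's code:

```python
import math

def reduction_level(full_width: int, full_height: int,
                    target_width: int, target_height: int) -> int:
    """Pick how many JPEG2000 resolution levels to discard so the decoded frame
    is no larger than the target canvas. Each discarded level halves both dimensions."""
    level = 0
    w, h = full_width, full_height
    while (w > target_width or h > target_height) and w > 1 and h > 1:
        w, h = math.ceil(w / 2), math.ceil(h / 2)
        level += 1
    return level

# An 8K frame decoded for an HD timeline: two discarded levels lands exactly on 1920x1080.
print(reduction_level(7680, 4320, 1920, 1080))  # -> 2
```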

“JPEG2000 is an open standard, recognized universally, and post production professionals will already be familiar with it as it is the image standard in DCP digital cinema files,” says Comprimato founder/CEO Jiří Matela. “What we have achieved is a unique implementation of JPEG2000 encoding and decoding in software, using the power of the CPU or GPU, which means we can embed it in realtime editing tools like Adobe Premiere Pro CC. It solves a real issue, simply and effectively.”

“Editors and post professionals need tools that integrate ‘under the hood’ so they can focus on content creation and not technology,” says Sue Skidmore, partner relations for Adobe. “Comprimato adds a great option for Adobe Premiere Pro users who need to work with high-resolution video files, including 360 VR material.”

Comprimato UltraPix plug-ins are currently available for Adobe Premiere Pro CC and Foundry Nuke and will be available on other post and VFX tools soon. You can download a free 30-day trial or buy Comprimato UltraPix for $99 a year.

Frame.io 2.0 offers 100 new features, improvements for collaboration

Frame.io, developers of the video review and collaboration platform for content creators, has unveiled Frame.io 2.0, an upgrade offering over 100 new features and improvements. The new version features client Review Pages, which expand content review and sharing. In addition, the release offers deeper workflow integration with Final Cut Pro X and Avid Media Composer, plus a completely re-engineered player.

“Frame.io 2 is based on everything we’ve learned from our customers over the past two years and includes our most-requested features,” says Emery Wells, CEO of Frame.io.

Just as internal teams can collaborate using Frame.io’s comprehensive annotation and feedback tools, clients can now provide detailed feedback on projects with Review Pages, which is designed to make the sharing experience simple, with no log-in required.

Review Pages give clients the same commenting ability as collaborators, without exposing them to the full Frame.io interface. Settings are highly configurable to meet specific customer needs, including workflow controls (approvals), security (password protection, setting expiration date) and communication (including a personalized message for the client).

The Review Pages workflow simplifies the exchange of ideas, consolidating feedback in a succinct manner. For those using Adobe Premiere or After Effects, those thoughts flow directly into the timeline, where you can immediately take action and upload a new version. Client Review Pages are also now available in the Frame.io iOS app, allowing collaboration via iPhones and iPads.

Exporting and importing comments and annotations into Final Cut Pro X and Media Composer has gotten easier with the upgraded, free desktop companion app, which allows users to open downloaded comment files and bring them into the editor as markers. There is now no need to toggle between Frame.io and the NLE.

Users can also now copy and paste comments from one version to another. The information is exportable in a variety of formats, whether that’s a PDF containing a thumbnail, timecode, comment, annotation and completion status that can be shared and reviewed with the team or as a .csv or .xml file containing tons of additional data for further processing.

Also new to Frame.io 2.0 is a SMPTE-compliant source timecode display that works with both non-drop and drop-frame timecode. Users can now download proxies straight from Frame.io.

The Frame.io 2.0 player page now offers better navigation, efficiency and accountability. New “comment heads” allow artists to visually see who left a comment and where so they can quickly find and prioritize feedback on any given project. Users can also preview the next comment, saving them time when one comment affects another.

The new looping feature, targeting motion and VFX artists, lets users watch the same short clip on loop. You can even select a range within a clip to really dive in deep. Frame.io 2.0’s asset slider makes it easy to navigate between assets from the player page.

The new Frame.io 2.0 dashboard has been redesigned for speed and simplicity. Users can manage collaborators for any given project from the new collaborator panel, where adding an entire team to a project takes one click. A simple search in the project search bar makes it easy to bring up a project. The breadcrumb navigation bar tracks every move deeper into a sub-sub-subfolder, helping artists stay oriented when getting lost in their work. The new list view option with mini-scrub gives users a bird's-eye view of everything happening in Frame.io 2.0.

Copying and moving assets between projects takes up no additional storage, even when users make thousands of copies of a clip or project. Frame.io 2.0 also now offers the ability to publish direct to Vimeo, with full control over publishing options, so pros can create the description and set privacy permissions, right then and there.

Review: Nvidia’s new Pascal-based Quadro cards

By Mike McCarthy

Nvidia has announced a number of new professional graphic cards, filling out their entire Quadro line-up with models based on their newest Pascal architecture. At the absolute top end, there is the new Quadro GP100, which is a PCIe card implementation of their supercomputer chip. It has similar 32-bit (graphics) processing power to the existing Quadro P6000, but adds 16-bit (AI) and 64-bit (simulation). It is intended to combine compute and visualization capabilities into a single solution. It has 16GB of new HBM2 (High Bandwidth Memory) and two cards can be paired together with NVLink at 80GB/sec to share a total of 32GB between them.

This powerhouse is followed by the existing P6000 and P5000 announced last July. The next addition to the line-up is the single-slot VR-ready Quadro P4000. With 1,792 CUDA cores running at 1200MHz, it should outperform a previous-generation M5000 for less than half the price. It is similar to its predecessor the M4000 in having 8GB RAM, four DisplayPort connectors, and running on a single six-pin power connector. The new P2000 follows next with 1024 cores at 1076MHz and 5GB of RAM, giving it similar performance to the K5000, which is nothing to scoff at. The P1000, P600 and P400 are all low-profile cards with Mini-DisplayPort connectors.

All of these cards run on PCIe Gen3 x16 and use DisplayPort 1.4, which adds support for HDR and DSC. They all support 4Kp60 output, with the higher-end cards allowing 5K and 4Kp120 displays. Nvidia also continues to push forward on high-resolution display walls, allowing up to 32 synchronized displays to be connected to a single system, provided you have enough slots for eight Quadro P4000 cards and two Quadro Sync II boards.

Nvidia also announced a number of Pascal-based mobile Quadro GPUs last month, with the mobile P4000 having roughly comparable specifications to the desktop version. But you can read the paper specs for the new cards elsewhere on the Internet. More importantly, I have had the opportunity to test out some of these new cards over the last few weeks, to get a feel for how they operate in the real world.

DisplayPorts

Testing
I was able to run tests and benchmarks with the P6000, P4000 and P2000 against my current M6000 for comparison. All of these tests were done on a top-end Dell 7910 workstation with a variety of display outputs, primarily using Adobe Premiere Pro, since I am a video editor after all.

I ran a full battery of benchmark tests on each of the cards using Premiere Pro 2017. I measured both playback performance and encoding speed, monitoring CPU and GPU use, as well as power usage throughout the tests. I had HD, 4K, and 6K source assets to pull from, and tested monitoring with an HD projector, a 4K LCD and a 6K array of TVs. I had assets that were RAW R3D files, compressed MOVs and DPX sequences. I wanted to see how each of the cards would perform at various levels of production quality and measure the differences between them to help editors and visual artists determine which option would best meet the needs of their individual workflow.
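For anyone wanting to reproduce that kind of monitoring, GPU load and power draw can be polled from the command line while a test runs. Here is a rough Python logger of my own using nvidia-smi's query mode (not the exact tooling used for these tests); each playback or encode pass can then be lined up against its logged GPU load:

```python
import csv
import subprocess
import time

# Poll GPU utilization and power draw once per second and append to a CSV.
QUERY = ["nvidia-smi",
         "--query-gpu=timestamp,utilization.gpu,power.draw",
         "--format=csv,noheader"]

with open("gpu_log.csv", "a", newline="") as log:
    writer = csv.writer(log)
    for _ in range(300):  # roughly five minutes per test pass
        line = subprocess.check_output(QUERY, text=True).strip()
        writer.writerow([part.strip() for part in line.split(",")])
        time.sleep(1)
```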

I started with the intuitive expectation that the P2000 would be sufficient for most HD work, but that a P4000 would be required to effectively handle 4K. I also assumed that a top-end card would be required to play back 6K files and split the image between my three Barco Escape formatted displays. And I was totally wrong.

Aside from the higher-end options within Premiere’s Lumetri-based color corrector, all of the cards were fully capable of every editing task I threw at them. To be fair, the P6000 usually renders out files about 30 percent faster than the P2000, but that is a minimal difference compared to the difference in cost. Even the P2000 was able to play back my uncompressed 6K assets onto my array of Barco Escape displays without issue. It was only when I started making heavy color changes in Lumetri that I began to observe any performance differences at all.

Lumetri

Color correction is an inherently parallel, graphics-related computing task, so this is where GPU processing really shines. Premiere’s Lumetri color tools are based on SpeedGrade’s original CUDA processing engine, and they can really harness the power of the higher-end cards. The P2000 can make basic corrections to 6K footage, but it is possible to max out the P6000 with HD footage if I adjust enough different parameters. Fortunately, most people aren’t looking for more stylized footage than the 300 had, so in this case my original assumptions seem to be accurate. The P2000 can handle reasonable corrections to HD footage, the P4000 is probably a good choice for VR and 4K footage, while the P6000 is the right tool for the job if you plan to do a lot of heavy color tweaking or are working on massive frame sizes.

The other way I expected to be able to measure a difference between the cards was in playback while rendering in Adobe Media Encoder. By default, Media Encoder pauses exports during timeline playback, but this behavior can be disabled by reopening Premiere after queuing your encode. Even with careful planning to avoid reading from the same disks the encoder was accessing, I was unable to get significantly better playback performance from the P6000 compared to the P2000. This says more about the software than it says about the cards.

P6000

The largest difference I was able to consistently measure across the board was power usage, with each card averaging about 30 watts more as I stepped up from the P2000 to the P4000 to the P6000. But they all are far more efficient than the previous M6000, which frequently sucked up an extra 100 watts in the same tests. While “watts” may not be a benchmark most editors worry too much about, among other things it does equate to money for electricity. Lower wattage also means less cooling is needed, which results in quieter systems that can be kept closer to the editor without distracting from the creative process or interfering with audio editing. It also allows these new cards to be installed in smaller systems with smaller power supplies, using up fewer power connectors. My HP Z420 workstation only has one 6-pin PCIe power plug, so the P4000 is the ideal GPU solution for that system.
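To put those wattage numbers in perspective, here is a minimal back-of-the-envelope sketch of the electricity cost. The rate and usage hours are assumptions chosen purely for illustration, not figures from my testing.

```python
# Rough annual electricity cost of the extra GPU draw measured above.
# Assumptions (not measured): $0.15 per kWh, 8 editing hours a day, 250 days a year.
RATE_PER_KWH = 0.15
HOURS_PER_YEAR = 8 * 250

def annual_cost(extra_watts: float) -> float:
    """Cost of drawing `extra_watts` more power during working hours for a year."""
    kilowatt_hours = extra_watts / 1000 * HOURS_PER_YEAR
    return kilowatt_hours * RATE_PER_KWH

print(f"Extra 30W (one step up the Quadro line): ${annual_cost(30):.2f} per year")
print(f"Extra 100W (M6000 vs. the Pascal cards): ${annual_cost(100):.2f} per year")
```

Under those assumptions, the step between adjacent Quadro models is only a few dollars a year, while replacing an M6000 saves closer to $30 a year, before counting the reduced cooling load.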

Summing Up
It appears that we have once again reached a point where hardware processing capabilities have surpassed the software’s capacity to use them, at least within Premiere Pro. This leads to the cards performing very similarly to one another in most of my tests, but true 3D applications might reveal much greater differences in their performance. Further optimization of the CUDA implementation in Premiere Pro might also lead to better use of these higher-end GPUs in the future.


Mike McCarthy is an online editor and workflow consultant with 10 years of experience on feature films and commercials. He has been on the forefront of pioneering new solutions for tapeless workflows, DSLR filmmaking and now multiscreen and surround video experiences. If you want to see more specific details about performance numbers and benchmark tests for these Nvidia cards, check out techwithmikefirst.com.

Review: Apple’s new MacBook Pro

By Brady Betzel

What do you need to know about the latest pro laptop from Apple? Well, the MacBook Pro is fast and light; the new Touch Bar is handy and sharp but not fully realized; the updated keys on the keyboard are surprisingly great; and working with ProRes QuickTime files in resolutions higher than 1920×1080 inside of FCP X, or any NLE for that matter, is blazing fast.

When I was tasked with reviewing the new MacBook Pro, I came into it with an open mind. After all, I did read a few other reviews that weren’t exactly glowing, but I love speed and innovation among professional workstation computers, so I was eager to test it myself.

I am pretty open-minded when it comes to operating systems and hardware. I love Apple products and I love Windows-based PCs. I think both have their place in our industry, and to be quite honest it’s really a bonus for me that I don’t rely heavily on one OS or get too tripped up by the Command key vs. Windows/Alt key difference.

Let’s start with the call I had with the Apple folks as they gave me the lowdown on the new MacBook Pro. The Apple reps were nice, energetic, knowledgeable and extremely helpful. While I love Apple products, including this laptop, it’s not the be-all-end-all.

The Touch Bar is nice, but not a revolution. It feels like the first step in an evolution, a version 1 of an innovation that I am excited to see more of in later iterations. When I talked with the Apple folks they briefed me on what Tim Cook showed off in the reveal: emoji buttons, wide gamut display, new speakers and USB-C/Thunderbolt 3 connectivity.

NLEs
They had an FCPX expert on the call, which was nice considering I planned on reviewing the MacBook Pro with a focus on the use of nonlinear editing apps, such as Adobe Premiere Pro, Avid Media Composer and Blackmagic’s Resolve. Don’t get me wrong, FCPX is growing on me — it’s snappy jumping around the timeline with ProRes 5K footage; assigning roles is something I wish every other app would pick up on; and the timeline is more of a breeze to use with the latest update.

The other side to this is that in my 13 years of working in television post I have never worked on a show that primarily used FCP or FCPX to edit or finish. This doesn’t mean I don’t like the NLE, it simply means I haven’t relied on it in a professional working environment. Like I said, I really like the road it’s heading down, and if it works its way into mainstream broadcast or streaming platforms a little more, I am sure I will see it more frequently.

Furthermore, with the ever-growing reduction in reliance on groups of editors and finishing artists, apps like FCPX are poised to shine with their innovation. After all that blabbering, in this review I will touch on FCPX, but I really wanted to see how the MacBook Pro performed with the pro NLEs I encounter the most.

Specs
Let’s jump into the specs. I was sent a top-of-the-line 15-inch MacBook Pro with Touch Bar, which costs $3,499 if configured online. It comes with a quad-core Intel Core i7 2.9GHz (up to 3.8GHz using Turbo Boost) processor, 16GB of 2133MHz memory, a 1TB PCIe SSD and a Radeon Pro 460 with 4GB of memory. It’s loaded. I think the only thing that can actually be upgraded beyond this configuration would be to include a 2TB hard drive, which would add another $800 to the price tag.

Physically, the MacBook Pro is awesome — very sturdy, very thin and very light. It feels great when holding it and carrying it around. Apple even sent along a Thunderbolt 3 (USB-C) to Thunderbolt 2 adapter, which costs an extra $29, and a USB-C to Lightning cable that costs another $29.

So yes, it feels great. Apple has made a great new MacBook Pro. Is it worth upgrading if you have a new-ish MacBook Pro at home already? Probably not, unless the Touch Bar really gets you going. The speed is not too far off from the previous version. However, if you have a lot of Thunderbolt 3/USB-C-connected peripherals, or plan on moving to them, then it is a good upgrade.

Testing
I ran some processor/graphics card intensive tests while I had the new MacBook Pro and came to the conclusion that FCPX is not that much faster than Adobe Premiere Pro CC 2017 when working with non-ProRes-based media. Yes, FCPX tears through ProRes QuickTimes if you already have your media in that format. What about if you shoot on a camera like the Red and don’t want to transcode to a more edit-friendly codec? Well, that is another story. To test out my NLEs, I grabbed a sample Red 6K 6144×3160 23.98fps clip from the Red sample footage page, strung out a 10-minute-long sequence in all the NLEs and exported both a color-graded version and a non-color-graded version as ProRes HQ QuickTimes files matching the source file’s specs.

In order to work with Red media in some of the NLEs, you must download a few patches: for FCPX you must install the Red Apple workflow installer and for Media Composer you must install the Red AMA plug-in. Premiere doesn’t need anything extra.

Test 1: Red 6K 6144×3160 23.98fps R3D — 10-minute sequence (no color grade or FX) exported as ProRes HQ matching the source file’s specs. Premiere > Media Encoder = one hour, 55 minutes. FCPX = one hour, 57 minutes. Media Composer = two hours, 42 minutes (Good news: Media Composer’s interface and fonts display correctly on the new display).

You’ll notice that Resolve is missing from this list and that is because I installed Resolve 12.5.4 Studio but then realized my USB dongle won’t fit into the USB-C port — and I am not buying an adapter for a laptop I do not get to keep. So, unfortunately, I didn’t test a true 6K ProRes HQ export from Resolve but in the last test you will see some Resolve results.

Overall, there was not much difference in speeds. In fact, I felt that Premiere Pro CC 2017 played the Red file a little smoother and at a higher frames-per-second count. FCPX struggled a little. Granted, a 6K Red file is one of the harder files for a CPU to process with no debayer settings enabled, but Apple touts this as a semi-replacement for the Mac Pro for the time being, and I am holding them to their word.

Test 2: Red 6K 6144×3160 23.98fps R3D — 10-minute color-graded sequence exported as ProRes HQ matching the source file’s specs. Premiere > Media Encoder = one hour, 55 minutes. FCPX = one hour, 58 minutes. Media Composer = two hours, 34 minutes.

It’s important to note that the GPU definitely helped out in both Adobe Premiere and FCPX. Little to no extra time was added on the ProRes HQ export. I was really excited to see this, as sometimes without a good GPU, resizing and GPU-accelerated effects like color correction will slow your system to a snail’s pace, if it doesn’t fully crash. Media Composer surprisingly sped up its export when I added the color grade as a new color layer in the timeline. By putting the color correction on its own layer, Avid might have forced the Radeon to kick in and help push the file out. I’m not really sure what that’s about, to be honest.

Test 3: Red 6K 6144×3160 23.98fps R3D — 10-minute color-graded sequence resized to 1920×1080 on export as ProRes HQ. Premiere > Media Encoder = one hour, 16 minutes. FCPX = one hour, 14 minutes. Media Composer = one hour, 48 minutes. Resolve = one hour, 16 minutes

So after these tests, it seems that exporting and transcoding speeds are all about the same. It doesn’t really come as too big of a surprise that all the NLEs, except for Media Composer, processed the Red file in the same amount of time. Regardless of the NLE, you would need to knock the debayering down to half or lower to start playing these clips in realtime in a timeline. If you have the time to transcode to ProRes, you will get much better playback and rendering speed results. Obviously, transcoding all of your files to a codec like ProRes or Avid DNx takes way more time up front, but could be worth it if you’re crunched for time on the back end.

In addition to Red 6K files, I also tested ProRes HQ 4K files inside of Premiere and FCPX, and both played them extremely smoothly without hiccups, which is pretty amazing. Just a few years ago I was having trouble playing down 10:1 compressed files in Media Composer and now I can playback superb-quality 4K files without a problem, a tremendous tip of the hat to technology and, specifically, Apple for putting so much power in a thin and light package.

While I was in the mood to test speeds, I hooked up a Thunderbolt 2 SSD RAID (OWC Thunderbay 4 mini) configured in RAID-0 to see what kind of read/write bandwidth I would get running through the Apple Thunderbolt 3 to Thunderbolt 2 adapter. I used both AJA System Test as well as the Blackmagic Disk Speed Test. The AJA test reported a write speed of 929MB/sec. and read speed of 1120MB/sec. The Blackmagic test reported a write speed of 683.1MB/sec. and 704.7MB/sec. from different tests and a read speed of 1023.3MB/sec. I set the test file for both at 4GB. These speeds are faster than what I have previously found when testing this same Thunderbolt 2 SSD RAID on other systems.
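For readers curious about what these disk benchmarks are actually measuring, here is a minimal sketch of a sequential write/read throughput test in Python. The 4GB size mirrors the test file setting mentioned above, but the file path and chunk size are placeholders I made up, and real tools like AJA System Test and the Blackmagic Disk Speed Test are more careful (for one thing, they avoid letting the OS cache skew the read number).

```python
import os
import time

TEST_FILE = "speedtest.bin"   # hypothetical file on the drive being tested
FILE_SIZE = 4 * 1024**3       # 4GB, matching the test file size I used
CHUNK = 16 * 1024**2          # 16MB per write/read (an assumed value)

def write_test() -> float:
    """Sequentially write the test file and return MB/sec."""
    data = os.urandom(CHUNK)
    start = time.time()
    with open(TEST_FILE, "wb") as f:
        for _ in range(FILE_SIZE // CHUNK):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())  # make sure the data really reaches the disk
    return FILE_SIZE / (time.time() - start) / 1024**2

def read_test() -> float:
    """Sequentially read the test file back and return MB/sec.
    Note: the OS page cache can inflate this number; real benchmark
    tools bypass or flush the cache before reading."""
    start = time.time()
    with open(TEST_FILE, "rb") as f:
        while f.read(CHUNK):
            pass
    return FILE_SIZE / (time.time() - start) / 1024**2

print(f"write: {write_test():.0f} MB/sec, read: {read_test():.0f} MB/sec")
os.remove(TEST_FILE)
```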

For comparison, the AJA test reported a write speed of 1921MB/sec. and read speed of 2134MB/sec. when running on the system drive. The Blackmagic test doesn’t allow for testing on the system drive.

What Else You Need to Know
So what about the other upgrades and improvements? When exporting these R3D files, I noticed the fan kicked on when resizing or adding color grading to the files. It seems the GPU kicked in and heated up, which is to be expected. The fan is not the loudest, but it is noticeable.

The battery life on the new MacBook Pro is great when just playing music, surfing the web or writing product reviews. I found that the battery lasted about two days without having to plug in the power adapter. However, when exporting QuickTimes from either Premiere or FCPX the battery life dropped — a lot. I was getting a battery life of one hour and six minutes, which is not good when your export will take two hours. Obviously, you need to plug in when doing heavy work; you don’t really have an option.

This leads me to the new USB-C/Thunderbolt 3 ports — and, yes, you still have a headphone jack (thank goodness they didn’t talk with the iPhone developers). First off, I thought the MagSafe power adapter should have won a Nobel Peace Prize. I love it. It must be responsible for saving millions of dollars in equipment when people trip over a power cord — gracefully disconnecting without breaking or pulling your laptop off the table. However, I am disappointed Apple didn’t create a new type of MagSafe cable with the USB-C port. I will miss it a lot. The good news is you can now plug in your power adapter to either side of the MacBook Pro.

Adapters and dongles will have to be purchased if you pick up a new MacBook Pro. Each time I used an external peripheral or memory card like an SD card, Tangent Ripple Color Correction panel or external hard drive, I was disappointed that I couldn’t plug them in. Nonetheless, a good Thunderbolt 3 dock is a necessity in my opinion. You could survive with dongles but my OCD starts flaring up when I have to dig around my backpack for adapters. I’m just not a fan. I love how Apple dedicated themselves to a fast I/O like USB-C/Thunderbolt 3, but I really wish they gave it another year. Just one old-school USB port would have been nice. I might have even gotten over no SD card reader.

The Touch Bar
I like it. I would even say that I love it — in the apps that are compatible. Right now there aren’t many. Adobe released an update to Adobe Photoshop that added compatibility with the Touch Bar, and it is really handy, especially when you don’t have your Wacom tablet available (or a USB dongle to attach it). I love how it puts so many levels of your tools’ functionality within immediate reach.

It has super-fast feedback. When I adjusted the contrast on the Touch Bar I found that the MacBook Pro was responding immediately. This becomes even more evident in FCPX and the latest Resolve 12.5.4 update. It’s clear Apple did their homework and made their apps like Mail and Messages work with the Touch Bar (hence emojis on the Touch Bar). FCPX has a sweet ability to scrub the timeline, zoom in to the timeline, adjust text and more from just the Touch Bar — it’s very handy, and after a while I began missing it when using other computers.
In Blackmagic’s latest DaVinci Resolve release, 12.5.4, they have added Touch Bar compatibility. If you can’t plug in your color correction panels, the Touch Bar does a nice job of easing the pain. You can do anything from contrast work to saturation, even adjust the midtones and printer lights, all from the Touch Bar. If you use external input devices a lot, like Wacom tablets or color correction panels, the Touch Bar will be right up your alley.

One thing I found missing was a simple application launcher on the Touch Bar. If you do pick up the new MacBook Pro with Touch Bar, you might want to download Touch Switcher, a free app I found via 9to5mac.com that allows you to have an app launcher on your Touch Bar. You can hide the dock, allowing you more screen real estate and the efficient use of the Touch Bar to launch apps. I am kind of surprised Apple didn’t make something like this standard.

The Display
From a purely superficial and non-scientific point of view, the newly updated P3-compatible wide-gamut display looks great… really great, actually. The colors are rich and vibrant. I did a little digging under the hood and noticed that it is an 8-bit display (data that you can find by locating the pixel depth in System Information > Graphics/Displays), which might limit the color gradations when working in a color space like P3 compared to a 10-bit display working in P3. Simply put, you have a wider range of colors in P3 but fewer shades to fill it with.

The MacBook Pro display is labeled as 32-bit color, meaning the red, green, blue and alpha channels each have 8 bits, giving a total of 32 bits. Eight-bit color gives 256 shades per color channel, while 10-bit gives 1,024 shades per channel, allowing for much smoother transitions between colors and luminance values (imagine a sky at dusk going smoothly from orange to light blue to dark blue; the more shades per channel, the smoother the gradient between lights and darks). A 10-bit display would have 30-bit color, with each channel having 10 bits.
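The shade counts above are just powers of two; here is a quick sketch of the arithmetic in Python.

```python
# Shades per channel and total displayable colors for 8-bit vs. 10-bit panels.
for bits in (8, 10):
    shades = 2 ** bits        # 256 for 8-bit, 1,024 for 10-bit
    colors = shades ** 3      # every R, G, B combination
    print(f"{bits}-bit: {shades:,} shades per channel, {colors:,} total colors")

# 8-bit:  256 shades per channel,   16,777,216 total colors
# 10-bit: 1,024 shades per channel, 1,073,741,824 total colors
```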

I tried to hook up a 10-bit display, but the supplied Thunderbolt 3 to Thunderbolt 2 dongle Apple sent me did not work with Mini DisplayPort. I did a little digging, and it seems people are generally not happy that Apple doesn’t allow this to work, especially since Thunderbolt 2 and Mini DisplayPort use the same connector. Some people have been able to get around this by daisy-chaining the display through something like a Thunderbolt 2 RAID.

While I couldn’t directly test an external display when I had the MacBook Pro, I’ve read that people have been able to push 10-bit color out of the USB-C/Thunderbolt 3 ports to an external monitor. So as long as you are at a desk with a monitor you can most likely have 10-bit color output from this system.

I reached out to Apple on the types of adapters they recommend for an external display and they suggest a USB-C to DisplayPort adapter made by Aukey. It retails for $9.99. They also recommend the USB-C to DisplayPort cable from StarTech, which retails for $39.99. Make sure you read the reviews on Amazon because the experience people have with this varies wildly. I was not able to test either of these so I cannot give my personal opinion.

Summing Up
In the end, the new MacBook Pro is awesome. If you own a recent release of the MacBook Pro and don’t have $3,500 to spare, I don’t know if this is the update you will be looking for. If you are trying to find your way around going to a Windows-based PC because of the lack of Mac Pro updates, this may ease the pain slightly. Without more than 16GB of memory and an Intel Xeon or two, however, it might actually slow you down.

The battery life is great when doing light work; it has one of the longest-lasting batteries I’ve used in a laptop. But when doing the heavy work, you need to be near an outlet. When plugged into that outlet, be careful no one yanks out your USB-C power adapter, as it might throw your MacBook Pro to the ground or break off inside.

I really do love Apple products. They typically just work. I didn’t even touch on the new Touch ID Sensor that can immediately switch you to a different profile or log you in after waking up the MacBook Pro from sleep. I love that you can turn the new MacBook Pro on and it simply works, and works fast.

The latest iteration of FCPX is awesome as well, and just because I don’t see it being used a lot professionally doesn’t mean it shouldn’t be. It’s a well-built NLE that deserves a fairer shake than it has gotten. If you are itching for an update to an old MacBook Pro and don’t mind having a dock or carrying around a bunch of dongles, then the 2016 MacBook Pro with the Touch Bar is for you.

The new MacBook Pro chews through ProRes-based media from 1920×1080 up to 4K; 6K and higher will play but might slow down. If you are a Red footage user, this new MacBook Pro works great, but you still might have to knock the debayering down a couple of notches.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Review: Mettle VR plug-ins for Adobe Premiere

By Barry Goch

I was very frustrated. I took a VR production class, I bought an LG 360 camera, but I felt like I was missing something. Then it dawned on me — I wanted to have more control. I started editing 360 videos using the VR video viewing tools in Adobe Premiere Pro, but I still lacked the control I desired. I wanted my audience to have a guided, immersive experience without having to be in a swivel chair to get the most out of my work. Then, like a bolt of lightning, it came to me — I needed to rotate the 360 video sphere. I needed to be able to reorient it to accomplish my vision, but how would I do that?

Rotate Sphere plug-in showing keyframing.

Mettle’s Skybox 360/VR Tools are exactly what I was looking for. The Rotate Sphere plug-in alone is worth the price of the entire plug-in package. With this one plug-in, you’re able to re-orient your 360 video without worrying about any technical issues — it gives you complete creative control to re-frame your 360 video — and it’s completely keyframable too! For example, I mounted my 360 camera on my ski helmet this winter and went down a ski run at Heavenly in Lake Tahoe. There are amazing views of the lake from this run, but I also needed to follow the skiers ahead of me. Plus, the angle of the slope changed and the angle to the subjects I was following changed as well. Since the camera was fixed, how could I guide the viewer? By using the Rotate Sphere plug-in from Mettle and keyframing the orientation of the shot as the slope/subject relationship changed relative to my position.
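For the technically curious, re-orienting a 360 sphere is more approachable than it sounds: in an equirectangular frame, a pure yaw (left/right pan) rotation amounts to shifting the image columns with wrap-around. The Python/NumPy sketch below is only an illustration of that idea under my own assumptions (hypothetical file names, yaw only); it is not how Mettle implements Rotate Sphere, which also handles pitch and roll and is keyframable inside Premiere.

```python
import numpy as np
import cv2

def yaw_rotate_equirect(frame: np.ndarray, degrees: float) -> np.ndarray:
    """Rotate an equirectangular 360 frame around the vertical (yaw) axis.

    Longitude maps linearly to the x axis in an equirectangular projection,
    so a yaw rotation is a horizontal wrap-around shift of the columns.
    Pitch and roll would require a full spherical remap (e.g., cv2.remap).
    """
    height, width = frame.shape[:2]
    shift = int(round(width * degrees / 360.0))
    return np.roll(frame, shift, axis=1)

# Usage sketch: re-center the viewer's "front" 90 degrees around the sphere.
frame = cv2.imread("equirect_frame.png")            # hypothetical 2:1 source frame
cv2.imwrite("equirect_rotated.png", yaw_rotate_equirect(frame, 90.0))
```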

My second favorite plug-in is Project 2D. Without the Project 2D plug-in, when you add titles to your 360 videos they become warped and you have very little control over their appearance. In Project 2D, you create your title using the built-in titler in Premiere Pro, add it to the timeline, then apply the Project 2D Mettle Skybox plug-in. Now you have complete control over the scale, rotation of the titling element and the placement of the title within the 360 video sphere. You can also use the Project 2D plug-in to composite graphics or video into your 360 video environment.

Mobius Zoom transition in action.

Rounding out the Skybox plug-in set are 360 video-aware plug-ins that every content creator needs. What do I mean by 360 video-aware? For example, when you apply a blur that is not 360 video-content-aware, it crosses the seam where the equirectangular video’s edges join together and makes the seam unseemly. With the Skybox Blur, Denoise, Glow and Sharpen plug-ins, you don’t have this problem. Just as the Rotate Sphere plug-in does the crazy math to rotate your 360 video without distortion or introducing artifacts, these plug-ins do the same.
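To make the seam issue concrete, here is a minimal sketch of one way a seam-aware blur can work: pad the equirectangular frame with wrapped copies of its own edges before filtering, so the left and right borders blur into each other instead of producing a visible line. This is only an illustration of the concept with assumed file names, not Mettle’s implementation, which also accounts for the projection’s stretching toward the poles.

```python
import cv2

def seam_aware_blur(frame, ksize: int = 31):
    """Gaussian-blur an equirectangular frame without breaking the wrap seam."""
    pad = ksize  # pad wider than the blur kernel so edge pixels see their true neighbors
    wrapped = cv2.copyMakeBorder(frame, 0, 0, pad, pad, borderType=cv2.BORDER_WRAP)
    blurred = cv2.GaussianBlur(wrapped, (ksize, ksize), 0)
    return blurred[:, pad:-pad]  # crop back to the original width

frame = cv2.imread("equirect_frame.png")   # hypothetical 2:1 source frame
cv2.imwrite("equirect_blurred.png", seam_aware_blur(frame))
```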

Transitioning between cuts in 360 video is an evolving art form. There is really no right or wrong way. Longer cuts, shorter cuts, dissolves and dips to black are some of the basic options. Now, Mettle is adding to our creative toolkit by applying their crazy math skills on transitions in 360 videos. Mettle started with their first pack of four transitions: Mobius Zoom, Random Blocks, Gradient Wipe and Iris Wipe. I used the Mobius Zoom to transition from the header card to the video and then the Iris Wipe with a soft edge to transition from one shot to the next in the linked video.

Check out this video, which uses Rotate Sphere, Project 2D, Mobius Zoom and Iris wipe effects.

New Plug-Ins
I’m pleased to be among the first to show you their second set of plug-ins specifically designed for 360 / VR video! Chroma Leaks, Light Leaks, Spherical Blurs and everyone’s favorite, Light Rays!

Mettle plug-ins work on both Mac and Windows platforms — on qualified systems — and in realtime. The Mettle plug-ins are also both mono- and stereo-aware.

The Skybox plug-in set for Adobe Premiere Pro is truly the answer I’ve been looking for since I started exploring 360 video. It’s changed the way I work and opened up a world of control that I had been wishing for. Try it for yourself by downloading a demo at www.mettle.com.


Barry Goch is currently a digital intermediate editor for Deluxe in Culver City, working on Autodesk Flame. He started his career as a camera tech for Panavision Hollywood. He then transitioned to an offline Avid/FCP editor. His resume includes Passengers, Money Monster, Eye in the Sky and Game of Thrones. His latest endeavor is VR video.

Review: The HP Z1G3 All-in-One workstation

By Brady Betzel

I’ll admit it. I’ve always been impressed with HP’s workstations — from the Z840 to the ZBook mobile workstation and now the HP Z1G3 All-in-One. Yes, I know, the HP line of workstations is not cheap. In fact, you can save quite a bit of money building your own system, but you will probably have tons of headaches unless you are very confident in your computer-building skills and don’t mind standing in the return line at Fry’s Electronics.

HP spends tons of time and money on ISV certifications for their workstations. ISV certification stands for Independent Software Vendor certification. In plain English it means that HP spends a lot of time and money making sure the hardware inside of your workstation works with the software you use. For an industry pro that means apps like Adobe’s Premiere Pro and After Effects, Avid Media Composer, Autodesk products like 3DS Max and many others.

For this review, I tested apps like Avid Media Composer, FilmLight’s Baselight for Media Composer color correction plug-in, Adobe Premiere Pro, Adobe Media Encoder and Adobe After Effects, as well as Blackmagic’s Resolve 12.5.2, which chewed through basic color correction. In terms of testing time, I typically keep a review computer system for a couple of months, but with this workstation I really wanted to test it as thoroughly as possible — I’ve had the workstation for three months and counting, and I’ve been running the system through all the appropriate paces.

I always love to review workstations like the HP Z1G3 because of the raw power they possess. While HP sent me one of the top-of-the-line Z1G3 configurations, which retails for a list price of $3,486, they have a pretty reasonable starting price of $1,349. From Intel i3, i5 and i7 configurations all the way up to the almighty Intel Xeon, the HP Z1G3 can be customized to fit into your workflow, whether you just need to check your email or color correct video from your GoPro.

Here are the specs that make up the HP Z1G3 All-in-One workstation I received:

● 23.6-inch UHD/4K non-glare and non-touch display (3840×2160)
● Intel Xeon E3-1270 v5 CPU, 3.6GHz (4 Cores / 8 Threads)
● 64GB DDR4 SODIMM 2133MHz (4 x 16GB)
● Nvidia Quadro M2000M graphics (4GB)
● Two Z Turbo drives (512GB, PCIe M.2)
● Wireless keyboard and mouse
● Two Thunderbolt 3/USB 3.1 ports
● USB charging port
● Media card reader
● DisplayPort out

As I mentioned earlier, I tested the Z1G3 with many different apps, but recently I’ve been diving deeper into color correction, and luckily for my testing this fits right in. A few of the most strenuous real-world tests for computer systems are running 3D modeling apps like Maxon Cinema 4D and color correction suites like Resolve. Of course, apps like After Effects are great tests as well, but adding nodes on nodes on nodes in Resolve will really tax your CPU, as well as your GPU.

One thing that can really set apart high-end systems like the Z1G3 is the delay when using a precision color correction panel like Tangent’s Elements or Ripple. Sometimes you will move one of the color wheel balls and half a second later the color wheel moves on screen. I tried adding a few clips and nodes on the timeline, and when using the panels I noticed no discernible delay (at least no more than what I would expect). While this isn’t a scientific test, it is crucial for folks looking to plug in external devices.

For more scientific tests I stuck to apps like Cinebench from Maxon, AJA’s System Test and Blackmagic’s Disk Speed Test. In Cinebench, the Z1G3 ranked at the top of the list when compared to similar systems. In AJA’s System Test I tested the read/write speed of the non-OS drive (basically the editing or cache drive). It sustained around 1520MB/s read and 1490MB/s write. I say “around” because I couldn’t get the AJA app to display the entire read/write numbers because of the high-resolution scaling in Windows; I tried scaling it down to 1920×1080, but no luck. In Blackmagic’s Disk Speed Test, I was running at 1560MB/s read and 1497.3MB/s write. The drive that I ran this test on is HP’s version of the M.2 PCIe SSD powered by Samsung, more affectionately known by HP as a Z Turbo drive. The only thing better at the moment would be a bunch of these drives arranged in a RAID-0 configuration. Luckily, you can do that through the Thunderbolt 3 port with some spare SSDs you have lying around.

Almost daily I ran Premiere Pro CC, Media Encoder and Resolve Studio 12.5.2. I was really happy with the performance in Premiere. When working with QuickTimes in inter-frame codecs like H.264 and AVC-HD (non-edit-friendly codecs), I was able to work without too much stuttering in the timeline. When I used intra-frame codecs like ProRes HQ from a Blackmagic Pocket Cinema Camera, Premiere worked great. I even jumped into Adobe’s Lumetri color tools while using Tangent’s Ripple external color correction panel, and it worked with little discernible delay. I did notice that Premiere had a little more delay when using the external color correction panel than Media Composer and Resolve, but that seemed to be more of a software problem rather than a workstation problem.

One of my favorite parts about using a system with an Nvidia graphics card, especially a Quadro card like the M2000M, is the ability to encode multiple versions of a file at once. Once I was done editing some timelapses in Premiere, I exported using Media Encoder. I would apply three presets I made: one square 600×600 H.264 for Instagram, one 3840×2160 H.264 for YouTube and an Animated GIF at 480×360 for Twitter. Once I told Media Encoder to encode, it ran all three exports concurrently — a really awesome feature. With the Nvidia Quadro card installed, it really sped along the export.

Media Composer
Another app I wanted to test was Media Composer 8.6.3. Overall, Media Composer ran great, except for issues with the high-resolution display. As I’ve said in previous reviews, this isn’t really the fault of HP, but more of the software manufacturers who haven’t updated their interfaces to adapt to the latest UHD displays. I had filmed a little hike I took with my five-year-old. I gave him a GoPro while I had my own. Once we got the footage back home, I imported it into Media Composer, grouped the footage and edited it using the multi-cam edit workflow.

Simply put, the multi-camera split was on the left and the clip I had in the sequence was playing simultaneously on the right. Before I grouped the footage into a multi-group, I transcoded the H.264s into DNxHD 175, an intra-frame, edit-friendly codec. The transcode ran at nearly realtime, so it took about 60 minutes to transcode a 60-minute H.264, which is not bad. In the end, I was able to edit the two-camera multi-group at 1920×1080 resolution with only minor hiccups. Occasionally, I would get caught in fast forward for a few extra seconds when J-K-L editing, but nothing that made me want to throw my keyboard or mouse against the wall.

Once done editing, I installed the FilmLight color correction plug-in for Media Composer. I had a really awesome experience coloring using Baselight in Media Composer on the Z1G3. I didn’t have any slowdowns, and the relationship between using the color correction panel and Baselight was smooth.

Resolve
The last app I tested with HP’s Z1G3 All-in-One workstation was Blackmagic’s Resolve 12.5.2. Much like my other tests, I concentrated on color correction with the Tangent Ripple and Element-Vs iOS app. I had four or five nodes going in the color correction page before I started to see a slowdown. I was using the native H.264 and ProRes HQ files from the cameras, so I didn’t make it easy for Resolve, but it still worked. Once I added a little sharpening to my clips, the HP Z1G3 really started to kick into gear. I heard the faint hum of fans, which up until this point hadn’t kicked in. This is also where the system started to slow down and become sluggish.

Summing Up
The Z1G3 is one of my favorite workstations, period. A while ago, I reviewed the previous All-in-One workstation from HP, the Z1G2, and at the time it was my favorite. One of my few complaints was that, while it was easy to fix, it was very heavy and bulky. When I opened the Z1G3 box, I immediately noticed how much lighter and streamlined the design was. It almost felt like they took away 50 percent of the bulk, which is something I really appreciate. I can tell that one of the main focuses with the Z1G3 was minimizing its footprint and weight, while increasing the power. HP really knocked it out of the park.

One of the only things I wish were different on the Z1G3 I tested is the graphics card. While the Nvidia Quadro M2000M is a great graphics card, it is a “mobile” version of a Quadro, which has 128 fewer CUDA cores and 26GB/s less bandwidth than its desktop equivalent, the M2000. I would love the option of a full-sized Quadro instead of the mobile version, but I also understand that power consumption would go up along with the form factor, so maybe I give HP a pass here.

In the end, I know everyone reading this review is saying to themselves, “I love my iMac, so why would I want the HP Z1G3?” If you are a die-hard Apple user, or you just saw the new Microsoft Surface Studio announcement, then it might be a hard sell, but I love both Windows- and Mac OS-based systems, and the Z1G3 is awesome. What’s even more awesome is that it is easily upgradeable. I took off the back cover, and with a simple switch I could have added a 2.5-inch hard drive or two in under a minute. If you are looking for a powerful new workstation, one that stands up to Resolve and Premiere Pro CC, the HP Z1G3 is for you.


Brady Betzel is an online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Earlier this year, Brady was nominated for an Emmy for his work on Disney’s Unforgettable Christmas Celebration.

Review: Red Giant’s Universe 2

By Brady Betzel

Throughout 2016, we have seen some interesting acquisitions in the world of post production software and hardware — Razer bought THX, Blackmagic bought Ultimatte and Fairlight, and Boris FX bought GenArts, to name a few. We’ve also seen a tremendous consolidation of jobs. Editors are now being tasked as final audio mixers, final motion graphics creators, final colorists and much more.

Personally, I love doing more than just editing, so knowing tools like Adobe After Effects and DaVinci Resolve, in addition to Avid Media Composer, has really helped me become not only an editor but someone who can jump into After Effects or Resolve and do good work.

Unfortunately, for some people it is the nature of the post beast to know everything. Plug-ins play a gigantic part in balancing my workload, available time and the quality of the final product. If I didn’t have plug-ins like Imagineer’s Mocha Pro, Boris’s Continuum Complete, GenArts’ Sapphire and Red Giant’s Universe 2, I would be forced to turn down work because the time it would take to create a finished piece would outweigh the fee I would be able to charge a client.

A while back, I reviewed Red Giant’s Universe when it was in version 1 (check it out here). In the beginning, Universe allowed for lifetime, annual and free memberships. It seems the belt has tightened a little for Red Giant, as Universe 2 is now $99 a year or $20 a month, with a 14-day free trial. No permanent free version or lifetime memberships are offered (if you downloaded the free Universe before June 28, you will still be able to access those free plug-ins in the Legacy group). Moreover, they have doubled the monthly fee from $10 to $20 — definitely trying to get everyone onto the annual subscription train.

Personally, I think this resulted from too much focus on the broad Universe, trying to jam in as many plug-ins/transitions/effects as possible and not working on specific plug-ins within Universe. I actually like the renewed focus of Red Giant toward a richer toolset as opposed to a full toolset.

Digging In
Okay, enough of my anecdotal narrative and on to some technical awesomeness. Red Giant’s Universe 2 is a vast plug-in collection that is compatible with Adobe’s Premiere Pro and After Effects CS6-CC 2015.3; Apple Final Cut Pro X 10.0.9 and later; Apple Motion 5.0.7 and later; Vegas 12 and 13; DaVinci Resolve 11.1 and later; and HitFilm 3 and 4 Pro. You must have a compatible GPU installed, as Universe does not have a CPU fallback for unsupported machines. Basically, you need a GPU with 2GB of memory or more, and don’t forget about Intel, as their graphics support has improved a lot lately. For more info on OS compatibility and specific GPU requirements, check out Red Giant’s compatibility page.

Universe 2 is loaded with great plug-ins that, once you dig in, you will want to use all the time. For instance, I really like the ease of use of Universe’s RGB Separation and Chromatic Glow. If you want a full rundown of each and every effect, you should download the Universe 2 trial and check this out. In this review I am only going to go over some of the newly added plug-ins — HUD Components, Line, Logo Motion and Color Stripe — but remember there are a ton more.

I will be bouncing around different apps like Premiere Pro and After Effects. Initially, I wanted to see how well Universe 2 worked inside of Blackmagic’s DaVinci Resolve 12.5.2. Resolve gave me a little trouble at first; it began by crashing once I clicked on OpenFX in the Color page. I rebooted completely and got the error message that OpenFX had been disabled. I did a little research (and by research I mean I typed “Disabled OpenFX Resolve” into Google) and stumbled on a post on Blackmagic’s forum that suggested deleting “C:\ProgramData\Blackmagic Design\Davinci Resolve\Support\OFXPluginCache.xml” might fix it. Once I deleted that and rebooted Resolve, I clicked on the OpenFX tab in the Color page, waited 10 minutes, and it started working. From that point on it loaded fast. So, barring the Resolve installation hiccup, there were no problems installing in Premiere and After Effects.
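If you run into the same disabled-OpenFX error, the fix can also be scripted. Here is a minimal sketch that deletes the cache file named in that forum post (run it with Resolve closed); the path is the Windows one from my machine, so treat this as an example rather than an official Blackmagic procedure.

```python
import os

# Cache file the Blackmagic forum post suggested deleting (Windows path from my system).
cache = r"C:\ProgramData\Blackmagic Design\Davinci Resolve\Support\OFXPluginCache.xml"

if os.path.exists(cache):
    os.remove(cache)
    print("Deleted OFXPluginCache.xml; Resolve will rebuild it on the next launch.")
else:
    print("Cache file not found; nothing to do.")
```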

Once installed, you will notice that Universe has a few folders inside of your plug-ins drop-down: Universe Blur, Universe Distort, Universe Generators, Universe Glow, Universe Legacy, Universe Motion Graphics, Universe Stylize and Universe Utilities. You may recognize some of these if you have used an earlier version of Universe, but something you will not recognize is that each Universe plug-in now has a “uni.” prefix.

I am still not sure whether I like this or hate this. On one hand, it’s easy to search for if you know exactly what you want in apps like Premiere. On the other hand, it runs counter to what I am used to as a grouchy old editor. In the end, I decided to run my tests in After Effects and Premiere. Resolve is great, but for tracking a HUD in 3D space I was more comfortable in After Effects.

HUD Components
First up is HUD Components, located under the Universe Motion Graphics folder and labeled “uni.HUD Components.” What used to take many Video CoPilot tutorials and many inspirational views of HUD/UI master Jayse Hansen’s (@jayse_) work now takes me minutes thanks to the new HUD Components. Obviously, to make anything on the level of a master like Jayse Hansen will take hundreds of hours and thousands of attempts, but still — with Red Giant’s HUD Components you can make those sci-fi in-helmet elements quickly.

When you apply HUD Components to a solid layer in After Effects you can immediately see the start of your HUD. To see what the composite over my footage would look like, I went to change the blend mode to Add, which is listed under “Composite Settings.” From there you can see some awesome pre-built looks under the Choose a Preset button. The pre-built elements are all good starting points, but I would definitely dive further into customizing, maybe layer multiple HUDs over each other with different Blend Modes, for example.

Diving further into HUD Components, there are four separate “Elements” that you can customize, each with different images, animations, colors, clone types and much more. One thing to remember is that transformation settings and the order of operations work from the top down. For instance, if you change the rotation on element one, it will affect each element under it, which is kind of handy if you ask me. Once you get the hang of how HUD Components works, it is really easy to make some unique UI components. I really like to use the uni.Point Zoom effect (listed under Universe Glow in the Effects & Presets); it gives you a sort of projector-like effect with your HUD component.

There are so many ways to use and apply HUD Components in everyday work, from building dynamic lower thirds with all of the animatable arcs, clones and rotations to building sci-fi elements, applying Holomatrix to it and even Glitch to create awesome motion graphics elements with multiple levels of detail and color. I did try using HUD Components in Resolve when tracking a 3D object but couldn’t quite get the look I wanted, so I ditched it and used After Effects.

Line
Second up is the Line plug-in. While drawing lines along a path in After Effects isn’t necessarily hard, it’s kind of annoying — think having to make custom map graphics to and from different places daily. Line takes the hard work out of making line effects to and from different points. This plug-in also contains the prefix uni. and is located under Universe Motion Graphics labeled uni.Line.

This plug-in is very simple to use and animate. I quickly found a map, applied uni.Line, placed my beginning and end points, animated the line using two keyframes under “Draw On” and bam! I had an instant travel-vlog style graphic that showed me going from California to Australia in under three minutes (yes, I know three minutes seems a little fast to travel to Australia, but that’s really how long it took, render and all). Under the Effect Controls you can find preset looks, beginning and ending shape options like circles or arrows, line types, segmented lines and curve types. You can even move the peak of the curve under the bezier style option.

Logo Motion
Third is Logo Motion, located under Universe Motion Graphics titled uni.LogoMotion. In a nutshell you can take a pre-built logo (or anything for that matter), pre-compose it, throw the uni.LogoMotion effect on top, apply a preset reveal, tweak your logo animation, dynamically adjust the length of your pre-comp — which directly affects the logo’s wipe on and off — and, finally, render.

This is another plug-in that makes my life as an editor who dabbles in motion graphics really easy. Red Giant even included some lower third animation presets that help create dynamic lower third movements. You can select from some of the pre-built looks, add some motion while the logo is “idle,” adjust things like rotation, opacity and blur under the start and end properties, and even add motion blur. The new preset browser in Universe 2 really helps with plug-ins like Logo Motion where you can audition animations easily before applying them. You can quickly add some life to any logo or object with one or two clicks; if you want to get detailed you can dial in the idle animation and/or transition settings.

Color Stripe
Fourth is Color Stripe, a transition that uses color layers to wipe across and reveal another layer. This one is a pretty niche use case, but it is still worth mentioning. In After Effects, transitions are generally a little cumbersome; I found the Universe 2 transitions infinitely easier to use in NLEs like Adobe Premiere. From the always-popular swish pan to exposure blur, there are some transitions you might use once and some you might use a bunch. Color Stripe is a transition that you probably won’t want to use too often, but when you do need it, it will be right at your fingertips. You can choose from different color schemes like analogous or tetradic, or even create a custom scheme to match your project.

In the end, Universe 2 has some effects that are essential once you begin using them, like uni.Unmult, uni.RGB Separation and the awesome uni.Chromatic Glow. The new ones are great too; I really like the ease of use of uni.HUD Components. Since these effects are GPU-accelerated, you might be surprised at how fast and fluid they work in your project without slowdowns. For anyone who likes apps like After Effects but can’t afford to spend hours dialing in the perfect UI interface and HUD, Universe 2 is perfect for you. Check out all of the latest Red Giant Universe 2 tools here.

Brady Betzel is an online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Earlier this year, Brady was nominated for an Emmy for his work on Disney’s Unforgettable Christmas Celebration.

Review: Tangent Ripple color correction panel

By Brady Betzel

Lately, it feels like a lot of the specializations in post production are becoming generalized and given to the “editor.” One of the hats that the editor now wears is that of color corrector — I’m not saying we are tasked with color grading an entire film, but we are asked to make things warmer or cooler or to add saturation.

With the standard Wacom tablet, keyboard and/or mouse combo, it can get a little tedious when color correcting — in Adobe Premiere, Blackmagic Resolve or Avid Media Composer/Symphony — without specialized color correction panels like the Baselight Blackboard, Resolve Advanced, Nucoda Precision, Avid Artist Color or even Tangent’s Element. In addition, those specialized panels run from $1,000 per piece to upwards of $30,000, leaving many people to fend for themselves using a mouse.

While color correcting with a mouse isn’t always horrible, once you use a proper color correction panel, you will always feel like you are missing a vital tool. But don’t worry! Tangent has released a new color correction panel that is not only affordable and compatible with many of today’s popular coloring and nonlinear editing apps, but is also extremely portable: the Tangent Ripple.

For this review I am covering how the Tangent Ripple works inside of Premiere Pro CC 2015.3, Filmlight’s Baselight Media Composer/Symphony plug-in and Resolve 12.5.

One thing I always found intimidating about color correction and grading apps like Resolve was the abundance of options to correct or grade an image. The Tangent Ripple represents the very basic first steps in the color correction pipeline: color balancing using lift, gamma, gain (or shadows, midtones and highlights) and exposure/contrast correction. I am way over-simplifying these first few steps but these are what the Ripple specializes in.

You’ve probably heard of the Tangent Element Panels, which go way beyond the basics — if you start to love grading with the Tangent Ripple or the Element-VS app, the Element set should be your next step. It retails for around $3,500, or a little below as a set (you can purchase the Element panels individually for cheaper, but the set is worth it). The Tangent Ripple retails for only $350.

Basic Color Correction
If you are an offline editor who wants to add life to your footage quickly, basic color correction is where you will be concentrating, and the Ripple is a tool you need to purchase. Whether you color correct your footage for cuts that go to a network executive, or you are the editor and finisher on a project and want to give your footage the finishing touch, you should check out what a little contrast, saturation and exposure correction can do.

You can find some great basic color correcting tutorials on YouTube, Lynda.com and color correction-focused sites like MixingLight.com. On YouTube, Casey Faris has some quick and succinct color correction tutorials; check him out here. Ripple Training also has some quick Resolve-focused tips posted somewhat weekly by Alexis Van Hurkman.

When you open the Tangent Ripple box you get an instruction manual, the Ripple, three trackballs and some carrying pouches to keep it all protected. The Ripple has a five-foot USB cable hardwired into it, but the trackballs are separate and do not lock into place. If you were to ask a Ripple user to tell you the serial number on the bottom of the Ripple, most likely they will turn it over, dropping all the trackballs. Obviously, this could wreck the trackballs and/or injure someone, so don’t do it, but you get my point.

The Ripple itself is very simple in layout: three trackballs, three dials above the trackballs, “A” and “B” buttons and revert buttons next to the dials. That is it! If you are looking for more than that, you should take a look at the Element panels.

After you plug the Ripple into an open USB port, you probably should download the Tangent Hub software. This will also install the Tangent Mapper, which allows you to customize your buttons in apps like Premiere Pro. Unfortunately, Resolve and the Media Composer Baselight plug-in do not allow for customization, but when you install the software you get a nice HUD that shows what each Ripple button and knob does in the software you are using.

If you are like me and your first intro into the wonderful world of color correction in an NLE was Avid Symphony, you might have also encountered the Avid Artist Color panel, which is very similar in functionality: three balls and a couple of knobs. Unfortunately, I found that the Artist Color never really worked like it should within Symphony. Here is a bit of interesting news: while you can’t use the Ripple in the native Symphony color corrector, you can use external panels in the Baselight Avid plug-in! Finally a solution! It is really, really responsive to the Tangent Ripple too! The Ripple really does work great inside of a Media Composer plug-in.

The Ripple was very responsive, much more than what I’ve experienced with the Avid Artist Color panel. As I mentioned earlier, the Ripple will accomplish the basics of color correcting — you can fix color balance issues and adjust exposure. It does a few things well, and that is it. To my surprise, when I added a shape (a mask used in color correction) in Baselight, I was able to adjust the size, points and position of the shape using the Ripple. In the curves dialogue I was able to add, move and adjust points. Not only does Baselight change the game for powerful, in-Avid color correction, but it is a tool like the Ripple that puts color correction within any editor’s grasp. I was really shocked at how well it worked.

When using the Ripple in Resolve you get what Resolve wants to give you. The Ripple is great for basic corrections inside of Resolve, but if you want to dive further into the awesomeness of color correction, you are going to want to invest in the Tangent Element panels.

With the Ripple inside of Resolve, you get the basic lift, gamma and gain controls along with the color wheels, a bypass button and reset buttons for each control. The “A” button doesn’t do anything, which is kind of funny to me. Unlike the Baselight Avid plug-in, you cannot adjust shapes, or do much else with the Ripple panel other than the basics.

Element-Vs
Another option that took me by surprise was Tangent’s iOS and Android app, Element-Vs. I expected this app to really underwhelm me, but I was wrong. Element-Vs acts as an extension of your Ripple, based on the Tangent Element panels. But keep in mind, it’s still an app, and there is nothing comparable to the tactile feeling and response you get from a panel like the Ripple or Elements. Nonetheless, I did use the Element-Vs app on an iPad Mini and it was surprisingly great.

It is a bit high-priced for an app, coming in at around $100, but I was able to get a really great response when cycling through the different Element “panels,” leading me to think that the Ripple and Element-Vs app combo is a real contender for the prosumer colorist. At a total of $450 ($350 for the Ripple and $100 for the Element-Vs app), you are in the same ballpark as a colorist who has a $3,000-plus set of panels.

As I said earlier, the Element panels have a great tactile feel and feedback that, at the moment, is hard to compare to an app, but this combo isn’t as shabby as I thought it would be. A welcome surprise was that the installation and connection were pretty simple too.

Premiere Pro
The last app I wanted to test was Premiere Pro CC. Recently, Adobe added external color panel support in version 2015.3 or above. In fact, Premiere has the most functionality and map-ability out of all the apps I tested — it was an eye-opening experience for me. When I first started using the Lumetri color correction tools inside of Premiere I was a little bewildered and lost as the set-up was different from what I was used to in other color correction apps.

I stuck to basic color corrections inside of Premiere, and would export an XML or flat QuickTime file to do more work inside of Resolve. Using the Ripple with Premiere changed how I felt about the Lumetri color correction features. When you open Premiere Pro CC 2015.3 along with the Tangent Mapper, the top row of tabs opens up. You can customize not only the standard functions of the Ripple within each Lumetri panel, like Basic, Creative, Curves, Color Wheels, HSL Secondaries and Vignette, but you can also create an alternate set of functions when you press the “A” button.

In my opinion, the best button press for the Ripple is the “B” button, which cycles you through the Lumetri panels. In the Vignette panel, the Ripple gives you options like Vignette Amount, Vignette Midpoint, Feather and Vignette Roundness.

As a side note, one complaint I have about the Ripple is that there isn’t a dedicated “bypass” button. I know that each app has different button designations and that Tangent wants to keep the Ripple as simple as possible, but many people constantly toggle the bypass function.

Not all hope is lost, however. Inside of Premiere, if you hold the “A” button for alternate mapping and hit the “B” button, you will toggle the bypass off and on. While editing in Premiere, I used the Ripple to do color adjustments even when the Lumetri panel wasn’t on screen. I could cycle through the different Lumetri tabs, make adjustments and quickly continue editing with keyboard functions — an awesome feature both Tangent and Adobe should be promoting more, in my opinion.

It seems Tangent worked very closely with Adobe when creating the Ripple. Maybe it is just a coincidence, but it really feels like this is the Adobe Premiere Pro CC Tangent Ripple. Of course, you can also use the Element-Vs app in conjunction with the Ripple, but in Premiere I would say you don’t need it. The Ripple takes care of almost everything for you.

One drawback I noticed when using the Ripple and Element-Vs inside of Premiere Pro was a small delay when compared to using these inside of Resolve and Baselight’s Media Composer plug-in. Not a huge delay, but a slight hesitation — nothing that would make me not buy the Ripple, but something you should know.

Summing Up
Overall, I really love the Ripple color correction panel from Tangent. At $350, there is nothing better. The Ripple feels like it was created for editors looking to dive deep into Premiere’s Lumetri color controls and allows you to be more creative because of it.

Physically, the Ripple has a lighter, more plastic feel than its Element Tk panel brother, but it still works great. If you need something light and compact, the Ripple is a great addition to your Starbucks-based color correction setup.

I do wish there was a little more space between the trackballs and the rotary dials. When using the dials, I kept nudging the trackballs and sometimes I didn’t even realize what had happened. However, since the Ripple is made to be compact, lightweight, mobile and priced to beat every other panel on the market, I can forgive this.

It feels like Tangent worked really hard to make the Ripple feel like a natural extension of your keyboard. I know I sound like a broken record, but saving time makes me money, and the Tangent Ripple color correction panel saves me time. If you are an editor who has to color correct and grade dailies, an assistant editor looking to up their color correction game or just an all-around post production ninja who dabbles in different areas of expertise, the Tangent Ripple is the next tool you need to buy.


Brady Betzel is an online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Earlier this year, Brady was nominated for an Emmy for his work on Disney’s Unforgettable Christmas Celebration.

Updates to Adobe Creative Cloud include project sharing, more

By Brady Betzel

Adobe has announced team project sharing! You read that right — the next Adobe Creative Cloud update, to be released later this year, will have the one thing I've always said kept Adobe from cutting into Avid's hold on episodic TV and film editors.

While “one thing” is a bit of hyperbole, Team Projects will be much more than just simple sharing within Adobe Premiere Pro. Team Projects, in its initial stage, will also work with Adobe After Effects, but not with Adobe Audition… at least not in the initial release. Technically speaking, sharing projects within Creative Cloud seems like it will follow a check-in/check-out workflow, allowing you to approve another person’s updates to override yours or vice-versa.

During a virtual press demo, I was shown how Team Projects will work. I asked if it would work "offline," meaning without an Internet connection. Adobe's representative said that Team Projects will work with intermittent Internet disconnections, but not fully offline. I asked this because many companies do not allow their NLEs or their storage to be attached to any Internet-facing network connections. So if this is important to you, you may need to do a little more research once we can actually get our hands on this release.

My next question was whether Team Projects is a paid service. The Adobe rep said they are not talking about the business side of this update yet. I took this as an immediate yes, which is fine, but officially they have no comment on pricing or payment structure, or whether it will even cost extra at all.

Immediately after I asked my last question, I realized that this will definitely tie in with the Creative Cloud service, which likely means a monthly fee. Then I wondered where exactly my projects will live. In the cloud? I know the media can live locally on something like an Avid ISIS or Nexis, but will the projects be shared over the Internet? Will we be able to share individual sequences and/or bins, or just entire projects? There are so many questions and so many possibilities in my mind; it really could change the multi-editor NLE paradigm if Adobe can manage it properly. No pressure, Adobe.

Other Updates
Some other Premiere Pro updates include:
– Improved caption and subtitling tools.
– Updated Lumetri Color tools, including a much-needed improvement to the HSL secondaries color picker.
– Automatic recognition of VR/360 video and the type of mapping it needs, plus an improved virtual reality workflow.
– Destination publishing now includes Behance (no Instagram export option?).
– Improved Live Text Templates, including a simplified workflow that lets you share Live Text Templates with other users (it will even sync fonts from Typekit if they aren't present) — and without needing an After Effects license.
– Native DNxHD and DNxHR QuickTime export support.
– Audio effects from Adobe Audition.
– Global FX mute to toggle all video effects in a sequence on and off.
– And, best of all, a visual keyboard for mapping shortcuts!

Finally, another prayer for Premiere Pro has been answered. Unfortunately, After Effects users will have to wait for a visual keyboard for shortcut assignment (bummer).

After Effects has some amazing updates in addition to Team Projects, including a new 3D render engine! Wow! I know this has been an issue for anybody trying to do 3D inside of After Effects via Cineware. Most people purchase Video Copilot's Element 3D to get around this, but for those who want to work directly with Maxon's Cinema 4D, this may be the update that alleviates some of your 3D disdain via Cineware. They even mentioned that you do not need a GPU for this to work well. Oh, how I would love for this to come to fruition. Finally, there's a new video preview architecture that will hopefully allow for a much more fluid and dynamic playback experience.

After Effects C4D Render

Adobe Character Animator has some updates too. If you haven't played with Character Animator, you need to download it now and just watch the simple tutorials that come with the app — you will be amazed, or at least your kids will be. If you haven't seen how The Simpsons used Character Animator, check it out with a YouTube search. It is pretty sweet. In terms of incoming updates, there will be faster and easier puppet creation, an improved round-trip workflow between Photoshop and Illustrator, and the ability to use grouped keyboard triggers.

Summing Up
In the end, the future is still looking up for the Adobe Creative Cloud video products, like Premiere Pro and After Effects. If there is one thing to jump out of your skin over in the forthcoming update it is Team Projects. If Team Projects works and works well, the NLE tide may be shifting. That is a big if though because there have been some issues with previous updates — like media management within Premiere Pro — that have yet to be completely ironed out.

Like I said, if Adobe does this right it will be game-changing for them in the shared editing environment. In my opinion, Adobe is beginning to get its head above water in the video department. I would love to see these latest updates come in guns blazing and working. From the demo I saw it looks promising, but really there is only one way to find out: hands-on experience.

Brady Betzel is an online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com, and follow him on Twitter @allbetzroff. Earlier this year, Brady was nominated for an Emmy for his work on Disney’s Unforgettable Christmas Celebration.

Lucky Post helps with the funny for McDonald’s McPick 2 spots

Lucky Post editor Travis Aitken and sound designer Scottie Richardson were part of the new campaign for McDonald’s, via agency Moroch, that reminds us that there are many things you cannot choose, but you can “McPick 2.”

The campaign — shot by production house Poster with directors Plástico and Sebastian Caporelli — highlights humor in the subtleties of life. Parents features a not-so-cool, but well-meaning, dad and his teenage son talking about texts and "selfies" while enjoying a McPick 2 meal from McDonald's. The son explains that the picture his dad is showing him isn't a selfie, but the father defends himself, saying, "Yeah, it is. I took it myself."

Passengers features a little guy sandwiched between two big, muscular guys in a three-seater row on an airplane. The only thing that makes him feel better is that he chose to bring a McPick 2 meal with him.

“Performance comedy, like these spots, is at its best when you’re seeing people interacting in frame,” says editor Aitken, who cut using Adobe Premiere. “You don’t want to manipulate too much in the edit — it is finding the best performances and allowing them to play out. In that sense, editing with dialogue comedy is punctuation. It’s vastly different than other genres — beauty, for example, where you are editing potentially unrelated images and music to create the story. Here, the story is in front of you.”

According to sound designer Richardson, “My job was to make sure dialogue was clear and create ambient noise that provided atmosphere but didn’t overwhelm the scenes. I used Avid Pro Tools with Soundminer and Sony Oxford noise reduction to provide balance and let the performances shine.”

The executive producer for Dallas-based Lucky Post was Jessica Berry. MPC’s Ricky Gausis provided the color grade.

Blending Ursa Mini and Red footage for Aston Martin spec spot

By Daniel Restuccio

When producer/director Jacob Steagall set out to make a spec commercial for Aston Martin, he chose to lens it on the Blackmagic Ursa Mini 4.6k and the Red Scarlet. He says the camera combo worked so seamlessly he dares anyone to tell which shots are Blackmagic and which are Red.

L-R Blackmagic’s Moritz Fortmann and Shawn Carlson with Jacob Steagall and Scott Stevens.

“I had the idea of filming a spec commercial to generate new business,” says Steagall. He convinced the high-end car maker to lend him an Aston Martin 2016 V12 Vanquish for a weekend. “The intent was to make a nice product that could be on their website and also be a good-looking piece on the demo reel for my production company.”

Steagall immediately pulled together his production team, which consisted of co-director Jonathan Swecker and cinematographers Scott Stevens and Adam Pacheco. “The team and I collaborated together about the vision for the spot which was to be quick, clean and to the point, but we would also accentuate the luxury and sexiness of the car.”

“We had access to the new Blackmagic Ursa Mini 4.6k and an older Red Scarlet with the MX chip,” says Stevens. “I was really interested in seeing how both cameras performed.”

He set up the Ursa Mini to shoot ProRes HQ at Ultra HD (3840×2160) and the Scarlet at 8:1 compression at 4K (4096×2160). He used both Canon still camera primes and a 24-105mm zoom, switching them from camera to camera depending on the shot. “For some wide shots we set them up side by side,” explains Stevens. “We also would have one camera shooting the back of the car and the other camera shooting a close-up on the side.”

In addition to his shooting duties, Stevens also edited the spot, using Adobe Premiere, and exported the XML into Blackmagic Resolve Studio 12. Stevens notes that, in addition to loving cinematography, he’s also “really into” color correction. “Jacob (Steagall) and I liked the way the Red footage looked straight out of the camera in the RedGamma4 color space. I matched the Blackmagic footage to the Red footage to get a basic look.”

Blackmagic colorist Moritz Fortmann took Stevens' base color correction and finessed the grade even more. "The first step was to talk to Jacob and Scott and find out what they were envisioning, what feel and look they were going for. They had already established a look, so we saved a few stills as reference images to work off. The spot was shot on two different types of cameras, and in different formats. Step two was to analyze the characteristics of each camera and establish a color correction to match the two. Step three was to tweak and refine the look. We did what I would describe as a simple color grade, only relying on primaries, without using any Power Windows or keys."

If you’re planning to shoot mixed footage, Fortmann suggests you use cameras with similar characteristics, matching resolution, dynamic range and format. “Shooting RAW and/or Log provides for the highest dynamic range,” he says. “The more ‘room’ a colorist has to make adjustments, the easier it will be to match mixed footage. When color correcting, the key is to make mixed footage look consistent. One camera may perform well in low light while another one does not. You’ll need to find that sweet spot that works for all of your footage, not just one camera.”

Daniel Restuccio is a writer and chair of the multimedia department at California Lutheran University.

Frame.io’s Emery Wells talks about Premiere Pro collaboration

In this business, collaboration is key. And without a strong system in place, chaos ensues, things get missed and time and money is wasted. The makers of Frame.io, a video review, sharing and collaboration platform, know that first hand, having been hands-on post pros. Recently, they came out with a realtime tool for Adobe Premiere Pro, aptly named Frame.io for Premiere Pro. The product includes the entire Frame.io application, redesigned and re-engineered for Adobe’s panel architecture.

This product, they say, is based on decades of real-world experience sitting in an editor's chair. Features of the product include a shared cloud bin that multiple editors can work from; one-click import and export of sequences, project files and entire bins; realtime comments directly in the Premiere timeline with no marker syncing; auto versioning for rapid iteration on creative ideas; comment marker syncing for when you do not have an Internet connection; and synced playback in Frame.io and your Premiere timeline.

We reached out to Frame.io co-founder and CTO Emery Wells (right) to find out more.

With this latest offering, you target a specific product. Does this mean you will be customizing your tool for specific editing platforms going forward?
Frame.io is a web application that can be accessed from any browser. It is not NLE specific and you can upload media that’s come from anywhere. We’ve started to build (and will continue to build) tools that help bridge the gap between the creative desktop apps like Premiere and Frame.io. Each tool we integrate comes with its own unique set of challenges and capabilities. Premiere’s extension model was compatible with Frame.io’s architecture so we were able to bring the entire feature set directly inside Premiere.

How is this product different from others out there?
It’s much more significant than an integration. It’s an entire realtime collaboration layer for Adobe Premiere and is transformative to the way video gets made. The Premiere integration has already been in the hands of companies like BuzzFeed, where they have 200 producers cranking out 175 videos/week. That is an absolutely maddening pace. Frame.io and our latest Premiere integration brings efficiency to that process.

Can multiple editors work simultaneously, like Avid?
No. It’s not a replacement for an Avid set-up like that.

What are the costs?
The Premiere extension comes standard with any Frame.io plan, including our free plan.

What speed of Internet is required?
We recommend a minimum of 10 megabits per second, which is fairly accessible on any broadband connection these days.
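For context, here's a rough, back-of-the-envelope sketch — an editorial aside, not a Frame.io figure — of what that minimum speed means in practice. The 2GB export size below is an assumed example.

```python
# Rough upload-time estimate at the suggested minimum bandwidth.
# The file size is an assumed example, not a figure from Frame.io.

FILE_GB = 2.0        # hypothetical H.264 export
LINK_MBPS = 10.0     # recommended minimum connection speed

seconds = (FILE_GB * 8000) / LINK_MBPS   # GB -> megabits, divided by Mb/s
print(f"A {FILE_GB:.0f}GB export uploads in roughly {seconds / 60:.0f} minutes at {LINK_MBPS:.0f}Mb/s")
```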

How easy to use is it, really?
It’s as easy as the Frame.io web application itself. Anyone can get up to speed in 10-15 minutes of poking around.

What do you think is the most important thing users should know about this product?
We’re solving real problems based on real experience. We built the tool we wanted as editors ourselves. Frame.io for Premiere Pro really allows you to go home at a decent hour instead of waiting around for a render at 10pm. We automate the render, upload and notification. You don’t have to pull your hair out trying to stay organized just to move a project forward.

Shooting Creatively: Red Bull, BMX and the Silverdome

By Alex Horner

I’m a director/DP based out of Minneapolis with seven years of experience in commercials and branded content under my belt. I seek to find untold stories in the least expected places. While some prefer to have every bit of their shoot follow a specific path on location, I welcome the challenge of unfamiliar places, different ideas and variable scenarios.

Such was the case with Red Bull at the abandoned Silverdome in Pontiac, Michigan at the end of last year. I created a video featuring 19-year-old BMX rider Tyler Fernengel that has received close to five million views on YouTube to date.

I’ve shot a variety of projects with Red Bull over the years. This time, Ryan Taylor and I were approached to co-direct a video that would bring life back into the abandoned Silverdome with BMX — and we had the creative freedom to tell the story the way we wanted.

Action sports can be challenging to shoot because nothing is guaranteed. The rider may be having an off week, or they could get injured during filming. There’s only so much that we can control and plan for, including the weather.

We had four days to shoot the Silverdome spot, and we had to be careful about how we did it. Most of the set-ups were physically taxing for Tyler, and we could only film two or three of them in a day. It wasn't worth pushing him to land a trick on the first day if it meant he would sustain an injury that would affect the rest of the shoot. He was having trouble with his ankle to begin with, but he powered through it. To top it all off, the temperature was in the low 40s, which added to the challenge.

Say you’re an athlete, and you’re supposed to perform a trick on command. If it’s particularly risky or dangerous, you’re probably going to feel the most confident at the first sign of an adrenaline rush. Our shoot relied on harnessing these moments with Tyler to make sure we were getting the best shots. He was patient when we needed more time to set up, even if his mind was telling him to go.

A lot of the set-ups were elaborate and technical. There was no room for error, which can be stressful for an athlete, especially when the camera is rolling. But with Tyler we were able to pull off a series of incredible shots. He’s the most professional athlete I’ve worked with.

Since the Silverdome doesn’t have elevators, we needed to be as light and nimble as possible. Our small crew consisted of the build team (ramps), producer, assistant camera, sound, gaffer, grip, drone operator, Ryan and myself. We had a golf cart on hand to shuffle gear to different sections of the stadium.

The Shoot
The Red Epic and Scarlet Dragon with Nikon primes and zooms fit the bill for this shoot. And, since YouTube supports 4K resolution, we had reason to finish in 4K. The Scarlet was our dedicated Movi M15 cam, which spared us from having to accommodate the 30 to 45 minutes required to switch cameras. The Trost slider was also a must-have. With that, we pulled off shots that would have otherwise involved a jib arm or a dolly.

We squeaked by with lights running off Honda putt putts: the Arri 1.2 HMI, Joker 800 with octabox and 1×1 LEDs. We were able to use natural lighting for most of our shots, except for the stairwell section, which was completely dark. We had a Sprinter van onsite for various grip needs, too.

I bring the Trost Motion slider to just about every shoot. Seventy-five percent of the time, it’s on a dolly with the Mitchell plate – usually a Super PeeWee III or a Fisher 10-11. I use it for slider moves, but also to reposition the camera quickly and easily. Instead of moving the dolly, I can slide the camera left or right with a simple adjustment. It’s especially handy if I’m shooting on a tabletop, when the camera needs to move half an inch to the left or the right.

Something like that can be tricky to execute on the dolly, but not with this slider. I also use it as an offset arm to shoot overheads, and through car windows — all while still being able to reposition the camera. It's all the more valuable because it does so much more than slide.

My Red Epic Dragon weighs around 20 to 25 pounds once I have it built, but the slider handles it with ease and allows for smooth adjustments with zero play in the sled. While the Trost Motion can be on the heavier side for travel, I strap it to my F-Stop bag when hiking in remote locations. With a set of carbon Manfrotto sticks, head, a 100mm half ball, and a monopod for support, I can use it anywhere. It sets up in five minutes.

The Post
We worked on MacBook Pros running Adobe Premiere and edited natively with the R3Ds. We finished the film in 4K for YouTube. To save time on the back end, we came up with a look in-camera. When it came time for color, there wasn’t a whole lot left we needed to do other than balance the images out.

I like working on a MacBook Pro due to its mobility, and find that working outside of the office helps with creativity in the edit. Between the MacBook Pro and 5K iMac, the two machines offer everything I need when it comes to editing.

Chase Brandau and Nick Mihalevich handled all of the sound design. Sound was a huge part of the film due to the haunting character of the Silverdome: the howling of wind through the halls, shifting HVAC vents, dripping water and so on.

Without sugar coating it, the Silverdome shoot was a grueling four days in tough conditions. But when you have the right gear and it all works perfectly — cameras, sliders, lighting and a solid crew — you end up with an awesome story to share. It’s all worth it.

Alex Horner is a director and DP at Minneapolis-based Horner Pictures.

72andSunny adds audio and editing suites

Creative company 72andSunny has added 13 new uncompressed-4K video editing suites to its Los Angeles office. Part of the large-scale project also includes three new audio mixing and sound editing suites, two of which feature RedNet Dante network interfaces from Focusrite. The installation was done by MW Audio Visual.

The company’s video editing suites vary in terms of tools. “We designed and built our edit suites to work in a number of editorial pipelines,” says John Keaney, director of Hecho En 72 operations at 72andSunny, which is their in-house studio and bespoke production unit. “Because certain projects are better suited for one editorial platform than another or the talent for a particular job may be stronger in one system than another, we built our systems to handle a few different operations.”

Their two main editing systems are Avid Media Composer 8 and Adobe Premiere Pro 9.2 (Creative Cloud 2015), but they will call on other editorial, motion graphics and audio applications as needed.

“With 12-core Mac Pros, ultra widescreen LG monitors for our project files, HP DreamColor monitors for editor playback, 55-inch Pluras for client playback, Blackmagic Ultrastudio 4K capture/playout devices and a Quantum StorNext SAN with SAN Fusion, we have a systems infrastructure that is compatible in either Avid or Adobe environments,” reports Keaney.
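As a rough illustration of why that SAN matters, here is an editorial back-of-the-envelope sketch — not a figure from 72andSunny or MW Audio Visual; the frame size, bit depth and frame rate are assumptions — of the sustained bandwidth uncompressed 4K playback demands per stream.

```python
# Approximate per-stream bandwidth for uncompressed 4K playback.
# All values below are illustrative assumptions, not numbers from the article.

def uncompressed_stream_mb_per_sec(width, height, bits_per_pixel, fps):
    """Return approximate MB/s for one uncompressed video stream."""
    bytes_per_frame = width * height * bits_per_pixel / 8
    return bytes_per_frame * fps / 1_000_000  # decimal megabytes per second

# Example: 4K DCI frame, 10-bit 4:4:4 RGB (30 bits/pixel), 24fps
mb_s = uncompressed_stream_mb_per_sec(4096, 2160, 30, 24)
print(f"~{mb_s:.0f} MB/s per stream (~{mb_s * 8 / 1000:.1f} Gb/s)")
# Thirteen suites each pulling even one such stream would ask the SAN to
# sustain on the order of 13x that figure, before handles and overhead.
```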

In terms of sound, one of the new audio suites is a full cinema post production and mixing studio, featuring a JBL 4722 cinema monitoring system and a Panasonic laser projector. A second studio, used for recording and mixing, uses the same Avid S6 console and Pro Tools HDX system as the cinema studio. Flanking the suites are two voiceover booths.

“The client wanted to be able to route any audio from either studio or either booth to any other location, instantly,” says Michael Warren, president of MW Audio Visual. “There is a RedNet 5 in the console rack in each studio and a RedNet 4 in each V-O booth, with a digital snake attached to the DB25 inputs of the RedNet 4 units. These are also connected, via shielded Cat-6 cabling, to Cisco switches that we have in each control room, V-O booth and rack. So we can matrix the audio from anywhere to anywhere, through the RedNet units.”

In fact, adds Warren, 72andSunny can send its audio anywhere in the world from there, through a sharable SAN that connects its entire campus and out to any other location via the Internet. “This is the new face of media workflow,” he says. “People are creating content for television, cinema, online — it doesn’t matter. It’s all about their ability to connect with each other and share the process, between rooms, across a campus, or globally.”

72andSunny’s clients include Google, Samsung, Activision, ESPN and Starbucks.

Quick Chat: East Coast Digital’s Stina Hamlin on VR ‘Cardboard City’

New York City-based East Coast Digital believes in VR and has set up its studio and staff to be able to handle virtual reality projects. In fact, they recently provided editorial, 3D animation, color correction and audio post on the 60-second VR short Cardboard City, co-winner of the Samsung Gear Indie VR Filmmaker Contest. The short premiered at the 2016 Sundance Film Festival. You can check it out here.

Cardboard City, directed by Double Eye Productions’ Kiira Benzing, takes viewers inside the studio of Brooklyn-based stop-motion animator Danielle Ash, who has built a cardboard world inside her studio. There is a pickle vendor, a bakery and a neighborhood bar, all of which can be seen while riding a cardboard roller coaster.

East Coast Digital‘s Stina Hamlin was post producer on the project. We reached out to her to find out more about this project and how the VR workflow differs from the traditional production and post workflow.

Stina Hamlin

How did this project come about?
The project came about organically after being introduced to director Kiira Benzing by narrative designer Eulani Labay. We were all looking to get our first VR project under our belt.  In order to understand the post process involved, I thought it was vital to be involved in a project from the inception, through the production stage and throughout post.  I was seeking projects and people to team up with, and after I met Kiira this amazing team came together.

What direction did you get?
We were given the understanding of the viewer experience that the film should evoke and were asked to be responsible for the technical side of things on set and in editorial.

So you were on set?
Yes, we were definitely on set. That was an important piece of the puzzle. We were able to consult on what we could do in color and we were able to determine file management and labeling of takes to make it easier to deal with when back in the edit room. Also, we were able to do a couple of stitches at the beginning of the day to determine best camera positioning, etc.

How does your workflow differ from a traditional project to a VR project?
A VR project is different because we are syncing and concerned with seven-plus cameras at a time. The file management has to be very detailed and the stitching process is tedious and uses new software that all editors are getting up to speed with.

Monitoring the cameras on set is tricky, so being able to stitch on set to make sure the look is true to the vision was huge.  That is something that doesn’t happen in the traditional workflow… the post team is definitely not on set.

Cardboard City

Can you elaborate on some of the challenges of VR in general and those you encountered on this project?
The challenges are dealing with multiple cameras and cards, battery or power, and media for every shot from every camera. Syncing the cameras properly in the field and in post can be problematic, and the file management has to be uber-detailed. Then there’s the stitching… there are different software options, and no one has mastered them yet. It is tedious work, and all of this has to get done before you can even edit the clips together in a sequence.

Our project also used stop-motion animation, so we had the artist featured in our film experimenting with us on how to pull that off.  That was really fun and it turned out great!  I heard someone say recently at the Real Screen conference that you have to unlearn everything that you have learned about making a film.  It is a completely different way to tell a story in production and post.

What was your workflow like?
As I mentioned before, I thought that it was vital to be on set to help with media management and “shot looks” using only natural light and organically placed light in preparation for color. We were also able to stitch on set to get a sense of each set-up, which really helped the director and artist see their story and creatively do their job. We then had a better sense of managing the media and understanding how the takes were marked.

Once back in the edit room we used Adobe Premiere to clean up each take and sync each clip for each camera. We then brought only those clips into the stitching software — Autopano and Giga software from Kolor.com — to stitch and clean up each scene. We rendered out each scene into a self-contained QuickTime for color. We colored in DaVinci Resolve and edited the scenes together using Premiere.

What about the audio? 
We recorded nothing on location. All of the sound was designed in post using the mix from the animated short film Pickles for Nickels that was playing on the wall, in addition to the subway and roller coaster sound effects.

What tools were used on set?
We used GoPro Hero 4s with firmware 3.0 and shot in log, 2.7k/30fps. iPads and iPhones were used to wirelessly monitor the rig, which was challenging. We used a laptop with AutoPano and Giga software to stitch on set. This is the same software we used in the edit bay.

What’s next?
We are collaborating once more with Kiira Benzing on the follow-up to Cardboard City. It’s a full-fledged 360 VR short film. The sequel will be even more technically advanced and create additional possibilities for interaction with the user.

Bandito Brothers: picking tools that fit their workflow

Los Angeles-based post, production and distribution company Bandito Brothers is known for its work on feature films such as Need for Speed, Act of Valor and Dust to Glory. They provide a variety of services — from shooting to post to visual effects — for spots, TV, films and other types of projects.

Lance Holte in the company’s broadcast color bay, working on DaVinci Resolve 12.

They are also known in our world for their Adobe-based workflows, using Premiere and After Effects in particular. But that’s not all they are. Recently, Bandito invested in Avid’s new ISIS|1000 shared storage system to help them work more collaboratively with very large and difficult-to-play files across all editing applications. The system — part of the Avid MediaCentral Platform — allows Bandito’s creative teams to collaborate efficiently regardless of which editing application they use.

“We’ve been using Media Composer since 2009, although our workflows and infrastructure have always been built around Premiere,” explains Lance Holte, senior director of post production, Bandito Brothers. “We tend to use Media Composer for offline editorial on projects that require more than a few editors/assistants to be working in the same project since Avid bin-locking in one project is a lot simpler than breaking a feature into 200 different scene-based Premiere projects.

“That said, almost every project we cut in Avid is conformed and finished in Premiere, and many projects — that only require two or three editors/assistants, or require a really quick turnaround time, or have a lot of After Effects-specific VFX work — are cut in Premiere. The major reason that we’ve partnered with Avid on their new shared storage is because it works really well with the Adobe suite and can handle a number of different editorial workflows.”

Bandito’s Mix Stage and Bandito’s Edit 4.

He says the ISIS|1000 gives them the collaborative power to share projects across a wide range of tools and staff, and to complete projects in less time. “The fact that it’s software-agnostic means everyone can use the right tools for the job, and we don’t need to have several different servers with different projects and workflows,” says Holte.

Bandito Brothers’ ISIS|1000 system is accessible from three separate buildings at its Los Angeles campus — for music, post production and finishing. Editors can access plates being worked on by its on-site visual effects company, or copy over an AAF or OMF file for the sound team to open in Avid Pro Tools in their shared workspace.

“Bandito uses Pro Tools for mixing, which also makes the ISIS|1000 handy, since we can quickly move media between mix and editorial anywhere across the campus,” concludes Holte.

Currently, Bandito Brothers is working on a documentary called EDM, as well as commercial campaigns for Audi, Budweiser and Red Bull.

Top 3: My picks from Adobe’s Creative Cloud update

By Brady Betzel

Adobe’s resolve to update its Creative Cloud apps on a regular basis has remained strong. The latest updates, released on December 1, really hammer home Adobe’s commitment to make editing video, creating visual effects and color correcting on a tablet a reality, but it doesn’t end there. They have made their software stronger across the board, whether you are using a tablet, mobile workstation or desktop.

After Effects and Stacked Panels

I know everyone is going to have their own favorites, but here are my top three from the latest release:

1. Stacked Panels
In both After Effects and Premiere you will notice the ability to arrange your menus in Stacked Panels. I installed the latest updates on a Sony VAIO tablet and these Stacked Panels were awesome!

It’s really a nice way to have all of your tools on screen without having them take up too much real estate. In addition, Adobe has improved touch-screen interaction with the improved ability to pinch and zoom different parts of the interface, like increasing the size of a thumbnail with a pinch-to-zoom.

In Premiere, to find the Stacked Panels you need to find the drop-down menu in the project panel, locate Panel Group Settings and then choose Stacked Panel Group — and Solo Panels in Stack, if you want to view only one at a time. I highly recommend using Stacked Panels if you are using a touchscreen, like a tablet or some of the newer mobile workstations out in the world. Even if you aren’t, I really think it works well.

Premiere Pro and Optical Flow

2. Optical Flow Time Remapping
Most editors are probably thinking, “Avid has had this for years and years and years, just like Avid had Fluid Morph years before Adobe introduced Morph Cut.” While I thought the exact same thing, I really love that Adobe’s version is powered by the GPU. It really takes advantage of the latest HP z840 with Nvidia Quadro or GTX 980 Ti graphics cards and all their CUDA cores. Be warned though, Optical Flow (much like Morph Cut) works only in certain situations.

If you’ve ever used Twixtor or Fluid Motion in Media Composer, you know that sometimes there is a lot of work that goes into making those effects work. It’s not always the right solution to time remapping footage, especially if you are working on content that will air on broadcast television — even though Optical Flow may look great, some content will fail certain networks’ quality control because of the weird Jello-looking artifacting that can occur.

After Effects and the Lumetri Color Panel

3. Lumetri Color inside of After Effects
While you might already have a favorite coloring app or plug-in to use, having the ability to take clips from Premiere to After Effects, while carrying over the color correction you made inside of the Lumetri panels, is key. In addition, you can use the Lumetri effect inside of After Effects (located under the Utility category) to quickly color your clips inside of After Effects.

Overall, this round of updates seemed to be par for the course, nothing completely revolutionary but definitely useful and wanted. Personally, I don’t think that adding HDR capabilities should have taken precedence over some other updates, such as collaboration improvements (think Avid Media Composer and Avid’s Shared Storage solution, ISIS), general stability improvements, media management, etc. But Adobe is holding true to their word and bringing some of the latest and greatest improvements to their users… and causing users (and manufacturers) of other tools to take notice.

Brady Betzel is an online editor at Margarita Mix in Hollywood. Previously, he was editing The Real World at Bunim Murray Productions. You can email Brady at bradybetzel@gmail.com, and follow him on Twitter, @allbetzroff.

Utopic editor Suzie Moore on cutting Nissan film for different screens

Utopic editor Suzie Moore was tasked with cutting a two-minute film for Nissan called Red Thread, out of agency InVNT, that plays on large screens at worldwide auto shows in six cities across the globe. Each show offers a different stage, screen size and shape. For example, the screen in Frankfurt is rectangular, while the Detroit show’s screen is 6K and wraps around.

This is Moore’s third season working on the film, the purpose of which is to grab attention at an event packed with competing auto manufacturers revealing concept cars and new technologies to the press. The prepro started in August 2015, and delivery of content began at the end of September and will run through April 2016. Moore and her team are on call day and night throughout the project to troubleshoot and make sure the film plays as expected in each city.

Chicago-based Utopic’s job is shepherding the large-screen format film from prepro to graphics to edit. Moore and team will be in constant contact with the Nissan production teams in all six cities — Frankfurt, Tokyo, Detroit, Geneva, New York and Beijing.

We reached out to editor Moore to find out more about the project, her work and the specific challenges of a project like this one.

What was the project shot on, who shot it and who directed?
We received finished/generic masters for the spots that we used in the film. When the film features executives speaking about the cars, it was typically shot on Red at 4K and directed by the creatives from InVNT.

Interviews were filmed at Nissan’s corporate offices in Japan with the help of TBWA/Hakuhodo. They were also shot in La Jolla, California, at the Nissan Design Center and in various locations in Europe and South America with the help of the Nissan Newsroom team. We source content/commercial spots from all over the world. It’s interesting to see how the brand is represented in South Africa, China, Russia and Brazil.

What did you use for the edit and can you talk about resolutions?
I use Adobe Premiere Pro. It’s the perfect program for this material because I set my sequence setting to whatever size the screen is. For example, the Detroit screen is 5760×896 — so long and thin. Coming from the standard 16×9 rectangle, this is a totally different window, which necessitates a different process. Especially because the material that we are using is 16×9. So the challenge is taking all these parts and making them into a new whole while maintaining the integrity of the brand story that we are telling for each specific region.
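To make that aspect-ratio challenge concrete, here is an illustrative calculation on our part — not Utopic’s actual conform recipe — for covering a 5760×896 canvas with 16×9 plates.

```python
# Back-of-the-envelope math for filling a 5760x896 canvas with 16:9 plates.
# Purely illustrative; this is not Utopic's actual conform process.

CANVAS_W, CANVAS_H = 5760, 896   # Detroit screen size, per the interview
SRC_AR = 16 / 9                  # aspect ratio of the 16x9 source masters

plate_w = CANVAS_H * SRC_AR      # width of one plate scaled to the canvas height
plates_across = CANVAS_W / plate_w

print(f"One 16:9 plate at {CANVAS_H}px tall is ~{plate_w:.0f}px wide")
print(f"So roughly {plates_across:.1f} plates side by side span {CANVAS_W}px")
# A UHD (3840x2160) master scaled down to 896px tall still has resolution to
# spare, which is how the width can be covered without blowing anything up.
```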

How did you work with the client on the edit? Were you mostly left alone to do the edit or was the client with you?
We’re on our own a lot. The agency comes in for a few days in August when we brainstorm. We then have a kickoff for each show about six weeks out, followed by conference calls and postings. The creative/account teams for InVNT are always on the move. When we supervised one of the interview shoots, the shoot was in Japan, the creative was in Russia, the producer was in Detroit, and I was in Chicago. The global reach that this project has is one of the reasons why I love it.

What were the challenges of working on a project of this scale?
As mentioned above, while I do love the global reach, the time zones present a challenge. Japan is 14 hours ahead, so that shoot I mentioned was at 4am for me (6pm Japan).

For the Sao Paulo video last year, we collaborated with a CG team in Japan. This also presented a language barrier. I would get emails first in Japanese, then we would have a translator translate.

The biggest challenge though, by far, is the edit itself. Making 16×9 footage fill a 6K screen without blowing it up, then delivering the final piece in puzzle pieces to be reassembled on site because the screen is different for every show — so it’s never the same video twice. It might have similar elements, but it’s always changing, so it’s a challenge to keep evolving it and making it better and better each time.

The edit at times is layers upon layers and nests within nests. I feel like sometimes I hit the “end of the equation.” Like what I want to accomplish pushes the program to its limit. It’s a different kind of editing… very process driven. The creativity comes once I figure out the process for the edit.

What kinds of VFX were involved and how many shots?
We decide on the graphic treatment at the beginning of each year in August. We typically have a mix of 2D and 3D elements that we use which are created in Maxon Cinema 4D and Adobe After Effects. Last year the concept was based on the lines of the car, so we used 3D strokes and lens flares along with some 3D shape elements. This year, it’s 3D tendrils and chevrons. So these graphics elements open and close the film and are used as transition moments throughout.

————————
Other credits on the film include Yessian, which provided sound design, music and audio post.

Quick Chat: Northern Lights editor Chris Carson on Globetops campaign

Northern Lights editor Chris Carson has teamed up with the global non-profit Globetops — which collects used laptops and donates them to people in need of computers worldwide — in order to tell the stories of entrepreneurs in need of access to technology in Guinea.

The almost four-minute Laptop Stories: Guinea takes viewers to Guinea, where the non-profit began, to deliver laptops to the leader of an agricultural group in need of access to paperwork to receive government subsidies, a teacher who needs to log her students’ grades, an artisan who creates technological courses and a single mother who dreams of starting her own Internet café. Watch the short here.

Chris Carson

Let’s dig in a bit with Carson regarding the edit…

How long was the shoot, and what was it shot on?
It was shot over a week while Globetops founder and the film’s director, Becky Morrison, was traveling in Guinea distributing laptops. She used a Nikon D800.

How did you work with Morrison? What direction were you given, and did you have some say in the edit?
She gave me a lot of control over the direction of the edit. She helped me at the beginning, finding (and translating) the best interview pieces. There were so many recipients we wanted to profile, but we eventually narrowed it down to five.

What did you edit on?
I usually work in Avid Media Composer, but for this I used Adobe Premiere.

Can you talk about the challenge of editing this project, which features interviews in a language different than your own?
Language was a big challenge. The hardest part was finding a way to quickly tell everyone’s backstory so we could get on to them receiving their laptops. Another thing Becky wanted to convey was a sense of the recipients’ strength, community spirit and entrepreneurial savvy, and not just frame them as needy or desperate people.

I suppose happiness and emotion bridge all languages, because the looks on their faces when they got the laptops were amazing. I was tempted to just edit a string of 15 people receiving laptops, because their joy is so palpable. But we wanted to give a sense of why they needed computers, what kind of work they were engaged in, and how much they could improve their own lives and even their communities.

Did you only do editing, or were you asked to do anything else?
I only did the editing (and the graphics), but Ted Gannon from SuperExploder did the audio mix. The DP was Jordan Engle.

Notes from Adobe Max 2015

By Daniel Restuccio

Creativity, community, collaboration and the cloud were the dominant themes of Adobe Max 2015, which attracted over 7,000 creative types to the Los Angeles Convention Center earlier this week.

Adobe’s Creative Cloud, with 5.3 million subscribers, wants to be the antidote to the phenomenon of “content velocity” — the increasing demand for more content to be delivered faster, better and less expensively. They highlighted the following solutions for managing the emerging, worldwide, 24/7 work schedule.

Here are some highlights:
– Mobile apps: projects start on mobile devices with apps like Adobe Comp or Adobe Clip and then get sent to desktop apps like InDesign or Premiere for finishing. The newly announced Adobe Capture aggregates Adobe Shape, Color, Hue and Brush into one app.
– Creative Sync: all assets are in the Creative Cloud and can be instantly updated by anyone on any desktop or mobile device.
– All Adobe apps are now touch enabled on Microsoft Windows.
– Adobe Stock — the company’s royalty-free collection of high-quality photos, illustrations and graphics, which offers 40 million images, will soon expand to video.
– Video editing: Premiere is already UHD-, 4K- and 8K-enabled with Dolby Vision HDR extended dynamic range exhibition on the horizon.

Deadpool director Tim Miller (right) with Adobe senior VP Bryan Lamkin.

Movies and Premiere Pro
Deadpool director Tim Miller addressed the crowd and spoke with Adobe’s Senior VP/GM, Bryan Lamkin. Miller shared that David Fincher persuaded him to make the switch to Adobe Premiere Pro for this film. In fact, Premiere editor Vashi Nedomansky set up the Premiere Pro systems on Deadpool, which was shot on Arri Raw.

Nedomansky trained the editors and assistant editors on the system, which is pretty much a clone of the one he set up for David Fincher — including using Jeff Brue’s solid state drives via OpenDrive — on Gone Girl.

The Coen brothers’ next film, Hail, Caesar!, is also being cut on Premiere, so I think we’ve hit the tipping point with Premiere and feature work. I don’t suspect anyone’s going to throw out their Media Composers anytime soon, but Premiere is now the little engine that could, like Final Cut was back in the day.

Photos: Elizabeth Lippman

Crooked Letter Films shoots bike shop spot in ProRes with Cion

Brooklyn Bicycle Co. turned to Crooked Letter Films, a New York City-based one-stop shop, to shoot and post a promo for its website. Showing off the best parts of the New York City borough and highlighting Brooklyn Bicycle Co.’s role in the community, the 90-second spot  follows three local bikers along their daily routes through the city, highlighting landmarks and close-ups of the bikes.

Prior to production, Crooked Letter (@CL_films) founder Gabriel Gomez and DP Benjamin Garst compiled an extensive shot list. While a majority of the shots required a tripod, several of the action shots called for handheld. They chose to shoot the film in ProRes with an AJA Cion production camera.

“We’re in a day and age where anyone with a DSLR can call themselves a filmmaker, so we’ve recently started upping the ante for projects by working in higher resolutions and less-compressed formats, like ProRes,” explains Gomez. “For this project in particular, we wanted to create a branded piece that felt more like a short film, and Cion helped us give the piece that high-end cinematic feel.”

Gomez and his team often shot out of the back of a van, taking advantage of the camera’s light weight and accessible top handle to manage “run-and-gun” situations. Using the camera’s PL mount, Gomez and Garst used a combination of Cooke 18-100 T3.1 and Cooke 25-250 T3.7 Cine PL lenses. Having set the Cion at ISO 320 with extended gamma and flat settings, they captured 700GB of ProRes 444 and ProRes 422 footage. The combination of the Extended 1 Gamma mode with flat color settings in ProRes 444 gave the team a lot of dynamic range to work with, which in turn allowed them to get a lot out of the color in post.

Using AJA’s Pak Dock, Crooked Letter’s team transferred the ProRes footage into Adobe Premiere Pro CC for post, which they used to stylize the look of each shot. The team had many layers of color information from the Cion to work with, so it could easily adjust the highlights and manipulate the footage without ruining any of the original color information during grading. The available color information proved particularly important for the closing shot of the Brooklyn Bridge.

“That last shot is big, wide and closes the piece, so it was crucial we get it right,” Gomez says. “Given the pretty bleak, overcast shooting conditions that day, we thought it might prove a challenge, but Cion provided such an amazing amount of infinite color detail that we were able to turn a gray and blown-out shot in reality into a picturesque sunrise shot with purple and orange washes, and even bring back the clouds.”

Review: Lenovo ThinkPad W550s Ultrabook mobile workstation

By Brady Betzel

Over the last few years, I’ve done a lot of workstation reviews, including ones for HP’s z800 and z840, Dell’s mobile workstations and now the Lenovo ThinkPad W550s mobile workstation.

After each workstation review goes live, I’m always asked the same question: Why would anyone pay the extra money for a professional workstation when you can buy something that performs almost as well, if not better, for half the price? That’s a great question.

What separates workstations from consumer or DIY systems is primarily ISV (Independent Software Vendor) certifications. Many companies, including Lenovo, work directly with software manufacturers like Autodesk and Adobe to ensure that the product you are receiving will work with the software you use, including drivers, displays, keypads, ports (like the Mini DisplayPort) and so on. So while you are paying a premium to ensure compatibility, you are really paying for the peace of mind that your system will work with the software you use most. The Lenovo W550s has ISV-certified drivers for Autodesk, Dassault, Nemetschek Vectorworks, PTC and Siemens, all relating to drivers for the Nvidia Quadro K620M graphics card.


Beyond ISV driver certifications, the Lenovo ThinkPad W550s is a lightweight powerhouse with the longest battery life I have ever seen in a mobile workstation — all for around $2,500.

Out of the box I noticed two batteries charging when I powered on Windows 8.1 — you can choose Windows 7 (64-bit) or 8.1 (64-bit). One of the best features I have seen in a mobile workstation is the ability to swap batteries without powering down (I guess that’s the old man in me coming out), and Lenovo has found a way to do it without charging an arm and a leg, while physically showing only one battery. For $50 (included in the $2,500 price), you can have a three-cell (44Whr) battery in the front and a six-cell (72Whr) battery in the back. I was able to work about three days in a row without charging.

This was intermittent work ranging from sending out tweets with 10 tabs up in Chrome to encoding a 4K H.264 for YouTube in Adobe Media Encoder. It was a very welcome surprise, and if I had a second battery I could swap them out without losing power because of the battery in the front (built-in).

Under the Hood
The battery life is the biggest feature in my opinion, but let’s lay out the rest of the specs:
– Processor: Intel Core i7-5600U (4MB cache, up to 3.2GHz; mine ran at 2.6GHz)
– OS: Windows 8.1 Pro 64-bit
– Display: 15.5-inch 3K (2880×1620) IPS multi-touch with WWAN
– Graphics: Nvidia Quadro K620M, 2GB
– Memory: 16GB PC3-12800 DDR3L
– Keyboard: backlit with number keypad
– Pointing devices: TrackPoint (the little red joystick-style mouse), touchpad and fingerprint reader
– Camera: 720p
– Hard drive: 512GB Serial ATA3 SSD
– Battery: three-cell Li-Polymer 44Whr (front), six-cell Li-ion 72Whr Cyl HC (rear)
– Power: 65W AC adapter
– Wireless: Intel 7265 AC/B/G/N dual-band wireless plus Bluetooth
– Warranty: one-year carry-in (diagnosed by phone first)

The W550s has a bunch of great inputs, like the mini display port, which I got to work instantly with an external monitor; three USB 3.0 ports with one of them always on for charging of devices; a smart card reader, which I used a lot; and even a VGA port.


In terms of power, I received a nice dual-core (four-thread) Intel Core i7-5600U CPU running at 2.6GHz and higher. Combined with the Nvidia Quadro K620M and 16GB of DDR3L, the i7-5600U delivered enough power to encode my GoPro Hero 3+ Black Edition 4K timelapses quickly using the GoPro software and Adobe Media Encoder.

Encoding and layering effects are what really bog a video editing system down, so what better way to see what the W550s is made of? In Adobe Premiere, I applied a fisheye-removal effect to an image sequence containing about 2,400 stills, sped the timelapse up by 1,000 percent and sent the sequence to Adobe Media Encoder. In the end, the W550s chewed through the render and spit out a 4K YouTube-compatible H.264 in around 15 minutes. The CUDA cores in the Nvidia Quadro K620M really helped, although this did kick the fans on. I did about six of these timelapses to verify that my tests were conclusive. If you want to see them, you can check them out on YouTube.
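For a sense of scale, here is the simple arithmetic behind that test; the 23.976fps timeline rate is an assumed value, since the frame rate isn’t stated above.

```python
# Quick arithmetic on the timelapse test described above.
# The timeline frame rate is an assumed value; the text doesn't specify one.

STILLS = 2400      # roughly 2,400 stills in the image sequence
FPS = 23.976       # assumed timeline frame rate
SPEED = 10.0       # "1,000 percent" speed change = 10x

source_sec = STILLS / FPS
output_sec = source_sec / SPEED
output_frames = STILLS / SPEED

print(f"Source: ~{source_sec:.0f}s of timelapse; output: ~{output_sec:.0f}s "
      f"(~{output_frames:.0f} frames of 4K H.264 to encode)")
```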

The Quadro K620M is on the lower end of the mobile Quadro family but boasts 384 CUDA cores that help with the encoding and transcoding of media using the Adobe Creative Suite. In fact, I needed a laptop to use in a presentation I did for the Editors’ Lounge. I wanted to run After Effects CC 2014 along with Video Copilot’s Element 3D V1.6 plug-in, Maxon Cinema 4D Studio R16 and Avid Media Composer 6.5, all while running Camtasia (screen capture software) the entire time. That’s a lot to run at once, and I decided to give the W550s the task.

In terms of processing power the W550s worked great — I even left After Effects running while I was inside of Cinema 4D doing some simple demos of House Builder and MoText work. I have to say I was expecting some lag when switching between the two powerhouse software programs, but I was running Element 3D without a hiccup, even replicating the text particle and adding materials and lighting to them – both a testament to a great plug-in as well as a great machine.

While power was not a problem for the W550s, I did encounter some interesting problems with the screen resolution. I have to preface this by saying that it is definitely not Lenovo’s fault — it has to do with Avid Media Composer not being optimized for such a high-resolution screen. Avid Media Composer was almost unusable on the 15.5-inch 3K (2880×1620) IPS multi-touch screen. The user interface has not been updated for today’s high-resolution screens, including the W550s’ display. It is definitely something to be aware of when purchasing a workstation like this.

I did a few benchmarks for this system using Maxon Cinebench R15 software, which tests the OpenGL and CPU performances as compared to other systems with similar specs. The OpenGL test revealed a score of 35.32fps while the CPU test revealed a score of 265cb. You can download Cinebench R15 here and test your current set-up against my tests of the W550s.

There are a couple of things, cosmetically, that I am not as fond of on the W550s. When you purchase the larger rear battery, keep in mind that it adds about a ¼- to ½-inch lift — the laptop will no longer sit flat. In addition, while the keyboard is very nice — I found myself really liking the numeric keypad, especially when typing exact frame numbers in Premiere — the touchpad has its three buttons on top instead of underneath, which is not what I typically encounter. On one hand, I can see how, if you retrain yourself to use the three buttons with your left hand while using your right hand on the touchpad, it may be more efficient. On the other hand, it will get annoying. I like the idea of a touchscreen in theory — it’s nice to move windows around. But practically speaking, from a video and motion graphics standpoint, it probably isn’t worth the extra money, and I would stick with a non-touch screen for a mobile workstation.

The last item to cover is the warranty. Typically, workstations come with a pretty good one. Lenovo gives you a one-year carry-in warranty with the purchase of the W550s, which to me is short. This really hurts the overall price of the workstation, because a longer warranty, one that will actually get you help within a business day if a crisis arises, will cost you at least a few hundred dollars more.

Summing Up
In the end, the price and awesome battery life make the Lenovo ThinkPad W550s a lightweight mobile workstation that can crunch through renders quickly. If I were ordering one for myself, I would max out the memory at 32GB, skip the touchscreen (maybe even stick with the 1920×1080 version) and keep everything else… oh, and I would also upgrade to a better warranty.

Before you leave, take these highlights with you: extreme battery life, lightweight and durable, and powerful enough for multimedia use.

Quick Chat: Paul London from K Street Post in DC

Washington DC-based post boutique K Street Post, which opened its doors in April 2004, provides editing, audio post and finishing for spots, PSAs, station promos, corporate presentations and docs. Oh, and as their location might suggest, when the political season heats up, so do their post suites.

Recently we reached out to owner Paul London to find out more about K Street and how they work.

What was your goal when you opened K Street?
At the time we opened, there were many small, nonlinear edit shops in Washington, mainly using Avid Media Composer and Final Cut Pro. From the beginning, I wanted to set myself apart by taking advantage of my graphic design skills. I wanted to focus on high-end, graphics-intensive video and TV commercial projects. That’s one of the reasons I chose Quantel gear early on; its built-in editor, effects, text and paint tools were perfect for this type of work.

You are in the center of DC. How much of your work is political-based advertising?
K Street Post has always concentrated on television spot work. This includes local/regional commercials (Next Day Blinds, Silver Diner, Washington Times, Jiffy Lube), and some national TV ads for associations like the American Petroleum Institute and national PSAs for the USO (pictured below).

We also do a large amount of political advertising for governors, congressmen, senators and some presidential races and issue advertising for political action committees (PACs and Super PACs).

Can you name a recent political job?
We have been working on a number of political campaigns recently, including a four-minute video for Carson America that played at the Chicago rally during Ben Carson’s presidential announcement this Tuesday. We were making adjustments and shot changes right up to the last minute.

You are a boutique. What are the benefits of staying small?
K Street Post is small, and that is by design. Right now we have a Quantel Pablo Rio edit room, a Final Cut Pro 7/Adobe Premiere edit room and an Avid Pro Tools audio room. During the very busy months of the political season (August and September of every even-numbered year), we typically add an additional edit room, but the way things are shaping up for the 2016 race, we might end up adding more.

I love being small. I get to focus on what I like best, which is being creative and working with clients on TV projects. I have thought about growing the company and adding additional rooms, but then I would become more of a manager and have less time to be creative. Another great thing about being small is clients have direct access to us. The schedule book is easy to manage and clients can discuss projects and adjust booking directly with me.

K Street Post’s new Quantel Pablo Rio suite.

Can you talk about your workflow?
We handle all aspects of the post process: offline editing with FCP or Premiere, and online editing and color correction with the Pablo Rio. But to be honest, we have not done an offline for some time. The political spots we work on simply don’t have time for that type of workflow. You usually have 8 to 12 hours to create a finished ad, so you load, edit, grade and get a very polished version off to the client within the day. This is where the Pablo Rio comes in for us. We have found no other system that can do this at the quality level required for a statewide or national TV commercial.

Also, most of our commercial work is graphic heavy. It’s not uncommon to have 30 to 40 layers being used to create the finished ad. The Pablo Rio is fast enough for this type of client-attended work. It’s also great at accommodating changes, and there are lots of changes! It’s quite normal for a political ad to have several script changes even during editing.

K Street Post’s Pablo Rio compositing with up to 50 layers.

If you could share one tip with clients about getting the most out of the post experience, what would it be?
Good question. I ask my clients to get me involved as early as possible. The more I know about the project the more I become immersed in it and the better the final result. Clients should never ambush their editors with their projects.

Making ‘Being Evel’: James Durée walks us through post

Compositing played a huge role in this documentary film.

By Randi Altman

Those of us of a certain age will likely remember being glued to the TV as a child watching Evel Knievel jump his motorcycle over cars and canyons. It felt like the world held its collective breath, hoping that something horrible didn’t happen… or maybe wondering what it would be like if something did.

Well, Johnny Knoxville, of Jackass and Bad Grandpa fame, was one of those kids, as witnessed by, well, his career. Knoxville and Oscar-winning filmmaker Daniel Junge (Saving Face) combined to make Being Evel, a documentary on the daredevil’s life and career. Produced by Knoxville’s Dickhouse Productions (yup, that’s right) and HeLo, it premiered at Sundance this year.

Continue reading

Radical/Outpost’s Evan Schechtman talks latest FCP X updates, NLE trends

By Randi Altman

As you might have heard, Apple has updated its Final Cut Pro to version 10.1.4, with what they call “key stability improvements.”

That includes the Pro Video Formats 2.0 software update, which provides native support for importing, editing and exporting MXF files with Final Cut Pro X. While the system already supported import of MXF files from video cameras, this update extends the format support to a broader range of files and workflows.

In addition to the native MXF support, there is also an option to export AVC-Intra MXF files. There are fixes for past issues with automatic library backups. It also fixes a problem where clips… Continue reading

Bellawood hardwood floors take knocks, still shine in new spot

By Randi Altman

Cleats on a hardwood floor. Yes, cleats… on a hardwood floor. That hurts just to type. And that’s only one of the cringe-worthy floor offenses shown in a recent spot for Bellawood and Lumber Liquidators out of agency Big River Advertising.

There’s tap dancing, skateboarding, a dog with less-than-perfect drinking habits, broken glass, coffee spewing… you know, life. And that’s the point of the spot: the Bellawood floor depicted in A Bountiful Life has had a well-lived life. To help tell that story, Big River called on Trademarky Films and director Ruben Latre of Hostage Films, who also shot and helped post the project.

Continue reading

IBC: Square Box offers CatDV 11, CatDV Web 2, further integration with Adobe

At IBC, Square Box Systems is showing CatDV 11, the next generation of its asset management software, in addition to CatDV Web 2, the company’s new approach to online media asset management.

The new features and toolsets in CatDV 11 and CatDV Web 2, along with fresh workflow capabilities between Adobe Premiere Pro CC editorial software and the Adobe Anywhere realtime collaborative platform, are designed to allow content owners to more easily repurpose and monetize their media assets and to significantly improve end-to-end workflow efficiency from production and post through to delivery and distribution.

“CatDV 11 and CatDV Web 2 build on the strengths of previous versions of these leading MAM platforms, and deliver a step-change in performance and capabilities for small, medium and large enterprises alike,” says Dave Clack, CEO of Square Box. “CatDV Web 2 in particular provides a fresh, clean user interface that is completely intuitive to non-technical users, making it much easier for marketers and sales teams to repurpose material and monetize their content. These latest advances in CatDV media asset management systems make it far simpler and faster to perform the whole range of management tasks on an even wider range of production materials.”

Key features of CatDV 11 include an enhanced user experience, with additional media management tools that are accessible to new customers and familiar to existing users; a native-capable playback engine supporting the latest digital cinematography cameras; improved integration with major video editing and creative collaboration platforms; and new functions for enterprise-scale deployments. CatDV 11’s new 64-bit architecture also speeds up overall performance.

CatDV works with many high-res media formats, and the new CatDV 11 extends this support with a new player architecture supporting the latest broadcast formats as well as native Red Epic and AVCHD/MTS footage.

Key features of the new GUI include workspaces for convenient browsing, ingest, logging and search; tabbed browsing, which simplifies working with multiple catalogues and query results; user-defined color-coding of smart labels; simplified tree navigation that lets useful functions be grouped together to improve workflow; pop-up panels with metadata preview; and a hover-to-scrub facility that lets users quickly preview their content. The addition of global metadata fields in CatDV 11 helps make large, enterprise deployments and automation easier.

CatDV Web 2 offers intuitive media and metadata management toolsets via an entirely browser-driven web interface. Features include logging, search and retrieval tools, a rough-cut editing capability, and easy system administration.

CatDV Web 2 can be used as the only interface to CatDV, or can co-exist with existing web and desktop clients, and is suited for cloud deployments, increasing the reach of CatDV outside typical production or engineering teams.

Integration with Adobe’s Premiere Pro CC and Anywhere
CatDV has worked closely with Adobe to improve workflow capabilities between CatDV 11 and Adobe Premiere Pro CC editorial software and Adobe Anywhere realtime collaborative platform.

An integrated CatDV panel within Premiere Pro supports a wide variety of search mechanisms, sub-clips, sequences and markers. Metadata and footage can also be previewed. CatDV for Anywhere adds the power of CatDV media management to Adobe Anywhere for video: sharing CatDV’s content seamlessly with Anywhere productions, within CatDV and via Adobe Premiere Pro CC.

Main image: CatDV Adobe Premiere Panel 3.

A young filmmaker discovers post can be lonely… and gratifying

By Emory Parker

French dramatist Charles-Guillaume Étienne coined the phrase, “If you want something done right, do it yourself.” This seems to be the justification for the resurgence of the auteur style of filmmaking. Young filmmakers, especially those fresh out of film school, believe they will become successful by performing as many roles as they can on set. It’s not uncommon to see one person’s name appear several times in the credits of a film. But what if that person is not the most qualified for one of the jobs she has given herself?

Last summer I helped a friend get over a bad break-up and was inspired to write a song about it. I sat down at the piano and my fingers kept returning to the same four chords. I began to hear different patterns emerge from the chords and play simultaneously in my head. I have been singing my whole life but have little experience writing music or playing… Continue reading

Review: MediaSilo with Premiere/Prelude integration

By Brady Betzel

A few months ago I reviewed a cloud-based asset management and collaboration platform called Aframe. After that review hit, many people emailed asking how I “really” felt about it, and whether it would work in different production scenarios.

Competitors also reached out, asking if I could review their products too. My Forbidden Technologies review is already up on postPerspective.com, and I recently took MediaSilo up on their offer and ran some of my own tests. I discovered that we really are going to be working in the cloud in the near future; this is not just a fad.

While these products do differ in their offerings, it’s clear there is a race to supremacy in the cloud wars. Forbidden Technologies offers Forscene, a Web-based NLE, cloud storage and… Continue reading

RuckSackNY creates anti-texting/driving PSA in time for Thanksgiving

New York — Manhattan-based RuckSackNY has completed a Public Service Announcement in response to the growing number of texting-related car accidents.

RuckSackNY (www.rucksackny.com) created the 30-second spot, Why Did the Turkey Cross the Road?, with the aim of educating the audience about the dangers surrounding texting while driving or walking. See it here: https://www.youtube.com/watch?v=bQlJ-RXtGGc

Fred and Natasha Ruckel, the creative directors on the texting PSA, spend a significant amount of time driving between the city and various shoot locations. Like many of us, they encounter drivers swerving and veering precariously, sometimes at high speed, while scrolling through their text messages.

“It scares me to be on the road at times,” noted Fred Ruckel, “because a car could just slam right into you.” Fred’s wife and business partner, Natasha, finds the situation stressful but laments, “There’s nothing you can do about it; don’t let it get to you.”

After a few near misses, Fred decided there had to be something he could do about it. “I wanted to make a Public Service Announcement video to help raise awareness.”

Continue reading