Tag Archives: Review

Review: Samsung’s 970 EVO Plus 500GB NVMe M.2 SSD

By Brady Betzel

It seems that SSD drives are dropping in price by the hour. (This might be a slight exaggeration, but you understand what I mean.) Over the last year or so, prices have dropped dramatically, including on high-speed NVMe SSD drives. One of those is the highly touted Samsung 970 EVO Plus NVMe line.

In this review, I am going to go over Samsung’s 500GB version of the 970 EVO Plus NVMe M.2 SSD drive. The Samsung 970 EVO Plus NVMe M.2 SSD comes in four sizes — 250GB, 500GB, 1TB and 2TB — and retails (according to www.samsung.com) for $74.99, $119.99, $229.99 and $479.99, respectively. For what it’s worth, I really didn’t see much of a price difference on the other sites I visited, namely Amazon.com and Best Buy.

On paper, the EVO Plus line of drives can achieve speeds of up to 3,500MB/s read and 3,300MB/s write. Keep in mind that the lower the storage size, the lower the read/write speeds will be. For instance, the EVO Plus 250GB SSD can still hit up to 3,500MB/s in sequential reads, while its sequential writes dwindle to a maximum of 2,300MB/s. Comparatively, the 250GB “standard” EVO can reach 3,400MB/s to 3,500MB/s sequential read speeds but only 1,500MB/s sequential write speeds. The 500GB standard EVO costs just $89.99, but if you need more storage, you will have to pay more.

There is another SSD to compare the 970 EVO Plus to, and that is the 970 Pro, which only comes in 512GB and 1TB sizes — costing around $169.99 and $349.99, respectively. While the Pro version has similar read speeds to the Plus (up to 3,500MB/s) and actually slower write speeds (up to 2,700MB/s), the real price of admission for the Samsung 970 Pro is its Terabytes Written (TBW) warranty rating. Samsung warranties the 970 line of drives for five years or a set number of terabytes written, whichever comes first. Among the 500GB-class 970 drives, the “standard” and Plus models are covered for 300TBW, while the Pro covers a whopping 600TBW.
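To put those TBW figures in perspective, here is a rough back-of-the-envelope calculation. This is a hypothetical sketch, and the 100GB/day write workload is an assumption, not a measured figure:

```python
# Rough SSD endurance estimate from a TBW warranty figure.
# The 100GB/day workload below is a hypothetical assumption.

def years_until_tbw(tbw_limit_tb: float, gb_written_per_day: float) -> float:
    """Years of use before the terabytes-written warranty cap is reached."""
    tb_per_year = gb_written_per_day * 365 / 1000
    return tbw_limit_tb / tb_per_year

for label, tbw in [("970 EVO Plus 500GB", 300), ("970 Pro 512GB", 600)]:
    print(f"{label}: {years_until_tbw(tbw, 100):.1f} years at 100GB/day")
```

At that pace, both drives would outlast the five-year term, so for most editors the warranty clock, not the write cap, expires first.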

Samsung says its use of the latest V-NAND technology, in addition to its Phoenix controller, gives the EVO NVMe drives their high speeds and power efficiency. Essentially, V-NAND is a way to stack memory vertically instead of the previous method of stacking memory in a planar way. Stacking vertically allows for more memory in the same space, in addition to longer life spans. You can read more about the Phoenix controller here.

If you are like me and want both a good warranty (or, really, faith in the product) and blazing speeds, check out the Samsung 970 EVO Plus line of drives: a great price point with almost all of the features of the Pro line. The 970 line of NVMe M.2 SSD drives uses the 2280 form factor (meaning 22mm x 80mm) with an M key-style interface. It’s important to understand which interface your SSD is compatible with: either M key (or M) or B key. Cards in the Samsung 970 EVO line are all M key. Most newer motherboards will have at least one, if not two, M.2 ports to plug drives into. You can also find PCIe adapters for $20 or $30 on Amazon that will give you essentially the same read/write speeds. External USB 3.1 Gen 2 USB-C enclosures can also be found; these give you an easier way of swapping drives when needed without having to open your case.

One really amazing way to use these newly lower-priced drives: When color correcting, editing and/or performing VFX miracles in apps like Adobe Premiere Pro or Blackmagic DaVinci Resolve, use NVMe drives only for cache, still stores, renders and/or optimized media. With the low cost of these NVMe M.2 drives, you might be able to include the price of one when charging a client and throw it on the shelf when done, complete with the project and media. Not only will you have a super-fast way to access the media, but, if you use an external enclosure, you can easily swap another one into the system.

Summing Up
In the end, the price points of the Samsung 970 EVO Plus NVMe M.2 drives are right in the sweet spot. There are, of course, competing drives that run a little bit cheaper, like the Western Digital Black SN750 NVMe SSD (around $99 for the 500GB model), but they come with slightly slower read/write speeds. So for my money, the Samsung 970 line of NVMe drives is a great combination of speed and value that can take your computer to the next level.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Review: Dell’s Precision T5820 workstation

By Brady Betzel

Multimedia creators are looking for faster, more robust computer systems, and computing power is increasing across all brands and product lines. Whether it’s an iMac Pro with a built-in 5K screen or a Windows-based, Nvidia-powered PC workstation, there are many options to consider. Many of today’s content creation apps are operating-system-agnostic, but that’s not necessarily true of hardware — mainly GPUs. So for those looking at purchasing a new system, I am going to run through one of Dell’s Windows-based offerings: the Dell Precision T5820 workstation.

The most important distinction between a “standard” computer system and a workstation is the enterprise-level quality and durability of internal parts. While you might build or order a custom-built system for less money, you will most likely not get the same back-end assurances that “workstations” bring to the party. Workstations aren’t always the fastest, but they are built with zero downtime and hardware/software functionality in mind. So while non-workstations might use high-quality components, like an Nvidia RTX 2080 Ti (a phenomenal graphics card), they aren’t necessarily meant to run 24 hours a day, 365 days a year. On the other hand, the Nvidia Quadro series GPUs are enterprise-level graphics cards that are meant to run constantly with low failure rates. This is just one example, but I think you get the point: Workstations run constantly and are warrantied against breakdowns — typically.

Dell Precision T5820
Dell has a long track record of building everyday computer systems that work. Even more impressive are its next-level workstation computers, which not only stand up to constant use and abuse but are also certified by independent software vendors (ISVs). ISV certification is a designation indicating that Dell has not only tested but also supports the end user’s primary software choices. For instance, in the nonlinear editing software space, I found that Dell had tested the Precision T5820 workstation with Adobe Premiere Pro 13.x in Windows 10 and has certified the AMD Radeon Pro WX 2100 and 3100 GPUs with 18.Q3.1 drivers.

You can see for yourself here. Dell also has driver suggestions for some recent versions of Avid Media Composer, as well as other software packages. In other words, Dell not only tests but will support approved hardware configurations in those software apps.

Beyond the ISV certifications and the included three-year hardware warranty with on-site/in-home service after remote diagnostics, how does the Dell Precision T5820 perform? Well, it’s fast and well-built.

The specs are as follows:
– Intel Xeon W-2155 3.3GHz, 4.5GHz Turbo, 10-core, 13.75MB cache with hyperthreading
– Windows 10 Pro for Workstations (for four-plus cores — this is an additional cost)
– Precision 5820 Tower with 950W chassis
– Nvidia Quadro P4000, 8GB, four DisplayPorts (5820T)
– 64GB (8x8GB) 2666MHz DDR4 RDIMM ECC memory
– Intel vPro technology enabled
– Dell Ultra-Speed Drive Duo PCIe SSD x8 Card, 1 M.2 512GB PCIe NVMe class 50 Solid State Drive (boot drive)
– 3.5-inch 2TB 7200rpm SATA hard drive (secondary drive)
– Wireless keyboard and mouse
– 1Gb network interface card
– USB 3.1 G2 PCIe card (two Type C ports, one DisplayPort)
– Three years hardware warranty with onsite/in-home service after remote diagnosis

All of this costs around $5,200 without tax or shipping and not including any sale prices.

The Dell Precision T5820 is the mid-level workstation offering from Dell, finding the balance between affordability, performance and reliability — kind of the “better, cheaper, faster” concept. It is one of the quietest Dell workstations I have tested. Besides the spinning hard drive that was included on the model I was sent, there aren’t many loud cards or fans to distract me when I turn on the system. Dell is touting the new multichannel thermal design for advanced cooling and acoustics.

The actual 5820 case is about the size of a mid-sized tower system but feels much slimmer. I even cracked open the case to tinker around with the internal components. The inside fans and multichannel cooling are sturdy and even a little hard to remove without some force — not necessarily a bad thing. You can tell that Dell made it so that when something fails, it is a relatively simple replacement. The insides are very modular. The front of the 5820 has an optical drive, some USB ports (including two USB-C ports) and an audio port. If you get fancy, you can order the systems with what Dell calls “Flex Bays” in the front. You can potentially add up to six 2.5-inch or five 3.5-inch drives and front-accessible storage of up to four M.2 or U.2 PCIe NVMe SSDs. The best part about the front Flex Bays is that, if you choose to use M.2 or U.2 media, they are hot-swappable. This is great for editing projects that you want to archive to an M.2 or save to your Blackmagic DaVinci Resolve cache and remove later.

In the back of the workstation, you get audio in/out, one serial port, PS/2, Ethernet and six USB 3.1 Gen 1 Type A ports. This particular system was outfitted with an optional USB 3.1 Gen 2 10Gb/s Type C card with one DisplayPort passthrough. This is used for the Dell UltraSharp 32-inch 4K (UHD) USB-C monitor that I received along with the T5820.

The large Dell UltraSharp 32-inch monitor (U3219Q) offers a slim footprint and a USB-C connection that is very intriguing, but Dell isn’t giving them away: the monitor costs $879.99 if ordered through Dell.com. With the ultra-minimal InfinityEdge bezel, 400 nits of brightness for HDR content, up to UHD (3840×2160) resolution, a 60Hz refresh rate and multiple input/output connections, you can see all of your work on one large IPS panel. For those of you who want to run two computers off one monitor, this Dell UltraSharp has a built-in KVM switch function. Anyone with a MacBook Pro featuring USB-C/Thunderbolt 3 ports can in theory use one USB-C cable to connect and charge. I say “in theory” only because I don’t have a new MacBook Pro to test it on. But for PCs, you can still use the USB-C as a hub.

The monitor comes equipped with DisplayPort 1.4, HDMI, four USB 3.0 Type A ports and a USB-C port. Because I use my workstation mainly for video and photo editing, I am always concerned with proper calibration. The U3219Q is purported by Dell to be 99% sRGB-, 95% DCI-P3- and 99% Rec. 709-accurate, so if you are using Resolve and outputting through a DeckLink, you will be able to get some decent accuracy and even use it for HDR. Over the years, I have really fallen in love with Dell monitors. They don’t break the bank, and they deliver crisp and accurate images, so there is a lot to love. Check out more of this monitor here.

Performance
Working in media creation, I jump around between a bunch of apps and plugins, from Media Composer to Blackmagic’s DaVinci Resolve and even from Adobe After Effects to Maxon’s Cinema 4D. So I need a system that can handle not only CPU-focused apps like After Effects but also GPU-weighted apps like Resolve. With the Intel Xeon and Nvidia Quadro components, this system should work just fine. I ran some tests in Premiere Pro, After Effects and Resolve. In fact, I used Puget Systems’ benchmarking tools with Premiere and After Effects projects. You can find the one for Premiere here. In addition, I used the classic 3D benchmark Cinebench R20 from Maxon and even did some of my own benchmarks.

In Premiere, I was able to play 4K H.264 (50Mb/s and 100Mb/s 10-bit) and ProRes files (HQ and 4444) in realtime at full resolution. Red Raw 4K was able to play back at full-quality debayer. But as the Puget Systems Premiere benchmark shows, 8K (as well as heavily effected clips) started to bog the system down. With 4K, the addition of Lumetri color correction slowed down playback and export a little bit — just a few frames under realtime. It was close, though. At half quality, I was essentially playing in realtime. According to the Puget Systems benchmark, the overall CPU score was much higher than the GPU score. Adobe uses a lot of single-core processing. While certain effects, like resizes and blurs, will open up the GPU pipes, I saw the CPU (single-core) kicking in here.

In the Premiere Pro tests, the T5820 really shined when working with mezzanine codec-based media like ProRes (HQ and 4444), and even with Red 4K raw media. The T5820 seemed to slow down when multiple layers of effects, such as color correction and blurs, were stacked on top of each other.

In After Effects, I again used Puget Systems’ benchmark — this time the After Effects-specific version. Overall, the After Effects scoring was a B or B-, which isn’t terrible considering it was up against the prosumer powerhouse Nvidia RTX 2080. (Puget Systems used the 2080 as the 100% score.) Tracking on the Dell T5820 scored around 90%, while the render and preview scores were around 80%. While this is just what it says — a benchmark — it’s a great way to compare machines against the benchmark standard: an Intel i9, an RTX 2080 GPU, 64GB of memory and more.

In Resolve 16 Beta 7, I ran multiple tests on the same 4K (UHD), 29.97fps Red Raw media that Puget Systems used in its benchmarks. I created four 10-minute sequences:
Sequence 1: no effects or LUTs
Sequence 2: three layers of Resolve OpenFX Gaussian blurs on adjustment layers in the Edit tab
Sequence 3: five serial nodes of Blur Radius (at 1.0) created in the Color tab
Sequence 4: in the Color tab, spatial noise reduction was set at 25 radius to medium, blur set to 1.0 and sharpening in the Blur tab set to zero (it starts at 0.5).

Sequence 1, without any effects, would play at full debayer quality in realtime and export at a few frames above realtime, averaging about 33fps. Sequence 2, with Resolve’s OpenFX Gaussian blur applied three times to the entire frame via adjustment layers in the Edit tab, would play back in realtime and export at between 21.5fps and 22.5fps. Sequence 3, with five serial nodes of Blur Radius set at 1.0 in the Color tab, would play in realtime and export at about 23fps. Once I added a sixth serial blur node, the system would no longer lock onto realtime playback. Sequence 4 — with spatial noise reduction set at 25 radius to medium, blur set to 1.0 and sharpening set to zero in the Color tab — would play back at 1fps to 2fps and export at 6.5fps.

All of these exports were QuickTime-based H.264s exported using the Nvidia encoder (the native encoder would slow things down by 10 frames or so). The settings were UHD resolution; “automatic — best” quality; disabled frame reordering; force sizing to highest quality; force debayer to highest quality; and no audio. Once I stacked two layers of raw Red 4K media, I started to drop below realtime playback, even without color correction or effects. I even tried to play back some 8K media: I would get about 14fps at full-res premium debayer, 14fps to 16fps at half-res premium, 25fps at half-res good, and 29.97fps (realtime) at quarter-res good.
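To translate those export frame rates into wall-clock time, here is a quick hypothetical helper (not part of any benchmark tool) showing how much a slow export rate actually costs:

```python
# Convert an export frame rate into wall-clock export time
# for a timeline of a given length and frame rate.

def export_minutes(timeline_minutes: float, timeline_fps: float,
                   export_fps: float) -> float:
    total_frames = timeline_minutes * 60 * timeline_fps
    return total_frames / export_fps / 60

# The 10-minute, 29.97fps Red Raw timelines from the tests above:
print(export_minutes(10, 29.97, 33.0))  # Sequence 1, no effects: ~9.1 minutes
print(export_minutes(10, 29.97, 6.5))   # Sequence 4, noise reduction: ~46 minutes
```

In other words, the spatial noise reduction in Sequence 4 turns a roughly nine-minute export into a 46-minute one.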

Using the recently upgraded Maxon Cinebench R20 benchmark, I found the workstation performing adequately, around the fourth-place spot. Keep in mind, there are thousands of combinations of results that can be had depending on CPU, GPU, memory and more; these are only sample results that 3D artists can compare against their own. The Cinebench R20 results were CPU: 4682, CPU (single-core): 436, and MP ratio: 10.73x. If you Google or check out some threads of Cinebench R20 result comparisons, you will eventually find some results to compare mine against. I’d grade my results a B to B+. A much higher-end Intel Xeon or i9 or an AMD Threadripper processor would really punch this system up a weight class.

Summing Up
The Dell Precision T5820 workstation comes with a lot of enterprise-level benefits that simply don’t come with your average consumer system. The components are meant to run constantly, and Dell has tested its systems against current industry applications, using the hardware in these systems to identify the best optimizations and driver packages with these ISVs. Should anything fail, Dell’s three-year warranty (which can be upgraded) will get you up and running fast. Before taxes and shipping, the Dell T5820 I was sent for review would retail for just under $5,200 (maybe even a little more with the DVD drive, recovery USB drive, keyboard and mouse). This is definitely not the system to look at if you are a DIYer or an everyday user who does not need a machine running 24 hours a day, seven days a week.

But in a corporate environment, where time is money and no one wants to be searching for answers, the Dell T5820 workstation with the accompanying three-year ProSupport with next-day on-site service will be worth the $5,200. Furthermore, the built-in optimization for applications such as the Adobe Creative Cloud suite is invaluable, and Dell’s ProSupport team has direct experience working in those professional apps.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.


Review: LaCie’s mobile, high-speed 1TB SSD

By Brady Betzel

With the flood of internal and external hard drives hitting the market at relatively low prices, it is sometimes hard to wade through the swamp and find the drive that is right for your workflow. In terms of external drives, do you need a RAID? USB-C? Is Thunderbolt 3 the same as USB-C? Should I save money and go with a spinning drive? Are spinning drives even cheaper than SSD drives these days? All of these questions are valid and, hopefully, I will answer them.

For this review, I’m taking a look at the LaCie Mobile SSD, which comes in three versions: 500GB, 1TB and 2TB, costing around $129.95, $219.95 and $399.95, respectively. According to LaCie’s website, the Mobile SSD drives are exclusive to Apple, but with some searching on Amazon you can find all three available as well, and at lower prices than those I’ve mentioned. The 1TB version I am seeing for $152.95 is being sold on Amazon through LaCie, so I assume the warranty still holds up.

I was sent the 1TB version of the LaCie Mobile SSD for review and testing. Along with the drive itself, you will get two connection cables: a (USB 3.0-speed) USB-A to USB-C cable, as well as a (USB 3.1 Gen 2-speed) USB-C to USB-C cable. For clarity, USB-C is the type of connection — the oval-like shape and the technology used to transfer data. While USB-C devices will work on Thunderbolt 3 ports, Thunderbolt 3-only devices will not work on plain USB-C ports. Yes, that is super-confusing considering they look the same. But in the real world, Thunderbolt 3 is more Mac OS-based, while USB-C is more Windows-based. You can find the rare Thunderbolt 3 connection on Windows-based PCs, but you are more likely to find USB-C. That being said, the LaCie Mobile SSD is compatible with both USB-C and Thunderbolt 3, as well as USB 3.0. Keep in mind you will not get the high transfer speed with the USB-A to USB-C cable; you will only get that with the (USB 3.1 Gen 2) USB-C to USB-C cable. The drive comes formatted as exFAT, which is immediately compatible with both Mac OS and Windows.

So, are spinning drives worth the cheaper price? In my opinion, no. Spinning drives are more fragile when moved around a lot, and they transfer at much slower speeds. Advertised speeds vary from about 130MB/s for spinning drives to 540MB/s for SSDs, so what today amounts to roughly $100 more buys you a significant speed increase.
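Using those advertised rates, a quick sketch shows what the speed gap means in practice for a large media copy (sequential speeds only; real-world transfers of many small files will be slower on both):

```python
# Copy time for a project folder at spinning-disk vs. SSD sequential rates.

def copy_minutes(size_gb: float, mb_per_s: float) -> float:
    return size_gb * 1000 / mb_per_s / 60

for media, rate in [("spinning drive (130MB/s)", 130), ("SSD (540MB/s)", 540)]:
    print(f"500GB on a {media}: {copy_minutes(500, rate):.0f} minutes")
```

That’s roughly 64 minutes versus 15 for a 500GB project, which adds up fast if you shuttle media daily.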

A very valuable part of the LaCie Mobile SSD purchase is the limited three-year warranty and three years of data recovery services for free. No matter how your data becomes corrupted, Seagate will try to recover it — Seagate became LaCie’s parent company in 2012. Each product is eligible for one in-lab data recovery attempt, which can be turned around in as little as two days, depending on the type of recovery. The recovered media will then be sent back to you on a storage device, as well as be available from a cloud-based account hosted online for 60 days. This is a great feature that’s included in the price.

The drive itself is small, measuring approximately .35 inches x 3 inches x 3.8 inches and weighing only .22 pounds. The outside has sharp lines, much in the vein of a faceted diamond. It feels solid and great to carry. Made of aluminum, it is about the same space gray color as a MacBook Pro.

Transfer Speeds
Alright, let’s get to the nitty-gritty: transfer speeds. I tested the LaCie Mobile SSD on both a Windows-based PC with USB-C and an iMac Pro with Thunderbolt 3/USB-C. On the Windows PC, I initially connected the drive to a port on the front of my system and was only getting around 150MB/s write speeds (about the speed of USB 3.0). Immediately, I knew something was wrong, so I connected to a USB-C port on a PCIe card in the rear of my PC. On that port, I was getting 440.9MB/s write speeds and 516.3MB/s read speeds. Moral of the story: make sure your USB-C ports are not just for charging, or simply USB-C connectors running at USB 3.0 speeds.

On the iMac Pro, I was getting write speeds of 487.2MB/s and read speeds of 523.9MB/s. This is definitely on par with the correct Windows PC transfer speeds. The retail packaging on the LaCie Mobile SSD states a 540MB/s speed (it doesn’t differentiate between read and write), but much like the miles-per-gallon readouts in car sales brochures, you have to take those numbers with a few grains of salt. And while I have previously tested drives (not from LaCie) that would initially transfer at a high rate and then drop down, the LaCie Mobile SSD sustained its high transfer rates.
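If you want a rough read on a drive’s sequential speeds without a dedicated utility, a minimal Python sketch like the one below can work. The file path and sizes here are placeholders, and because the OS page cache can inflate the read number, treat tools like Blackmagic Disk Speed Test or CrystalDiskMark as the real reference:

```python
import os
import time

def sequential_speed(path="speedtest.bin", size_mb=1024, chunk_mb=8):
    """Return rough (write_MB_s, read_MB_s) for sequential access at `path`."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)

    # Sequential write, forced to the drive rather than the OS cache.
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
    write_mb_s = size_mb / (time.perf_counter() - start)

    # Sequential read back (may be served from cache and read high).
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk_mb * 1024 * 1024):
            pass
    read_mb_s = size_mb / (time.perf_counter() - start)

    os.remove(path)
    return write_mb_s, read_mb_s
```

Point `path` at the drive you actually want to test; writing to your boot drive measures the wrong device.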

Summing Up
In the end, the size and design of the LaCie Mobile SSD will be one of the larger factors in determining whether you buy this drive. It’s small. Like, real small, but it feels sturdy. I don’t think anyone can argue that the LaCie Rugged drives (the ones encased in orange rubber) are a staple of the post industry. I really wish LaCie had kept that tradition and added a tiny little orange rubberized edge. Not only does it feel safer for some reason, but it is a trademark that immediately says, “I’m a professional.”

Appearance aside, the $152.95 price tag for a 1TB SSD that can easily fit into your shirt pocket without being noticed is pretty reasonable. At the list price of $219.95, I might say keep looking around. In addition, if you aren’t already an Adobe Creative Cloud subscriber, you will get a free 30-day trial (normally seven days) included with the purchase.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Review: CyberPower PC workstation with AMD Ryzen

By Brady Betzel

With the influx of end users searching for alternatives to Mac Pros, as well as new ways to purchase workstation-level computing solutions, there is no shortage of opinions on what brand to buy and who should build it. Everyone has a cousin or neighbor who builds systems, right?

I’ve often heard people say, “I’ve never built a system or used (insert brand name here), but I know they aren’t good.” We’ve all run into people who are dubious by nature. I’m not so cynical, and when it comes to operating systems and computer systems, I consider myself Switzerland.

When looking for the right computer system, the main question you should ask is, “What do you need to accomplish?” Followed by, “What might you want to accomplish in the future?” I’m a video editor and colorist, so I need the system I build to work fluidly with Avid Media Composer, Blackmagic DaVinci Resolve, and Adobe’s Premiere Pro and After Effects. I also want my system to work with Maxon Cinema 4D in case I want to go a little further than Video Copilot’s Element 3D and start modeling in Cinema 4D. My main focus is video editing and color correction, but I also need flexibility for other tools.

Lately, I’ve been reaching out to companies in the hopes of testing as many custom-built Windows-based PCs as possible. There have been many Mac OS-to-Windows transplants over the past few years, so I know pros are eager for options. One of the latest seismic shifts has come from the guys over at Greyscalegorilla moving away from Macs to PCs. In particular, I saw that one of the main head honchos over there, Nick Campbell (@nickvegas), went for a build complete with the 32-core AMD Ryzen Threadripper workhorse. You can see the lineup of systems here. This really made me reassess my thoughts on AMD as a maker of workstation-level processors, and while not everyone can afford the latest Intel i9 or AMD Threadripper processors, there are lower-end processors that will serve most people just fine. This is where custom-PC builders like CyberPowerPC, which equips machines with AMD processors, come into play.

So why go with a company like CyberPowerPC? The prices for parts are usually competitive, and the entire build isn’t much more than if you purchased the parts by themselves. Also, you deal with CyberPowerPC for warranty issues, not with individual companies for different parts.

My Custom Build
In my testing of an AMD Ryzen 7 1700X-based system with a Samsung NVMe hard drive and 16GB of RAM, I was able to run all of the software I mentioned before. The best part was the price: the total was around $1,000! Not bad for someone editing and color correcting. Typically, those machines can run anywhere from $2,000 to $10,000. Although the parts in those more expensive systems are more complex and have double to triple the number of cores, some of that is wasted. And when on a budget, you will be hard-pressed to find a better deal than CyberPowerPC. If you build a system yourself, you might get close to the price, but not far under it.

While this particular build isn’t going to beat out AMD Threadripper- or Intel i9-based systems, AMD Ryzen-based systems offer a decent bang for the buck. As I mentioned, I focus on video editing and color correcting, so I tested a simple one-minute UHD (3840×2160) 23.98 H.264 export. Using Premiere along with Adobe Media Encoder, I combined about 30 seconds of Red UHD footage with some UHD S-Log3/S-Gamut3 footage I shot on the Sony a7 III, creating a one-minute-long sequence.

I then exported it as an H.264 at a bitrate of around 10Mb/s. With only a 1D LUT on the Sony a7 III footage, the one-minute sequence took one minute, 13 seconds. With 10% resizes and a “simple” Gaussian blur added over all the clips, the sequence exported in one minute and four seconds. This is proof that the AMD GPU is working inside of Premiere and Media Encoder. Inside Premiere, I was able to play back the full-quality sequence on a second monitor without any discernible dropped frames.
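For reference, bitrate maps directly to file size, so you can estimate a delivery file before exporting. This is a quick hypothetical calculation that ignores audio and container overhead:

```python
# Estimate H.264 file size from bitrate: megabits -> megabytes (divide by 8).

def file_size_mb(bitrate_mbps: float, seconds: float) -> float:
    return bitrate_mbps * seconds / 8

print(file_size_mb(10, 60))  # one minute at 10Mb/s -> 75.0 MB
```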

So when people tell you AMD isn’t Intel, technically they are right, but overall the AMD systems are performing at a high enough level that for the money you are saving, it might be worth it. In the end, with the right expectations and dollars, an AMD-based system like this one is amazing.

Whether you like to build your own computer or just don’t want to buy a big-brand system, custom-built PCs are a definite way to go. I might be a little partial since I am comfortable opening up my system and changing parts around, but the newer cases allow for pretty easy adjustments. For instance, I installed a Blackmagic DeckLink and four SSD drives for a RAID-0 setup inside the box. Besides wishing for some more internal drive cages, I felt it was easy to find the cables and get into the wiring that CyberPowerPC had put together. And because CyberPowerPC is more in the market for gaming, there are plenty of RGB light options, including the memory!

I was kind of against the lighting since any color casts could throw off color correction, but it was actually kind of cool and made my setup look a little more modern. It actually kind of got my creativity going.

Check out the latest AMD Ryzen processors and exciting improvements to the Radeon line of graphics cards on www.cyberpowerpc.com and www.amd.com. And, hopefully, I can get my hands on a sweet AMD Ryzen Threadripper 2990WX with 32 cores and 64 threads to really burn a hole in my render power.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Review: Razer Blade 15-inch mobile workstation

By Mike McCarthy

I am always looking for the most powerful tools in the smallest packages, so I decided to check out the Razer Blade 15-inch laptop with an Nvidia GeForce RTX 2080 Max-Q graphics card. The Max-Q variants are optimized for better thermals and power usage — at the potential expense of performance — in order to allow more powerful GPUs to be used in smaller laptops. The RTX 2080 is Nvidia’s top-end mobile GPU, with 2,944 CUDA cores and 8GB of GDDR6 memory running at 384GB/s, with 13.6 billion transistors on the chip.

The new Razer Blade has a six-core Intel i7-8750H processor with 16GB RAM and a 512GB SSD. It has mDP 1.4, HDMI 2.0b, Thunderbolt 3 and three USB 3.1 ports. Its 15.6-inch screen can run at 144Hz refresh rate but only supports full HD 1920×1080, which is optimized for gaming, not content creation. The past four laptops I have used have all been UHD resolution at various sizes, which gives far more screen real estate for creative applications and better resolution to review your imagery.

I also prefer to have an Ethernet port, but I am beginning to accept that a dongle might be acceptable for that, especially since it opens up the possibility of using 10 Gigabit Ethernet. We aren’t going to see 10GigE on laptops anytime soon due to the excessive power consumption, but you only need 10GigE when at certain locations that support it, so a dongle or docking station is reasonable for those use cases.

Certain functionality on the system required a free account to be registered with Razer, which is annoying, but I’ve found this requirement is becoming the norm these days. That account gives access to the Razer Synapse utility for customizing the system settings, setting fan speeds and even remapping keyboard functionality. Any other Razer peripherals would be controlled here as well. As part of a top-end modern gaming system, the keyboard has fully controllable color backlighting. While I find most of the default “effects” to be distracting, the option to color code your shortcut keys is interesting. And if you really want to go to the next level, you can customize it further.

For example, when you press the FN key, by default the keys that have function behaviors connected to them light up white, which impressed me. The colors and dimming are generated by blinking the LEDs, and I was able to perceive the flicker when moving my eyes, so I stuck with colors that didn’t involve dimming channels. That still gave me six options (RGB and CMY) plus white.

This is the color config I was running in the photos, but the camera does not reflect how it actually looks. In pictures, the keys look washed out, but in person they are almost too bright and vibrant. But we are here for more than looks, so it was time to put it through its paces and see what can happen under the hood.

Testing
I ran a number of benchmarks, starting with Adobe Premiere Pro. I now have a consistent set of tests to run on workstations in order to compare each system. The tests involve Red, Sony Venice and ARRI Alexa source files, with various GPU effects applied and exported to compressed formats. It handled the 4K and 8K renders quite well — pretty comparable to full desktop systems — showcasing the power of the RTX GPU. Under the sustained load of rendering for 30 minutes, it did get quite warm, so you will want adequate ventilation … and you won’t want it sitting on your lap.

My next test was RedCine-X Pro, with its new CUDA playback acceleration of files up to 8K. But what is the point of decoding 8K if you can’t see all the pixels you are processing? So for this test, I also connected my Dell UP3218K screen to the Razer Blade’s Mini DisplayPort 1.4 output. Outputting to the monitor does affect performance a bit, but that is a reasonable expectation. It doesn’t matter if you can decode 8K in real time if you can’t display it. Nvidia provides reviewers with links to some test footage, but I have 40TB to choose from, in addition to test clips from all different settings on the various cameras from my Large Format Camera test last year.

The 4K Red files worked great at full res to the external monitor — full screen or pixel for pixel — while the system barely kept up with the 6K and 8K anamorphic files. 8K full frame required half-res playback to view smoothly on the 8K display. Full-frame 8K was barely realtime with the external monitor disabled, but that is still very impressive for a laptop (I have yet to accomplish that on my desktop). The rest of the files played back solidly on the local display. With CUDA GPU acceleration disabled, the laptop couldn’t manage anything above 1/8th-res playback, so this is where having a powerful GPU makes a big difference.

Blackmagic Resolve is the other major video editing program to consider, and while I do not find it intuitive to use myself, I usually recommend it to others who are looking for a high level of functionality but aren’t ready to pay for Premiere. I downloaded and rendered a test project from Nvidia, which plays Blackmagic Raw files in real time with a variety of effects and renders to H.264 in 40 seconds, but it takes 10 times longer with CUDA disabled in Resolve.

Here, as with the other tests, the real-world significance isn’t how much faster it is with a GPU than without, but how much faster it is with this RTX GPU than with other options. Nvidia claims this render takes 2.5 times as long on a Radeon-based MacBook Pro, and 10% longer on a previous-generation GTX 1080 laptop, which seems consistent with my previous experience and tests.
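To put those comparative claims in concrete terms, here is a quick back-of-the-envelope calculation using my 40-second render as the baseline. The multipliers come from Nvidia’s marketing, so treat them as assumptions rather than measurements:

```python
# Rough comparison of the quoted Resolve render times, using the
# 40-second RTX render as the baseline. The multipliers are Nvidia's
# claims, not independent measurements.
rtx_seconds = 40

gtx_1080_seconds = round(rtx_seconds * 1.10)  # "10% longer" on a GTX 1080 laptop
macbook_seconds = rtx_seconds * 2.5           # "2.5 times as long" on a Radeon MacBook Pro
cpu_only_seconds = rtx_seconds * 10           # "10 times longer" with CUDA disabled

print(gtx_1080_seconds, macbook_seconds, cpu_only_seconds)  # 44 100.0 400
```

In other words, the claimed gap over the previous generation is a few seconds on a short render, while the gap over the CPU-only path is measured in minutes.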

The primary differentiation of Nvidia’s RTX line of GPUs is the inclusion of RT cores to accelerate raytracing and Tensor cores to accelerate AI inferencing, so I wanted to try tasks that used those accelerations. I started by testing Adobe’s AI-based image enhancement in Lightroom Classic CC. Nvidia claims that the AI image enhancement uses the RTX’s Tensor cores, and it is four times faster with the RTX card. The visual results of the process didn’t appear to be much better than I could have achieved with manual development in Photoshop, but it was a lot faster to let the computer figure out what to do to improve the images. I also ran into an issue where certain blocks of the image got corrupted in the process, but I am not sure if Adobe or Nvidia is at fault here.

Raytracing
While I could have used this review as an excuse to go play Battlefield V to experience raytracing in video games, I stuck with the content-creation focus. In looking for a way to test raytracing, Nvidia pointed me to OctaneRender. Otoy has created a utility called OctaneBench for measuring the performance of various hardware configurations with its render engine. It reported that the RTX’s raytracing acceleration was giving me a 3x increase in render performance.

I also tested ProRender in Maxon Cinema 4D, which does not use the RTX’s dedicated raytracing hardware but does use GPU acceleration through OpenCL. Apparently, there is a way to use the Arnold raytracing engine in Cinema 4D, but I was reaching the limits of my 3D animation expertise and resources, so I didn’t pursue that path, and I didn’t test Maya for the same reason.

With ProRender, I was able to render views of various demo scenes 10 to 20 times faster than I could with a CPU only. I will probably include this as a regular test in future reviews, allowing me to gauge render performance far better than I can with Cinebench (which returned a CPU score of 836). And compiling a list of comparison render times will add more context to raw data. But, for now, I was able to render the demo “Bamboo” scene in 39 seconds and the more complex “Coffee Bean” scene in 188 seconds, beating even the Nvidia marketing team’s expected results.

VR
No test of a top-end GPU would be complete without trying out its VR performance. I connected my Windows-based Lenovo Explorer Mixed Reality headset, installed SteamVR and tested both 360 video editing in Premiere Pro and the true 3D experiences available in Steam. As would be expected, the experience was smooth, making this one of the most portable solutions for full-performance VR.

The RTX 2080 is a great GPU, and I had no issues with it. Outside of true 3D work, the upgrade from the Pascal-based GTX 1080 is minor, but for anyone upgrading from systems older than that, or doing true raytracing or AI processing, you will see a noticeable improvement in performance.

The new Razer Blade is a powerful laptop for its size, and while I did like it, that doesn’t mean I didn’t run into a few issues along the way. Some of those, like the screen resolution, are due to its focus on gaming instead of content creation, but I also had an issue with the touch pad. Touch pad issues are common when switching between devices constantly, but in this case, right-clicking instead of left-clicking and not registering movement when the mouse button was pressed were major headaches. The problems were only alleviated by connecting a mouse and sticking with that, which I frequently do anyway. The power supply has a rather large connector on a cumbersome thick and stiff cord, but it isn’t going to be falling out once you get it inserted. Battery life will vary greatly depending on how much processing power you are using.

These RTX chips are the first mobile GPUs with dedicated RT cores and Tensor cores, since Volta-based chips never came to laptops. So for anyone with processing needs that are accelerated by those developments, the new RTX chip is obviously worth the upgrade. If you want the fastest thing out there, this is it. (Or at least it was, until Razer this week added options for 9th Generation Intel processors and a 4K OLED screen, an upgrade I would highly recommend for content creators.) The model I reviewed goes for $3,000. The new 9th Gen version with a 240Hz screen is the same price, while the 4K OLED Touch version costs an extra $300.

Summing Up
If you are looking for a more balanced solution or are on a more limited budget, you should definitely compare the new Razer Blade to the new Nvidia GTX 16 line of mobile products that was just announced. Then decide which option is a better fit for your particular needs and budget.

The development of eGPUs has definitely shifted the ideal target system for my usage. While this system has a Thunderbolt 3 port, its internal GPU is fast enough that you won’t see significant gains from an eGPU, but that internal power comes at the expense of battery life and price. I am drawn to eGPUs because I only need maximum performance at my desk, but if you need top-end graphics performance totally untethered, RTX Max-Q chips are the solution for you.


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Review: Mzed.com’s Directing Color With Ollie Kenchington

By Brady Betzel

I am constantly looking to educate myself, no matter what the source — or subject. Whether I am learning how to make a transition in Adobe After Effects from an eSports editor on YouTube or studying color correction in Blackmagic’s DaVinci Resolve with Warren Eagles on FXPHD.com, I’m always beefing up my skills. I even learn from bad tutorials — they teach you what not to do!

But when you come across a truly remarkable learning experience, it is only fair to share it with the rest of the world. Last year I saw an ad for an MZed.com course called “Directing Color With Ollie Kenchington” and was immediately interested. These days you can pretty much find any technical tutorial you can dream of on YouTube, but truly professional, higher education-like, theory-based series are very hard to come by. Even the ones you pay for aren’t always worth their price of admission, which is a huge letdown.

Ollie sharing his wisdom.

Once I gained access to MZed.com I wanted to watch every educational series they had. From lighting techniques with ASC member Shane Hurlbut to the ARRI Amira Camera Primer, there are over 150 hours of education available from industry leaders. However, I found my way to Directing Color…

I am often asked if I think people should go to college or a film school. My answer? If you have the money and time, you should go to college followed by film school (or do both together, if the college offers it). Not only will you learn a craft, but you will most likely spend hundreds of hours studying and visualizing the theory behind it. For example, when someone asks me about the science behind camera lenses, I can confidently answer them thanks to my physics class based on lenses and optics from California Lutheran University (yes, a shameless plug).

In my opinion, a two-, four- or even 10-year education allows me to live in the grey. I am comfortable arguing for both sides of a debate, as well as the options in between — the grey. I feel like my post-high school education really allowed me to recognize and thrive in the nuances of debate. It may leave me playing devil’s advocate a little too much, but it also lets me have civil and productive discussions with others without being demeaning or nasty — something we are actively missing these days. So if living in the grey is for you, I really think a college education supplemented by online or film school education is valuable (assuming you decide, as I did, that the debt is worth it).

However, I know that is not an option for everyone since it can be very expensive — trust me, I know. I am almost done paying off my undergraduate fees while still paying off my graduate ones, which I am still two or three classes away from finishing. That being said, Directing Color With Ollie Kenchington is the only online education series I have seen so far that is on the same level as some of my higher education classes. Not only is the content beautifully shot and color corrected, but Ollie gives confident and accessible lessons on how color can be used to draw the viewer’s attention to multiple parts of the screen.

Ollie Kenchington is a UK-based filmmaker who runs Korro Films. From the trailer of his Directing Color series, you can immediately see the beauty of Ollie’s work and know that you will be in safe hands. (You can read more about his background here.)

The course raises the online education bar and will elevate the audience’s expectations of professional insight. The first module, “Creating a Palette,” covers the thinking behind creating a color palette for a small catering company. You may even want to start with the last bonus module, “Ox & Origin,” to get a look at what Ollie will be creating throughout the seven modules and roughly 90 minutes of content.

While Ollie goes over “looks,” the beauty of this course is that he walks through his internal thought process, including deciding on palettes based on color theory. He doesn’t just choose teal and orange because they look good; he chooses his color palette based on complementary colors.

Throughout the course Ollie covers some technical knowledge, including calibrating monitors and cameras, white balancing and shooting color charts to avoid incorrect color balance in post. This is so important because if you don’t do these simple steps, your color correction session will be much harder. And wasting time fixing incorrect color balance takes time away from the fun of color grading. All of this is done through easily digestible modules that range from two to 20 minutes.

The modules include Creating a Palette; Perceiving Color; Calibrating Color; Color Management; Deconstructing Color 1-3; and the bonus module, Ox & Origin.

Without giving away the entire content of Ollie’s catalog, my favorite modules in this course are the on-set ones. Maybe it’s because I am not on set that often, but I found the “thinking out loud” about colors helpful. Knowing why reds represent blood, and how they can raise your heart rate a little, is fascinating. He even goes through practical examples of color use in films such as Whiplash.

In the final “Deconstructing Color” modules, Ollie goes into a color bay (complete with practical candle backlighting) and dives into Blackmagic’s DaVinci Resolve. He takes the course full circle: because he took the time to set up proper lighting on set, he can now go into Resolve, add light to different sides of someone’s face in a scene he had to rush through, and focus his grade on other parts of his commercial.

Summing Up
I want to watch every tutorial MZed.com has to offer, from “Philip Bloom’s Cinematic Masterclass” to Ollie’s other course, “Mastering Color.” Unfortunately, as of this review, you have to pay an additional fee to watch the “Mastering Color” series. It seems like an unfortunate trend in online education to charge a subscription and then charge more when an extra-special class comes up, but this class will supposedly be released to standard subscribers in due time.

MZed.com has two subscription models: MZed Pro, which is $299 for one year of streaming the standard courses, and MZed Pro Premium, which is $399 and includes the standard courses for one year plus the ability to choose one “Premium” course.

“Philip Bloom’s Cinematic Masterclass” was the Premium course I was signed up for initially, but you can decide between it and the “Mastering Color” course. You will not be disappointed regardless of which one you choose. Even their first course, “How to Photograph Everyone,” is chock-full of lighting and positioning instruction that can be applied to many aspects of videography.

I really was impressed with Directing Color With Ollie Kenchington, and if the other courses are this good, MZed.com will definitely become a permanent bookmark for me.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Review: Boris FX’s Continuum and Mocha Pro 2019

By Brady Betzel

I realize I might sound like a broken record, but if you are looking for the best plugin to help with object removals or masking, you should seriously consider the Mocha Pro plugin. And if you work inside of Avid Media Composer, you should also seriously consider Boris Continuum and/or Sapphire, which can use the power of Mocha.

As an online editor, I consistently use Continuum along with Mocha for tight blur and mask tracking. If you use After Effects, there is even a whittled-down version of Mocha built in for free. For those pros who don’t want to deal with Mocha inside of an app, it also comes as a standalone software solution where you can copy and paste tracking data between apps or even export the masks, object removals or insertions as self-contained files.

The latest releases of Continuum and Mocha Pro 2019 continue the evolution of Boris FX’s role in post production image restoration, keying and general VFX plugins, at least inside of NLEs like Media Composer and Adobe Premiere.

Mocha Pro

As an online editor, I am always calling on Continuum for its great Chroma Key Studio, Flicker Fixer and blurring. Because Mocha is built into Continuum, I am able to quickly track (backwards and forwards) difficult shapes and even erase objects in ways the built-in Media Composer tools simply can’t. But if you are lucky enough to own Mocha Pro, you also get access to some amazing tools that go beyond planar tracking — such as automated object removal, object insertion, stabilizing and much more.

Boris FX’s latest updates to Continuum and Mocha Pro go even further than what I’ve already mentioned and have resulted in new version naming; this round we are at 2019 (think of it as Version 12). They have also created the new Application Manager, which makes it a little easier to find the latest downloads. You can find them here. This really helps when jumping between machines and you need to quickly activate and deactivate licenses.

Boris Continuum 2019
I often get offline edits with effects from a variety of plugins — lens flares, random edits, light flashes, whip transitions and many more — so I need Continuum to be compatible with what offline clients send me. I also need to use it for image repair and compositing.

In this latest version of Continuum, Boris FX has not only kept plugins like Primatte Studio, it has also brought back Particle Illusion and updated Mocha and Title Studio. Overall, Continuum and Mocha Pro 2019 feel a lot snappier when applying and rendering effects, probably because of the overall GPU-acceleration improvements.

Particle Illusion has been brought back from the brink of death in Continuum 2019 as a 64-bit, keyframeable particle emitter system that can even be tracked and masked with Mocha. The revamp includes an updated interface, realtime GPU-based particle generation, an expanded and improved emitter library (complete with motion-blur-enabled particle systems) and even a standalone app for designing systems to be used in the host app — you cannot render systems inside the standalone app itself.

While Particle Illusion is part of the entire Continuum toolset that works with OFX apps like Blackmagic’s DaVinci Resolve, Media Composer, After Effects and Premiere, it seems to work best in applications like After Effects, which can handle composites simply and naturally. Inside the Particle Illusion interface you can find all of the pre-built emitters. If you only have a handful, make sure you download additional emitters, which you can find in the Boris FX App Manager.

Particle Illusion: Before and After

I had a hard time seeing my footage from the Media Composer timeline inside of Particle Illusion, but I could still pick my emitter, change specs like life and opacity, exit out and apply it to my footage. I used Mocha to track some fire from Particle Illusion onto a dumpster I had filmed. Once I dialed in the emitter, I launched Mocha and tracked the dumpster.

The first time I went into Mocha I didn’t see the preset tracks for the emitter or the world in which the emitter lives. The second time I launched Mocha, I saw the track points. From there you can track the area where you want your emitter placed. Once you are happy with your track, jump back to your timeline, where it should be reflected. In Media Composer I noticed that I had to go into the Mocha options and change the setting from Mocha Shape to no shape; essentially, the Mocha shape acts like a matte and cuts off anything outside it.

If you are inside of After Effects, most parameters can now be keyframed and parented (aka pick-whipped) natively in the timeline. The Particle Illusion plugin is a quick, easy and good-looking tool to add sparks, Milky Way-like star trails or even fireworks to any scene. Check out @SurfacedStudio’s tutorial on Particle Illusion to get a good sense of how it works in Adobe Premiere Pro.

Continuum Title Studio
When inside of Media Composer (prior to the latest release, 2018.12), there were very few ways to create titles at resolutions higher than HD (1920×1080) — New Blue Titler was the only other option if you wanted to stay within Media Composer.

Title Studio within Media Composer

At first, the Continuum Title Studio interface appeared to be a mildly updated Boris Red interface — and I am allergic to the Boris Red interface. Some of the icons for keyframing and the way properties are adjusted look similar and threw me off. I tried really hard to jump into Title Studio and love it, but I never got comfortable with it.

On the flip side, there are hundreds of presets that can help build quick titles that render a lot faster than New Blue Titler did. In some of the presets I noticed the text was placed outside of 16×9 title safety, which is odd since that is a long-standing rule in television. In the author’s defense, they are within action safety, but still.
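For reference, the convention being invoked here: action safe is traditionally the center 90% of the frame and title safe the center 80% (modern HD specs such as SMPTE ST 2046-1 relax the exact percentages, but the 80/90 rule of thumb persists in broadcast delivery). A quick sketch of the math for a UHD frame:

```python
# Classic 80/90 safe-area rule of thumb, applied to a UHD frame.
# (Exact percentages vary by spec; this is the traditional convention.)
width, height = 3840, 2160

title_safe = (round(width * 0.8), round(height * 0.8))
action_safe = (round(width * 0.9), round(height * 0.9))

print(title_safe)   # (3072, 1728)
print(action_safe)  # (3456, 1944)
```

Any preset whose text lands outside that inner 80% box risks being cropped or crowded on overscanned broadcast displays.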

If you need a quick way to make 4K titles, Title Studio might be what you want. The updated Title Studio includes realtime playback using the GPU instead of the CPU, new materials, new shaders and external monitoring support using Blackmagic hardware (AJA support will be coming at some point). There are some great presets, including pre-built slates, lower thirds, kinetic text and even progress bars.

If you don’t have Mocha Pro, Continuum can still access and use Mocha to track shapes and masks. Almost every plugin can access Mocha and can track objects quickly and easily.

That brings me to the newly updated Mocha, which has some new features that are extremely helpful, including a Magnetic Spline tool, prebuilt geometric shapes and more.

Mocha Pro 2019
If you loved the previous version of Mocha, you are really going to love Mocha Pro 2019. Not only do you get the Magnetic Lasso, pre-built geometric shapes, the Essentials interface and high-resolution display support, but Boris FX has rewritten the Remove Module code to use GPU video hardware, which increases render speeds about four to five times. In addition, there is no longer a separate Mocha VR software suite; all of the VR tools are included inside Mocha Pro 2019.

If you are unfamiliar with what Mocha is, then I have a treat for you. Mocha is a standalone planar tracking app as well as a native plugin that works with Media Composer, Premiere and After Effects, or through OFX in Blackmagic’s Fusion, Foundry’s Nuke, Vegas Pro and Hitfilm.

Mocha tracking

In addition (and unofficially) it will work with Blackmagic DaVinci Resolve by way of importing the Mocha masks through Fusion. While I prefer to use After Effects for my work, importing Mocha masks is relatively painless. You can watch colorist Dan Harvey run through the process of importing Mocha masks to Resolve through Fusion, here.

But really, Mocha is a planar tracker, which means it tracks multiple points in a defined area. It works best on flat or at least segmented surfaces; for a face, that means tracking the ear, nose, mouth and forehead separately instead of all at once. From blurs to mattes, Mocha sticks to objects like glue and can be a great asset for an online editor or colorist.

If you have read any of my plugin reviews you probably are sick of me spouting off about Mocha, saying how it is probably the best plugin ever made. But really, it is amazing — especially when incorporated with plugins like Continuum and Sapphire. Also, thanks to the latest Media Composer with Symphony option you can incorporate the new Color Correction shapes with Mocha Pro to increase the effectiveness of your secondary color corrections.

Mocha Pro Remove module

So how fast is Mocha Pro 2019’s Remove Module these days? Well, it used to be a very slow process, taking lots of time to calculate an object’s removal. With the latest release, including improved GPU support, the render time has been cut down tremendously: in my estimation, at least three to four times faster, and that’s on the safe side. Removal jobs that take under 30 seconds in Mocha Pro 2019 would have taken four to five minutes in previous versions. It’s quite a big improvement in render times.
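Working the numbers from that example shows the improvement is actually better than the conservative estimate (the times above are rough observations, not formal benchmarks):

```python
# Speedup implied by the removal-job example above: roughly four to
# five minutes in older versions versus under 30 seconds in 2019.
old_seconds = 4 * 60   # low end of "four to five minutes"
new_seconds = 30       # "under 30 seconds"

speedup = old_seconds / new_seconds
print(speedup)  # 8.0 -- comfortably above the safe three-to-four-times estimate
```

Using the five-minute figure instead, the implied speedup rises to 10x, so "three to four times" really is a safe floor.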

There are a few changes in the new Mocha Pro, including interface changes and some amazing tool additions. There is a new drop-down tab that offers different workflow views once you are inside Mocha: Essentials, Classic, Big Picture and Roto. I really wish the Essentials view had been around when I first started using Mocha, because it gives you the basic tools you need to get a roto job done and nothing more.

For instance, just giving access to the track motion objects (Translation, Scale, Rotate, Skew and Perspective) with big shiny buttons helps eliminate my need to watch YouTube videos on how to navigate the Mocha interface. However, if, like me, you are more than just a beginner, the Classic interface is still available and is the one I reach for most often — it’s literally the old interface. Big Picture hides the tools and gives you the most screen real estate for your roto work. My favorite after Classic is Roto; it shows just the project window and the classic top toolbar. It’s the best of both worlds.

Mocha Pro 2019 Essentials Interface

Beyond the interface changes are some additional tools that will speed up any roto work. This addresses one of the longest-running user requests: I imagine the feature Boris FX gets asked about most for Mocha is the addition of basic shapes, such as rectangles and circles. In my work, I am often drawing rectangles around license plates or circles around faces with X-splines, so why not eliminate a few clicks and have that done already? Answering that need, Mocha now has elliptical and rectangular shapes ready to go, in both X-splines and B-splines, with one click.

I use Continuum and Mocha hand in hand. Inside of Media Composer I will use tools like Gaussian Blur or Remover, which typically need tracking and roto shapes created. Once I apply the Continuum effect, I launch Mocha from the Effect Editor and bam, I am inside Mocha. From here I track the objects I want to affect, as well as any objects I don’t want to affect (think of it like an erase track).

Summing Up
I can save tons of time and also improve the effectiveness of my work exponentially when working in Continuum 2019 and Mocha Pro 2019. It’s amazing how much more intuitive Mocha is to track with than the built-in Media Composer and Symphony trackers.

In the end, I can’t say enough great things about Continuum and especially Mocha Pro. Mocha saves me tons of time in my VFX and image restoration work. From removing camera people behind the main cast in the wilderness to blurring faces and license plates, using Mocha in tandem with Continuum is a match made in post production heaven.

Rendering in Continuum and Mocha Pro 2019 is a lot faster than in previous versions, really giving me a leg up on efficiency. Time is money, right? On top of that, using Mocha Pro’s almost magical Remove Module takes my image restoration work to the next level, separating me from other online editors who use standard paint and tracking tools.

In Continuum, Primatte Studio gives me a leg up on greenscreen keys with its exceptional ability to auto-analyze a scene and perform 80% of the keying work before I dial in the details. Whenever anyone asks me what tools I couldn’t live without, I always answer, without a doubt, Mocha.

If you want a real Mocha Pro education, you need to watch all of Mary Poplin’s tutorials. You can find them on YouTube. Check out this one on how to track and replace a logo using Mocha Pro 2019 in Adobe After Effects. You can also find great videos at BorisFX.com.

Mocha point parameter tracking

I always feel like there are tons of tools inside the Mocha Pro toolset that go unused simply because I don’t know about them. One I recently learned about in a Surfaced Studio tutorial was the Quick Stabilize function. It essentially stabilizes the video around the object you are tracking, allowing you to rotoscope the object more easily while it sits still instead of moving all over the screen. It’s an amazing feature that I just didn’t know about.

As I was finishing up this review I saw that Boris FX came out with a training series, which I will be checking out. One thing I always wanted was a top-down set of tutorials like the ones on Mocha’s YouTube page but organized and sent along with practical footage to practice with.

You can check out Curious Turtle’s “More Than The Essentials: Mocha in After Effects” on their website, where I found more Mocha training. There is even a great search parameter called Getting Started on BorisFX.com. Definitely check them out. You can never learn enough Mocha!


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Review: G-Tech’s G-Speed Shuttle using a Windows PC

By Barry Goch

When I was asked to review the G-Technology G-Speed Shuttle SSD drive, I was very excited. I’ve always had great experiences with G-Tech and was eager to try out this product with my MSI 17.3-inch GT73VR Titan PC laptop… and this is where the story gets interesting.

I’ve been a Mac fan for years; I’ve owned Macs going back to the Mac Classic in the ’90s. But a couple of years ago I reached a tipping point. My 17-inch MacBook Pro didn’t have the horsepower to support VR video, and I was looking to upgrade to a new Mac. But when I started looking deeper, comparing specifications and performance, specifically looking to harness the power of industry-leading GPUs for Adobe Premiere’s VR capabilities, I bought the MSI Titan instead because it shipped with the Nvidia GTX 1070 graphics card.

The laptop is a beast and has all the power and portability I needed but couldn’t find in a Mac laptop at the time. I wanted to give you my Mac-to-PC background before we jump in, because to be clear: The G-Speed Shuttle SSD will provide the best performance when used with Thunderbolt 3 Macs. That doesn’t mean it won’t be great on a PC; it just won’t be as good as when used on a Mac.

G-Tech makes the PC configuration software easy to find on their website… and easy to use. I did find, though, that I could only configure the drive as NTFS with RAID-5 on the PC. But I was also able to speed test the G-Speed Shuttle SSD as a Mac-formatted drive on the PC using MacDrive, which enables Mac drive formatting and mounting on Windows.

We actually reached out to G-Tech, which is a Western Digital brand, about the Mac vs. PC equation. This is what Matthew Bennion, director of product line management at G-Technology said: “Western Digital is committed to providing high-speed, reliable storage solutions to both PC and Mac power users. G Utilities, formatted for Windows computers, is constantly being added to more of our products, including most recently our G-Speed Shuttle products. The addition of G Utilities makes our full portfolio Windows-friendly.”

Digging In
The packaging of the G-Speed Shuttle SSD is very clean and well laid out. There is a parts box that has the Thunderbolt cable, power cable and instructions. Underneath the perfectly formed plastic box insert, wrapped in a plastic bag, was the drive itself. The drive has a lightweight polycarbonate chassis. I was surprised how light it was when I pulled it out of the box.

There are four drive bays, each with an SSD. The first things I noticed were the drive’s weight and sound — it’s very lightweight for so much storage, and it’s very quiet with no spinning disks. SSDs run quieter and cooler and use less power than traditional spinning disks. I think this would be a perfect companion for a DIT looking for a fast, lightweight and low-power-consumption RAID for doing dailies.

I used the drive with Red RAW files inside of Resolve and RedCine-X. I set up a transcode project to make Avid offline files, which the G-Speed Shuttle SSD handled with muscle to spare. I left the laptop running overnight working on the files on more than one occasion and didn’t have any issues with the drive at all.

The main shortcoming of using the G-Speed Shuttle with a PC setup is the inability to create Apple ProRes QuickTime files. I’ve become accustomed to working with ProRes files created with my Blackmagic Ursa Mini camera, and PCs read those files fine. If you’re delivering to YouTube or Vimeo, it’s not a big deal, but it is a bit of an obstacle if you need to deliver ProRes. For this review, I worked around this by rendering out a DPX sequence to the Mac-formatted G-Speed Shuttle SSD drive in Resolve (I also used Premiere) and made ProRes files using Autodesk Flame on my venerable 17-inch MacBook Pro. Flame is the clear winner in quality of file delivery. So, yes, not being able to write ProRes is a pain, but there are ways around it. And, again, if you’re delivering just for the Web, it’s no big deal.

The Speed
My main finding involves the speed of the drive on a PC. In its marketing material for the drive, G-Tech advertises a speed of 2880MB/sec with Thunderbolt 3. Using the AJA speed test, I was able to get 1590MB/sec — a speed more comparable with Thunderbolt 2. Perhaps it had something to do with the G-Tech PC drive configuration program, which would only let me set up the drive as RAID-5, and not RAID-0, which would have been faster. I also ran speed tests on the Mac-formatted G-Speed Shuttle SSD and found similar speeds. I am certain that if I had a newer Thunderbolt 3 Mac, I would have gotten speeds closer to the advertised Mac speed specifications.
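To put those numbers in perspective, here is a quick back-of-the-envelope calculation (purely illustrative, using the figures above) of what the gap between advertised and measured throughput means for a common task like offloading dailies:

```python
# Rough comparison of advertised vs. measured Thunderbolt 3 throughput.
# Both figures come from the review; the 1TB transfer is illustrative.
advertised_mb_s = 2880   # G-Tech's advertised Thunderbolt 3 speed
measured_mb_s = 1590     # AJA System Test result on the PC (RAID-5)

fraction = measured_mb_s / advertised_mb_s
print(f"Measured speed is {fraction:.0%} of advertised")  # ~55%

# Practical impact: time to move a 1TB dailies folder at each rate.
terabyte_mb = 1_000_000
for label, rate in [("advertised", advertised_mb_s), ("measured", measured_mb_s)]:
    minutes = terabyte_mb / rate / 60
    print(f"1TB at {label} rate: about {minutes:.0f} minutes")
```

Even at the lower measured rate, a terabyte moves in roughly ten minutes, which is still far beyond what spinning-disk RAIDs in this class deliver.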

Summing Up
Overall, I really liked the G-Speed Shuttle SSD. It looks cool on the desk, it’s lightweight and very quiet. I wish I didn’t have to give it back!

And the cost? It’s 16TB for $7499.95, and 8TB for $4999.95.


Barry Goch is a Finishing Artist at The Foundation and a Post Production Instructor at UCLA Extension. You can follow him on Twitter at @gochya.

Review: HP DreamColor Z31x studio display for cinema 4K

By Mike McCarthy

Not long ago, HP sent me their newest high-end monitor to review, and I was eager to dig in. The DreamColor Z31x studio display is a 31-inch true 4K color-critical reference monitor. It has many new features that set it apart from its predecessors, which I have examined and will present here in as much depth as I can.

It is challenging to communicate the nuances of color quality through writing or any other form on the Internet, as some things can only be truly appreciated firsthand. But I will attempt to communicate the experience of using the new DreamColor as best I can.

First, we will start with a little context…

Some DreamColor History
HP revolutionized the world of color-critical displays with the release of the first DreamColor in June 2008. The LP2480zx was a 24-inch 1920×1200 display that had built-in color processing with profiles for standard color spaces and the ability to calibrate it to refine those profiles as the monitor aged. It was not the first display with any of these capabilities, but the first one that was affordable, by at least an order of magnitude.

It became very popular in the film industry, both sitting on desks in post facilities — as it was designed — and out in the field as a live camera monitor, which it was not designed for. It had a true 10-bit IPS panel and the ability to reproduce incredible detail in the darks. It could only display 10-bit sources from the brand-new DisplayPort input or the HDMI port, and the color gamut remapping only worked for non-interlaced RGB sources.

So many people using the DreamColor as a “video monitor” instead of a “computer monitor” weren’t even using the color engine — they were just taking advantage of the high-quality panel. It wasn’t just the color engine but the whole package, including the price, that led to its overwhelming success. This was helped by the lack of better options, even at much higher price points, since this was the period after CRT production ended but before OLED panels had reached the market. This was similar to (and in the same timeframe as) Canon’s 5D Mark II revolutionizing the world of independent filmmaking with HDSLRs. The combination gave content creators amazing tools for moving into HD production at affordable price points.

It took six years for HP to release an update to the original model DreamColor in the form of the Z27x and Z24x. These had the same color engine but different panel technology. They never had the same impact on the industry as the original, because the panels didn’t “wow” people, and the competition was starting to catch up. Dell has PremierColor and Samsung and BenQ have models featuring color accuracy as well. The Z27x could display 4K sources by scaling them to its native 2560×1440 resolution, while the Z24x’s resolution was decreased to 1920×1080 with a panel that was even less impressive.

Fast forward a few more years: the Z24x was updated to Gen2, and the Z32x was released with UHD resolution, four times the resolution of the original DreamColor at half the price. But with lots of competition in the market, I don’t think it has had the reach of the original DreamColor. The industry has also matured to the point where people aren’t hooking these displays to 4K cameras, because there are now options better suited to that environment, specifically battery-powered OLED units.

DreamColor at 4K
Fast forward a bit and HP has released the Z31x DreamColor studio display. The big feature that this unit brings to the table is true cinema 4K resolution. The label 4K gets thrown around a lot these days, but most “4K” products are actually UHD resolution, at 3840×2160, instead of the full 4096×2160. This means that true 4K content is scaled to fit the UHD screen, or in the case of Sony TVs, cropped off the sides. When doing color critical work, you need to be able to see every pixel, with no scaling, which could hide issues. So the Z31x’s 4096×2160 native resolution will be an important feature for anyone working on modern feature films, from editing and VFX to grading and QC.
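The pixel math behind the 4K-versus-UHD distinction is simple enough to sketch; the figures below are just the arithmetic implied above:

```python
# Pixel counts behind "true 4K" (DCI) vs. the UHD that most "4K"
# products actually ship with.
dci_4k = (4096, 2160)
uhd = (3840, 2160)

dci_pixels = dci_4k[0] * dci_4k[1]   # 8,847,360
uhd_pixels = uhd[0] * uhd[1]         # 8,294,400

# A UHD panel showing DCI 4K content must either scale the image
# (hiding single-pixel issues) or crop it, losing picture columns.
cropped_columns = dci_4k[0] - uhd[0]
print(f"Columns lost in a straight crop: {cropped_columns}")   # 256
print(f"Pixels the UHD panel can't show 1:1: {dci_pixels - uhd_pixels:,}")
```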

The 10-bit 4K Panel
The true 10-bit IPS panel is the cornerstone of what makes a DreamColor such a good monitor. IPS monitor prices have fallen dramatically since they were first introduced over a decade ago, and some of that is the natural progression of technology, but some of that has come at the expense of quality. Most displays offering 10-bit color are accomplishing that by flickering the pixels of an 8-bit panel in an attempt to fill in the remaining gradations with a technique called frame rate control (FRC). And cheaper panels are as low as 6-bit color with FRC to make them close to 8-bit. There are a variety of other ways to reduce cost with cheaper materials, and lower-quality backlights.
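As a quick illustration of why native bit depth matters, here are the gradations per channel at each bit depth mentioned above (straightforward arithmetic, not a claim about any particular panel):

```python
# Gradations per channel, and total displayable colors, at each
# panel bit depth. FRC tries to fake the higher counts by flickering.
for bits in (6, 8, 10):
    levels = 2 ** bits          # distinct steps per color channel
    total_colors = levels ** 3  # three channels: R, G, B
    print(f"{bits}-bit: {levels} levels/channel, {total_colors:,} colors")
```

A true 10-bit panel has 1,024 steps per channel, sixteen times the 64 steps of a 6-bit panel, which is why banding in gradients is the telltale sign of a cheaper display.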

HP claims that the underlying architecture of this panel returns to the quality of the original IPS panel designs, but adds the technological advances developed since then, without cutting any corners in the process. In order to fully take advantage of the 10-bit panel, you need to feed it 10-bit source content, which is easier than it used to be but not a foregone conclusion. Make sure you select 10-bit output color in your GPU settings.

In addition to a true 10-bit color display, it also natively refreshes at the rate of the source image, from 48Hz-60Hz, because displaying every frame at the right time is as important as displaying it in the right color. They say that the darker blacks are achieved by better crystal alignment in the LCD (Liquid Crystal Display) blocking out the backlight more fully. This also gives a wider viewing angle, since washing out the blacks is usually the main issue with off-axis viewing. I can move about 45 degrees off center, vertically or horizontally, without seeing any shift in the picture brightness or color. Past that I start to see the mid levels getting darker.

Speaking of brighter and darker, the backlight gives the display a native brightness of 250 nits. That is over twice the brightness needed to display SDR content, but this is not an HDR display. It can be adjusted anywhere from 48 to 250 nits, depending on the usage requirements and environment. It is not designed to be the brightest display available; it is aiming to be the most accurate.

Much effort was put into the front surface to get the proper balance of reducing glare and reflections as much as possible. I can’t independently verify some of their other claims without a microscope and more knowledge than I currently have, but I can easily see that the matte surface of the display is much better than other monitors at reducing reflections and glare from the surrounding environment, allowing you to better see the image on the screen. That is one of the most apparent strengths of the monitor, obviously visible at first glance.

Color Calibration
The other new headline feature is an integrated colorimeter for display calibration and verification, located in the top of the bezel. It can swing down and measure the color parameters of the true 10-bit IPS panel, to adjust the color space profiles, allowing the monitor to more accurately reproduce colors. This is a fully automatic feature, independent of any software or configuration on the host computer system. It can be controlled from the display’s menu interface, and the settings will persist between multiple systems. This can be used to create new color profiles, or optimize the included ones for DCI P3, BT.709, BT.2020, sRGB and Adobe RGB. It also includes some low-blue-light modes for use as an interface monitor, but this negates its color accurate functionality. It can also input and output color profiles and all other configuration settings through USB and its network connection.

The integrated color processor also supports using external colorimeters and spectroradiometers to calibrate the display, and even allows the integrated XYZ colorimeter itself to be calibrated by those external devices. And this is all accomplished internally in the display, independent of using any software on the workstation side. The supported external devices currently include:
– Klein Instruments: K10, K10-A (colorimeters)
– Photo Research: PR-655, PR-670, PR-680, PR-730, PR-740, PR-788 (spectroradiometers)
– Konica Minolta: CA-310 (colorimeter)
– X-Rite: i1Pro 2 (spectrophotometer), i1Display (colorimeter)
– Colorimetry Research: CR-250 (spectroradiometer)

Inputs and Ports
There are five main display inputs on the monitor: two DisplayPort 1.2, two HDMI 2.0 and one DisplayPort over USB-C. All support HDCP and full 4K resolution at up to 60 frames per second. It also has a 1/8-inch sound jack and a variety of USB options. There are four USB 3.0 ports that are shared via KVM switching technology between the USB-C host connection and a separate USB-B port to a host system. These are controlled by another dedicated USB keyboard port, giving the monitor direct access to the keystrokes. There are two more USB ports that connect to the integrated DreamColor hardware engine, for connecting external calibration instruments and for loading settings from USB devices.

My only complaint is that while the many USB ports are well labeled, the video ports are not. I can tell which ones are HDMI without the existing labels, but what I really need is to know which one the display views as HDMI1 and which is HDMI2. The Video Input Menu doesn’t tell you which inputs are active, which is another oversight, given all of the other features they added to ease the process of sharing the display between multiple inputs. So I recommend labeling them yourself.

Full-Screen Monitoring Features
I expect the Z31x will most frequently be used as a dedicated full-resolution playback monitor, and HP has developed a bunch of new features that are very useful for that use case. The Z31x can overlay mattes (with variable opacity) for Flat and Scope cinema aspect ratios (1.85 and 2.39). It can also display onscreen markers for those sizes, as well as 16×9 or 4×3, including action and title safe, with further options for center and thirds markers in various colors. The markers can be further customized with HP’s StudioCal.XML files. I created a preset that gives you 2.76:1 aspect ratio markers that you are welcome to download and use or modify. These customized XMLs are easy to create and are loaded automatically when you insert a USB stick containing them into the color engine port.
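For anyone building their own markers, the underlying geometry is straightforward. The sketch below (Python, with a hypothetical `marker_bars` helper; the StudioCal.XML format itself is HP's own) works out the matte bar sizes for the aspect ratios discussed above on the Z31x's 4096×2160 panel:

```python
# Matte-bar geometry for aspect-ratio markers on a DCI 4K panel.
PANEL_W, PANEL_H = 4096, 2160
PANEL_ASPECT = PANEL_W / PANEL_H  # ~1.896

def marker_bars(aspect):
    """Return (matte type, bar size in px) to frame a given aspect."""
    if aspect > PANEL_ASPECT:
        # Wider than the panel: letterbox (bars top and bottom).
        image_h = round(PANEL_W / aspect)
        return "letterbox", (PANEL_H - image_h) // 2
    # Narrower than the panel: pillarbox (bars left and right).
    image_w = round(PANEL_H * aspect)
    return "pillarbox", (PANEL_W - image_w) // 2

for name, aspect in [("Flat 1.85", 1.85), ("Scope 2.39", 2.39), ("2.76:1", 2.76)]:
    kind, bar = marker_bars(aspect)
    print(f"{name}: {kind}, {bar}px bars")
```

Note that on a DCI 4K panel, Flat (1.85) is actually narrower than the screen's 1.896 aspect, so it pillarboxes slightly rather than letterboxing.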

The display also gives users full control over the picture scaling, and has a unique 2:1 pixel scaling for reviewing 2K and HD images at pixel-for-pixel accuracy. It also offers compensation for video levels and overscan and controls for de-interlacing, cadence detection, panel overdrive and blue-channel-only output. You can even control the function of each bezel button, and their color and brightness. These image control features will definitely be significant to professional users in the film and video space. Combined with the accurate reproduction of color, resolution and frame rate, this makes for an ideal display for monitoring nearly any film or video content at the highest level of precision.

Interface Display Features
Most people won’t be using this as an interface monitor, due to the price and because the existing Z32x should suffice when not dealing with film content at full resolution. Even more than the original DreamColor, I expect it will primarily be used as a dedicated full-screen playback monitor and users will have other displays for their user interface and controls. That said, HP has included some amazing interface and sharing functionality in the monitor, integrating a KVM switch for controlling two systems on any of the five available inputs. They also have picture-in-picture and split screen modes that are both usable and useful. HD or 2K input can be displayed at full resolution over any corner of the 4K master shot.

The split view supports two full-resolution 2048×2160 inputs side by side and from separate sources. That resolution has been added as a default preset for the OS to use in that mode, but it is probably only worth configuring for extended use. (You won’t be flipping between full screen and split very easily in that mode.) The integrated KVM is even more useful in these configurations. It can also scale any other input sizes in either mode but at a decrease in visual fidelity.

HP has included every option that I could imagine needing for sharing a display between two systems. The only problem is that I need that functionality on my “other” monitor for the application UI, not on my color critical review monitor. When sharing a monitor like this, I would just want to be able to switch between inputs easily to always view them at full screen and full resolution. On a related note, I would recommend using DisplayPort over HDMI anytime you have a choice between the two, as HDMI 2.0 is pickier about 18Gb cables, occasionally preventing you from sending RGB input and other potential issues.

Other Functionality
The monitor has an RJ-45 port allowing it to be configured over the network. Normally, I would consider this to be overkill but with so many features to control and so many sub-menus to navigate through, this is actually more useful than it would be on any other display. I found myself wishing it came with a remote control as I was doing my various tests, until I realized the network configuration options would offer even better functionality than a remote control would have. I should have configured that feature first, as it would have made the rest of the tests much easier to execute. It offers simple HTTP access to the controls, with a variety of security options.

I also had some issues when using the monitor on a switched power outlet on my SmartUPS battery backup system, so I would recommend using an un-switched outlet whenever possible. The display will go to sleep automatically when the source feed is shut off, so power saving should be less of an issue than with other peripherals.

Pricing and Options
The DreamColor Z31x is expected to retail for $4,000 in the US market. If that is a bit out of your price range, the other option is the new Z27x G2 for half of that price. While I have not tested it myself, I have been assured that the newly updated 27-inch model has all of the same processing functionality, just in a smaller form-factor, with a lower-resolution panel. The 2560×1440 panel is still 10-bit, with all of the same color and frame rate options, just at a lower resolution. They even plan to support scaling 4K inputs in the next firmware update, similar to the original Z27x.

The new DreamColor studio displays are top-quality monitors, and probably the most accurate SDR monitors in their price range. It is worth noting that with a native brightness of 250 nits, this is not an HDR display. While HDR is an important consideration when selecting a forward-looking display solution, there is still a need for accurate monitoring in SDR, regardless of whether your content is HDR compatible. And the Z31x would be my first choice for monitoring full 4K images in SDR, regardless of the color space you are working in.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Review: HP’s ZBook Studio G4 mobile workstation

By Brady Betzel

It seems like each year around this time, I offer my thoughts on an HP mobile workstation and how it serves multimedia professionals. This time I am putting the HP ZBook Studio G4 through its paces. The ZBook Studio line of HP’s mobile workstations seems to fit right in the middle between ease of mobility, durability and power. The ZBook 14u and 15u are the budget-series mobile workstations that run Intel i5/i7 processors with AMD FirePro graphics and top out at around $1,600. The ZBook 15 and 17 are the more powerful mobile workstations in the line, with the added ability to include Intel Xeon processors, ECC memory, higher-end Nvidia Quadro graphics cards and more. But in this review we will take the best of all models jammed into the light and polished ZBook Studio G4.

The HP ZBook Studio G4 I was sent to test out had the following components:
– Windows 10 64 bit
– Intel Xeon 1535M (7th gen) quad-core processor – 3.10GHz with 4.2 Turbo Boost
– 4K UHD DreamColor/15.6-inch IPS screen
– 32GB ECC (2x16GB)
– Nvidia Quadro M1200 (4GB)
– 512GB HP Z Turbo Drive PCIe (MLC)
– 92Whr fast charging battery
– Intel vPro WLAN
– Backlit keyboard
– Fingerprint reader

According to the info I was sent directly from HP, the retail price is $3,510 on hp.com (US webstore). I built a very similar workstation on http://store.hp.com and was able to get the price at $3,301.65 before shipping and taxes, and $3,541.02 with taxes and free shipping. So actually pretty close.

So, besides the natural processor, memory and hard drive upgrades from previous generations, the ZBook Studio G4 has a few interesting updates, including the higher-wattage batteries with fast charge and the HP Sure Start Gen3 technology. The new fast charge is similar to the feature that some products like the GoPro Hero 5/6 cameras and Samsung Galaxy phones have, where they charge quicker than “normal.” The ZBook Studio, as well as the rest of the ZBook line, will charge 50% of your battery in around 30 minutes when in standby mode. Even when using the computer, I was able to charge the first 50% in around 30 minutes, a feature I love. After the initial 50% charge is complete, the charging will be at a normal rate, which wasn’t half bad and only took a few hours to get it to about 100%.
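The fast-charge claim is easy to sanity-check with some rough math (this assumes roughly constant charging power during the fast phase, which real chargers only approximate):

```python
# Back-of-the-envelope math for fast-charging the 92Whr battery.
battery_wh = 92
fast_fraction = 0.5      # first 50% of capacity
fast_minutes = 30        # charged in about 30 minutes

energy_wh = battery_wh * fast_fraction          # ~46 Wh delivered
avg_power_w = energy_wh / (fast_minutes / 60)   # over half an hour
print(f"Average charge power during fast phase: ~{avg_power_w:.0f}W")
```

That works out to an average of roughly 92W into the battery alone, which is why the fast phase ends at 50% and the rest of the charge proceeds at a gentler, battery-friendly rate.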

The battery I was sent was the larger of the two options and provided me with an eight-hour day with decent usage. When pushed using an app like Resolve I would say it lasted more like four hours. Nonetheless it lasted a while and I was happy with the result. Keep in mind the batteries are not removable, but they do have a three-year warranty, just like the rest of the mobile workstation.

When HP first told me about its Sure Start Gen3, I thought maybe it was just a marketing gimmick, but then I experienced its power — and it’s amazing. Essentially, it is a hardware function available only on 7th-generation Intel processors that allows the BIOS to repair itself upon identification of malware or corruption. While using the ZBook Studio G4, I was installing some software and had a hard crash (blue screen). I noticed when it restarted that the BIOS was running through the Sure Start protocol, and within minutes I was back up and running. It was reassuring and would really set my mind at ease if deciding between a workstation-level solution and a retail-store computer.

You might be asking yourself why you should buy an enterprise-level mobile workstation when you could buy a cheaper, almost-as-powerful laptop at Best Buy or on Amazon. Technically, what really sets workstation components apart is their ability to run 24/7, 365 days a year, without downtime. This is helped by Intel Xeon processors that allow for ECC (Error Correcting Code) memory: essentially, bits don’t get flipped as they can with non-ECC memory. Or, for laymen like me, ECC memory prevents crashing by fixing errors itself before we see any repercussions.

Another workstation-level benefit is the environmental testing that HP runs the ZBooks through to certify their equipment as military grade, also known as MIL-810G testing. Essentially, they run multiple extreme condition tests such as high and low temperatures, salt, fog and even high-vibration testing like gunfire. Check out a more in-depth description on Wikipedia. Finally, HP prides itself on its ISV (Independent Software Vendors) verification. ISV certification means that HP spends a lot of time working with software vendors like Adobe, Avid, Autodesk and others to ensure compatibility with their products and HP’s hardware so you don’t have to. They even release certified drivers that help to ensure compatibility regularly.

In terms of warranty, HP gives you a three-year limited warranty. This includes on-site service within the Americas, and, as mentioned earlier, it covers the battery, which is a nice bonus. Much like other warranties, it covers problems arising from faulty manufacturing, but not intentional or accidental damage. Luckily for anyone who purchases a ZBook, these systems can take a beating. Physically, the computer weighs in at around 4.6lbs and is 18mm thin. It is machined aluminum that isn’t sharp, but it can start to dig into your wrists when typing for long periods. Around the exterior you get two Thunderbolt 3 ports, an HDMI port, three USB 3.1 ports (one on the left and two on the right), an Ethernet port and a Kensington lock port. On the right side, you also get a power port — I would love for HP to design some sort of break-away cable like the old MagSafe cables on the MacBook Pros — and there is also a headphone/mic input.

DreamColor Display
Alright, so now I’ll go through some of the post-nerd specs that you might be looking for. Up first is the HP DreamColor display, which is a color-critical viewing solution. With a couple of clicks in the Windows toolbar on the lower right you will find a colored flower — click on that and you can immediately adjust the color space you want to view your work in: AdobeRGB, sRGB, BT.709, DCI-P3 or Native. You can even calibrate, or back up your own calibration for later use. While most colorists or editors use an external calibrated monitoring solution and don’t strictly rely on their viewing monitor as the color-critical source, using the DreamColor display will get you close to a color-critical display without purchasing additional hardware.

In addition, DreamColor displays can play back true 24fps without frame rate conversion. One of my favorite parts of DreamColor is that if you use an external DreamColor monitor through Thunderbolt 3 (not using an SDI card), you can load your color profile onto the second or third monitor and in theory they should match. The ZBook Studio G4 seems to have been built as a perfect DIT (digital imaging technician) solution for color critical work in any weather-challenged or demanding environment without you having to worry about failure.

Speed & Testing
Now let’s talk about speed and how the system did with speed tests. When running a 24TB (six 4TB drives) G-Speed Shuttle XL with Thunderbolt 3 from G-Technology in RAID-0, I was able to get write speeds of around 1450MB/s and read speeds of 960MB/s in the AJA System Test using a 4GB test file. For comparison, I ran the same test on the internal 512GB HP Z Turbo Drive, which had a write speed of 1310MB/s and a read speed of 1524MB/s. Of course, you need to keep in mind that the internal drive is a PCIe SSD whereas the RAID is made up of 7200RPM drives. Finally, I ran the standard benchmarking app Cinebench R15, which comes from Maxon, makers of the 3D modeling app Cinema 4D. For those interested, the OpenGL test ran at 138.85fps with a Ref. Match of 99.6%, CPU at 470cb and CPU (Single Core) at 177cb with an MP Ratio of 2.65x.

I also wanted to run the ZBook through some practical and real-world tests, and I wanted to test the rendering and exporting speeds. I chose to use Blackmagic’s DaVinci Resolve 14.2 software because it is widely used and an easily accessible app for many of today’s multimedia pros. For a non-scientific yet important benchmark, I needed to see how well the ZBook G4 played back R3D files (Red camera files), as well as QuickTimes with typical codecs you would find in a professional environment, such as ProRes and DNxHD. You can find a bunch of great sample R3D files on Red’s website. The R3D I chose was 16 seconds in length, shot on a Red Epic Dragon at 120fps and UHD resolution (3840×2160). To make sure I didn’t have anything skewing the results, I decided to clear all optimized media, if there was any, delete any render cache, uncheck “Use Optimized Media If Available” and uncheck “Performance Mode” just in case that did any voodoo I wasn’t aware of.

First was a playback test where I wanted to see at what decode quality I could play back in realtime without dropping frames while performing a slight color correction and adding a power window. For this clip, I was able to get realtime playback in a 23.98/1080p timeline when it was set to Half Resolution Good. At Half Resolution Premium I was dropping one or two frames. At Full Resolution Premium, I was dropping five or six frames — playing back at around 17 or 18fps. Half Resolution Good is actually great playback quality for such a high-quality R3D, with all the headroom you get when coloring a raw camera file instead of a transcode. This is also when the fans inside the ZBook really kicked in. I then exported a ProRes4444 version of the same R3D clip from RedCine-X Pro with the LUT info from the camera baked in. I played the clip back in Resolve with a light color treatment and one power window with no frames dropped. When playing back the ProRes4444 file, the fans stayed at a low pitch.

The second test was a simple DNxHD 10-bit export from the raw R3D. I used the DNxHD 175x codec — it took about 29 seconds, a little less than double realtime. I then added spatial noise reduction on my first node using the following settings: Mode: Better, Radius: Medium, Spatial Threshold (luma/chroma locked): 25. I was able to play back the timeline at around 5fps, and the same DNxHD 175x export took about 1 minute 27 seconds, roughly 5.5 times realtime. Doing the same DNxHD 175x export test with the ProRes4444 file took about 12 seconds without noise reduction and about 1 minute and 16 seconds with it, roughly 4.75 times realtime. In both cases, the fans kicked on when using noise reduction.
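As a sanity check on the "times realtime" figures, here are the raw ratios for the 16-second test clip (illustrative arithmetic only, using the timings reported above):

```python
# Export duration divided by clip duration = "times realtime".
clip_seconds = 16  # length of the R3D test clip

exports = {
    "DNxHD 175x from R3D": 29,
    "DNxHD 175x from R3D + spatial NR": 87,    # 1 min 27 sec
    "DNxHD 175x from ProRes4444": 12,
    "DNxHD 175x from ProRes4444 + NR": 76,     # 1 min 16 sec
}

for name, seconds in exports.items():
    print(f"{name}: {seconds / clip_seconds:.2f}x realtime")
```

Anything under 1.0x (like the 12-second ProRes export) is faster than realtime, which is why plain transcodes barely taxed the machine while spatial noise reduction was what really made the fans spin.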

Lastly, I wanted to see how Resolve would handle a simple one minute, 1080p, ProRes QuickTime in various tests. I don’t think it’s a big surprise but it played back without dropping any frames with one node of color correction, one power window and as a parallel node with a qualifier. When adding spatial noise reduction I started to get bogged down to about 6fps. The same DNxHD 175x export took about 27 seconds or a little less than half realtime. With the same spatial noise reduction as above it took about 4 minutes and 21 seconds, about 4.3 times realtime.

Summing Up
The HP ZBook Studio G4 is a lightweight and durable enterprise-level mobile workstation that packs the punch of a color-critical 4K (UHD — 3840×2160) DreamColor display, powered by an Nvidia Quadro M1200, and brought together by an Intel Xeon processor that will easily power many color, editing or other multimedia jobs. With HP’s MIL-810G certification, you have peace of mind that even with some bumps, bruises and extreme weather your workstation will work. At under 5lbs and 18mm thin with a battery that will charge 50% in 30 minutes, you can bring your professional apps like DaVinci Resolve, Adobe Premiere and Avid Media Composer anywhere and be working.

I was able to carry the ZBook along with some of my Tangent Element color correction panels in a backpack and have an instant color-critical DIT solution, capable of color correction and transcoding, without the need for a huge cart. The structural design of the ZBook is an incredibly sturdy, machined-aluminum chassis that is lightweight enough to easily go anywhere quickly. The only criticisms: I would often miss the left click of the trackpad, leaving me in a right-click scenario; the Bang & Olufsen speakers sound a little tinny to me; and, finally, it doesn’t have a Touch Bar… just kidding.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Review: Red Giant Trapcode Suite 14

By Brady Betzel

Every year we get multiple updates to Red Giant’s Adobe After Effects plug-in behemoth, Trapcode Suite. The 14th update to the Trapcode Suite is small but powerful, bringing significant updates in Version 3 of Trapcode Particular as well as Trapcode Form. (Trapcode Form 3 is a particle system generator much like Particular, but instead of the particles living and dying they stay alive forever as grids, 3D objects and other organic shapes.) If you own the Trapcode Suite from a previous purchase, the update will cost $199; if you are new, the suite costs $999, or $499 with an academic discount.

Particular 3 UI

There are three updates to the Suite that warrant the $199 upgrade fee: Particular 3, Form 3 and the Tao 1.2 update. However, you still get the rest of the products with Trapcode Suite 14: Mir 2.1, Shine 2.0, Lux 1.4, 3D Stroke 2.6, Echospace 1.1, Starglow 1.7, Sound Keys 1.1 and Horizon 1.1.

First up is the Tao 1.2 update. Trapcode Tao allows you to create 3D geometric patterns along a path in After Effects. If you do a quick YouTube search of Tao you will find some amazing examples of what it can do. In the Tao 1.2 update, Red Giant has added a Depth-of-Field tool to create realistic bokeh effects on your Tao objects. It’s a simple but insanely powerful update that really gives your Tao creations a sense of realism and beauty. To enable the new depth of field, wander over to the Rendering twirl-down menu under Tao and switch the setting from “off” to “Camera Settings.” It’s pretty simple. From there it is up to your After Effects camera skills and Tao artistry.

Trapcode Particular 3
Trapcode Particular is one of Red Giant’s flagship plugins and it’s easy to see why. Particular allows you to create complex particle animations within After Effects. From fire to smoke to star trails, it can pretty much do whatever your mind can come up with, and Version 3 has some powerful updates, including the overhauled Trapcode Particular Designer.

The updated designer window is very reminiscent of the Magic Bullet Designer window, easy and natural to use. Here you design your particle system, including the look, speed and overall lifespan of your system. While you can also adjust all of these parameters in the Effects Window dialog, the Designer gives an immediate visual representation of your particle systems that you can drag around and see how it interacts with movement. In addition you can see any presets that you want to use or create.

Particular 3

In Particular 3, you can now use OBJ objects as emitters. An OBJ is essentially a 3D object. You can use the OBJ’s faces, vertices, edges, and the volume inside the object to create your particle system.

The largest and most important update to the entire Trapcode Suite 14 is found within Particular 3: the ability to add up to eight particle systems per instance of Particular. What does that mean? Well, your particle systems will now interact in a way that lets you add details such as dust or a bright core that carry over properties from other particle systems in the same instance, adding the ability to create far more intricate systems than before.

Personally, the newly updated Designer is what allows me to dial in these details easily without twirling down tons of menus in the Effect Editor window. A specific use: if you want to duplicate your system and inherit its properties but change the blend mode and/or colors, simply click the drop-down arrow under the system and click "Duplicate." Another great update within the multiple-particle-system update is the ability to create and load "multi-system" presets quickly and easily.

Now, with all of these particle systems mashed together, you are probably wondering, "How in the world will my system be able to handle all of these when it's hard to even play back a system in the older Trapcode Suite?" Well, lucky for us, Trapcode Particular 3 is now OpenGL GPU-accelerated, allowing for speed increases of sometimes 4x. To access these options in the Designer window, click the cogwheel on the lower edge of the window, towards the middle. You will find the option to render using the CPU or the GPU. There are some limitations to the GPU acceleration. For instance, when using mixed blend modes you might not be able to use certain GPU acceleration types; the render will not reflect the proper blend mode that you selected. Another limitation can be with Sprites that are QuickTime movies; you may have to use the CPU mode.

Last but not least, Particular 3’s AUX system (a particle system within the main particle system) has been re-designed. You can now choose custom Sprites as well as keyframe many parameters that could not be keyframed before.

Form 3 UI

Trapcode Form 3
For clarification: Trapcode Particular creates particle emitters that emit particles with a life, so basically they are born and they die. Trapcode Form is a particle system that does not have a life; it is not born and it does not die. Some practical examples are a ribbon-like background or a starfield. These particle systems can be made from 3D models and even be dynamically driven by an audio track. Much like Particular's updated Designer, Form 3 has an updated designer that will help you build your particle array quickly and easily. Once done inside the Designer, you can hop out and adjust parameters in the Effects Panel. If you want to use pre-built objects or images as your particles, you can load those as Sprites or Textured Polygons and animate their movement.

Another really handy update in Trapcode Form 3 is the addition of the Graphing System. This allows you to animate controls like color, size, opacity and dispersion over time.

Just like Particular, Form reacts to After Effects' cameras and lights, completely immersing your creations into any scene you've built. For someone like me, who loves After Effects and the beauty of creations from Form and Particular but doesn't necessarily have the time to create from scratch, there is a library of over 70 pre-built elements. Finally, Form has added a new rendering option called Shadowlet rendering, which adds light falloff to your particle grid or array.

Form 3

Summing Up
In the end, the Trapcode Suite 14 has significantly updated Trapcode Particular 3 with multiple particle systems, Trapcode Form 3 with a beautiful new Designer, and Trapcode Tao with Depth-of-Field, all for an upgrade price of $199. Some Trapcode Particular users have been asking for the ability to build and manipulate multiple particle systems together, and Red Giant has answered their wishes.

If you've never used the Trapcode Suite, you should also check out the rest of the mega-bundle, which includes apps like Shine, 3D Stroke, Starglow, Mir, Lux, Sound Keys, Horizon and Echospace, here. And if you want more in-depth rundowns of each of these programs, check out Harry Frank's (@graymachine) and Chad Perkins' tutorials on the Red Giant News website. Then immediately follow @trapcode_lab and @RedGiantNews on Twitter.

If you want to find out more about the other tools in the Trapcode Suite check out my previous two-part review of Suite 13 here on postPerspective: https://postperspective.com/review-red-giants-trapcode-suite-13-part-1 and https://postperspective.com/review-red-giant-trapcode-suite-13-part-2.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Review: GoPro Fusion 360 camera

By Mike McCarthy

I finally got the opportunity to try out the GoPro Fusion camera I have had my eye on since the company first revealed it in April. The $700 camera uses two offset fish-eye lenses to shoot 360 video and stills, while recording ambisonic audio from four microphones in the waterproof unit. It can shoot a 5K video sphere at 30fps, or a 3K sphere at 60fps for higher motion content at reduced resolution. It records dual 190-degree fish-eye perspectives encoded in H.264 to separate MicroSD cards, with four tracks of audio. The rest of the magic comes in the form of GoPro’s newest application Fusion Studio.

Internally, the unit records dual 45Mb H.264 files to two separate MicroSD cards, with accompanying audio and metadata assets. This would be a logistical challenge to deal with manually: copying the cards into folders, sorting and syncing them, stitching them together and dealing with the audio. But with GoPro's new Fusion Studio app, most of this is taken care of for you. Simply plug in the camera and it will automatically access the footage, and let you preview and select what parts of which clips you want processed into stitched 360 footage or flattened video files.

It also processes the multi-channel audio into ambisonic B-Format tracks, or standard stereo if desired. The app is a bit limited in user-control functionality, but what it does do it does very well. My main complaint is that I can’t find a way to manually set the output filename, but I can rename the exports in Windows once they have been rendered. Trying to process the same source file into multiple outputs is challenging for the same reason.

Setting    Recorded Resolution (Per Lens)    Processed Resolution (Equirectangular)
5Kp30      2704×2624                         4992×2496
3Kp60      1568×1504                         2880×1440
Stills     3104×3000                         5760×2880
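As a rough sanity check on those numbers, comparing the total recorded pixels against the stitched output shows how much lens overlap the stitcher discards. A quick sketch using the 5Kp30 figures from the table (the exact overlap varies with the stitch):

```python
# Total pixels recorded by the two 190-degree fish-eye lenses (5Kp30 mode)
# versus the stitched 2:1 equirectangular output frame. The output has
# fewer pixels because the two lenses' fields of view overlap.
recorded = 2 * 2704 * 2624      # both lenses, per frame
processed = 4992 * 2496         # equirectangular output frame
print(recorded)                 # 14190592
print(processed)                # 12460032
print(round(recorded / processed, 2))  # ~1.14x more pixels in than out
```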

With the Samsung Gear 360, I researched five different ways to stitch the footage, because I wasn’t satisfied with the included app. Most of those will also work with Fusion footage, and you can read about those options here, but they aren’t really necessary when you have Fusion Studio.

You can choose between H.264, Cineform or ProRes, your equirectangular output resolution, and ambisonic or stereo audio. That gives you pretty much every option you should need to process your footage. There is also a "Beta" option to stabilize your footage, which, once I got used to it, I really liked. It should be thought of more as a "remove rotation" option, since it's not for stabilizing out sharp motions — which still leave motion blur — but for maintaining the viewer's perspective even if the camera rotates in unexpected ways. Processing was about 6x run-time on my Lenovo Thinkpad P71 laptop, so a 10-minute clip would take an hour to stitch to 360.

The footage itself looks good, higher quality than my Gear 360, and the 60p stuff is much smoother, which is to be expected. While good VR experiences require 90fps to be rendered to the display to avoid motion sickness, that does not necessarily mean that 30fps content is a problem. When rendering the viewer's perspective, the same frame can be sampled three times, shifting the image as they move their head, even from a single source frame. That said, 60p source content does give smoother results than the 30p footage I am used to watching in VR, but 60p did give me more issues during editorial: I had to disable CUDA acceleration in Adobe Premiere Pro to get Transmit to work with the WMR headset.

Once you have your footage processed in Fusion Studio, it can be edited in Premiere Pro — like any other 360 footage — but the audio can be handled a bit differently. Exporting as stereo will follow the usual workflow, but selecting ambisonic will give you a special spatially aware audio file. Premiere can use this in a 4-track multi-channel sequence to line up the spatial audio with the direction you are looking in VR, and if exported correctly, YouTube can do the same thing for your viewers.

In the Trees
Most GoPro products are intended for use capturing action moments and unusual situations in extreme environments (which is why they are waterproof and fairly resilient), so I wanted to study the camera in its “native habitat.” The most extreme thing I do these days is work on ropes courses, high up in trees or telephone poles. So I took the camera out to a ropes course that I help out with, curious to see how the recording at height would translate into the 360 video experience.

Ropes courses are usually challenging to photograph because of the scale involved. When you are zoomed out far enough to see the entire element, you can't see any detail; if you are zoomed in close enough to see faces, you have no good concept of how high up they are. 360 photography is helpful in that it is designed to be panned through when viewed flat. This allows you to give the viewer a better sense of the scale, and they can still see the details of the individual elements or people climbing. And in VR, you get a better feel for the height involved.

I had the Fusion camera and Fusion Grip extendable tripod handle, as well as my Hero6 kit, which included an adhesive helmet mount. Since I was going to be working at heights and didn’t want to drop the camera, the first thing I did was rig up a tether system. A short piece of 2mm cord fit through a slot in the bottom of the center post and a triple fisherman knot made a secure loop. The cord fit out the bottom of the tripod when it was closed, allowing me to connect it to a shock-absorbing lanyard, which was clipped to my harness. This also allowed me to dangle the camera from a cord for a free-floating perspective. I also stuck the quick release base to my climbing helmet, and was ready to go.

I shot segments in both 30p and 60p, depending on how I had the camera mounted, using higher frame rates for the more dynamic shots. I was worried that the helmet mount would be too close, since GoPro recommends keeping the Fusion at least 20cm away from what it is filming, but the helmet wasn’t too bad. Another inch or two would shrink it significantly from the camera’s perspective, similar to my tripod issue with the Gear 360.

I always climbed up with the camera mounted on my helmet and then switched it to the Fusion Grip to record the guy climbing up behind me and my rappel. Hanging the camera from a cord, even 30 feet below me, worked much better than I expected. It put GoPro's stabilization feature to the test, but it worked fantastically. With the camera rotating freely, the perspective stays static, although you can see the seam lines constantly rotating around you. When I am holding the Fusion Grip, the extended pole is completely invisible to the camera, giving you what GoPro has dubbed "Angel View." It is as if the viewer is floating freely next to the subject, especially when viewed in VR.

Because I have ways to view 360 video in VR, and because I don't mind panning around a flat-screen view, I am personally less excited about GoPro's OverCapture functionality, but I recognize it is a useful feature that will greatly extend the use cases for this 360 camera. It is designed for people using the Fusion as a more flexible camera to produce flat content, instead of producing VR content. I edited together a couple of OverCapture shots intercut with footage from my regular Hero6 to demonstrate how that would work.

Ambisonic Audio
The other new option that Fusion brings to the table is ambisonic audio. Editing ambisonics works in Premiere Pro using a 4-track multi-channel sequence. The main workflow kink is that every time you import a new clip with ambisonic audio, you have to manually override the audio settings to set the audio channels to Adaptive with a single timeline clip. Turn on Monitor Ambisonics by right-clicking in the monitor panel, then match the Pan, Tilt and Roll in the Panner-Ambisonics effect to the values in your VR Rotate Sphere effect (note that they are listed in a different order), and your audio should match the video perspective.
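Under the hood, the reason Pan/Tilt/Roll matching works is that first-order ambisonic (B-format) audio can be rotated mathematically after the fact. Here is a minimal sketch of a yaw (pan) rotation on one sample, assuming a W/X/Y/Z channel layout; sign conventions vary between ambisonic formats, so treat this as illustrative rather than Premiere's exact math:

```python
import math

def rotate_bformat_yaw(w, x, y, z, degrees):
    """Rotate a first-order B-format audio sample about the vertical axis.

    W (omni) and Z (up-down) are unaffected by a pure yaw rotation;
    the horizontal X and Y dipoles rotate like a 2D vector.
    """
    theta = math.radians(degrees)
    x2 = x * math.cos(theta) - y * math.sin(theta)
    y2 = x * math.sin(theta) + y * math.cos(theta)
    return w, x2, y2, z

# A sound directly ahead (energy on X) panned 90 degrees lands on Y:
print(rotate_bformat_yaw(0.7, 1.0, 0.0, 0.2, 90))
```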

When exporting an MP4 in the audio panel, set Channels to 4.0 and check the Audio is Ambisonics box. From what I can see, the Fusion Studio conversion process compensates for changes in perspective, including “stabilization” when processing the raw recorded audio for Ambisonic exports, so you only have to match changes you make in your Premiere sequence.

While I could have intercut the footage at both settings into a single 5Kp60 timeline, I ended up creating two separate 360 videos. This also makes it clear to the viewer which shots were recorded at 5Kp30 and which at 3Kp60. They are both available on YouTube, and I recommend watching them in VR for the full effect. But be warned: they were recorded at heights of up to 80 feet, so they may be uncomfortable for some people to watch.

Summing Up
GoPro’s Fusion camera is not the first 360 camera on the market, but it brings more pixels and higher frame rates than most of its direct competitors, and more importantly it has the software package to assist users in the transition to processing 360 video footage. It also supports ambisonic audio and offers the OverCapture functionality for generating more traditional flat GoPro content.

I found it to be easier to mount and shoot with than my earlier 360 camera experiences, and it is far easier to get the footage ready to edit and view using GoPro’s Fusion Studio program. The Stabilize feature totally changes how I shoot 360 videos, giving me much more flexibility in rotating the camera during movements. And most importantly, I am much happier with the resulting footage that I get when shooting with it.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Review: Boxx’s Apexx 4 7404 workstation

By Brady Betzel

The professional workstation market has been blown open recently, with companies like HP, Apple, Dell, Lenovo and others building systems containing i3/i5/i7/i9 and Xeon processors, and with AMD's recent re-entry into the professional workstation market with its Ryzen line of processors.

There are more options than ever, and that's a great thing for working pros. For this review, I'm going to take a look at Boxx Technologies' Apexx 4 7404, which the company sent me to run through its paces over a few months, and it blew me away.

The tech specs of the Apexx 4 7404 are:
– Processor: Intel i7-6950X CPU (10 cores/20 threads)
– One core is overclocked to 4.3GHz while the remaining nine cores can run at 4.1GHz
– Memory: 64GB DDR4 2400MHz
– GPUs: Dual Nvidia Quadro P5000 (2560 CUDA cores, 16GB GDDR5X each)
– Storage drive: NVMe Samsung SSD 960 (960GB)
– Operating system drive: NVMe Intel SSDPEDMW400 (375GB)
– Motherboard: ASUS X99-E WS/USB3.1

On the front of the workstation, you get two USB 3.0, two USB 2.0, audio out/mic in, and on the rear of the 7404 there are eight USB 3.0, two USB 3.1, two Gigabit Ethernet, audio out/mic in, line in, one S/PDIF out and two eSATA. Depending on the video card(s) you choose, you will have some more fun options.

This system came with a DVD-RW drive, which is a little funny these days but I suppose still necessary for some people. If you need more parts or drives there is plenty of room for all that you could ever want, both inside and out. While these are just a few of the specs, they really are the most important, in my opinion. If you purchase from Boxx all of these can be customized. Check out all of the different Boxx Apexx 4 flavors here.

Specs
Right off the bat you will notice the Intel i7-6950X CPU, which is a monster of a processor and retails for around $1,500 by itself. With its hefty price tag, this Intel i7 lends itself to niche use cases like multimedia processing. Luckily for me (and you), that is exactly what I do. One of the key differences between a system like the Boxx workstation and ones from companies like HP is that Boxx takes advantage of the X- or K-series Intel processors and overclocks them, getting the most from your processors while still backing them with Boxx's three-year warranty. The 7404 has one core overclocked to 4.3GHz, which can provide a speed increase for apps that don't use multiple cores. While that doesn't cover a lot of cases, it doesn't hurt to have the extra boost.

The Apexx 4 case is slender (at 6.85 inches wide) and quiet. Boxx embraces liquid cooling to keep your enterprise-class components, made by companies like Samsung and Intel, running smoothly. Boxx systems are built and fabricated in Texas from aircraft-grade aluminum parts and steel strengthening components.

When building your own system, you might pick a case because the price is right or because it is all that is available for your components (or it's what pcpartpicker.com tells you will fit). This can mean giving up build quality and risking poor airflow. Boxx knows this and has gone beyond just purchasing other companies' cases: they forge their own workstation case masterpieces.

Boxx's support is based in Austin (no outsourcing), and their staff knows the apps we use, such as those from Autodesk, Adobe and others.

Through Its Paces
I tested the Apexx 4 7404 using Adobe Premiere Pro and Adobe Media Encoder since they are really the Swiss Army knives of the multimedia content creation world. I edited together a 10-minute UHD (3840×2160) sequence using an XAVC MP4 I shot using a Sony a6300. I did a little color correction with the Lumetri Color tools, scaled the image up to 110% and exported the file through Media Encoder. I exported it as a 10-bit DNxHQX, UHD, QuickTime MOV.

It took seven minutes and 40 seconds to export to the OS drive (Intel) and about six minutes and 50 seconds to the internal storage drive (Samsung). Once I hit export, I finally got the engines inside the Boxx to rev up; the GPU fans seemed to kick on a little. They weren't loud, but you could hear a light breeze start up. On my way out of Premiere, I exported an XML to give me a head start in Resolve for my next test.

My next test was to import my Premiere XML into Blackmagic's Resolve 14 Studio and export with essentially the same edits, reproducing the color correction and applying the same scaling. It took a little work to get Resolve 14 running: after a few uninstalls, installing Resolve 12.5.6 and updating my Nvidia drivers, Resolve 14 was up and running. While this isn't a Boxx problem, I did encounter it during my testing, and since someone might run into the same issue, I wanted to mention it.

I then imported my XML, applied a little color correction, and double checked that my 110% scaling came over in the XML (which it did), and exported using the same DNxHQX settings that I used in Premiere. Exporting from Resolve 14 to the OS drive took about six minutes and 15 seconds, running at about 41 frames per second. When exporting to the internal storage drive it took about six minutes and 11 seconds, running between 40-42 frames per second. For those keeping track of testing details, I did not cache any of the QuickTimes and turned Performance Mode off for these tests (in case Blackmagic had any sneaky things going on in that setting).

After this, I went a little further and exported the same sequence with some Spatial Noise Reduction set across the entire 10-minute timeline using these settings: Mode: Better; Radius: Medium; Spatial Threshold: 15 on both Luma and Chroma; and Blend: 0. It ran at about nine frames per second and took about 25 minutes and 25 seconds to export.

Testing
Finally, I ran a few tests to generate some geeky nerd specs that you can compare to other users' experiences to see where this Boxx Apexx 4 7404 stands. Up first was the AJA System Test, which tests read and write speeds to designated disks. In addition, you can specify different codecs and file sizes to base the test on. I told the AJA System Test to run using the 10-bit Avid DNxHQX codec, a 16GB file size and a UHD frame size (3840×2160). I ran it a few times, but the average was around 2100/2680 MB/sec write and read to the OS drive and 1000/1890 MB/sec write and read to the storage drive.

To get a sense of how this system would hold up to a 3D modeling test, I ran the classic Cinebench R15 app. OpenGL came in at 215.34 frames per second with a 99.6% ref. match, the CPU scored 2121cb, and the CPU (single core) scored 181cb with an MP Ratio of 11.73x. What the test really showed me, when I Googled Cinebench scores to compare against mine, was that the Boxx Apexx 4 7404 was at the top of the heap in all categories. Specifically, it placed within the top 20 for overall render speed, beaten only by systems with more cores, and in the top 15 for single-core speed. The OpenGL result is pretty incredible at over 215fps.
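For anyone unfamiliar with Cinebench's MP Ratio, it is simply the multi-core score divided by the single-core score, which you can verify from the numbers above (the displayed cb scores are rounded, hence the tiny discrepancy with the reported 11.73x):

```python
# MP Ratio = multi-core score / single-core score. Cinebench rounds the
# displayed cb scores, so recomputing from them lands a hair off 11.73x.
multi_core = 2121   # cb
single_core = 181   # cb
print(round(multi_core / single_core, 2))  # 11.72
```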

Summing Up
In the end, the Boxx Apexx 4 7404 custom-built workstation is an incredible powerhouse for any multimedia workflow. From rendering to exporting to transcoding, the Boxx Apexx 4 7404 with dual Nvidia Quadro P5000s will chew through anything you throw at it.

But with this power comes a big price: the 7404 series starts at $7,246! The price of the one I tested lands much further north, at just under $14,000; those pesky Quadros bump the price up quite a bit. But if rendering, color correcting, editing and/or transcoding is your business, Boxx will make sure you are up and running and chewing through every gigabyte of video and 3D modeling you can run through it.

If you have any problems and are not up and running, their support will get you going as fast as possible, and if you need parts replaced, they will get them to you fast. Boxx's three-year warranty, included with your purchase, covers next-day on-site repair for the first year; continuing on-site repair through years two and three is a paid upgrade. But don't worry: even if you don't upgrade, you still have two more years of great support.

In my opinion, you should really plan for the extended on-site repair upgrade for all three years of your warranty — you will save time, which will make you more money. If you can afford a custom-built Boxx system, you will get a powerhouse workstation that makes working in apps like Premiere and Resolve 14 snappy and fluid.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Review: Red Giant Trapcode Suite 13, Part 2

By Brady Betzel

In my recent Red Giant Trapcode Suite 13 for After Effects review, Part 1, I touched on updates to Particular, Shine, Lux and Starglow. In this installment, I am going to blaze through the remaining seven plug-ins that make up the Trapcode Suite: Form, Mir, Tao, 3D Stroke, Echospace, Sound Keys and Horizon. While Particular is the most well-known plug-in in the Suite, the following seven are all incredibly useful and can help make you money.

Form 2.1
Trapcode Form 2.1 is best described as a particle system, much like Particular, but with particles that live forever and are used in forms like cubes. If you’ve used Element 3D by Video CoPilot you probably know that you can load objects from Maxon Cinema 4D into your Adobe After Effects projects pretty easily and, for all intents and purposes, quickly. Form allows you to load these 3D OBJ files and alter them inside of After Effects.

When you load the OBJ file, Form applies particles at each vertex. The more vertices you have in your 3D object, the more detail you will have in your Form. It is really a cool way to create a techy kind of look for a HUD (heads-up display) or a sweet motion graphics piece that needs that futuristic pointillism look. The original function of Form was to create particle grids that could be exploded or tightly wound and that would live on forever, as opposed to Particular, which creates particle systems with a birth and a death.

Form

Form 2.1

A simple way to think of how Form works is to imagine the ability to take simple text and transform it into “particles” to create a sandy explosion or turn everyday objects into particles that live forever. From Grids to Strings and Spheres to Sprites, with enough practice you can create some of the most stunning backgrounds or motion graphics wizardry inside of Trapcode Form, all of which is affected by After Effect lights and cameras in 3D space.

I was really surprised at how powerful and smooth Trapcode Form can run. I am running a tablet with an Intel i7 processor and I was able to get very reasonable performance, even with my camera depth-of-field turned on.

Mir 2.0
Trapcode Mir is an extremely useful plug-in for those wanting to create futuristic terrains or modern triangulated environments with tunnels and valleys. Mir is versatile and can go from creating smooth ocean floors to spiky mountain tops to extreme wireframe structures. Some of the newest updates in Mir 2.0 include:
– The ability to add a spiral to the Mir landscape mesh you create (think galaxy)
– Seamless looping, under the Fractal menu
– The ability to choose between triangles and quads for your surfaces
– The really cool ability to add a second-pass wireframe on top of your surface for that futuristic grid look
– Texture sampling, from smooth gradients to solid colors
– Control of the maximums and minimums under z-range (basically allowing for easier peaks and valleys)
– Multi, smoothridge, multi-smoothridge and regular fractals for differing displacements on your textures
– Improved VRAM management for speedy processing

Mir 2

Mir 2.0

These days GIFs are all the rage, so I am really impressed with the seamless loop option. It might seem ridiculous but if you’ve seen what is popular on social media you will know it’s emojis and GIFs. If you want to prep your seamless loop, check out this quick video from Trapcode creator Peder Norrby (@trapcode_lab).

Simply create beginning and end keyframes, find the seamless loop options under the Fractal category, step back one frame from your end loop point, mark your end-of-work area, go to the loop point (which should be one frame past where you marked the end of your work area) and click Set End Keyframe. From there, Trapcode Mir will fill in the rest of the details and create your seamless loop, ready to be exported as a GIF and blasted on Twitter. It's really that easy.

If you are looking for an animated GIF export setting, try exporting through Adobe Media Encoder and searching “GIF” in the presets. You will find an “Animated GIF” preset, which I resized to something more appropriate like 1280×720 but that still came out at 49MB — way over the 5MB Twitter upload limit. I tried a few times, first with 50% quality at 640×360, which got me to 13.7MB. I even changed the quality down to 5% in Media Encoder, but I kept getting 13.7MB until I brought the size down to 320×180. That got me just under 4MB, which is perfect! If you do a lot of GIF work, an easy way to compress them is to use http://ezgif.com/optimize and to fiddle with their optimization settings to get under 5MB. It’s quick and it all lives online.
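The takeaway from all that fiddling is that animated GIF size tracked pixel count far more than the quality slider. A quick sketch comparing pixel ratios to the file sizes I ended up with (using 3.9MB to stand in for "just under 4MB"):

```python
# File sizes observed when exporting the same animated GIF at three sizes.
sizes_mb = {(1280, 720): 49.0, (640, 360): 13.7, (320, 180): 3.9}

base_pixels = 1280 * 720
base_mb = sizes_mb[(1280, 720)]
for (w, h), mb in sizes_mb.items():
    pixel_ratio = (w * h) / base_pixels   # fraction of the original pixels
    size_ratio = mb / base_mb             # fraction of the original file size
    print(f"{w}x{h}: {pixel_ratio:.1%} of the pixels, {size_ratio:.1%} of the size")
```

The size ratios roughly follow the pixel ratios, which is why dropping the quality slider barely moved the number while halving the frame size did.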

As with all Trapcode Suite plug-ins (or anything for that matter), the only way to get good is to experiment and allow yourself to fail or succeed. This holds true for Mir. I was making garbage one minute and with a couple changes I made some motion graphics that made me see the potential of the plug-in and how I could actually make content that people would be blown away with.

3D Stroke

3D Stroke

3D Stroke
One plug-in that isn't new, but leads into the next one, is Trapcode 3D Stroke. 3D Stroke takes the built-in After Effects plug-in Stroke to a new level. Traditional Stroke is an 8-bit plug-in, while Trapcode 3D Stroke can run in the color-burning 32-bits-per-channel mode. If you want to add a stroke along a path that interacts with your comp cameras in 3D space, Trapcode 3D Stroke is what you want. From creating masks of your text and applying a sweet 3D Stroke to them, to intricate 3D paths that zoom in between objects with an HDR-like glow, 3D Stroke is one of those tools to have in your After Effects toolbox.

When using it I really fell in love with the repeater. Much like Element 3D’s particle arrays, the repeater can create multiple instances of your paths or text paths to create some interesting and infinitely adjustable objects.

Tao
Trapcode Tao is new to the Trapcode Suite of plug-ins. Tao gives us the ability to create 3D geometry along a path, and boy did people immediately fall in love with this tool when it was released. You can find tons of examples and tutorials of Tao from experts like VinhSon Nguyen, better known as @CreativeDojo on Twitter. Check out his tutorial on Vimeo, too. Tao is a tricky beast, and one way I learned about it in-depth was to download Peder Norrby’s project files over at http://www.trapcode.com and dissect them as best I could.

Tao

Tao

If you remember Trapcode 3D Stroke from earlier, you know that it allows us to create awesome glows and strokes along paths in 3D space. Trapcode Tao operates in much the same way as 3D Stroke except that it uses particles like Mir to create organic flowing forms in 3D space that interact with After Effects’ cameras and lights.

Trapcode Tao is about as close as you can get to modeling 3D geometry inside of After Effects at realtime speeds with image-based lighting. The only other way to achieve this is with Video CoPilot’s Element 3D or by using Cinema 4D via Cineware, which is sometimes a painstaking process.

Horizon 1.1
Another product that surprised me was Trapcode Horizon 1.1. In the age of virtual reality and 360 video, you can never have too many ways to make your own worlds to pan cameras around in. With a quick spherical map search on Google, I found all the equirectangular maps I could handle. Once inside After Effects, import and resize your map to your comp size, add a new solid and camera, throw Horizon on top of your solid, and under Image Map > Layer choose the layer name containing your spherical image, and BAM! You have a 360 world. You can then add elements like Trapcode Particular, 3D Stroke or Tao, and pan and zoom around to make some pretty great opening titles or even your own B-roll!

Echospace 1.1
Trapcode Echospace 1.1 is a powerful entry in the Trapcode Suite 13 plug-in library. It is one of those plug-ins where you watch the tutorials and wonder why people don’t talk about it more. In simple terms, Echospace replicates layers and creates interdependent parenting links to the original layer, allowing you to create complex repeated-element animations and layouts. In essence, it feels more like a complex script than a plug-in.

If you want to create offset animation of multiple shape layers in three-dimensional space, Echospace is your tool. It’s a little hard to use, and if you don’t Shy the replicated layers and nulls, it can be intimidating. Thankfully, when you create the repeated layers, Echospace automatically sets them to Shy if you enable Shy layers in your toolbar. A great Harry Frank (@graymachine) tutorial/Red Giant Live episode can be found on the Red Giant website: http://www.redgiant.com/tutorial/red-giant-tv-live-episode-8-motion-graphics-with-trapcode-echospace.

Sound Keys 1.3
The last plug-in in the massive Trapcode Suite v13 library is Sound Keys 1.3. Sound Keys analyzes audio files and can draw keyframes based on their rhythm. One reason I left this until the end of my review is that you can attach any of the parameters from the other Trapcode Suite 13 plug-ins to the outputs of the Sound Keys 1.3 keyframes via a pick whip. If I just lost you by saying pick whip, snap back into it.

If you learn one thing in the After Effects scripting world, it’s that you can attach one parameter to another by alt+clicking (Option+clicking on a Mac) the stopwatch of the parameter you want driven, then dragging the curly-looking pick whip icon over the other parameter. So in the Sound Keys case, you can attach the scale of an object to the rhythm of a bass drum.
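To make the pick whip concrete, here is a rough sketch of the kind of expression After Effects writes for you when you link Scale to a Sound Keys output. This only runs inside After Effects’ expression engine, and the layer name “Audio” and output slot “Output 1” are assumptions; substitute whatever your own comp uses.

```javascript
// After Effects expression on a layer's Scale property (expression language is
// JavaScript-based, evaluated per frame by the AE host, not standalone).
// Read the keyframed value of Sound Keys' Output 1 on the audio layer...
amp = thisComp.layer("Audio").effect("Sound Keys")("Output 1");
// ...then drive both X and Y scale from it, starting at 100%.
[100 + amp, 100 + amp]
```

From there, the same pick whip trick works for any Trapcode parameter, so a bass drum can just as easily drive Particular’s emitter size as a layer’s scale.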

What I really liked about Sound Keys is that it can not only create a dynamically driven piece of motion graphics, but also draw audio meters to visualize the audio itself. You see this a lot in lyric music videos or music-only YouTube videos that still want a touch of visual flair, and with Sound Keys 1.3 you can change the visual representation of the audio, including its color, quantization (the little dots you see on audio meters) and size.

Easily isolate an audio frequency with the onscreen controls, find the effect you want to drive with the audio, and pick whip your way to a dynamic motion graphic. If I were the graphic designer I wish I was, I would take Sound Keys and something like Particular or Tao and create some stunning work. I bet I could even make some money making lyric videos… one day.

Summing Up
In the end, the Trapcode Suite v13 is an epic and monumental release. The total cost as a package is $999, and while it is a significantly higher cost than After Effects, let me tell you: it has the ability to make you way more money with some time and effort. Even with just an hour or so a day I feel like my Trapcode game would go to the next level.

For those who already own the Trapcode Suite and want to upgrade for $199, there are some huge benefits to the v13 update, including Trapcode Tao, GPU performance upgrades across the board, and even things like the second-pass wireframe for Mir.

If you are a student, you can grab Trapcode Suite 13 for $499 with a little verification legwork. If you are worried about your system working efficiently with the Trapcode Suite, you can check the technical requirements here, but I was working on an Intel i7 tablet with 8GB of memory and an Intel Iris 6100 graphics processor, and I found everything to be very speedy given those limitations. Tao was the only plug-in that wouldn’t display correctly, which makes sense once you read its GPU requirements here.

If I were you and had a cool $999 burning a hole in my After Effects wallet, I would pick up Trapcode Suite 13 immediately.

Brady Betzel is an online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com, and follow him on Twitter @allbetzroff. Earlier this year, Brady was nominated for an Emmy for his work on Disney’s Unforgettable Christmas Celebration.

Review: Avid Pro Tools 12

By Ron DiCesare

In 1990, I was working at a music studio where I did a lot of cut downs of 60s, 30s, 15s and 10s for TV and radio commercials. Back then we used ¼-inch analog tape with a razor blade to physically cut the tape. Since I did so many ¼-inch tape edits, the studio manager was forward thinking enough to introduce a new 2-track digital editing system by Digidesign called Sound Tools. I took to it like a fish takes to water since I was already using computers, MIDI sequencers and drum machines —  even replacing chips in drum machines — which is fitting since that is how Peter Gotcher and Evan Brooks started Digidesign back in 1984. (See my History of Audio Post here.)

A short time later, Pro Tools was introduced, and everyone at the studio thought it was simply an upgrade to Sound Tools with a different name. We purchased the first available version of Pro Tools and launched it to discover that there were now four audio tracks instead of two. My first thought was, “Oh no, what am I going to do with the two extra tracks?!” Fearing the worst, my second thought was, “Oh shit, I bet this thing no longer does crossfades and I will have to use those two extra tracks to ‘ping pong’ from one set of tracks to the other for fades.” Thankfully, I quickly realized that not only could Pro Tools 1.0 do crossfades, it could do a lot more, including revolutionizing the entire audio industry.

During my long history of working on Sound Tools and Pro Tools, I have seen all of the advancements with the software firsthand. I am pleased to say that Avid’s latest version of Pro Tools, 12.3, includes some of the most helpful improvements yet.

Offerings and Pricing Options
Avid now offers its most flexible pricing ever for Pro Tools 12 — there are three different ways to purchase or upgrade. Just like before, Pro Tools can be purchased or upgraded outright, which is called a perpetual license. Don’t let the word license scare you; it still is a one-time purchase. In addition to the perpetual license, there are two new ways to lease Pro Tools either on a monthly basis or an annual subscription basis. This is an interesting step for Avid. The advantage to both types of subscriptions is that the user is eligible for all of the upgrades and tech support included with their subscription. This is an excellent way to ensure your program is always up to date while bug fixes are made along the way.

Offering such pricing flexibility does create a bit of confusion about which options are available, since there are three versions of Pro Tools combined with the difference between first-time purchases versus upgrades for preexisting users.

The first available option is called Pro Tools First, which is a free version. As a free version, this is an ideal option for anyone who is looking to get on board with Pro Tools for the first time. However, to take full advantage of Pro Tools 12, the version reviewed here, you would need to purchase one of the two main versions, Pro Tools 12 or Pro Tools|HD 12.

Here is how the pricing breaks down: the Pro Tools 12 perpetual license (AKA purchase outright) is $599. The monthly subscription with upgrade plan is $29.99 per month. The annual subscription with upgrade plan is $24.92 per month (or $299 annually).

Pricing can vary according to your situation if you own previous versions or have let too much time lapse between upgrades. Suffice it to say that whatever your unique situation, there is a purchase plan for you.

What’s Not New
The one thing product reviews rarely, if ever, cover is what has not changed. To me, what hasn’t changed is the first thing I want to know when I am working with any new version of existing software. I cannot stress enough the importance of being able to quickly and easily pick up exactly where I left off after upgrading. Unfortunately, I know all too well how often a software’s new features can make my old way of working obsolete.

I can’t help but think of a notable recent example, when the upgrade to FCP X no longer supported OMF for audio exports. What were they thinking? Keeping previous workflows intact is an extremely important issue to me. Immediately after my upgrade from Pro Tools 10 to Pro Tools|HD 12, I launched a session and it worked exactly as it did in version 10, eliminating any downtime for me.

One thing that is not new, but is extremely important to mention, is the switch from the original Digidesign Audio Engine to the Avid Audio Engine, which happened in Pro Tools 11. Even with the change to the Avid Audio Engine, I was not forced to abandon my old workflow. The advantage of the Avid Audio Engine is key — among other things, this is what allows for the long-overdue offline bounce, or faster-than-realtime bounce. And for anyone who is still on Pro Tools 10 or below, the offline bounce alone is a major reason to move to Pro Tools 12.

Because everyone uses Pro Tools in so many different and complex ways, I encourage you to visit Avid’s website, www.avid.com, for a list of all of the new and improved functions; there are too many to cover each one in this review. That is why I came up with a list of my 12 favorite new features of Pro Tools|HD 12.

My 12 Favorite New Features of Pro Tools 12
1. Avid Application Manager. There is a new icon at the top of your screen called the Avid Application Manager. Clicking on it will launch a window allowing you to log into your account, keep up with any updates and view a list of any uninstalled plug-ins available, along with your support options. You can also verify what type of license you have and when it was activated. This is helpful if you have the month-to-month or annual subscription so you can see when your next renewal is. Even with the perpetual license, you can still see what upgrades and bug fixes are available at any time.

2. Buy or Rent Plug-ins. One very cool new feature is the option to buy or rent any plug-in from a new menu option directly in Pro Tools called The Marketplace. This is particularly useful if you are opening another person’s session that has used a plug-in you do not own or if you are opening your session at a studio where they do not own a particular plug-in that you have at your studio. The rent option is a great way to access any missing plug-ins without having to commit to them fully.

3. Pitch Shift Legacy. Call me crazy, but I am thrilled that Avid has included the original version of Pitch Shift in the audio suite. In Pro Tools 11, Pitch Shift was changed to a piano keyboard-based plug-in called Pitch 2. As cool as it is to base your work off of a piano keyboard used in Pitch 2, I missed some of the basic features found only in the original version. I am pleased to say that Avid now offers both versions of Pitch Shift in the audio suite — the new piano-based keyboard version and the original, now called Pitch Shift Legacy.

4. Track Commit. Track Commit is used for converting virtual instruments to audio files, and even if you do not use virtual instruments, it can still be a very useful function, offering you the option to “print” your plug-ins to the audio track. You can also render your automation, including panning. All of this saves processing and plug-in power, and avoids any possible confusion if someone else is working on your session down the line.

5. Clip Transparency. Some people may remember the days of ¼-inch tape editing that I mentioned at the start of this article. Back then, audio editing had to be done solely with your ears. When Sound Tools and Pro Tools came along, editing became a visual skill, too. Clip Transparency takes visual editing one step further. It allows you to see two clips superimposed over each other while moving them on the same audio track. This is ideal for anyone who needs to line up a new clip with the old clip like when doing ADR.

The best part is it’s not only for seeing two different clips overlaid at the same time; it can be used when you are moving a single region or clip along your audio track. Clip Transparency allows you to see the old position superimposed with the new position of the same clip while you are shifting it for comparison.

It is perfect for those countless times when I have zoomed in past the start of the clip and can’t see how far I am moving it relative to its old position. Clip Transparency now allows me to see how much I am shifting the audio, no matter what my zoom setting is. I never knew how much I needed this feature until I saw it in action. Clip Transparency is by far my favorite new feature of Pro Tools 12.

6. Batch Fade and Fade Presets. When you are working with multiple audio clips on your timeline, fading each of the clips can be time consuming, especially if each fade needs to be treated differently. Now with Batch Fade, you can create presets for fade-ins, fade-outs and crossfades. When multiple audio clips are selected, a much larger dialog window pops up with many more options to choose from. Of course, fading between two clips can still be done the old way, and the fade dialog box works the same as in previous versions. The new Batch Fade is an additional function that allows you to be more selective and gives you more options for your fades. Batch Fade is a great example of how your old workflow is preserved while new features are still added.

7. The Dashboard. Launching a session now includes the Dashboard window at the start, which is an updated version of the Quick Start menu. You can quickly and easily see all of the available templates and your recent sessions. And, of course, you can create a new blank session. I like the new look and feel of Dashboard compared to Quick Start.

8. iPad Control. Pro Tools | Control is a free app now available in the App Store. iPad control is made possible with the introduction of EuControl v3.3, the driver needed for your workstation. EuControl is a free download using your Avid account after you complete the registration in the Pro Tools | Control iOS app. Even though I do not own an iPad, I can see the advantage of controlling Pro Tools via the iPad when monitoring a mix at a distance from my DAW.

Mixing a film, for example, would be a great use of the iPad control since that would allow me to sit back farther away from the speakers, thus simulating the distance of the listener in a movie theater. Today, the line between phones and tablets is blurred with the introduction of the “phablet.” As it stands now, the app is only available for iPad. I suspect that will change in the future, but I have no confirmation of that.

9. Included virtual musical instruments. The latest versions of Xpand II and First AIR Instruments Bundle are included with Pro Tools 12. Quite simply, I am blown away with how amazing these instruments sound. I have been a musician all of my life, but surprisingly I have never used any virtual instruments in MIDI in Pro Tools. I have always opted for a dedicated composing program for MIDI dating way back to Studio Vision Pro (for those of you old enough to remember how cool that program was).

I know there are plenty of third-party virtual instruments available for Pro Tools, but these two instrument bundles included with Pro Tools 12 have really opened my eyes. Before Pro Tools 12, I found myself sharing and swapping files between a MIDI program (for me, it’s Apple Logic) and Pro Tools. I have always preferred using a dedicated program for MIDI outside of Pro Tools, but with the addition of these versions of Xpand II and the First AIR Instruments Bundle, I am an instant convert to using only Pro Tools for MIDI.

Please visit Avid’s website for a list of the specifics, but some of my favorite virtual instruments are the acoustic pianos, synth basses and of course anything drums or percussion related.

10. Updated I/O and flexibility. I work mostly on TV commercials and media specifically for the web, so I am rarely asked to do surround sound mixing, especially anything in 7.1. Therefore I am not able to explore any of the new surround features, including the new templates for 7.1 mixing.

Even so, I can still mention the addition of the Default Monitor path in Pro Tools 12. Pro Tools will automatically downmix or upmix your session’s monitor path to the studio’s monitor path. For example, if an HD session is saved with a 5.1 monitor path and then opened on a system that only has a stereo monitor path available, the session’s 5.1 monitor path is automatically downmixed to the system’s stereo monitor outputs. This makes for even more flexibility when swapping sessions from one studio to another, regardless of whether there are surround sound monitoring capabilities.

Another improvement relating to the I/O and surround capabilities is the addition of virtually unlimited busses. This will help anyone who has used up or exceeded previously allowed bus limitations when mixing in surround. The new Commit feature supports multichannel set-ups, which can improve your surround workflow.

And for any of the larger audio post facilities that may use Pro Tools in a much more complex way, such as getting several edit rooms to integrate, sync and play together, there are improvements in the Satellite Link workflow. These include the reset network button and the transmit and receive play selection buttons in the Transport window.

11. Track Bounce. Track Bounce is another feature I didn’t know I needed that much until I started using it. It is not to be confused with Track Commit. Track Bounce gives you the ability to select and bounce tracks or auxes as audio files when exporting. This can be one track, all the tracks or any combination of the tracks done in one single bounce.

For example, if you select a music track, a VO track and an FX track, you will get all three tracks as three discrete individual audio files in one single bounce using Track Bounce. This is essential for anyone who has to make splits or stems, especially in long format.

Imagine you have an hour program where you have a music track, a VO track and a sound effect track. In the past, you had to bounce each element as one realtime bounce three separate times. That meant it would take over three hours to complete. With Track Bounce in the offline bounce mode, you can output your stems in one single step in just minutes.

One friendly reminder: if you are using Track Bounce with any layered tracks, such as sound effects or music tracks, it will bounce each track as its own separate track rather than a mix of the specific layers. For example, selecting 10 tracks will result in 10 discrete audio files from one bounce, so it is important to know when Track Bounce is useful for you and when it is not.

12. Included Plug-ins. Of course, Pro Tools 12 is all about the plug-ins, and there are more plug-ins included than ever, including the First AIR Effects Bundle, Eleven Effects and Space. I find that I rarely use any third-party plug-ins since I am often going from studio to studio on a single project. Outside of noise reduction and LKFS metering, I rarely need anything other than the Avid plug-ins included with Pro Tools 12.

Cloud Collaboration and Avid Everywhere
In the near future, Avid will be offering Cloud Collaboration and Avid Everywhere. Avid will finally offer the ability to work on Pro Tools remotely using media located on a central cloud server accessible anywhere there is Internet access. When introduced, Cloud Collaboration will allow people in separate locations to access the same Pro Tools 12 session to share and update files instantly. This is perfectly suited for musicians collaborating on a song who do not live near each other.

More exciting to me is the potential of Cloud Collaboration to change the way we work in audio post by allowing access to all of your media remotely. This could benefit any audio facility that has multiple rooms with multiple engineers switching from room to room. Using Cloud Collaboration, there will be one central location for all your media accessible from any audio room. For engineers who need to switch rooms when working on a project, this will eliminate any file transfers or media dumps.

But I think the biggest benefit will be for any audio engineer, like myself, who often works on a single project at multiple locations over its duration. I am often working from my home studio, my client’s studio and a large audio post facility on the same project, spread over several days, weeks or months. Each time I change studios, I have to make sure I transfer all of my sessions from one place to another using a flash drive, WeTransfer, Google Drive, etc. I have tried them all, and they are all time consuming. And with multiple versions and constant audio revisions, it is very easy to lose track of what and where the most current version is.

Cloud Collaboration will solve this issue with one central location where I can access my session from anywhere with Internet access. This is a giant leap forward, and I am looking forward to exploring it in-depth in a future review here on postPerspective.

Ron DiCesare is an audio pro whose spot work includes TV campaigns for Purina, NJ Lotto and Beggin’ Strips. His indie film work includes Con Artist, BAM 150 and Fishing without Nets. He is also involved with audio post for Vice Media on their news reports and web series, including Vice on HBO. You can contact him at rononizer@gmail.com.

Review: Rampant Design Tools

By Brady Betzel

As every editor and VFX artist knows, the toolset shouldn’t define you as an artist. However, in today’s visually intensive world, any and all help is welcome in my eyes.

In addition to a couple of After Effects scripts like Newton 2 and TypeMonkey, and any of the Trapcode plug-ins, there are two products that I feel are must-haves for an editor working in VFX: Video CoPilot’s Element 3D and Rampant Design Tools’ entire drag, drop and go visual effects library.
