
Review: CyberPower PC workstation with AMD Ryzen

By Brady Betzel

With the influx of end users searching for alternatives to Mac Pros, as well as new ways to purchase workstation-level computing solutions, there is no shortage of opinions on what brands to buy and who should build the system. Everyone has a cousin or neighbor who builds systems, right?

I’ve often heard people say, “I’ve never built a system or used (insert brand name here), but I know they aren’t good.” We’ve all run into people who are dubious by nature. I’m not so cynical, and when it comes to operating systems and computers, I consider myself Switzerland.

When looking for the right computer system, the main question you should ask is, “What do you need to accomplish?” followed by, “What might you want to accomplish in the future?” I’m a video editor and colorist, so I need the system I build to work fluidly with Avid Media Composer, Blackmagic DaVinci Resolve and Adobe’s Premiere and After Effects. I also want my system to work with Maxon Cinema 4D in case I want to go a little further than Video Copilot’s Element 3D and start modeling in Cinema 4D. My main focus is video editing and color correction, but I also need flexibility for other tools.

Lately, I’ve been reaching out to companies in the hopes of testing as many custom-built Windows-based PCs as possible. There have been many macOS-to-Windows transplants over the past few years, so I know pros are eager for options. One of the latest seismic shifts has come from the guys over at Greyscalegorilla moving from Macs to PCs. In particular, I saw that one of the main head honchos over there, Nick Campbell (@nickvegas), went for a build complete with the 32-core Ryzen Threadripper workhorse. You can see the lineup of systems here. This really made me reassess my thoughts on AMD as a maker of workstation-level processors, and while not everyone can afford the latest Intel i9 or AMD Threadripper processors, there are lower-end processors that will serve most people just fine. This is where custom-built PC makers like CyberPowerPC, who equip machines with AMD processors, come into play.

So why go with a company like CyberPowerPC? The prices for parts are usually competitive, and the entire build doesn’t cost much more than if you purchased the parts yourself. Also, you deal with CyberPowerPC for warranty issues rather than with individual manufacturers for each part.

My Custom Build
In my testing of an AMD Ryzen 7 1700X-based system with a Samsung NVMe hard drive and 16GB of RAM, I was able to run all of the software I mentioned before. The best part was the price: the total was around $1,000! Not bad for someone editing and color correcting. Typically, those machines can run anywhere from $2,000 to $10,000. Although the parts in those more expensive systems are more capable and offer double to triple the number of cores, some of that power is wasted. When on a budget, you will be hard-pressed to find a better deal than CyberPowerPC. If you build a system yourself, you might get close, but not by much.

While this particular build isn’t going to beat out AMD Threadripper- or Intel i9-based systems, the AMD Ryzen-based systems offer decent bang for the buck. As I mentioned, I focus on video editing and color correcting, so I tested a simple one-minute UHD (3840×2160) 23.98 H.264 export. Using Premiere along with Adobe Media Encoder, I combined about 30 seconds of Red UHD footage with some UHD S-Log3/S-Gamut3 footage I shot on the Sony a7 III to create a one-minute sequence.

I then exported it as an H.264 at a bitrate of around 10Mb/s. With only a 1D LUT on the Sony a7 III footage, the one-minute sequence took one minute and 13 seconds. With 10% resizes and a “simple” Gaussian blur added to all the clips, the sequence exported in one minute and four seconds. This is proof that the AMD GPU is working inside of Premiere and Media Encoder. Inside Premiere, I was able to play back the full-quality sequence on a second monitor without any discernible dropped frames.
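As a quick sanity check on those export settings, the target bitrate tells you roughly how large the finished file should be (this is just my back-of-the-envelope arithmetic, ignoring audio and container overhead):

```python
def h264_size_mb(bitrate_mbps: float, duration_s: float) -> float:
    """Approximate file size in megabytes for a constant-bitrate export.

    bitrate_mbps: target video bitrate in megabits per second
    duration_s:   sequence duration in seconds
    Ignores audio tracks and container overhead.
    """
    return bitrate_mbps * duration_s / 8  # 8 bits per byte

# A one-minute sequence at ~10Mb/s:
print(h264_size_mb(10, 60))  # 75.0 (MB)
```

Handy for estimating whether an export will fit a delivery spec before you kick it off.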

So when people tell you AMD isn’t Intel, technically they are right, but overall the AMD systems are performing at a high enough level that for the money you are saving, it might be worth it. In the end, with the right expectations and dollars, an AMD-based system like this one is amazing.

Whether you like to build your own computer or just don’t want to buy a big-brand system, custom-built PCs are a definite way to go. I might be a little partial, since I am comfortable opening up my system and swapping parts around, but the newer cases allow for pretty easy adjustments. For instance, I installed a Blackmagic DeckLink card and four SSDs for a RAID-0 setup inside the box. Beyond wishing for a few more internal drive cages, I found it easy to find the cables and get into the wiring that CyberPowerPC had put together. And because CyberPowerPC primarily serves the gaming market, there are plenty of RGB lighting options, including on the memory!

I was initially against the lighting, since any color cast could throw off color correction, but it was actually kind of cool and made my setup look a little more modern. It even got my creativity going.

Check out the latest AMD Ryzen processors and exciting improvements to the Radeon line of graphics cards on www.cyberpowerpc.com and www.amd.com. And, hopefully, I can get my hands on a sweet AMD Ryzen Threadripper 2990WX with 32 cores and 64 threads to really burn a hole in my render power.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Review: Razer Blade 15-inch mobile workstation

By Mike McCarthy

I am always looking for the most powerful tools in the smallest packages, so I decided to check out the Razer Blade 15-inch laptop with an Nvidia GeForce RTX 2080 Max-Q graphics card. The Max-Q variants are optimized for better thermals and power usage — at the potential expense of performance — in order to allow more powerful GPUs to be used in smaller laptops. The RTX 2080 is Nvidia’s top-end mobile GPU, with 2,944 CUDA cores and 8GB of GDDR6 memory running at 384GB/s, with 13.6 billion transistors on the chip.
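That 384GB/s figure lines up with typical Max-Q memory clocks. As a back-of-the-envelope check (the 256-bit bus width and 12Gb/s per-pin rate are my assumptions about this configuration, not from Nvidia’s spec sheet):

```python
def memory_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bits times the
    per-pin data rate in Gb/s, divided by 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

# Assumed: 256-bit bus, GDDR6 at 12Gb/s per pin (Max-Q clocks)
print(memory_bandwidth_gbs(256, 12))  # 384.0 (GB/s)
```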

The new Razer Blade has a six-core Intel i7-8750H processor with 16GB of RAM and a 512GB SSD. It has mDP 1.4, HDMI 2.0b, Thunderbolt 3 and three USB 3.1 ports. Its 15.6-inch screen can run at a 144Hz refresh rate but only supports full HD (1920×1080), which is optimized for gaming, not content creation. The past four laptops I have used have all been UHD resolution at various sizes, which gives far more screen real estate for creative applications and better resolution for reviewing your imagery.

I also prefer to have an Ethernet port, but I am beginning to accept that a dongle might be acceptable for that, especially since it opens up the possibility of using 10 Gigabit Ethernet. We aren’t going to see 10GigE on laptops anytime soon due to the excessive power consumption, but you only need 10GigE when at certain locations that support it, so a dongle or docking station is reasonable for those use cases.

Certain functionality on the system requires registering a free account with Razer, which is annoying, but I’ve found this requirement is becoming the norm these days. That gives access to the Razer Synapse utility for customizing system settings, setting fan speeds and even remapping keyboard functionality. Any other Razer peripherals would be controlled here as well. As part of a top-end modern gaming system, the keyboard has fully controllable color backlighting. While I find most of the default “effects” distracting, the option to color code your shortcut keys is interesting. And if you really want to go to the next level, you can customize it further.

For example, when you press the FN key, by default the keys that have function behaviors connected with them light up white, which impressed me. The colors and dimming are generated by blinking the LEDs, but I was able to perceive the flicker when moving my eyes, so I stuck with colors that didn’t involve dimming channels. But that still gave me six options (RGB, CYM) plus white.
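Those seven flicker-free options fall out of simple on/off combinatorics: with each of the red, green and blue LED channels either fully on or fully off (so no PWM dimming), and excluding all-off, exactly seven colors remain. A quick sketch:

```python
from itertools import product

# Each LED channel is either fully off (0) or fully on (255) -- no dimming,
# so no perceptible PWM flicker.
combos = [c for c in product((0, 255), repeat=3) if any(c)]

print(len(combos))  # 7: R, G, B, plus C, M, Y, plus white
```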

This is the color config I was running in the photos, but the camera does not reflect how it actually looks. In pictures, the keys look washed out, but in person they are almost too bright and vibrant. But we are here for more than looks, so it was time to put it through its paces and see what can happen under the hood.

Testing
I ran a number of benchmarks, starting with Adobe Premiere Pro. I now have a consistent set of tests to run on workstations in order to compare each system. The tests involve Red, Sony Venice and ARRI Alexa source files, with various GPU effects applied and exported to compressed formats. It handled the 4K and 8K renders quite well — pretty comparable to full desktop systems — showcasing the power of the RTX GPU. Under the sustained load of rendering for 30 minutes, it did get quite warm, so you will want adequate ventilation … and you won’t want it sitting on your lap.

My next test was RedCine-X Pro, with its new CUDA playback acceleration of files up to 8K. But what is the point of decoding 8K if you can’t see all the pixels you are processing? So for this test, I also connected my Dell UP3218K screen to the Razer Blade’s Mini DisplayPort 1.4 output. Outputting to the monitor does affect performance a bit, but that is a reasonable expectation. It doesn’t matter if you can decode 8K in real time if you can’t display it. Nvidia provides reviewers with links to some test footage, but I have 40TB to choose from, in addition to test clips from all different settings on the various cameras from my Large Format Camera test last year.

The 4K Red files worked great at full res to the external monitor — full screen or pixel for pixel — while the system barely kept up with the 6K and 8K anamorphic files. 8K full frame required half-res playback to view smoothly on the 8K display. Full-frame 8K was barely realtime with the external monitor disabled, but that is still very impressive for a laptop (I have yet to accomplish that on my desktop). The rest of the files played back solidly to the local display. Disabling the CUDA GPU acceleration requires playing back below 1/8th res to do anything on a laptop, so this is where having a powerful GPU makes a big difference.

Blackmagic Resolve is the other major video editing program to consider, and while I do not find it intuitive myself, I usually recommend it to those who are looking for a high level of functionality but aren’t ready to pay for Premiere. I downloaded and rendered a test project from Nvidia, which plays Blackmagic Raw files in real time with a variety of effects and renders to H.264 in 40 seconds, but takes 10 times longer with CUDA disabled in Resolve.

Here, as with the other tests, the real-world significance isn’t how much faster it is with a GPU than without, but how much faster it is with this RTX GPU compared to other options. Nvidia claims this render takes 2.5 times as long on a Radeon-based MacBook Pro and 10% longer on a previous-generation GTX 1080 laptop, which seems consistent with my previous experience and tests.

The primary differentiation of Nvidia’s RTX line of GPUs is the inclusion of RT cores to accelerate raytracing and Tensor cores to accelerate AI inferencing, so I wanted to try tasks that used those accelerations. I started by testing Adobe’s AI-based image enhancement in Lightroom Classic CC. Nvidia claims that the AI image enhancement uses the RTX’s Tensor cores, and it is four times faster with the RTX card. The visual results of the process didn’t appear to be much better than I could have achieved with manual development in Photoshop, but it was a lot faster to let the computer figure out what to do to improve the images. I also ran into an issue where certain blocks of the image got corrupted in the process, but I am not sure if Adobe or Nvidia is at fault here.

Raytracing
While I could have used this review as an excuse to go play Battlefield V to experience raytracing in video games, I stuck with the content-creation focus. In looking for a way to test raytracing, Nvidia pointed me to OctaneRender. Otoy has created a utility called OctaneBench for measuring the performance of various hardware configurations with its render engine. It reported that the RTX’s raytracing acceleration was giving me a 3x increase in render performance.

I also tested ProRender in Maxon Cinema 4D, which is not a raytracing renderer but does use GPU acceleration through OpenCL. Apparently, there is a way to use the Arnold ray-tracing engine in Cinema 4D, but I was reaching the limits of my 3D animation expertise and resources, so I didn’t pursue that path, and I didn’t test Maya for the same reason.

With ProRender, I was able to render views of various demo scenes 10 to 20 times faster than I could with a CPU only. I will probably include this as a regular test in future reviews, allowing me to gauge render performance far better than I can with Cinebench (which returned a CPU score of 836). And compiling a list of comparison render times will add more context to raw data. But, for now, I was able to render the demo “Bamboo” scene in 39 seconds and the more complex “Coffee Bean” scene in 188 seconds, beating even the Nvidia marketing team’s expected results.

VR
No test of a top-end GPU would be complete without trying out its VR performance. I connected my Windows-based Lenovo Explorer Mixed Reality headset, installed SteamVR and tested both 360 video editing in Premiere Pro and the true 3D experiences available in Steam. As would be expected, the experience was smooth, making this one of the most portable solutions for full-performance VR.

The RTX 2080 is a great GPU, and I had no issues with it. Outside of true 3D work, the upgrade from the Pascal-based GTX 1080 is minor, but for anyone upgrading from systems older than that, or doing true raytracing or AI processing, you will see a noticeable improvement in performance.

The new Razer Blade is a powerful laptop for its size, and while I did like it, that doesn’t mean I didn’t run into a few issues along the way. Some of those, like the screen resolution, are due to its focus on gaming instead of content creation, but I also had issues with the touchpad. Touchpad issues are common when constantly switching between devices, but in this case, right-clicking instead of left-clicking and failing to register movement while the mouse button was pressed were major headaches. The problems were only alleviated by connecting a mouse and sticking with that, which I frequently do anyway. The power supply has a rather large connector on a cumbersome, thick and stiff cord, but it isn’t going to fall out once you get it inserted. Battery life will vary greatly depending on how much processing power you are using.

These RTX chips are the first mobile GPUs with dedicated RT cores and with Tensor cores, since Volta-based chips never came to laptops. So for anyone with processing needs that are accelerated by those developments, the new RTX chip is obviously worth the upgrade. If you want the fastest thing out there, this is it. (Or at least it was, until Razer added options for 9th Generation Intel processors this week along with a 4K OLED screen, an upgrade I would highly recommend for content creators.) The model I reviewed goes for $3,000. The new 9th Gen version with a 240Hz screen is the same price, while the 4K OLED touch version costs an extra $300.

Summing Up
If you are looking for a more balanced solution or are on a more limited budget, you should definitely compare the new Razer Blade to the new Nvidia GTX 16 line of mobile products that was just announced. Then decide which option is a better fit for your particular needs and budget.

The development of eGPUs has definitely shifted this ideal target for my usage. While this system has a Thunderbolt 3 port, its internal GPU is fast enough that you won’t see significant gains from an eGPU, though that advantage comes at the expense of battery life and price. I am drawn to eGPUs because I only need maximum performance at my desk, but if you need top-end graphics performance totally untethered, the RTX Max-Q chips are the solution for you.


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Review: MZed.com’s Directing Color With Ollie Kenchington

By Brady Betzel

I am constantly looking to educate myself, no matter what the source — or subject. Whether I am learning how to make a transition in Adobe After Effects from an eSports editor on YouTube or watching Warren Eagles teach color correction in Blackmagic’s DaVinci Resolve on FXPHD.com, I’m always beefing up my skills. I even learn from bad tutorials — they teach you what not to do!

But when you come across a truly remarkable learning experience, it is only fair to share it with the rest of the world. Last year I saw an ad for an MZed.com course called “Directing Color With Ollie Kenchington” and was immediately interested. These days you can pretty much find any technical tutorial you can dream of on YouTube, but truly professional, higher-education-like, theory-based series are very hard to come by. Even the ones you have to pay for aren’t always worth their price of admission, which is a huge letdown.

Ollie sharing his wisdom.

Once I gained access to MZed.com I wanted to watch every educational series they had. From lighting techniques with ASC member Shane Hurlbut to the ARRI Amira Camera Primer, there are over 150 hours of education available from industry leaders. However, I found my way to Directing Color…

I am often asked if I think people should go to college or film school. My answer? If you have the money and time, you should go to college followed by film school (or do both together, if the college offers it). Not only will you learn a craft, but you will most likely spend hundreds of hours studying and visualizing the theory behind it. For example, when someone asks me about the science behind camera lenses, I can confidently answer them thanks to my physics class on lenses and optics at California Lutheran University (yes, a shameless plug).

In my opinion, a two-, four- or even 10-year education allows me to live in the grey. I am comfortable arguing both sides of a debate, as well as the options in between — the grey. I feel like my post-high school education really allowed me to recognize and thrive in the nuances of debate. It leaves me playing devil’s advocate maybe a little too much, but also able to have civil and productive discussions with others without being demeaning or nasty — something we are actively missing these days. So if living in the grey is for you, I really think a college education supplemented by online or film-school education is valuable (assuming you decide, as I did, that the debt is worth it).

However, I know that is not an option for everyone, since it can be very expensive — trust me, I know. I am almost done paying off my undergraduate fees and am still paying for my graduate degree, which I am two or three classes away from finishing. That being said, Directing Color With Ollie Kenchington is the only online education series I have seen so far that is on the same level as some of my higher education classes. Not only is the content beautifully shot and color corrected, but Ollie gives confident and accessible lessons on how color can be used to draw the viewer’s attention to different parts of the screen.

Ollie Kenchington is a UK-based filmmaker who runs Korro Films. From the trailer of his Directing Color series, you can immediately see the beauty of Ollie’s work and know that you will be in safe hands. (You can read more about his background here.)

The course raises the online-education bar and will elevate the audience’s idea of professional insight. The first module, “Creating a Palette,” covers the thinking behind creating a color palette for a small catering company. You may even want to start with the final bonus module, “Ox & Origin,” to get a look at what Ollie will be creating throughout the seven modules and roughly an hour and a half of content.

While Ollie goes over “looks,” the beauty of this course is that he walks through his internal thought process, including deciding on palettes based on color theory. He didn’t just choose teal and orange because they look good; he chose his color palette based on complementary colors.

Throughout the course, Ollie covers some technical essentials, including calibrating monitors and cameras, white balancing and shooting color charts to avoid incorrect color balance in post. This is so important, because if you skip these simple steps, your color correction session will be much harder. And wasting time fixing incorrect color balance takes time away from the fun of color grading. All of this is delivered in easily digestible modules that range from two to 20 minutes.

The modules include Creating a Palette; Perceiving Color; Calibrating Color; Color Management; Deconstructing Color 1 – 3 and the Bonus Module Ox & Origin.

Without giving away the entire content of Ollie’s catalog, my favorite modules in this course are the on-set ones. Maybe it’s because I am not on set that often, but I found the “thinking out loud” about colors helpful. Knowing why reds represent blood, and raise your heart rate a little, is fascinating. He even goes through practical examples of color use in films such as Whiplash.

In the final “Deconstructing Color” modules, Ollie goes into a color bay (complete with practical candle backlighting) and dives into Blackmagic’s DaVinci Resolve. He takes the course full circle: because he took the time to set up proper lighting on set for a scene he had to rush through, he can now go into Resolve, add lighting to different sides of someone’s face and focus on other parts of his commercial.

Summing Up
I want to watch every tutorial MZed.com has to offer, from “Philip Bloom’s Cinematic Masterclass” to Ollie’s other course, “Mastering Color.” Unfortunately, as of this review, you have to pay an additional fee to watch the “Mastering Color” series. It seems like an unfortunate trend in online education to charge a subscription and then charge more when an extra-special class comes up, but this class will supposedly be released to standard subscribers in due time.

MZed.com has two subscription models: MZed Pro, which is $299 for one year of streaming the standard courses, and MZed Pro Premium, which is $399 for the standard courses for one year plus the ability to choose one “Premium” course.

“Philip Bloom’s Cinematic Masterclass” was the Premium course I was signed up for initially, but you can decide between that one and the “Mastering Color” course. You will not be disappointed regardless of which one you choose. Even their first course, “How to Photograph Everyone,” is chock-full of lighting and positioning instruction that can be applied to many aspects of videography.

I really was impressed with Directing Color With Ollie Kenchington, and if the other courses are this good, MZed.com will definitely become a permanent bookmark for me.



Review: Boris FX’s Continuum and Mocha Pro 2019

By Brady Betzel

I realize I might sound like a broken record, but if you are looking for the best plugin to help with object removals or masking, you should seriously consider the Mocha Pro plugin. And if you work inside of Avid Media Composer, you should also seriously consider Boris Continuum and/or Sapphire, which can use the power of Mocha.

As an online editor, I consistently use Continuum along with Mocha for tight blur and mask tracking. If you use After Effects, there is even a whittled-down version of Mocha built in for free. For those pros who don’t want to deal with Mocha inside of an app, it also comes as a standalone software solution where you can copy and paste tracking data between apps or even export the masks, object removals or insertions as self-contained files.

The latest releases of Continuum and Mocha Pro 2019 continue the evolution of Boris FX’s role in post production image restoration, keying and general VFX plugins, at least inside of NLEs like Media Composer and Adobe Premiere.

Mocha Pro

As an online editor, I am always calling on Continuum for its great Chroma Key Studio, Flicker Fixer and blurring. Because Mocha is built into Continuum, I am able to quickly track (backwards and forwards) difficult shapes and even erase shapes in ways the built-in Media Composer tools simply can’t. But if you are lucky enough to own Mocha Pro, you also get access to some amazing tools that go beyond planar tracking — such as automated object removal, object insertion, stabilizing and much more.

Boris FX’s latest updates to Continuum and Mocha Pro go even further than what I’ve already mentioned and have resulted in a new version-naming scheme; this round we are at 2019 (think of it as Version 12). They have also created the new Application Manager, which makes it a little easier to find the latest downloads. You can find them here. This really helps when you jump between machines and need to quickly activate and deactivate licenses.

Boris Continuum 2019
I often get offline edits with effects from a variety of plugins — lens flares, random edits, light flashes, whip transitions and many more — so I need Continuum to be compatible with offline clients’ work. I also need it for image repair and compositing.

In this latest version of Continuum, Boris FX has not only kept plugins like Primatte Studio, it has also brought back Particle Illusion and updated Mocha and Title Studio. Overall, Continuum and Mocha Pro 2019 feel a lot snappier when applying and rendering effects, probably because of the overall GPU-acceleration improvements.

Particle Illusion has been brought back from the brink of death in Continuum 2019 as a 64-bit, keyframe-able particle emitter system that can even be tracked and masked with Mocha. This revamp of Particle Illusion includes an updated interface, realtime GPU-based particle generation, an expanded and improved emitter library (complete with motion-blur-enabled particle systems) and even a standalone app that can design systems to be used in the host app — though you cannot render systems inside the standalone app itself.

While Particle Illusion is part of the entire Continuum toolset that works with OFX apps like Blackmagic’s DaVinci Resolve, Media Composer, After Effects and Premiere, it seems to work best in applications like After Effects, which can handle composites simply and naturally. Inside the Particle Illusion interface you can find all of the pre-built emitters. If you only have a handful, make sure you download the additional emitters, which you can find in the Boris FX App Manager.

Particle Illusion: Before and After

I had a hard time seeing my footage in a Media Composer timeline inside of Particle Illusion, but I could still pick my emitter, change specs like life and opacity, exit out and apply to my footage. I used Mocha to track some fire from Particle Illusion to a dumpster I had filmed. Once I dialed in the emitter, I launched Mocha and tracked the dumpster.

The first time I went into Mocha, I didn’t see the preset tracks for the emitter or the world in which the emitter lives. The second time I launched Mocha, I saw the track points. From there, you can track where you want your emitter to be placed. Once you are happy with your track, jump back to your timeline, where it should be reflected. In Media Composer, I noticed that I had to go into the Mocha options and change the setting from Mocha Shape to no shape; essentially, the Mocha shape acts like a matte and cuts off anything outside of it.

If you are inside of After Effects, most parameters can now be keyframed and parented (aka pick-whipped) natively in the timeline. The Particle Illusion plugin is a quick, easy and good-looking tool to add sparks, Milky Way-like star trails or even fireworks to any scene. Check out @SurfacedStudio’s tutorial on Particle Illusion to get a good sense of how it works in Adobe Premiere Pro.

Continuum Title Studio
When inside of Media Composer (prior to the latest release 2018.12), there were very few ways to create titles that were higher resolution than HD (1920×1080) — the New Blue Titler was the only other option if you wanted to stay within Media Composer.

Title Studio within Media Composer

At first, the Continuum Title Studio interface appeared to be a mildly updated Boris Red interface — and I am allergic to the Boris Red interface. Some of the icons for keyframing, and the way properties are adjusted, look similar and threw me off. I tried really hard to jump into Title Studio and love it, but I never got comfortable with it.

On the flip side, there are hundreds of presets that can help build quick titles, and they render a lot faster than New Blue Titler did. In some of the presets I noticed the text was placed outside of 16×9 title safety, which is odd, since that is a long-standing rule in television. In the authors’ defense, the titles are within action safety, but still.

If you need a quick way to make 4K titles, Title Studio might be what you want. The updated Title Studio includes realtime playback using the GPU instead of the CPU, new materials, new shaders and external monitoring support using Blackmagic hardware (AJA support will be coming at some point). There are some great presets, including pre-built slates, lower thirds, kinetic text and even progress bars.

If you don’t have Mocha Pro, Continuum can still access and use Mocha to track shapes and masks. Almost every plugin can access Mocha and can track objects quickly and easily.

That brings me to the newly updated Mocha, which has some extremely helpful new features, including a Magnetic Spline tool, pre-built geometric shapes and more.

Mocha Pro 2019
If you loved the previous version of Mocha, you are really going to love Mocha Pro 2019. Not only do you get the Magnetic Spline tool, pre-built geometric shapes, the Essentials interface and high-resolution display support, but Boris FX has rewritten the Remove Module code to use the GPU, which increases render speeds about four to five times. In addition, there is no longer a separate Mocha VR software suite; all of the VR tools are included inside Mocha Pro 2019.

If you are unfamiliar with Mocha, then I have a treat for you. Mocha is a standalone planar-tracking app as well as a native plugin that works with Media Composer, Premiere and After Effects, or through OFX in Blackmagic’s Fusion, Foundry’s Nuke, Vegas Pro and HitFilm.

Mocha tracking

In addition (and unofficially) it will work with Blackmagic DaVinci Resolve by way of importing the Mocha masks through Fusion. While I prefer to use After Effects for my work, importing Mocha masks is relatively painless. You can watch colorist Dan Harvey run through the process of importing Mocha masks to Resolve through Fusion, here.

But really, Mocha is a planar tracker, which means it tracks multiple points in a defined area and works best on flat surfaces, or at least segmented surfaces — like the side of a face, ear, nose, mouth and forehead tracked separately instead of all at once. From blurs to mattes, Mocha sticks to objects like glue and can be a great asset for an online editor or colorist.
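Under the hood, a planar tracker fits a 3×3 homography that maps the tracked plane from one frame to the next; masks and inserts are then warped with that same transform. A minimal NumPy sketch of applying a homography to mask points (illustrative only — this is not Mocha’s actual code):

```python
import numpy as np

def warp_points(H: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Apply a 3x3 homography H to an Nx2 array of (x, y) points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    out = pts_h @ H.T                                 # projective transform
    return out[:, :2] / out[:, 2:3]                   # back to Cartesian

# A pure translation by (5, 3) pixels -- the simplest homography:
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
corners = np.array([[0.0, 0.0], [10.0, 10.0]])
print(warp_points(H, corners))  # each corner shifts by (5, 3)
```

The same idea extends to scale, rotation, skew and perspective, which is why those are exactly the motion parameters Mocha exposes.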

If you have read any of my plugin reviews you probably are sick of me spouting off about Mocha, saying how it is probably the best plugin ever made. But really, it is amazing — especially when incorporated with plugins like Continuum and Sapphire. Also, thanks to the latest Media Composer with Symphony option you can incorporate the new Color Correction shapes with Mocha Pro to increase the effectiveness of your secondary color corrections.

Mocha Pro Remove module

So how fast is Mocha Pro 2019’s Remove Module these days? Well, it used to be a very slow process, taking lots of time to calculate an object’s removal. With the latest Mocha Pro 2019 release, including improved GPU support, render times have been cut down tremendously. Conservatively, I would say it is three to four times faster. In Mocha Pro 2019, removal jobs that take under 30 seconds would have taken four to five minutes in previous versions. It’s quite a big improvement in render times.

There are a few changes in the new Mocha Pro, including interface changes and some amazing tool additions. There is a new drop-down tab that offers different workflow views once you are inside of Mocha: Essentials, Classic, Big Picture and Roto. I really wish the Essentials view was out when I first started using Mocha, because it gives you the basic tools you need to get a roto job done and nothing more.

For instance, just giving access to the track motion objects (Translation, Scale, Rotate, Skew and Perspective) with big shiny buttons helps to eliminate my need to watch YouTube videos on how to navigate the Mocha interface. However, if, like me, you are more than just a beginner, the Classic interface is still available (it’s literally the old interface) and is the one I reach for most often. Big Picture hides the tools and gives you the most screen real estate for your roto work. My favorite after Classic is Roto, which shows just the project window and the classic top toolbar. It’s the best of both worlds.

Mocha Pro 2019 Essentials Interface

Beyond the interface changes are some additional tools that will speed up any roto work. I imagine the most requested feature Boris FX gets for Mocha is the addition of basic shapes, such as rectangles and circles; it has been one of the longest-running user requests. In my work, I am often drawing rectangles around license plates or circles around faces with X-splines, so why not eliminate a few clicks and have that done already? Answering my need, Mocha now has elliptical and rectangular shapes ready to go in both X-splines and B-splines with one click.

I use Continuum and Mocha hand in hand. Inside of Media Composer I will use tools like Gaussian Blur or Remover, which typically need tracking and roto shapes created. Once I apply the Continuum effect, I launch Mocha from the Effect Editor and bam, I am inside Mocha. From here I track the objects I want to affect, as well as any objects I don’t want to affect (think of it like an erase track).

Summing Up
I can save tons of time and dramatically improve the effectiveness of my work when using Continuum 2019 and Mocha Pro 2019 together. It’s amazing how much more intuitive Mocha is to track with than the built-in Media Composer and Symphony trackers.

In the end, I can’t say enough great things about Continuum and especially Mocha Pro. Mocha saves me tons of time in my VFX and image restoration work. From removing camera people behind the main cast in the wilderness to blurring faces and license plates, using Mocha in tandem with Continuum is a match made in post production heaven.

Rendering in Continuum and Mocha Pro 2019 is a lot faster than in previous versions, really giving me a leg up on efficiency. Time is money, right? On top of that, using Mocha Pro’s Remove Module takes my image restoration work to the next level, separating me from other online editors who use standard paint and tracking tools.

In Continuum, Primatte Studio gives me a leg up on greenscreen keys with its exceptional ability to auto-analyze a scene and perform 80% of the keying work before I dial in the details. Whenever anyone asks me what tools I couldn’t live without, I without a doubt always say Mocha.
If you want a real Mocha Pro education you need to watch all of Mary Poplin’s tutorials. You can find them on YouTube. Check out this one on how to track and replace a logo using Mocha Pro 2019 in Adobe After Effects. You can also find great videos at Borisfx.com.

Mocha point parameter tracking

I always feel like there are tons of tools inside of the Mocha Pro toolset that go unused simply because I don’t know about them. One I recently learned about in a Surfaced Studio tutorial was the Quick Stabilize function. It essentially stabilizes the video around the object you are tracking, allowing you to more easily rotoscope the object while it sits still instead of moving all over the screen. It’s an amazing feature that I just didn’t know about.

As I was finishing up this review I saw that Boris FX came out with a training series, which I will be checking out. One thing I always wanted was a top-down set of tutorials like the ones on Mocha’s YouTube page but organized and sent along with practical footage to practice with.

You can check out Curious Turtle’s “More Than The Essentials: Mocha in After Effects” on their website where I found more Mocha training. There is even a great search parameter called Getting Started on BorisFX.com. Definitely check them out. You can never learn enough Mocha!


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Review: G-Tech’s G-Speed Shuttle using a Windows PC

By Barry Goch

When I was asked to review the G-Technology G-Speed Shuttle SSD drive, I was very excited. I’ve always had great experiences with G-Tech and was eager to try out this product with my MSI 17.3-inch GT73VR Titan PC laptop… and this is where the story gets interesting.

I’ve been a Mac fan for years. I’ve owned Macs going back to the Mac Classic in the ‘90s. But a couple of years ago I reached a tipping point. My 17-inch MacBook Pro didn’t have the horsepower to support VR video, and I was looking to upgrade to a new Mac. But when I started looking deeper and comparing specifications and performance, specifically the ability to harness industry-leading GPUs for Adobe Premiere’s VR capabilities, I bought the MSI Titan VR instead because it shipped with the Nvidia GTX 1070 graphics card.

The laptop is a beast and has all the power and portability I needed but couldn’t find in a Mac laptop at the time. I wanted to give you my Mac-to-PC background before we jump in, because to be clear: The G-Speed Shuttle SSD will provide the best performance when used with Thunderbolt 3 Macs. That doesn’t mean it won’t be great on a PC; it just won’t be as good as when used on a Mac.

G-Tech makes the PC configuration software easy to find on their website… and easy to use. I did find, though, that I could only configure the drive as NTFS with RAID-5 on the PC. But I was also able to speed test the G-Speed Shuttle SSD as a Mac-formatted drive on the PC using MacDrive, which enables Mac drive formatting and mounting on Windows.

We actually reached out to G-Tech, which is a Western Digital brand, about the Mac vs. PC equation. This is what Matthew Bennion, director of product line management at G-Technology said: “Western Digital is committed to providing high-speed, reliable storage solutions to both PC and Mac power users. G Utilities, formatted for Windows computers, is constantly being added to more of our products, including most recently our G-Speed Shuttle products. The addition of G Utilities makes our full portfolio Windows-friendly.”

Digging In
The packaging of the G-Speed Shuttle SSD is very clean and well laid out. There is a parts box that has the Thunderbolt cable, power cable and instructions. Underneath the perfectly formed plastic box insert, wrapped in a plastic bag, was the drive itself. The drive has a lightweight polycarbonate chassis. I was surprised how light it was when I pulled it out of the box.

There are four drive bays, each with an SSD drive. The first things I noticed were the drive’s weight and sound: it’s very lightweight for so much storage, and it’s very quiet with no spinning disks. SSDs run quieter and cooler and use less power than traditional spinning disks. I think this would be a perfect companion for a DIT looking for a fast, lightweight and low-power-consumption RAID for doing dailies.

I used the drive with Red RAW files inside of Resolve and RedCine-X. I set up a transcode project to make Avid offline files that the G-Speed Shuttle SSD handled muscularly. I left the laptop running overnight working on the files on more than one occasion and didn’t have any issues with the drive at all.

The main shortcoming of using the G-Speed Shuttle with a PC is the inability to create Apple ProRes QuickTime files. I’ve become accustomed to working with ProRes files created with my Blackmagic Ursa Mini camera, and PCs read those files fine. If you’re delivering to YouTube or Vimeo, it’s not a big deal, but it is a bit of an obstacle if you need to deliver ProRes. For this review, I worked around this by rendering out a DPX sequence to the Mac-formatted G-Speed Shuttle SSD drive in Resolve (I also used Premiere) and made ProRes files using Autodesk Flame on my venerable 17-inch MacBook Pro. Flame is the clear winner in quality of file delivery. So, yes, not being able to write ProRes is a pain, but there are ways around it. And, again, if you’re delivering just for the Web, it’s no big deal.

The Speed
My main finding involves the speed of the drive on a PC. In its marketing material for the drive, G-Tech advertises a speed of 2880MB/sec with Thunderbolt 3. Using the AJA speed test, I was able to get 1590MB/sec, a speed more comparable with Thunderbolt 2. Perhaps it had something to do with the fact that, using the G-Tech PC drive configuration program, I could only set up the drive as RAID-5 and not the faster RAID-0 or RAID-1. I also ran speed tests on the Mac-formatted G-Speed Shuttle SSD and found similar speeds. I am certain that if I had a newer Thunderbolt 3 Mac, I would have gotten speeds closer to the advertised Mac speed specifications.
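For context, here is a quick back-of-envelope comparison of the measured figure against the advertised one (a rough sketch; real-world throughput also depends on RAID level, file size and interface overhead):

```python
# Compare measured AJA System Test throughput against G-Tech's advertised spec.
advertised_mb_s = 2880  # G-Tech's Thunderbolt 3 marketing figure
measured_mb_s = 1590    # AJA speed test result on the PC (NTFS, RAID-5)

percent_of_spec = measured_mb_s / advertised_mb_s * 100
print(round(percent_of_spec, 1))  # ~55.2% of the advertised speed
```

Getting roughly half the advertised rate is consistent with RAID-5's parity overhead on writes, which is part of why the lack of a RAID-0 option on the PC matters.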

Summing Up
Overall, I really liked the G-Speed Shuttle SSD. It looks cool on the desk, it’s lightweight and very quiet. I wish I didn’t have to give it back!

And the cost? It’s 16TB for $7499.95, and 8TB for $4999.95.


Barry Goch is a Finishing Artist at The Foundation and a Post Production Instructor at UCLA Extension. You can follow him on Twitter at @gochya.

Review: HP DreamColor Z31x studio display for cinema 4K

By Mike McCarthy

Not long ago, HP sent me their newest high-end monitor to review, and I was eager to dig in. The DreamColor Z31x studio display is a 31-inch true 4K color-critical reference monitor. It has many new features that set it apart from its predecessors, which I have examined and will present here in as much depth as I can.

It is challenging to communicate the nuances of color quality through writing or any other form on the Internet, as some things can only be truly appreciated firsthand. But I will attempt to communicate the experience of using the new DreamColor as best I can.

First, we will start with a little context…

Some DreamColor History
HP revolutionized the world of color-critical displays with the release of the first DreamColor in June 2008. The LP2480zx was a 24-inch 1920×1200 display that had built-in color processing with profiles for standard color spaces and the ability to calibrate it to refine those profiles as the monitor aged. It was not the first display with any of these capabilities, but the first one that was affordable, by at least an order of magnitude.

It became very popular in the film industry, both sitting on desks in post facilities, as it was designed, and out in the field as a live camera monitor, which it was not designed for. It had a true 10-bit IPS panel and the ability to reproduce incredible detail in the darks. It could only display 10-bit sources from the brand-new DisplayPort input or the HDMI port, and the color gamut remapping only worked for non-interlaced RGB sources.

So many people using the DreamColor as a “video monitor” instead of a “computer monitor” weren’t even using the color engine — they were just taking advantage of the high-quality panel. It wasn’t just the color engine but the whole package, including the price, that led to its overwhelming success. This was helped by the lack of better options, even at much higher price points, since this was the period after CRT production ended but before OLED panels had reached the market. This was similar to (and in the same timeframe as) Canon’s 5D Mark II revolutionizing the world of independent filmmaking with HDSLRs. The combination gave content creators amazing tools for moving into HD production at affordable price points.

It took six years for HP to release an update to the original model DreamColor in the form of the Z27x and Z24x. These had the same color engine but different panel technology. They never had the same impact on the industry as the original, because the panels didn’t “wow” people, and the competition was starting to catch up. Dell has PremierColor and Samsung and BenQ have models featuring color accuracy as well. The Z27x could display 4K sources by scaling them to its native 2560×1440 resolution, while the Z24x’s resolution was decreased to 1920×1080 with a panel that was even less impressive.

Fast-forward a few more years: the Z24x was updated to Gen 2, and the Z32x was released with UHD resolution. This was four times the resolution of the original DreamColor at half the price. But with lots of competition in the market, I don’t think it has had the reach of the original DreamColor, and the industry has matured to the point where people aren’t hooking these to 4K cameras because there are other options better suited to that environment, specifically battery-powered OLED units.

DreamColor at 4K
Fast forward a bit and HP has released the Z31x DreamColor studio display. The big feature that this unit brings to the table is true cinema 4K resolution. The label 4K gets thrown around a lot these days, but most “4K” products are actually UHD resolution, at 3840×2160, instead of the full 4096×2160. This means that true 4K content is scaled to fit the UHD screen, or in the case of Sony TVs, cropped off the sides. When doing color critical work, you need to be able to see every pixel, with no scaling, which could hide issues. So the Z31x’s 4096×2160 native resolution will be an important feature for anyone working on modern feature films, from editing and VFX to grading and QC.
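The difference between the two resolutions is easy to quantify. A quick sketch of the pixel math:

```python
# True cinema 4K (DCI) vs. consumer "4K" (UHD) pixel counts.
dci_4k = 4096 * 2160  # 8,847,360 pixels
uhd = 3840 * 2160     # 8,294,400 pixels

extra_pixels = dci_4k - uhd
print(extra_pixels)           # 552960: 256 extra columns of 2160 pixels each
print(round(dci_4k / uhd, 3))  # 1.067: DCI 4K is about 6.7% wider
```

In other words, showing a DCI 4K frame on a UHD panel means either scaling everything down by 6.25% (256/4096) or cropping 128 columns off each side, which is exactly what you want to avoid in color-critical QC.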

The 10-bit 4K Panel
The true 10-bit IPS panel is the cornerstone of what makes a DreamColor such a good monitor. IPS monitor prices have fallen dramatically since they were first introduced over a decade ago, and some of that is the natural progression of technology, but some of that has come at the expense of quality. Most displays offering 10-bit color are accomplishing that by flickering the pixels of an 8-bit panel in an attempt to fill in the remaining gradations with a technique called frame rate control (FRC). And cheaper panels are as low as 6-bit color with FRC to make them close to 8-bit. There are a variety of other ways to reduce cost with cheaper materials, and lower-quality backlights.

HP claims that the underlying architecture of this panel returns to the quality of the original IPS panel designs, but adds the technological advances developed since then, without cutting any corners in the process. In order to fully take advantage of the 10-bit panel, you need to feed it 10-bit source content, which is easier than it used to be but not a foregone conclusion. Make sure you select 10-bit output color in your GPU settings.

In addition to true 10-bit color, the display natively refreshes at the rate of the source image, from 48Hz to 60Hz, because displaying every frame at the right time is as important as displaying it in the right color. HP says that the darker blacks are achieved by better crystal alignment in the LCD (liquid crystal display) blocking out the backlight more fully. This also gives a wider viewing angle, since washing out the blacks is usually the main issue with off-axis viewing. I can move about 45 degrees off center, vertically or horizontally, without seeing any shift in the picture brightness or color. Past that I start to see the mid levels getting darker.

Speaking of brighter and darker, the backlight gives the display a native brightness of 250 nits. That is over twice the brightness needed to display SDR content, but this is not an HDR display. It can be adjusted anywhere from 48 to 250 nits, depending on the usage requirements and environment. It is not designed to be the brightest display available; it is aiming to be the most accurate.

Much effort was put into the front surface to get the proper balance of reducing glare and reflections as much as possible. I can’t independently verify some of HP’s other claims without a microscope and more knowledge than I currently have, but I can easily see that the matte surface of the display produces fewer reflections and less glare than other monitors, allowing you to better see the image on the screen. That is one of the most apparent strengths of the monitor, obviously visible at first glance.

Color Calibration
The other new headline feature is an integrated colorimeter for display calibration and verification, located in the top of the bezel. It can swing down and measure the color parameters of the true 10-bit IPS panel, to adjust the color space profiles, allowing the monitor to more accurately reproduce colors. This is a fully automatic feature, independent of any software or configuration on the host computer system. It can be controlled from the display’s menu interface, and the settings will persist between multiple systems. This can be used to create new color profiles, or optimize the included ones for DCI P3, BT.709, BT.2020, sRGB and Adobe RGB. It also includes some low-blue-light modes for use as an interface monitor, but this negates its color accurate functionality. It can also input and output color profiles and all other configuration settings through USB and its network connection.

The integrated color processor also supports using external colorimeters and spectroradiometers to calibrate the display, and even allows the integrated XYZ colorimeter itself to be calibrated by those external devices. And this is all accomplished internally in the display, independent of using any software on the workstation side. The supported external devices currently include:
– Klein Instruments: K10, K10-A (colorimeters)
– Photo Research: PR-655, PR-670, PR-680, PR-730, PR-740, PR-788 (spectroradiometers)
– Konica Minolta: CA-310 (colorimeter)
– X-Rite: i1Pro 2 (spectrophotometer), i1Display (colorimeter)
– Colorimetry Research: CR-250 (spectroradiometer)

Inputs and Ports
There are five main display inputs on the monitor: two DisplayPort 1.2, two HDMI 2.0 and one DisplayPort over USB-C. All support HDCP and full 4K resolution at up to 60 frames per second. It also has a 1/8-inch sound jack and a variety of USB options. There are four USB 3.0 ports that are shared via KVM switching technology between the USB-C host connection and a separate USB-B port to a host system. These are controlled by another dedicated USB keyboard port, giving the monitor direct access to the keystrokes. There are two more USB ports that connect to the integrated DreamColor hardware engine, for connecting external calibration instruments and for loading settings from USB devices.

My only complaint is that while the many USB ports are well labeled, the video ports are not. I can tell which ones are HDMI without the existing labels, but what I really need is to know which one the display views as HDMI1 and which is HDMI2. The Video Input Menu doesn’t tell you which inputs are active, which is another oversight, given all of the other features they added to ease the process of sharing the display between multiple inputs. So I recommend labeling them yourself.

Full-Screen Monitoring Features
I expect the Z31x will most frequently be used as a dedicated full-resolution playback monitor, and HP has developed a bunch of new features that are very useful for that use case. The Z31x can overlay mattes (with variable opacity) for Flat and Scope cinema aspect ratios (1.85 and 2.39). It can also display onscreen markers for those sizes, as well as 16×9 or 4×3, including action and title safe, with further options for center and thirds markers in various colors. The markers can be further customized with HP’s StudioCal.XML files. I created a preset that gives you 2.76:1 aspect ratio markers that you are welcome to download and use or modify. These customized XMLs are easy to create and are loaded automatically when you insert a USB stick containing them into the color engine port.

The display also gives users full control over the picture scaling, and has a unique 2:1 pixel scaling for reviewing 2K and HD images at pixel-for-pixel accuracy. It also offers compensation for video levels and overscan and controls for de-interlacing, cadence detection, panel overdrive and blue-channel-only output. You can even control the function of each bezel button, and their color and brightness. These image control features will definitely be significant to professional users in the film and video space. Combined with the accurate reproduction of color, resolution and frame rate, this makes for an ideal display for monitoring nearly any film or video content at the highest level of precision.

Interface Display Features
Most people won’t be using this as an interface monitor, due to the price and because the existing Z32x should suffice when not dealing with film content at full resolution. Even more than the original DreamColor, I expect it will primarily be used as a dedicated full-screen playback monitor and users will have other displays for their user interface and controls. That said, HP has included some amazing interface and sharing functionality in the monitor, integrating a KVM switch for controlling two systems on any of the five available inputs. They also have picture-in-picture and split screen modes that are both usable and useful. HD or 2K input can be displayed at full resolution over any corner of the 4K master shot.

The split view supports two full-resolution 2048×2160 inputs side by side and from separate sources. That resolution has been added as a default preset for the OS to use in that mode, but it is probably only worth configuring for extended use. (You won’t be flipping between full screen and split very easily in that mode.) The integrated KVM is even more useful in these configurations. It can also scale any other input sizes in either mode but at a decrease in visual fidelity.

HP has included every option that I could imagine needing for sharing a display between two systems. The only problem is that I need that functionality on my “other” monitor for the application UI, not on my color critical review monitor. When sharing a monitor like this, I would just want to be able to switch between inputs easily to always view them at full screen and full resolution. On a related note, I would recommend using DisplayPort over HDMI anytime you have a choice between the two, as HDMI 2.0 is pickier about 18Gb cables, occasionally preventing you from sending RGB input and other potential issues.

Other Functionality
The monitor has an RJ-45 port allowing it to be configured over the network. Normally, I would consider this to be overkill but with so many features to control and so many sub-menus to navigate through, this is actually more useful than it would be on any other display. I found myself wishing it came with a remote control as I was doing my various tests, until I realized the network configuration options would offer even better functionality than a remote control would have. I should have configured that feature first, as it would have made the rest of the tests much easier to execute. It offers simple HTTP access to the controls, with a variety of security options.

I also had some issues when using the monitor on a switched power outlet on my SmartUPS battery backup system, so I would recommend using an un-switched outlet whenever possible. The display will go to sleep automatically when the source feed is shut off, so power saving should be less of an issue than with other peripherals.

Pricing and Options
The DreamColor Z31x is expected to retail for $4,000 in the US market. If that is a bit out of your price range, the other option is the new Z27x G2 for half of that price. While I have not tested it myself, I have been assured that the newly updated 27-inch model has all of the same processing functionality, just in a smaller form-factor, with a lower-resolution panel. The 2560×1440 panel is still 10-bit, with all of the same color and frame rate options, just at a lower resolution. They even plan to support scaling 4K inputs in the next firmware update, similar to the original Z27x.

The new DreamColor studio displays are top-quality monitors, and probably the most accurate SDR monitors in their price range. It is worth noting that with a native brightness of 250 nits, this is not an HDR display. While HDR is an important consideration when selecting a forward-looking display solution, there is still a need for accurate monitoring in SDR, regardless of whether your content is HDR compatible. And the Z31x would be my first choice for monitoring full 4K images in SDR, regardless of the color space you are working in.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Review: HP’s ZBook Studio G4 mobile workstation

By Brady Betzel

It seems like each year around this time, I offer my thoughts on an HP mobile workstation and how it serves multimedia professionals. This time I am putting the HP ZBook Studio G4 through its paces. The ZBook Studio line of HP’s mobile workstations strikes a balance between mobility, durability and power. The ZBook 14u and 15u are the budget-series mobile workstations that run Intel i5/i7 processors with AMD FirePro graphics and top out at around $1,600. The ZBook 15 and 17 are the more powerful mobile workstations in the line, with the added ability to include Intel Xeon processors, ECC memory, higher-end Nvidia Quadro graphics cards and more. But in this review we will look at the ZBook Studio G4, which jams the best of all models into a light and polished package.

The HP ZBook Studio G4 I was sent to test out had the following components:
– Windows 10 64 bit
– Intel Xeon 1535M (7th gen) quad-core processor – 3.10GHz with 4.2 Turbo Boost
– 4K UHD DreamColor/15.6-inch IPS screen
– 32GB ECC (2x16GB)
– Nvidia Quadro M1200 (4GB)
– 512GB HP Z Turbo Drive PCIe (MLC)
– 92Whr fast charging battery
– Intel vPro WLAN
– Backlit keyboard
– Fingerprint reader

According to the info I was sent directly from HP, the retail price is $3,510 on hp.com (US webstore). I built a very similar workstation on http://store.hp.com and was able to get the price to $3,301.65 before shipping and taxes, and $3,541.02 with taxes and free shipping. So actually pretty close.
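As a sanity check on those two configurator numbers (a rough sketch; the implied rate will vary by state):

```python
# Work out the sales-tax rate implied by the configured prices quoted above.
pre_tax = 3301.65   # price before shipping and taxes
with_tax = 3541.02  # price with taxes and free shipping

implied_rate = with_tax / pre_tax - 1
print(round(implied_rate * 100, 2))  # 7.25: consistent with a 7.25% sales tax
```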

So, besides the natural processor, memory and hard drive upgrades from previous generations, the ZBook Studio G4 has a few interesting updates, including the higher-wattage batteries with fast charge and the HP Sure Start Gen3 technology. The new fast charge is similar to the feature that some products like the GoPro Hero 5/6 cameras and Samsung Galaxy phones have, where they charge quicker than “normal.” The ZBook Studio, as well as the rest of the ZBook line, will charge 50% of your battery in around 30 minutes when in standby mode. Even when using the computer, I was able to charge the first 50% in around 30 minutes, a feature I love. After the initial 50% charge is complete, the charging will be at a normal rate, which wasn’t half bad and only took a few hours to get it to about 100%.

The battery I was sent was the larger of the two options and provided me with an eight-hour day with decent usage. When pushed using an app like Resolve I would say it lasted more like four hours. Nonetheless it lasted a while and I was happy with the result. Keep in mind the batteries are not removable, but they do have a three-year warranty, just like the rest of the mobile workstation.
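Those charge and runtime claims imply some rough power figures. A back-of-envelope sketch, assuming the 92Whr pack charges and discharges at roughly constant power (a simplification real batteries don't follow exactly):

```python
# Rough power math implied by the 92 Whr battery claims above.
capacity_wh = 92

# Fast charge: 50% of the pack in about 30 minutes.
charge_power_w = (capacity_wh * 0.5) / 0.5  # energy moved / time taken
print(charge_power_w)  # 92.0 W average going into the pack

# The observed runtimes imply these average system draws:
light_use_w = capacity_wh / 8  # an eight-hour day of decent usage
heavy_use_w = capacity_wh / 4  # about four hours under an app like Resolve
print(round(light_use_w, 1), heavy_use_w)  # 11.5 23.0
```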

When HP first told me about its Sure Start Gen3, I thought maybe it was just a marketing gimmick, but then I experienced it for myself, and it’s amazing. Essentially, it is a hardware function, available only with 7th-generation Intel processors, that allows the BIOS to repair itself upon detecting malware or corruption. While using the ZBook Studio G4, I was installing some software and had a hard crash (blue screen). I noticed on restart that the BIOS was running through the Sure Start protocol, and within minutes I was back up and running. It was reassuring and would really set my mind at ease if I were deciding between a workstation-level solution and a retail-store computer.

You might be asking yourself why you should buy an enterprise-level mobile workstation when you could buy a cheaper, almost-as-powerful laptop at Best Buy or on Amazon. Technically, what really sets workstation components apart is their ability to run 24/7, 365 days a year without downtime. This is helped by Intel Xeon processors that support ECC (Error Correcting Code) memory, which detects and corrects flipped bits on the fly instead of letting them corrupt data as they can with non-ECC memory. Or, for laymen like me: ECC memory prevents crashing by fixing errors itself before we see any repercussions.
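To illustrate the idea behind ECC (this is not HP's actual implementation; real ECC DIMMs use wider SECDED codes in hardware over 64-bit words), here is a minimal Hamming(7,4) sketch that encodes four data bits with three parity bits, then detects and corrects a single flipped bit:

```python
# Hamming(7,4): 4 data bits + 3 parity bits; corrects any single-bit flip.
# Illustrative only; real ECC memory does this in hardware on wider words.

def encode(d1, d2, d3, d4):
    """Return a 7-bit codeword with parity bits at positions 1, 2 and 4."""
    p1 = d1 ^ d2 ^ d4  # covers codeword positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # covers codeword positions 3, 6, 7
    p4 = d2 ^ d3 ^ d4  # covers codeword positions 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def decode(c):
    """Correct at most one flipped bit, then return the 4 data bits."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # re-check positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # re-check positions 2, 3, 6, 7
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]  # re-check positions 4, 5, 6, 7
    error_pos = s1 + 2 * s2 + 4 * s4  # 0 means no error detected
    if error_pos:
        c[error_pos - 1] ^= 1  # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]

codeword = encode(1, 0, 1, 1)
codeword[4] ^= 1            # simulate a cosmic-ray bit flip in memory
print(decode(codeword))     # [1, 0, 1, 1]: the original data survives
```

The key point is that the correction happens transparently, before the data reaches the application, which is exactly the "fixing errors before we see any repercussions" behavior described above.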

Another workstation-level benefit is the environmental testing that HP runs the ZBooks through to certify its equipment as military grade, also known as MIL-STD-810G testing. Essentially, they run multiple extreme-condition tests, such as high and low temperatures, salt, fog and even high-vibration testing like gunfire. Check out a more in-depth description on Wikipedia. Finally, HP prides itself on its ISV (Independent Software Vendor) certifications. ISV certification means that HP spends a lot of time working with software vendors like Adobe, Avid, Autodesk and others to ensure compatibility between their products and HP’s hardware so you don’t have to. They even regularly release certified drivers to help ensure compatibility.

In terms of warranty, HP gives you a three-year limited warranty. This includes on-site service within the Americas, and, as mentioned earlier, it covers the battery, which is a nice bonus. Much like other warranties, it covers problems arising from faulty manufacturing, but not intentional or accidental damage. Luckily for anyone who purchases a ZBook, these systems can take a beating. Physically, the computer weighs in at around 4.6 lbs and is 18mm thin. It is machined aluminum that isn’t sharp, but it can start to dig into your wrists when typing for long periods. Around the exterior you get two Thunderbolt 3 ports, an HDMI port, three USB 3.1 ports (one on the left and two on the right), an Ethernet port and a Kensington lock slot. On the right side, you also get a power port (I would love for HP to design some sort of break-away cable like the old MagSafe cables on the MacBook Pros) and a headphone/mic input.

DreamColor Display
Alright, so now I’ll go through some of the post-nerd specs that you might be looking for. Up first is the HP DreamColor display, which is a color-critical viewing solution. With a couple of clicks in the Windows toolbar on the lower right, you will find a colored flower — click on that and you can immediately switch the color space you want to view your work in: AdobeRGB, sRGB, BT.709, DCI-P3 or Native. You can even calibrate the display or back up your own calibration for later use. While most colorists or editors use an external calibrated monitoring solution and don’t strictly rely on their viewing monitor as the color-critical source, using the DreamColor display will get you close to a color-critical display without purchasing additional hardware.

In addition, DreamColor displays can play back true 24fps without frame rate conversion. One of my favorite parts of DreamColor is that if you use an external DreamColor monitor over Thunderbolt 3 (not through an SDI card), you can load your color profile onto the second or third monitor and, in theory, they should match. The ZBook Studio G4 seems to have been built as a perfect DIT (digital imaging technician) solution for color-critical work in any weather-challenged or demanding environment, without you having to worry about failure.

Speed & Testing
Now let’s talk about speed and how the system did in testing. Running a 24TB (six 4TB drives) G-Speed Shuttle XL from G-Technology over Thunderbolt 3, configured as RAID-0, I was able to get write speeds of around 1450MB/s and read speeds of 960MB/s in the AJA System Test using a 4GB test file. For comparison, I ran the same test on the internal 512GB HP Z Turbo Drive, which had a write speed of 1310MB/s and a read speed of 1524MB/s. Of course, keep in mind that the internal drive is a PCIe SSD, whereas the RAID is made up of 7200RPM drives. Finally, I ran the standard benchmarking app Cinebench R15, which comes from Maxon, the makers of the 3D modeling app Cinema 4D. For those interested, the OpenGL test ran at 138.85fps with a Ref. Match of 99.6%, the CPU scored 470cb and CPU (Single Core) scored 177cb with an MP Ratio of 2.65x.
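To put those AJA numbers in context, here is a rough back-of-the-envelope sketch of how many simultaneous streams a measured read speed could sustain. The codec data rates below are my own ballpark assumptions (roughly 1,000 Mb/s for ProRes 4444 at UHD/24 and 175 Mb/s for DNxHD 175x), not official figures:

```python
# Rough sanity check: how many simultaneous streams a drive's measured
# read speed could feed. Data rates are approximate, assumed figures.
APPROX_RATE_MB_S = {           # megabytes per second
    "ProRes 4444 UHD": 125,    # ~1,000 Mb/s (ballpark assumption)
    "DNxHD 175x (HD)": 22,     # ~175 Mb/s
}

def max_streams(drive_read_mb_s: float, codec: str) -> int:
    """Whole number of streams the measured read speed could sustain."""
    return int(drive_read_mb_s // APPROX_RATE_MB_S[codec])

# The RAID measured ~960 MB/s read in the AJA System Test:
print(max_streams(960, "ProRes 4444 UHD"))  # -> 7
print(max_streams(960, "DNxHD 175x (HD)"))  # -> 43
```

Either drive in this setup has throughput to spare for single-stream editorial; the real bottlenecks in the tests below are CPU and GPU, not storage.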

I also wanted to run the ZBook through some practical and real-world tests, and I wanted to test the rendering and exporting speeds. I chose to use Blackmagic’s DaVinci Resolve 14.2 software because it is widely used and an easily accessible app for many of today’s multimedia pros. For a non-scientific yet important benchmark, I needed to see how well the ZBook G4 played back R3D files (Red camera files), as well as QuickTimes with typical codecs you would find in a professional environment, such as ProRes and DNxHD. You can find a bunch of great sample R3D files on Red’s website. The R3D I chose was 16 seconds in length, shot on a Red Epic Dragon at 120fps and UHD resolution (3840×2160). To make sure I didn’t have anything skewing the results, I decided to clear all optimized media, if there was any, delete any render cache, uncheck “Use Optimized Media If Available” and uncheck “Performance Mode” just in case that did any voodoo I wasn’t aware of.

First was a playback test, where I wanted to see at what decode quality I could play back in realtime without dropping frames while performing a slight color correction and adding a power window. For this clip, I was able to get realtime playback in a 23.98/1080p timeline when it was set to Half Resolution Good. At Half Resolution Premium I was dropping one or two frames. At Full Resolution Premium, I was dropping five or six frames — playing back at around 17 or 18fps. Half Resolution Good is actually great playback quality for such a high-quality R3D, with all the headroom you get when coloring a raw camera file rather than a transcode. This is also when the fans inside the ZBook really kicked in. I then exported a ProRes 4444 version of the same R3D clip from RedCine-X Pro with the LUT info from the camera baked in. I played the clip back in Resolve with a light color treatment and one power window with no frames dropped. When playing back the ProRes 4444 file, the fans stayed at a low pitch.

The second test was a simple DNxHD 10-bit export from the raw R3D. Using the DNxHD 175x codec, it took about 29 seconds, a little less than double realtime. I then added spatial noise reduction on my first node using the following settings: Mode: Better, Radius: Medium, Spatial Threshold (luma/chroma locked): 25. I was able to play back the timeline at around 5fps, and the same DNxHD 175x export took about 1 minute and 27 seconds, about six times realtime. Doing the same DNxHD 175x export test with the ProRes 4444 file took about 12 seconds without noise reduction and about 1 minute and 16 seconds with it — about 4.5 times realtime. In both cases, the fans kicked on when using noise reduction.

Lastly, I wanted to see how Resolve would handle a simple one-minute, 1080p ProRes QuickTime in various tests. It’s no big surprise that it played back without dropping any frames with one node of color correction, one power window and a parallel node with a qualifier. When I added spatial noise reduction, playback bogged down to about 6fps. The same DNxHD 175x export took about 27 seconds, or a little less than half realtime. With the same spatial noise reduction as above, it took about 4 minutes and 21 seconds, about 4.3 times realtime.
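For clarity, the “times realtime” figures in these tests are simply export time divided by clip duration. A quick sketch reproduces the numbers above:

```python
# "Times realtime" = export time / clip duration.
def realtime_ratio(export_seconds: float, clip_seconds: float) -> float:
    return round(export_seconds / clip_seconds, 1)

# 16-second R3D with noise reduction exported in 1m27s:
print(realtime_ratio(87, 16))   # -> 5.4 (reported above as about six times)
# One-minute ProRes clip with noise reduction exported in 4m21s:
print(realtime_ratio(261, 60))  # -> 4.3
```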

Summing Up
The HP ZBook Studio G4 is a lightweight, durable, enterprise-level mobile workstation that packs the punch of a color-critical 4K (UHD — 3840×2160) DreamColor display, powered by an Nvidia Quadro M1200 and brought together by an Intel Xeon processor that will easily power many color, editing or other multimedia jobs. With HP’s MIL-STD-810G certification, you have peace of mind that your workstation will keep working even through some bumps, bruises and extreme weather. At under 5 pounds and 18mm thin, with a battery that will charge to 50% in 30 minutes, you can bring professional apps like DaVinci Resolve, Adobe Premiere and Avid Media Composer anywhere and be working.

I was able to carry the ZBook along with some of my Tangent Element color correction panels in a backpack and have an instant color-critical DIT solution without the need for a huge cart — all capable of color correction and transcoding. Structurally, the ZBook is an incredibly sturdy, machined-aluminum chassis that is lightweight enough to easily go anywhere quickly. My only criticisms are that I would often miss the left click of the trackpad, leaving me in a right-click scenario; the Bang & Olufsen speakers sound a little tinny to me; and, finally, it doesn’t have a Touch Bar… just kidding.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Review: Red Giant Trapcode Suite 14

By Brady Betzel

Every year we get multiple updates to Red Giant’s Adobe After Effects plug-in behemoth, the Trapcode Suite. The 14th update to the suite is small but powerful, bringing significant updates in Version 3 of both Particular and Form. (Trapcode Form 3 is a particle-system generator much like Particular, but instead of the particles living and dying, they stay alive forever as grids, 3D objects and other organic shapes.) If you own the Trapcode Suite from a previous purchase, the update costs $199; if you are new, the suite costs $999, or $499 with an academic discount.

Particular 3 UI

There are three updates to the suite that warrant the $199 upgrade fee: Particular 3, Form 3 and the Tao 1.2 update. However, you still get the rest of the products with Trapcode Suite 14: Mir 2.1, Shine 2.0, Lux 1.4, 3D Stroke 2.6, Echospace 1.1, Starglow 1.7, Sound Keys 1.1 and Horizon 1.1.

First up is the Tao 1.2 update. Trapcode Tao allows you to create 3D geometric patterns along a path in After Effects. A quick YouTube search of Tao will turn up some amazing examples of what it can do. In the Tao 1.2 update, Red Giant has added a depth-of-field tool to create realistic bokeh effects on your Tao objects. It’s a simple but insanely powerful update that really gives your Tao creations a sense of realism and beauty. To enable the new depth of field, wander over to the Rendering twirl-down menu under Tao and choose between “Off” and “Camera Settings.” It’s pretty simple. From there, it is up to your After Effects camera skills and Tao artistry.

Trapcode Particular 3
Trapcode Particular is one of Red Giant’s flagship plug-ins, and it’s easy to see why. Particular allows you to create complex particle animations within After Effects. From fire to smoke to star trails, it can do pretty much whatever your mind can come up with, and Version 3 has some powerful updates, including the overhauled Trapcode Particular Designer.

The updated Designer window is very reminiscent of the Magic Bullet Designer window: easy and natural to use. Here you design your particle system, including the look, speed and overall lifespan. While you can also adjust all of these parameters in the Effects window, the Designer gives you an immediate visual representation of your particle system that you can drag around to see how it interacts with movement. In addition, you can browse any presets that you want to use or create.

Particular 3

In Particular 3, you can now use OBJs as emitters. An OBJ is essentially a 3D object file. You can use the OBJ’s faces, vertices and edges, as well as the volume inside the object, to create your particle system.

The largest and most important update in the entire Trapcode Suite 14 is found within Particular 3: the ability to add up to eight particle systems per instance of Particular. What does that mean? Well, your particle systems can now interact in a way that lets you add details such as dust, or a bright core that carries over properties from other particle systems in the same instance, giving you the ability to create far more intricate systems than before.

Personally, the newly updated Designer is what allows me to dial in these details easily without twirling down tons of menus in the Effect Editor window. For example, if you want to duplicate a system and inherit its properties but change the blend mode and/or colors, you simply click the drop-down arrow under the system and click “duplicate.” Another great part of the multiple-particle-system update is the ability to create and load “multi-system” presets quickly and easily.

Now, with all of these particle systems mashed together, you are probably wondering, “How in the world will my system handle all of this when it was hard to even play back one system in the older Trapcode Suite?” Well, lucky for us, Trapcode Particular 3 is now GPU-accelerated via OpenGL, allowing for speed increases of sometimes 4x. To access these options in the Designer window, click the cogwheel toward the middle of the window’s lower edge. There you will find the option to render using the CPU or the GPU. There are some limitations to the GPU acceleration. For instance, when using mixed blend modes, certain GPU acceleration types will not reflect the blend mode you selected. Another limitation can be with sprites that are QuickTime movies; for those you may have to use the CPU mode.

Last but not least, Particular 3’s AUX system (a particle system within the main particle system) has been redesigned. You can now choose custom sprites, as well as keyframe many parameters that could not be keyframed before.

Form 3 UI

Trapcode Form 3
For clarification, Trapcode Particular creates particle emitters whose particles have a lifespan: they are born and they die. Trapcode Form is a particle system whose particles do not have a lifespan — they are not born and they do not die. Practical examples include a ribbon-like background or a starfield. These particle systems can be made from 3D models and even be driven dynamically by an audio track. Much like Particular, Form 3 has an updated Designer that will help you build your particle array quickly and easily. Once done inside the Designer, you can hop out and adjust parameters in the Effects panel. If you want to use pre-built objects or images as your particles, you can load those as sprites or textured polygons and animate their movement.

Another really handy update in Trapcode Form 3 is the addition of the Graphing System. This allows you to animate controls like color, size, opacity and dispersion over time.

Just like Particular, Form reacts to After Effects’ cameras and lights, completely immersing your creations in any scene you’ve built. For someone like me, who loves After Effects and the beauty of creations from Form and Particular but doesn’t necessarily have the time to create from scratch, there is a library of over 70 pre-built elements. Finally, Form has added a new rendering option called Shadowlet rendering, which adds light falloff to your particle grid or array.

Form 3

Summing Up
In the end, Trapcode Suite 14 significantly updates Trapcode Particular 3 with multiple particle systems, Trapcode Form 3 with a beautiful new Designer and Trapcode Tao with depth of field, all for an upgrade price of $199. Some Trapcode Particular users have been asking for the ability to build and manipulate multiple particle systems together, and Red Giant has answered their wishes.

If you’ve never used the Trapcode Suite, you should also check out the rest of the mega-bundle, which includes tools like Shine, 3D Stroke, Starglow, Mir, Lux, Sound Keys, Horizon and Echospace. And if you want more in-depth rundowns of each of these tools, check out Harry Frank’s (@graymachine) and Chad Perkins’ tutorials on the Red Giant News website. Then immediately follow @trapcode_lab and @RedGiantNews on Twitter.

If you want to find out more about the other tools in the Trapcode Suite check out my previous two-part review of Suite 13 here on postPerspective: https://postperspective.com/review-red-giants-trapcode-suite-13-part-1 and https://postperspective.com/review-red-giant-trapcode-suite-13-part-2.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Review: GoPro Fusion 360 camera

By Mike McCarthy

I finally got the opportunity to try out the GoPro Fusion camera I have had my eye on since the company first revealed it in April. The $700 camera uses two offset fish-eye lenses to shoot 360 video and stills, while recording ambisonic audio from four microphones in the waterproof unit. It can shoot a 5K video sphere at 30fps, or a 3K sphere at 60fps for higher motion content at reduced resolution. It records dual 190-degree fish-eye perspectives encoded in H.264 to separate MicroSD cards, with four tracks of audio. The rest of the magic comes in the form of GoPro’s newest application Fusion Studio.

Internally, the unit records dual 45Mb/s H.264 files to two separate MicroSD cards, with accompanying audio and metadata assets. This would be a logistical challenge to deal with manually: copying the cards into folders, sorting and syncing them, stitching them together and dealing with the audio. But with GoPro’s new Fusion Studio app, most of this is taken care of for you. Simply plug in the camera and it will automatically access the footage and let you preview and select which parts of which clips you want processed into stitched 360 footage or flattened video files.

It also processes the multi-channel audio into ambisonic B-Format tracks, or standard stereo if desired. The app is a bit limited in user-control functionality, but what it does do it does very well. My main complaint is that I can’t find a way to manually set the output filename, but I can rename the exports in Windows once they have been rendered. Trying to process the same source file into multiple outputs is challenging for the same reason.

Setting   Recorded Resolution (per lens)   Processed Resolution (equirectangular)
5Kp30     2704×2624                        4992×2496
3Kp60     1568×1504                        2880×1440
Stills    3104×3000                        5760×2880

With the Samsung Gear 360, I researched five different ways to stitch the footage, because I wasn’t satisfied with the included app. Most of those will also work with Fusion footage, and you can read about those options here, but they aren’t really necessary when you have Fusion Studio.

You can choose between H.264, Cineform or ProRes, pick your equirectangular output resolution and select ambisonic or stereo audio. That gives you pretty much every option you should need to process your footage. There is also a “beta” option to stabilize your footage, which, once I got used to it, I really liked. It should be thought of more as a “remove rotation” option, since it’s not for stabilizing out sharp motions — which still leave motion blur — but for maintaining the viewer’s perspective even if the camera rotates in unexpected ways. Processing was about 6x run-time on my Lenovo ThinkPad P71 laptop, so a 10-minute clip would take an hour to stitch to 360.
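That 6x figure makes stitch-time estimates easy. As a tiny sketch (the multiplier here is specific to my ThinkPad P71 and will vary with your hardware):

```python
# Fusion Studio stitching ran at roughly 6x run-time on this machine.
def stitch_minutes(clip_minutes: float, factor: float = 6.0) -> float:
    """Estimated processing time for a clip of the given length."""
    return clip_minutes * factor

print(stitch_minutes(10))  # -> 60.0 (a 10-minute clip takes about an hour)
```

Worth remembering when planning a shoot day: an hour of recorded material means an overnight stitch on a laptop in this class.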

The footage itself looks good, higher quality than my Gear 360, and the 60p material is much smoother, as expected. While good VR experiences require 90fps rendered to the display to avoid motion sickness, that does not necessarily mean 30fps content is a problem: when rendering the viewer’s perspective, the same source frame can be sampled three times, shifting the image as the viewer moves their head. That said, 60p source content does give smoother results than the 30p footage I am used to watching in VR, but 60p also gave me more issues during editorial. I had to disable CUDA acceleration in Adobe Premiere Pro to get Transmit to work with the WMR headset.

Once you have your footage processed in Fusion Studio, it can be edited in Premiere Pro — like any other 360 footage — but the audio can be handled a bit differently. Exporting as stereo will follow the usual workflow, but selecting ambisonic will give you a special spatially aware audio file. Premiere can use this in a 4-track multi-channel sequence to line up the spatial audio with the direction you are looking in VR, and if exported correctly, YouTube can do the same thing for your viewers.

In the Trees
Most GoPro products are intended for use capturing action moments and unusual situations in extreme environments (which is why they are waterproof and fairly resilient), so I wanted to study the camera in its “native habitat.” The most extreme thing I do these days is work on ropes courses, high up in trees or telephone poles. So I took the camera out to a ropes course that I help out with, curious to see how the recording at height would translate into the 360 video experience.

Ropes courses are usually challenging to photograph because of the scale involved. When you are zoomed out far enough to see an entire element, you can’t see any detail, and when you are zoomed in close enough to see faces, you have no good sense of how high up the climbers are. 360 photography helps because it is designed to be panned through when viewed flat. This gives the viewer a better sense of the scale while still showing the details of the individual elements and the people climbing. And in VR, you should have an even better feel for the height involved.

I had the Fusion camera and Fusion Grip extendable tripod handle, as well as my Hero6 kit, which included an adhesive helmet mount. Since I was going to be working at heights and didn’t want to drop the camera, the first thing I did was rig up a tether system. A short piece of 2mm cord fit through a slot in the bottom of the center post and a triple fisherman knot made a secure loop. The cord fit out the bottom of the tripod when it was closed, allowing me to connect it to a shock-absorbing lanyard, which was clipped to my harness. This also allowed me to dangle the camera from a cord for a free-floating perspective. I also stuck the quick release base to my climbing helmet, and was ready to go.

I shot segments in both 30p and 60p, depending on how I had the camera mounted, using higher frame rates for the more dynamic shots. I was worried that the helmet mount would be too close, since GoPro recommends keeping the Fusion at least 20cm away from what it is filming, but the helmet wasn’t too bad. Another inch or two would shrink it significantly from the camera’s perspective, similar to my tripod issue with the Gear 360.

I always climbed up with the camera mounted on my helmet and then switched it to the Fusion Grip to record the guy climbing up behind me and my rappel. Hanging the camera from a cord, even 30 feet below me, worked much better than I expected. It put GoPro’s stabilization feature to the test, but it worked fantastically. With the camera rotating freely, the perspective is static, although you can see the seam lines constantly rotating around you. When I am holding the Fusion Grip, the extended pole is completely invisible to the camera, giving you what GoPro has dubbed “Angel View.” It is as if the viewer is floating freely next to the subject, especially when viewed in VR.

Because I have ways to view 360 video in VR, and because I don’t mind panning around a flat-screen view, I am personally less excited about GoPro’s OverCapture functionality, but I recognize it is a useful feature that will greatly extend the use cases for this 360 camera. It is designed for people using the Fusion as a more flexible camera to produce flat content, instead of to produce VR content. I edited together a couple of OverCapture shots intercut with footage from my regular Hero6 to demonstrate how that would work.

Ambisonic Audio
The other new option that Fusion brings to the table is ambisonic audio. Editing ambisonics in Premiere Pro works in a 4-track multi-channel sequence. The main workflow kink is that every time you import a new clip with ambisonic audio, you have to manually override the audio settings to set the audio channels to Adaptive with a single timeline clip. Turn on Monitor Ambisonics by right-clicking in the monitor panel, then match the Pan, Tilt and Roll in the Panner-Ambisonics effect to the values in your VR Rotate Sphere effect (note that they are listed in a different order), and your audio should match the video perspective.

When exporting an MP4 in the audio panel, set Channels to 4.0 and check the Audio is Ambisonics box. From what I can see, the Fusion Studio conversion process compensates for changes in perspective, including “stabilization” when processing the raw recorded audio for Ambisonic exports, so you only have to match changes you make in your Premiere sequence.

While I could have intercut the footage at both settings into a single 5Kp60 timeline, I ended up creating two separate 360 videos. This also makes it clear to the viewer which shots were 5Kp30 and which were recorded at 3Kp60. They are both available on YouTube, and I recommend watching them in VR for the full effect. But be warned: they were recorded at heights of up to 80 feet, so they may be uncomfortable for some people to watch.

Summing Up
GoPro’s Fusion camera is not the first 360 camera on the market, but it brings more pixels and higher frame rates than most of its direct competitors, and more importantly it has the software package to assist users in the transition to processing 360 video footage. It also supports ambisonic audio and offers the OverCapture functionality for generating more traditional flat GoPro content.

I found it to be easier to mount and shoot with than my earlier 360 camera experiences, and it is far easier to get the footage ready to edit and view using GoPro’s Fusion Studio program. The Stabilize feature totally changes how I shoot 360 videos, giving me much more flexibility in rotating the camera during movements. And most importantly, I am much happier with the resulting footage that I get when shooting with it.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Review: Boxx’s Apexx 4 7404 workstation

By Brady Betzel

The professional workstation market has been blown wide open recently, with companies like HP, Apple, Dell, Lenovo and others building systems containing i3/i5/i7/i9 and Xeon processors, and with AMD’s recent return to the professional workstation market via its Ryzen line of processors.

There are more options than ever, and that’s a great thing for working pros. For this review, I’m going to take a look at the Boxx Technologies Apexx 4 7404, which the company sent me to run through its paces over a few months, and it blew me away.

The tech specs of the Apexx 4 7404 are:
– Processor: Intel i7-6950X CPU (10 cores/20 threads)
– One core is overclocked to 4.3GHz while the remaining nine cores can run at 4.1GHz
– Memory: 64GB DDR4 2400MHz
– GPUs: Dual Nvidia Quadro P5000 (2560 CUDA cores, 16GB GDDR5X each)
– Storage drive: NVMe Samsung SSD 960 (960GB)
– Operating system drive: NVMe Intel SSDPEDMW400 (375GB)
– Motherboard: ASUS X99-E WS/USB3.1

On the front of the workstation you get two USB 3.0 ports, two USB 2.0 ports and audio out/mic in; on the rear of the 7404 there are eight USB 3.0 ports, two USB 3.1 ports, two Gigabit Ethernet ports, audio out/mic in, line in, one S/PDIF out and two eSATA ports. Depending on the video card(s) you choose, you will have some more fun options.

This system came with a DVD-RW drive, which is a little funny these days but I suppose still necessary for some people. If you need more parts or drives there is plenty of room for all that you could ever want, both inside and out. While these are just a few of the specs, they really are the most important, in my opinion. If you purchase from Boxx all of these can be customized. Check out all of the different Boxx Apexx 4 flavors here.

Specs
Right off the bat you will notice the Intel i7-6950X CPU, a monster of a processor that retails for around $1,500 by itself. With its hefty price tag, this Intel i7 lends itself to niche use cases like multimedia processing. Luckily for me (and you), that is exactly what I do. One key difference between a system like the Boxx workstation and ones from companies like HP is that Boxx takes advantage of the X- and K-series Intel processors and overclocks them, getting the most from your processor while still backing it with Boxx’s three-year warranty. The 7404 has one core overclocked to 4.3GHz, which can provide a speed boost for apps that don’t use multiple cores. While that doesn’t apply in a lot of cases, it doesn’t hurt to have the extra headroom.

The Apexx 4 case is slender (6.85 inches wide) and quiet. Boxx embraces liquid cooling to keep your enterprise-class components from companies like Samsung and Intel running smoothly. Boxx systems are built and fabricated in Texas from aircraft-grade aluminum parts and steel strengthening components.

When building your own system, you might pick a case because the price is right or because it is all that is available for your components (or because that is what pcpartpicker.com tells you will fit). This can mean giving up build quality and potentially accepting bad airflow. Boxx knows this and has gone beyond just buying other companies’ cases — they forge their own workstation-case masterpieces.

Boxx’s support is based in Austin — no outsourcing — and their staff knows the apps we use, such as those from Autodesk, Adobe and others.

Through Its Paces
I tested the Apexx 4 7404 using Adobe Premiere Pro and Adobe Media Encoder, since they are really the Swiss Army knives of the multimedia content creation world. I edited together a 10-minute UHD (3840×2160) sequence using XAVC MP4 footage I shot on a Sony a6300. I did a little color correction with the Lumetri Color tools, scaled the image up to 110% and exported the file through Media Encoder as a 10-bit DNxHQX, UHD QuickTime MOV.

It took seven minutes and 40 seconds to export to the OS drive (Intel) and about six minutes and 50 seconds to export to the internal storage drive (Samsung). Only once I hit export did I finally get the engines to rev up inside the Boxx; the GPU fans seemed to kick on a little — they weren’t loud, but you could hear a light breeze start up. On my way out of Premiere, I exported an XML to give me a head start in Resolve for my next test.

My next test was to import that Premiere XML into Blackmagic’s Resolve 14 Studio, reproduce the color correction, apply the same scaling and export with essentially the same edits. It took a few tries to get Resolve 14 running, but after a few uninstalls, installing Resolve 12.5.6 and updating my Nvidia drivers, Resolve 14 was up and running. While this isn’t a Boxx problem, I did encounter it during my testing, and since someone else might run into the same issue, I wanted to mention it.

I then imported my XML, applied a little color correction, double-checked that my 110% scaling came over in the XML (it did) and exported using the same DNxHQX settings I used in Premiere. Exporting from Resolve 14 to the OS drive took about six minutes and 15 seconds, running at about 41 frames per second. Exporting to the internal storage drive took about six minutes and 11 seconds, running at 40-42 frames per second. For those keeping track of testing details, I did not cache any of the QuickTimes, and I turned Performance Mode off for these tests (in case Blackmagic had any sneaky things going on in that setting).

After this, I went a little further and exported the same sequence with spatial noise reduction applied across the entire 10-minute timeline, using these settings: Mode: Better; Radius: Medium; Spatial Threshold: 15 on both luma and chroma; and Blend: 0. It ran at about nine frames per second and took about 25 minutes and 25 seconds to export.
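As a sanity check, the render speed and the export time line up. Assuming a 23.976fps sequence (the frame rate is my assumption here, not something stated above), a 10-minute timeline rendering at roughly 9fps works out close to the measured time:

```python
# Estimated export time from frame count and render speed.
frames = 10 * 60 * 23.976   # frames in a 10-minute 23.976fps timeline
minutes = frames / 9 / 60   # rendering at ~9 frames per second
print(round(minutes, 1))    # -> 26.6 minutes, close to the measured 25m25s
```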

Testing
Finally, I ran a few tests to get some geeky nerd specs that you can compare to other users’ experiences to see where this Boxx Apexx 4 7404 stands. Up first was the AJA System Test, which tests read and write speeds to designated disks; you can also specify different codecs and file sizes to base the test on. I told the AJA System Test to run using the 10-bit Avid DNxHQX codec, a 16GB file size and a UHD frame size (3840×2160). I ran it a few times, but the average was around 2100/2680 MB/sec write and read to the OS drive and 1000/1890 MB/sec write and read to the storage drive.

To get a sense of how this system would hold up in a 3D modeling test, I ran the classic Cinebench R15 app. OpenGL ran at 215.34 frames per second with a 99.6% ref. match, the CPU scored 2121cb and CPU (single core) scored 181cb with an MP Ratio of 11.73x. What the test really showed me, when I Googled Cinebench scores to compare against mine, was that the Boxx Apexx 4 7404 is at the top of the heap in all categories. Specifically, it placed within the top 20 for overall render speed, beaten only by systems with more cores, and in the top 15 for single-core speed — and the OpenGL result is pretty incredible at over 215fps.

Summing Up
In the end, the Boxx Apexx 4 7404 custom-built workstation is an incredible powerhouse for any multimedia workflow. From rendering to exporting to transcoding, the Boxx Apexx 4 7404 with dual Nvidia Quadro P5000s will chew through anything you throw at it.

But with this power comes a big price: the 7404 series starts at $7,246, and the price of the one I tested lands much further north, at just under $14,000 — those pesky Quadros bump the price up quite a bit. But if rendering, color correcting, editing and/or transcoding is your business, Boxx will make sure you are up and running and chewing through every gigabyte of video and 3D modeling you can run through it.

If you have any problems, their support will get you going as fast as possible, and if you need parts replaced, they will get them to you fast. Boxx’s three-year warranty, included with your purchase, covers next-day on-site repair for the first year; continuing that coverage through years two and three is a paid upgrade. But don’t worry: even if you don’t upgrade, you still have two more years of great support.

In my opinion, you should really plan on the extended on-site repair upgrade for all three years of your warranty — you will save time, which will make you more money. If you can afford a custom-built Boxx system, you will get a powerhouse workstation that makes working in apps like Premiere and Resolve 14 snappy and fluid.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.