New codec, workflow options via Red, Nvidia and Adobe

By Mike McCarthy

There were two announcements last week that will impact post production workflows. The first was the launch of Red’s new SDK, which leverages Nvidia’s GPU-accelerated CUDA framework to deliver realtime playback of 8K Red footage. I’ll get to the other news shortly. Nvidia was demonstrating an early version of this technology at Adobe Max in October, and I have been looking forward to this development since I am about to start post on a feature film shot on the Red Monstro camera. This should effectively render the RedRocket accelerator cards obsolete, replacing them with cheaper, multipurpose hardware that can also accelerate other computational tasks.

While accelerating playback of 8K content at full resolution requires a top-end RTX series card from Nvidia (Quadro RTX 6000, Titan RTX or GeForce RTX 2080Ti), the technology is not dependent on RTX’s new architecture (RT and Tensor cores), allowing earlier generation hardware to accelerate smooth playback at smaller frame sizes. Lots of existing Red footage is shot at 4K and 6K, and playback of these files will be accelerated on widely deployed legacy products from previous generations of Nvidia GPU architecture. It will still be a while before this functionality is in the hands of end users, because now Adobe, Apple, Blackmagic and other software vendors have to integrate the new SDK functionality into their individual applications. But hopefully we will see those updates hitting the market soon (targeting late Q1 of 2019).

Encoding ProRes on Windows via Adobe apps
The other significant update, which became available to users this week, is Adobe's addition of ProRes encoding support in its video apps on Windows. Developed by Apple, ProRes encoding has been available on the Mac for a long time, and ProRes decoding and playback has been available on Windows for over 10 years. But creating ProRes files on Windows has always been a challenge. Fixing this was less a technical challenge than a political one, as Apple owns the codec, and it is not technically a standard. So while there were some hacks available at various points during that time, Apple has severely restricted the official encoding options available on Windows… until now.

With the 13.0.2 release of Premiere Pro and Media Encoder, as well as the newest update to After Effects, Adobe users on Windows systems can now create ProRes files in whatever flavor they happen to need. This is especially useful since many places require delivery of final products in the ProRes format. In this case, the new export support is obviously a win all the way around.

Now users have yet another codec option for all of their intermediate files, prompting another look at the question: Which codec is best for your workflow? With this release, Adobe users have at least three major options for high-quality intermediate codecs: Cineform, DNxHR and now ProRes. I am limiting the scope to integrated cross-platform codecs supporting 10-bit color depth, variable levels of image compression and customizable frame sizes. Here is a quick overview of the strengths and weaknesses of each option:

ProRes
ProRes was created by Apple over 10 years ago and has become the de facto standard throughout the industry, despite the fact that it is entirely owned by Apple. ProRes is now fully cross-platform compatible, has options for both YUV and RGB color and comes in six variations, all of which support at least 10-bit color depth. The variable bit rate compression scheme scales well with content complexity, so encoding black or static images doesn't require as much space as full-motion video. It also supports compressed alpha channels, but only in the 444 variants of the codec.

Recent tests on my Windows 10 workstation resulted in ProRes taking 3x to 5x as much CPU power to play back as similar DNxHR or Cineform files, especially as frame sizes get larger. The codec supports 8K frame sizes, but playback will require much more processing power. I can't even play back UHD files in ProRes 444 at full resolution, while the Cineform and DNxHR files have no problem, even at 444. This is less of a concern if you are only working at 1080p.

Whichever codec you choose, multiply your 1080p file sizes by four for UHD content (and by 16 for 8K content).

Cineform
Cineform, which has been available since 2004, was acquired by GoPro in 2011. GoPro has licensed the codec to Adobe (among other vendors), and it is available as “GoPro Cineform” in the AVI or QuickTime sections of the Adobe export window. Cineform is a wavelet compression codec, with 10-bit YUV and 12-bit RGB variants, which, like ProRes, supports compressed alpha channels in the RGB variant. The five levels of encoding quality are selected separately from the format, so higher levels of compression are available for 4444 content compared to the limited options available in the other codecs.

It usually plays back extremely efficiently on Windows, but my recent tests show that encoding to the format is much slower than it used to be. And while it has some level of support outside of Adobe applications, it is not as universally recognized as ProRes or DNxHD.

DNxHD
DNxHD was created by Avid for compressed HD playback and has since been extended to DNxHR (high resolution). It is a fixed bit rate codec, with each variant having a locked multiplier based on resolution and frame rate. This makes it easy to calculate storage needs but wastes space for files that are black or contain a lot of static content. It is available in MXF and MOV wrappers and has five levels of quality. The top option is 444 RGB, and all variants support alpha channels in MOV wrappers, but only uncompressed, which takes a lot of space. For whatever reason, Adobe has greatly optimized DNxHR playback in Premiere Pro, across all variants, in both MXF and MOV wrappers. On my project 6Below, I was able to get 6K 444 files to play back, with lots of effects, without dropping frames. Encodes to and from DNxHR are faster in Adobe apps as well.

So for most PC-based Adobe users, DNxHR-LB (low bandwidth) is probably the best codec to use for intermediate work. We are using it to offline my current project, with 2.2K DNxHR-LB MOV files. People with a heavy Mac interchange may lean toward ProRes, but plan on higher CPU specs for the same level of application performance.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Making an animated series with Adobe Character Animator

By Mike McCarthy

In a departure from my normal film production technology focus, I have also been working on an animated web series called Grounds of Freedom. Over the past year I have been directing the effort and working with a team of people across the country who are helping in various ways. After a year of meetings, experimentation and work we finally started releasing finished episodes on YouTube.

The show takes place in Grounds of Freedom, a coffee shop where a variety of animated mini-figures gather to discuss freedom and its application to present-day cultural issues and events. The show is created with a workflow that weaves through a variety of Adobe Creative Cloud apps. Back in October I presented our workflow during Adobe Max in LA, and I wanted to share it with postPerspective’s readers as well.

When we first started planning the series, we considered using live action. Ultimately, after being inspired by the preview releases of Adobe Character Animator, I decided to pursue a new digital approach to brick filming (filmmaking with Lego figures), which is traditionally accomplished through stop-motion animation. Once everyone else realized the simpler workflow possibilities and increased level of creative control offered by that new animation process, they were excited to pioneer this new approach. Animation gives us more control and flexibility over the message and dialog, lowers production costs and eases collaboration over long distances, as there is no “source footage” to share.

Creating the Characters
The biggest challenge to using Character Animator is creating digital puppets, which are deeply layered Photoshop PSDs with very precise layer naming and stacking. There are ways to generate the underlying source imagery in 3D animation programs, but I wanted the realism and authenticity of sourcing from actual photographs of the models and figures. So we took lots of 5K macro shots of our sets and characters in various positions with our Canon 60D and 70D DSLRs and cut out hundreds of layers of content in Photoshop to create our characters and all of their various possible body positions. The only thing that was synthetically generated was the various facial expressions digitally painted onto their clean yellow heads, usually to match an existing physical reference character face.

Mike McCarthy shooting stills.

Once we had our source imagery organized into huge PSDs, we rigged those puppets in Character Animator with various triggers, behaviors and controls. The walking was accomplished by cycling through various layers, instead of the default bending of the leg elements. We created arm movement by mapping each arm position to a MIDI key. We controlled facial expressions and head movement via webcam, and the mouth positions were calculated by the program based on the accompanying audio dialog.

Animating Digital Puppets
The puppets had to be finished and fully functional before we could start animating on the digital stages we had created. We had been writing the scripts during that time, parallel to generating the puppet art, so we were ready to record the dialog by the time the puppets were finished. We initially attempted to record live in Character Animator while capturing the animation motions as well, but we didn’t have the level of audio editing functionality we needed available to us in Character Animator. So during that first session, we switched over to Adobe Audition, and planned to animate as a separate process, once the audio was edited.

That whole idea of live capturing audio and facial animation data is laughable now, looking back, since we usually spend a week editing the dialog before we do any animating. We edited each character's audio on a separate track and exported those separate tracks to Character Animator. We computed lipsync for each puppet based on their dedicated dialog track and usually exported immediately. This provided a draft visual that allowed us to continue editing the dialog within Premiere Pro. Having a visual reference makes a big difference when trying to determine how a conversation will feel, so that was an important step — even though we had to throw away our previous work in Character Animator once we made significant edit changes that altered sync.

We repeated the process once we had a more final edit. We carried on from there in Character Animator, recording arm and leg motions with the MIDI keyboard in realtime for each character. Once those trigger layers had been cleaned up and refined, we recorded the facial expressions, head positions and eye gaze with a single pass on the webcam. Every re-record to alter a particular section adds a layer to the already complicated timeline, so we limited that as much as possible, usually re-recording instead of making quick fixes unless we were nearly finished.

Compositing the Characters Together
Once we had fully animated scenes in Character Animator, we would turn off the background elements and isolate each character layer to be exported in Media Encoder via Dynamic Link. I did a lot of testing before settling on JPEG 2000 MXF as the format of choice. I wanted a highly compressed file but needed alpha channel support, and that was the best option available. Each of those renders became a character layer, which was composited into our stage layers in After Effects. We could have dynamically linked the characters directly into AE, but with that many layers it would decrease performance for the interactive part of the compositing work. We added shadows and reflections in AE, as well as various other effects.

Walking was one of the most challenging effects to properly recreate digitally. Our layer cycling in Character Animator resulted in a static figure swinging its legs, but people (and mini figures) have a bounce to their step, and move forward at an uneven rate as they take steps. With some pixel measurement and analysis, I was able to use anchor point keyframes in After Effects to get a repeating movement cycle that made the character appear to be walking on a treadmill.

I then used carefully calculated position keyframes to add the appropriate amount of travel per frame for the feet to stick to the ground, which varies based on the scale as the character moves toward the camera. (In my case the velocity was half the scale value in pixels per second.) We then duplicated that layer to create the reflection and shadow of the character as well. That result can then be composited onto various digital stages. In our case, the first two shots of the intro were designed to use the same walk animation with different background images.
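
To put rough numbers on that, here is a minimal Python sketch of the travel calculation, assuming a 24fps comp and using the half-the-scale rule of thumb from my shots above; the function name and frame rate are just for illustration, not pulled from the actual After Effects project.

```python
def walk_travel_per_frame(scale_percent, fps=24.0):
    """Estimate horizontal travel per frame for a walking character.

    Rule of thumb from the shots described above: the character's velocity
    is roughly half its scale value, in pixels per second. The 24fps frame
    rate is an assumption for illustration.
    """
    velocity_px_per_sec = scale_percent / 2.0  # e.g. scale 80% -> 40 px/s
    return velocity_px_per_sec / fps           # pixels of travel per frame

# A character scaled to 80% moves ~1.67 pixels per frame at 24fps.
print(round(walk_travel_per_frame(80), 2))
```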

All of the character layers were pre-comped, so we only needed to update a single location when a new version of a character was rendered out of Media Encoder, or when we brought in a dynamically linked layer. That update would then propagate through all the necessary comp layers to generate updated reflections and shadows. Once the main compositing work was finished, we usually only needed to make slight changes in each scene between episodes. These scenes were composited at 5K, based on the resolution of the DSLR photos of the sets we had built. These 5K plates could be dynamically linked directly into Premiere Pro, and occasionally used later in the process to ripple slight changes through the workflow. For the interactive work, we got far better editing performance by rendering out flattened files. We started with DNxHR 5K assets but eventually switched to HEVC files, since they were 30x smaller and imperceptibly different in quality with our relatively static animated content.

Editing the Animated Scenes
In Premiere Pro, we had the original audio edit, and usually a draft render of the characters with just their mouths moving. Once we had the plate renders, we placed them each in their own 5K scene sub-sequence and used those sequences as source on our master timeline. This allowed us to easily update the content when new renders were available, or source from dynamically linked layers instead if needed. Our master timeline was 1080p, so with 5K source content we could push in two and a half times the frame size without losing resolution. This allowed us to digitally frame every shot, usually based on one of two rendered angles, and gave us lots of flexibility all the way to the end of the editing process.

Collaborative Benefits of Dynamic Link
While Dynamic Link doesn't offer the best playback performance without making temp renders, it does have two major benefits in this workflow. It ripples changes to the source PSD all the way to the final edit in Premiere Pro just by bringing each app into focus once. (I added a name tag to one character's PSD during my presentation, and 10 seconds later it was visible throughout my final edit.) Even more importantly, it allows us to collaborate online without having to share any exported video assets. As long as each member of the team has the source PSD artwork and audio files, all we have to exchange online are the Character Animator project (which is small once the temp files are removed), the .AEP file and the .PrProj file.

This gives any of us the option to render full-quality visual assets anytime we need them, but the work we do on those assets is all contained within the project files that we sync to each other. The coffee shop was built and shot in Idaho, our voice artist was in Florida, and our puppets' faces were created in LA. I animate and edit in Northern California, the AE compositing was done in LA, and the audio is mixed in New Jersey. We did all of that with nothing but a Dropbox account, using the workflow I have just outlined.

Past that point, it was a fairly traditional finish, in that we edited in music and sound effects and sent an OMF to Steve, our sound guy at DAWPro Studios (http://dawpro.com/photo_gallery.html), for the final mix. During that time we added b-roll visuals and other effects, and once we had the final audio back, we rendered the final result to H.264 at 1080p and uploaded it to YouTube.


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

A Technologist’s Data Storage Primer

By Mike McCarthy

Storage, broadly, means keeping all of the files for a particular project or workflow, but those files may not all live in the same place — different types of data have different requirements, and different storage solutions have different strengths and features.

At a fundamental level, most digital data is stored on HDDs or SSDs. HDDs, or hard disk drives, are mechanical devices that store the data on a spinning magnetic surface and move read/write heads over that surface to access the data. They currently max out around 200MB/s and 5ms latency.

SSDs, or solid-state drives, involve no moving parts. SSDs can be built with a number of different architectures and interfaces, but most are based on the same basic Flash memory technology as the CF or SD card in your camera. Some SSDs are SATA drives that use the same interface and form factor as a spinning disk for easy replacement in existing HDD-compatible devices. These devices are limited to SATA’s bandwidth of 600MB/s. Other SSDs use the PCIe interface, either in full-sized PCIe cards or the smaller M.2 form factor. These have much higher potential bandwidths, up to 3000MB/s.

Currently, HDDs are much cheaper for storing large quantities of data but require some level of redundancy for security. SSDs can also fail, but it is a much rarer occurrence. Data recovery for either is very expensive. SSDs are usually cheaper for achieving high bandwidth, unless large capacities are also needed.

RAIDs
Traditionally, hard drives used in professional contexts are grouped together for higher speeds and better data security. These groups are called RAIDs, which stands for redundant array of independent disks. There are a variety of approaches to RAID design, and they differ significantly from one another.

RAID-0 or striping is technically not redundant, but every file is split across each disk, so each disk only has to retrieve its portion of a requested file. Since these happen in parallel, the result is usually faster than if a single disk had read the entire file, especially for larger files. But if one disk fails, every one of your files will be missing a part of its data, making the remaining partial information pretty useless. The more disks in the array, the higher the chances of one failing, so I rarely see striped arrays composed of more than four disks. It used to be popular to create striped arrays for high-speed access to restorable data, like backed-up source footage, or temp files, but now a single PCIe SSD is far faster, cheaper, smaller and more efficient in most cases.

RAID-1, or mirroring, is when all of the data is written to more than one drive. This limits the array's capacity to the size of the smallest source volume, but the data is very secure. There is no speed benefit for writes, since each drive must write all of the data, but reads can be distributed across the identical drives, with performance similar to RAID-0.

RAID-3, -5 and -6 try to achieve a balance between those benefits for larger arrays with more disks (minimum three). They all require more complicated controllers, so they are more expensive for the same levels of performance. RAID-3 stripes data across all but one drive, then calculates parity (odd/even) data across those data drives and stores it on the last drive. This allows the data from any single failed drive to be restored, based on the parity data. RAID-5 is similar, but the parity blocks are alternated across the drives, allowing reads to be shared across all disks, not just the “data drives.”

So the capacity of a RAID-3 or RAID-5 array will be the minimum individual disk capacity times the number of disks minus one. RAID-6 is similar but stores two drives worth of parity data, which via some more advanced math than odd/even, allows it to restore the data even if two drives fail at the same time. RAID-6 capacity will be the minimum individual disk capacity times the number of disks minus two, and is usually only used on arrays with many disks. RAID-5 is the most popular option for most media storage arrays, although RAID-6 becomes more popular as the value of the data stored increases and the price of extra drives decreases over time.
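
As a quick sanity check on those formulas, here is a small Python sketch that computes usable capacity from the disk count, the smallest disk size and the RAID level; it only models the parity math described above, not formatting overhead, and the function name is my own.

```python
def raid_usable_capacity(num_disks, smallest_disk_tb, level):
    """Usable capacity (TB) before formatting, per the parity rules above.

    RAID-0: no parity, full capacity (and no redundancy).
    RAID-1: mirrored, capacity of a single disk.
    RAID-3/5: one disk's worth of parity.
    RAID-6: two disks' worth of parity.
    """
    if level == 0:
        return num_disks * smallest_disk_tb
    if level == 1:
        return smallest_disk_tb
    if level in (3, 5):
        return (num_disks - 1) * smallest_disk_tb
    if level == 6:
        return (num_disks - 2) * smallest_disk_tb
    raise ValueError("unsupported RAID level")

print(raid_usable_capacity(4, 2, 5))    # 4 x 2TB RAID-5 -> 6 TB
print(raid_usable_capacity(24, 12, 6))  # 24 x 12TB RAID-6 -> 264 TB
```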

Storage Bandwidth
Digital data is stored as a series of ones and zeros, each of which is a bit. One byte is 8 bits, which frequently represents one letter of text, or one pixel of an image (8-bit single channel). Bits are frequently referenced in large quantities to measure data rates, while bytes are usually referenced when measuring stored data. I prefer to use bytes for both purposes, but it is important to know the difference. A Megabit (Mb) is one million bits, while a Megabyte (MB) is one million bytes, or 8 million bits. Similar to metric, Kilo is thousand, Mega is million, Giga is billion, and Tera is trillion. Anything beyond that you can learn as you go.

Networking speeds are measured in bits (Gigabits), but with headers and everything else, it is safer to divide by 10 when converting speed into bytes per second. Estimate 100MB/s for Gigabit, up to 1000MB/s on 10GbE and around 500MB/s for the newer N-BaseT standards. Similarly, when transferring files over a 30Mb Internet connection, expect around 3MB/s, then multiply by 60 or 3,600 to get to minutes or hours (180MB/min, or roughly 10,800MB/hr, in this case). So if you have to download a 10GB file on that connection, come back to check on it in an hour.
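
Here is a minimal Python sketch of those back-of-the-envelope conversions, using the divide-by-ten rule above; the 30Mb connection and 10GB file are the same example values, and the function names are just for illustration.

```python
def megabits_to_real_world_MBps(megabits_per_sec):
    """Rough real-world throughput: divide the line rate in Mb/s by 10
    (rather than 8) to allow for headers and other protocol overhead."""
    return megabits_per_sec / 10.0

def transfer_hours(file_size_gb, megabits_per_sec):
    """Estimated transfer time in hours for a file of the given size."""
    mb_per_sec = megabits_to_real_world_MBps(megabits_per_sec)
    return (file_size_gb * 1000.0) / mb_per_sec / 3600.0

print(megabits_to_real_world_MBps(1000))   # Gigabit -> ~100 MB/s
print(round(transfer_hours(10, 30), 2))    # 10GB over a 30Mb link -> ~0.93 hours
```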

Magnopus

Because networking standards are measured in bits, and because networking is so important for sharing video files, many video file types are measured in bits as well. An 8Mb H.264 stream is 1MB per second. DNxHD36 is 36Mb/s (or 4.5MB/s when divided by eight), DV and HDV are 25Mb, DVCProHD is 100Mb, etc. Other compression types have variable bit rates depending on the content, but there are still average rates we can make calculations from. Any file’s size divided by its duration will reveal its average data rate. It is important to make sure that your storage has the bandwidth to handle as many streams of video as you need, which will be that average data rate times the number of streams. So 10 streams of DNxHD36 will be 360Mb or 45MB/s.
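
The same arithmetic, as a short Python sketch: a clip's average data rate times the number of simultaneous streams gives the sustained bandwidth your storage needs. The DNxHD36 figure is the one quoted above; the helper names and the sample clip are my own.

```python
def avg_data_rate_MBps(file_size_mb, duration_sec):
    """Average data rate of a clip: file size divided by duration."""
    return file_size_mb / duration_sec

def required_bandwidth_MBps(data_rate_MBps, num_streams):
    """Sustained read bandwidth needed to play several streams at once."""
    return data_rate_MBps * num_streams

print(avg_data_rate_MBps(4050, 900))  # a hypothetical 15-minute, 4,050MB clip -> 4.5 MB/s

dnxhd36_MBps = 36 / 8  # 36Mb/s expressed in MB/s
print(required_bandwidth_MBps(dnxhd36_MBps, 10))  # 10 streams -> 45.0 MB/s
```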

The other issue to account for is IO requests and drive latency. Lots of small requests require not just high total transfer rates, but high IO performance as well. Hard drives can only fulfill around 100 individual requests per second, regardless of how big those requests are. So while a single drive can easily sustain a 45MB/s stream, satisfying 10 different sets of requests may keep it so busy bouncing between the demands that it can’t keep up. You may need a larger array, with a higher number of (potentially) smaller disks to keep up with the IO demands of multiple streams of data. Audio is worse in this regard in that you are dealing with lots of smaller individual files as your track count increases, even though the data rate is relatively low. SSDs are much better at handling larger numbers of individual requests, usually measured in the thousands or tens of thousands per second per drive.
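
The IO side is harder to pin down exactly, but a hedged sketch might look like this, assuming each playback stream issues reads in roughly 256KB chunks (my own assumption, not a figure from this article) and that a spinning disk can service about 100 random requests per second.

```python
import math

def required_iops(data_rate_MBps, num_streams, request_size_mb=0.25):
    """Rough IO requests per second for several concurrent playback streams.

    The 0.25MB (256KB) request size is an illustrative assumption; real
    request sizes vary by application and file system.
    """
    return (data_rate_MBps * num_streams) / request_size_mb

def drives_needed(total_iops, iops_per_drive=100):
    """Minimum spinning disks, at roughly 100 random requests/sec per drive."""
    return math.ceil(total_iops / iops_per_drive)

iops = required_iops(4.5, 10)      # 10 streams of DNxHD36 at 4.5MB/s each
print(iops, drives_needed(iops))   # 180.0 requests/sec -> at least 2 drives
```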

Storage Capacity
Capacity, on the other hand, is simpler. Megabytes are usually the smallest increments of data that we have to worry about calculating. A media type's data rate (in MB/sec) times its duration (in seconds) will give you its expected file size. If you are planning to edit a feature film with 100 hours of offline content in DNxHD36, that is 3,600 x 100 seconds, times 4.5MB/s, equaling 1,620,000MB, 1,620GB or simply about 1.6TB. But you should add some headroom for unexpected needs, and a 2TB disk is only about 1.8TB once formatted, so it will just barely fit. It is probably worth sizing up to at least 3TB if you are planning to store your renders and exports on there as well.
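
That calculation generalizes easily; here is a minimal Python version of it, using the 100-hour DNxHD36 example from the paragraph above (the function name and the decimal MB-to-TB conversion are my own choices).

```python
def expected_size_tb(data_rate_MBps, hours):
    """Expected storage footprint: data rate (MB/s) times duration."""
    seconds = hours * 3600
    size_mb = data_rate_MBps * seconds
    return size_mb / 1_000_000  # MB -> TB, in the decimal units drives are sold by

# 100 hours of DNxHD36 offline media at 4.5MB/s
print(expected_size_tb(4.5, 100))  # -> 1.62 TB
```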

Once you have a storage solution of the required capacity there is still the issue of connecting it to your system. The most expensive options connect through the network to make them easier to share (although more is required for true shared storage), but that isn’t actually the fastest option or the cheapest. A large array can be connected over USB3 or Thunderbolt, or via the SATA or SAS protocol directly to an internal controller.

There are also options for Fibre Channel, which can allow sharing over a SAN, but this is becoming less popular as 10GbE becomes more affordable. Gigabit Ethernet and USB3 won't be fast enough to play back high-bandwidth files, but 10GbE, multichannel SAS, Fibre Channel and Thunderbolt can all handle almost anything up to uncompressed 4K.

Direct attached storage will always have the highest bandwidth and lowest latency, as it has the fewest steps between the stored files and the user. Using Thunderbolt or USB adds another controller and hop, Ethernet even more so.

Different Types of Project Data
Now that we know the options for storage, let's look at the data we anticipate needing to store. First off, we will have lots of source media (either camera original files, transcoded editing dailies or both). This is usually in the terabytes, but the data rates vary dramatically — from 1Mb H.264 files to 200Mb ProRes files to 2400Mb Red files. The data rate of the files you are playing back, combined with the number of playback streams you expect to use, will determine the bandwidth you need from your storage system. These files are usually static, in that they don't get edited or written to in any way after creation.

The exceptions would be sidecar files like RMD and XML files, which will require write access to the media volume. If a certain set of files is static, then as long as a backup of the source data exists, they don't need to be backed up on a regular basis and don't even necessarily need redundancy. Although if the cost of restoring that data would be high, in terms of time lost during that process, some level of redundancy is still recommended.

Another important set of files we will have is our project files, which actually record the “work” we do in our application. They contain instructions for manipulating our media files during playback or export. The files are usually relatively small, and are constantly changing as we use them. That means they need to be backed up on a regular basis. The more frequent the backups, the less work you lose when something goes wrong.

We will also have a variety of exports and intermediate renders over the course of the project. Whether they are flattened exports for upload and review, VFX files or other renders, these are a more dynamic set of files than our original source footage. And they are generated on our systems instead of being imported from somewhere else. They can usually be regenerated from their source projects if necessary, but the time and effort required usually make it worth investing in protecting or backing them up. In most workflows, these files don't change once they are created, which makes it easier to back them up if desired.

There will also be a variety of temp files generated by most editing or VFX programs. Some of these files need high-speed access for best application performance, but they rarely need to be protected or backed up because they can be automatically regenerated by the source applications on the fly if needed.

Choosing the Right Storage for Your Needs
Ok, so we have source footage, project files, exports and temp files that we need to find a place for. If you have a system or laptop with a single data volume, the answer is simple: It all goes on the C drive. But we can achieve far better performance if we have the storage infrastructure to break those files up onto different devices. Newer laptops frequently have both a small SSD and a larger hard disk. In that case we would want our source footage on the (larger) HDD, while the project files should go on the (safer) SSD.

Usually your temp file directories should be located on the SSD as well since it is faster, and your exports can go either place, preferably the SSD if they fit. If you have an external drive of source footage connected, you can back all files up there, but you should probably work from projects stored on the local system, playing back media from the external drive.

A professional workstation can have a variety of different storage options available. I have a system with two SSDs and two RAIDs, so I store my OS and software on one SSD, my projects and temp files on the other SSD, my source footage on one RAID and my exports on the other. I also back up my project folder to the exports RAID on a daily basis, since the SSDs have no redundancy.

Individual Storage Solution Case Study Examples
If you are natively editing a short film project shot on Red, then R3Ds can be 300MB/s. That is 1080GB/hour, so five hours of footage will be just over 5TB. It could be stored on a single 6TB external drive, but that won’t give you the bandwidth to play back in real-time (hard drives usually top out around 200MB/s).

Striping your data across two drives in one of those larger external enclosures would probably provide the needed performance, but with that much data you are unlikely to have a backup elsewhere. So data security becomes more of a concern, leading us toward a RAID-5-based solution. A four-disk array of 2TB drives provides 6TB of usable storage at RAID-5 (4 x 2TB = 8TB raw capacity, minus 2TB of parity data, equals 6TB of usable storage capacity). Using an array of eight 1TB drives would provide higher performance and 7TB of space before formatting (8 x 1TB = 8TB raw capacity, minus 1TB of parity, because a single drive failure would only lose 1TB of data in this configuration), but it will cost more. (An eight-port RAID controller and eight-bay enclosure cost more, and two 1TB drives are usually more expensive than one 2TB drive.)
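
As a rough way to compare those two candidate arrays, here is a hedged Python sketch that checks usable capacity and a very approximate aggregate read speed; the ~180MB/s per-drive figure is my own ballpark based on the "hard drives usually top out around 200MB/s" note above, not a measured number.

```python
def raid5_usable_tb(num_disks, disk_tb):
    """RAID-5 usable capacity: one disk's worth of space goes to parity."""
    return (num_disks - 1) * disk_tb

def raid5_read_estimate_MBps(num_disks, per_drive_MBps=180):
    """Very rough sequential read estimate: reads scale roughly with the
    number of data disks (disks - 1). The per-drive figure is an
    assumption for illustration, not a benchmark."""
    return (num_disks - 1) * per_drive_MBps

for disks, size in [(4, 2), (8, 1)]:
    print(disks, "x", size, "TB:",
          raid5_usable_tb(disks, size), "TB usable,",
          raid5_read_estimate_MBps(disks), "MB/s (rough)")
# Both configurations comfortably exceed the ~300MB/s needed for one R3D stream.
```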

Larger projects deal with much higher numbers. Another project has 200TB of Red footage that needs to be accessible on a single volume. A 24-bay enclosure with 12TB drives provides 288TB of space, minus two drives' worth of data for RAID-6 redundancy (288TB raw - [2 x 12TB for parity] = 264TB usable capacity), which will be more like 240TB of space available in Windows once it is formatted.

Sharing Storage and Files With Others
As Ethernet networking technology has improved, the benefits of expensive SAN (storage area network) solutions over NAS (network attached storage) solutions have diminished. 10 Gigabit Ethernet (10GbE) transfers over 1GB of data per second and is relatively cheap to implement. NAS has the benefit of a single host system controlling the writes, usually with software included in the OS. This prevents data corruption and also isolates the client devices from the file system, allowing PC, Mac and Linux devices to all access the same files. This comes at the cost of slightly increased latency and occasionally lower total bandwidth, but the prices and complexity of installation are far lower.

So now all but the largest facilities and most demanding workflows are being deployed with NAS-based shared storage solutions. This can be as simple as a main editing system with a large direct attached array sharing its media with an assistant station over a direct 10GbE link, for about $50. This can be scaled up by adding a switch and connecting more users to it, but the more users sharing the data, the greater the impact on the host system and the lower the overall performance. Beyond three or four users, it becomes prudent to have a dedicated host system for the storage, for both performance and stability. Once you are buying a dedicated system, there are a variety of other functionalities offered by different vendors to improve performance and collaboration.

Bin Locking and Simultaneous Access
The main step to improve collaboration is to implement what is usually referred to as a “bin locking system.” Even with a top-end SAN solution and strict permissions controls there is still the possibility of users overwriting each other’s work, or at the very least branching the project into two versions that can’t easily be reconciled.

If two people are working on the same sequence at the same time, only one of their sets of changes is going to make it to the master copy of the file without some way of combining the changes (and solutions are being developed). But usually the way to avoid that is to break projects down into smaller pieces and make sure that no two people are ever working on the exact same part. This is accomplished by locking the part (or bin) of the project that a user is editing so that no one else may edit it at the same time. This usually requires some level of server functionality because it involves changes that are not always happening at the local machine.

Avid requires specific support from the storage host in order to enable that feature. Adobe, on the other hand, has implemented a simpler storage-based solution, which is effective but not infallible, and works on any shared storage device that offers users write access.

A Note on iSCSI
iSCSI arrays offer some interesting possibilities for read-only data, like source footage, as iSCSI gives block-level access for maximum performance and runs on any network without expensive software. The only limit is that only one system can copy new media to the volume, and there must be a secure way to ensure the remaining systems have read-only access. Projects and exports must be stored elsewhere, but those files require much less capacity and bandwidth than source media. I have not had the opportunity to test out this hybrid SAN theory since I don’t have iSCSI appliances to test with.

A Note on Top-End Ethernet Options
40Gb Ethernet products have been available for a while, and we are now seeing 25Gb and 100Gb Ethernet products as well. 40Gb cards can be had quite cheaply, and I was tempted to use them for direct connections, hoping to see 4GB/s when sharing fast SSDs between systems. But 40Gb Ethernet is actually a trunk of four parallel 10Gb links, and each individual connection is limited to 10Gb. It is easy to share the 40Gb of aggregate bandwidth across 10 systems accessing a 40Gb storage host, but very challenging to get more than 10Gb to a single client system. Having extra lanes on the highway doesn't get you to work any faster if there are no other cars on the road; it only helps when there is lots of competing traffic.

25Gb Ethernet on the other hand will give you access to nearly 3GB/s for single connections, but as that is newer technology, the prices haven’t come down yet ($500 instead of $50 for a 10GbE direct link). 100Gb Ethernet is four 25Gb links trunked together, and subject to the same aggregate limitations as 40Gb.

Main image: Courtesy of Sugar Studios LA


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Adobe Max 2018: Creative Cloud updates and more

By Mike McCarthy

I attended my first Adobe Max last week in Los Angeles. This huge conference takes over the LA Convention Center and overflows into the surrounding venues. It began on Monday morning with a two-and-a-half-hour keynote outlining the developments and features being released in the newest updates to Adobe's Creative Cloud. This was followed by all sorts of smaller sessions and training labs for attendees to dig deeper into the new capabilities of the various tools and applications.

The South Hall was filled with booths from various hardware and software partners, with more available than any one person could possibly take in. Tuesday started off with some early morning hands-on labs, followed by a second keynote presentation about creative and career development. I got a front row seat to hear five different people, who are successful in their creative fields — including director Ron Howard — discuss their approach to work and life. The rest of the day was so packed with various briefings, meetings and interviews that I didn’t get to actually attend any of the classroom sessions.

By Wednesday, the event was beginning to wind down, but there was still a plethora of sessions and other options for attendees to split their time. I presented the workflow for my most recent project Grounds of Freedom at Nvidia’s booth in the community pavilion, and spent the rest of the time connecting with other hardware and software partners who had a presence there.

Adobe released updates for most of its creative applications concurrent with the event. Many of the most relevant updates to the video tools were previously announced at IBC in Amsterdam last month, so I won’t repeat those, but there are still a few new video ones, as well as many that are broader in scope in regards to media as a whole.

Adobe Premiere Rush
The biggest video-centric announcement is Adobe Premiere Rush, which offers simplified video editing workflows for mobile devices and PCs.  Currently releasing on iOS and Windows, with Android to follow in the future, it is a cloud-enabled application, with the option to offload much of the processing from the user device. Rush projects can be moved into Premiere Pro for finishing once you are back on the desktop.  It will also integrate with Team Projects for greater collaboration in larger organizations. It is free to start using, but most functionality will be limited to subscription users.

Let’s keep in mind that I am a finishing editor for feature films, so my first question (as a Razr-M user) was, “Who wants to edit video on their phone?” But what if the user shot the video on their phone? I don’t do that, but many people do, so I know this will be a valuable tool. This has me thinking about my own mentality toward video. I think if I was a sculptor I would be sculpting stone, while many people are sculpting with clay or silly putty. Because of that I would have trouble sculpting in clay and see little value in tools that are only able to sculpt clay. But there is probably benefit to being well versed in both.

I would have no trouble showing my son's first-year video compilation to a prospective employer because it is just that good — I don't make anything less than that. But there was no second-year video, even though I have the footage, because that level of work takes way too much time. So I need to break free from that mentality and get better at producing content that is "sufficient to tell a story" without being "technically and artistically flawless." Learning to use Adobe Rush might be a good way for me to take a step in that direction. As a result, we may eventually see more videos in my articles as well. The current ones took me way too long to produce, but Adobe Rush should allow me to create content in a much shorter timeframe, if I am willing to compromise a bit on the precision and control offered by Premiere Pro and After Effects.

Rush allows up to four layers of video, with various effects and 32-bit Lumetri color controls, as well as AI-based audio filtering for noise reduction and de-reverb and lots of preset motion graphics templates for titling and such.  It should allow simple videos to be edited relatively easily, with good looking results, then shared directly to YouTube, Facebook and other platforms. While it doesn’t fit into my current workflow, I may need to create an entirely new “flow” for my personal videos. This seems like an interesting place to start, once they release an Android version and I get a new phone.

Photoshop Updates
There is a new version of Photoshop released nearly every year, and most of the time I can't tell the difference between the new and the old. This year's differences will probably be a lot more apparent to most users after a few minutes of use. The Undo command now works like it does in other apps, instead of being limited to toggling the last action. Transform operates very differently, in that proportional transform is now the default behavior instead of requiring users to hold Shift every time they scale. The anchor point can be hidden to prevent people from moving the anchor instead of the image, and the "commit changes" step at the end has been removed. These are all positive improvements, in my opinion, that might take a bit of getting used to for seasoned pros.

There is also a new Framing Tool, which allows you to scale or crop any layer to a defined resolution. Maybe I am the only one, but I frequently find myself creating new documents in Photoshop just so I can drag a new layer, preset to the resolution I need, back into my current document. For example, I need a 200x300px box in the middle of my HD frame — how else do you do that currently? This Framing Tool should fill that hole in the feature set, offering more precise control over layer and object sizes and positions (as well as providing easily adjustable, non-destructive masking).

They also showed off a very impressive AI-based auto selection of the subject or background. It creates a standard selection that can be manually modified anywhere the initial attempt didn't give you what you were looking for. Being someone who gives software demos, I don't trust prepared demonstrations, so I wanted to try it for myself with a real-world asset. I opened up one of my source photos from my animation project, clicked the "Select Subject" button with no further input and got a usable result. It needed some cleanup at the bottom, and refinement in the newly revamped "Select & Mask" tool, but this is a huge improvement over what I had to do on hundreds of layers earlier this year. They also demonstrated a similar feature they are working on for video footage in Tuesday night's Sneak previews. Named "Project Fast Mask," it automatically propagates masks of moving objects through video frames and, while not released yet, it looks promising. Combined with the content-aware background fill for video that Jason Levine demonstrated in After Effects during the opening keynote, basic VFX work is going to get a lot easier.

There are also some smaller changes to the UI, allowing math expressions in the numerical value fields and making it easier to differentiate similarly named layers by showing both the beginning and end of a name if it gets abbreviated. They also added a function to distribute layers spatially based on the space between them, which accounts for their varying sizes, compared to the current solution, which just distributes evenly based on each layer's reference anchor point.

In other news, Photoshop is coming to iPad, and while that doesn’t affect me personally, I can see how this could be a big deal for some people. They have offered various trimmed down Photoshop editing applications for iOS in the past, but this new release is supposed to be based on the same underlying code as the desktop version and will eventually replicate all functionality, once they finish adapting the UI for touchscreens.

New Apps
Adobe also showed off Project Gemini, a sketching and painting tool for iPad that sits somewhere between Photoshop and Illustrator (hence the name, I assume). This doesn't have much direct application to video workflows, besides being able to record time-lapses of a sketch, which should make it easier to create those "whiteboard illustration" videos that are becoming more popular.

Project Aero is a tool for creating AR experiences, and I can envision Premiere and After Effects being critical pieces in the puzzle for creating the visual assets that Aero will be placing into the augmented reality space.  This one is the hardest for me to fully conceptualize. I know Adobe is creating a lot of supporting infrastructure behind the scenes to enable the delivery of AR content in the future, but I haven’t yet been able to wrap my mind around a vision of what that future will be like.  VR I get, but AR is more complicated because of its interface with the real world and due to the variety of forms in which it can be experienced by users.  Similar to how web design is complicated by the need to support people on various browsers and cell phones, AR needs to support a variety of use cases and delivery platforms.  But Adobe is working on the tools to make that a reality, and Project Aero is the first public step in that larger process.

Community Pavilion
Adobe's partner companies in the Community Pavilion were showing off a number of new products. Dell has a new 49-inch IPS monitor, the U4919DW, which offers basically the resolution and desktop space of two 27-inch QHD displays without the seam (5120x1440, to be exact). HP was displaying its recently released ZBook Studio x360 convertible laptop workstation (which I will be posting a review of soon), as well as its ZBook x2 tablet and the rest of its Z workstations. Nvidia was exhibiting its new Turing-based cards with 8K Red decoding acceleration, ray tracing in Adobe Dimension and other GPU-accelerated tasks. AMD was demoing 4K Red playback on a MacBook Pro with an eGPU solution, and CPU-based ray tracing on its Ryzen systems. The other booths spanned the gamut from GoPro cameras and server storage devices to paper stock products for designers. I even won a Thunderbolt 3 docking station at Intel's booth. (Although in the next drawing they gave away a brand-new Dell Precision 5530 2-in-1 convertible laptop workstation.) Microsoft also garnered quite a bit of attention when it gave away 30 MS Surface tablets near the end of the show. There was lots to see and learn everywhere I looked.

The Significance of MAX
Adobe MAX is quite a significant event, especially now that I have been in the industry long enough to start to see the evolution of certain trends — things are not as static as we may expect. I have attended NAB for the last 12 years, and the focus of that show has shifted significantly away from my primary professional focus (no Red, Nvidia or Apple booths, among many other changes). This was the first year that I had the thought, "I should have gone to Sundance," and a number of other people I know had the same impression. Adobe Max is similar, although I have been a little slower to catch on to that change. It has been happening for over ten years, but the event has grown dramatically in size and significance recently. If I still lived in LA, I probably would have started attending sooner, but it was hardly on my radar until three weeks ago. Now that I have seen it in person, I probably won't miss it in the future.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

The benefits of LTO

By Mike McCarthy

LTO stands for Linear Tape-Open and was initially developed nearly 20 years ago as an "open" format technology that can be manufactured by any vendor that licenses it. It records digital files onto half-inch magnetic tape stored in square, single-reel cartridges. The capacity started at 100GB and has roughly doubled nearly every generation; the most recent LTO-8 cartridges store 12TB of uncompressed data.

If you want to find out more about LTO, you should check out the LTO Consortium, which is made up of Hewlett Packard Enterprise, IBM and Quantum, although other companies also make LTO drives and tape cartridges. You might be familiar with their LTO Ultrium logo.

‘Tapeless’ Workflows
While LTO initially targeted server markets, the introduction of "tapeless workflows" in the media and entertainment industry created a need for long-term media storage. Since the first P2 cards and SxS sticks were too expensive for single write operations, they were designed to be reused repeatedly once their contents had been offloaded to hard drives. But hard drives are not ideal for long-term data storage, and insurance and bonding companies wanted their clients to have alternate data archiving solutions.

So, by the time the Red One and Canon 5D were flooding post facilities with CF cards, LTO had become the default archive solution for most high-budget productions. But this approach was not without limitations and pitfalls. The LTO archiving solutions being marketed at the time were designed around the Linux-based Tar system of storing files, while most media work is done on Windows and Mac OS X. Various approaches were taken by different storage vendors to provide LTO capabilities to M&E customers. Some were network appliances running Linux under the hood, while others wrote drivers and software to access the media from OS X or, in one case, Windows. Then there was the issue that Tar isn't a self-describing file system, so you needed a separate application to keep track of what was on each tape in your library. All of these aspects cost lots of money, so the initial investment was steep, even though the marginal cost of tape cartridges was the cheapest way to store data per GB.

LTFS
Linear Tape File System (LTFS) was first introduced with LTO-5 and was intended to make LTO tapes easier to use and interchange between systems. A separate partition on the tape stores the index of data in XML and other associated metadata. It was intended to be platform independent, although it took a while for reliable drivers and software to be developed for use in Windows and OS X.

At this point, LTFS-formatted tapes in LTO tape drives operate very similarly to old 3.5-inch floppy drives. You insert a cartridge, it makes some funny noises, and then after a minute it asks you to format a new tape, or it displays the current contents of the tape as a new drive letter. If you drag files into that drive, it will start copying the data to the tape, and you can hear it grinding away. The biggest difference is when you hit eject it will take the computer a minute or two to rewind the tape, write the updated index to the first partition and then eject the cartridge for you. Otherwise it is a seamless drag and drop, just like any other removable data storage device.

LTO Drives
All you need in order to use LTO in your media workflow — for archive or data transfer — is an LTO drive. I bought one last year on Amazon for $1,600, which was a bit of a risk considering that I didn’t know if I was going to be able to get it to work on my Windows 7 desktop. As far as I know, all tape drives are SAS devices, although you can buy ones that have adapted the SAS interface to Thunderbolt or Fibre Channel.

Most professional workstations have integrated SAS controllers, so internal LTO drives fit into a 5.25-inch bay and can connect to those, or to any SAS card. External LTO drives usually use Small Form Factor (SFF-8088) cables to connect to the host device. Internal SAS ports can be easily adapted to SFF-8088 ports, or a dedicated external SAS PCIe card can be installed in the system.

Capacity & Compression
How much data do LTO tapes hold? That depends on the generation… and the compression options. The higher capacity advertised on any LTO product assumes a significant level of data compression, which may be achievable with uncompressed media files (DPX, TIFF, ARRI, etc.). The lower value advertised is the uncompressed data capacity, which is the more accurate estimate of how much data it will store. This level of compression is achieved using two different approaches: eliminating redundant data segments and eliminating the space between files. LTO was originally designed for backing up lots of tiny files on data servers, like credit card transactions or text data, and those compression approaches don't always apply well to the large continuous blocks of unique data found in encoded video.

Using data compression on media files which are already stored in a compressed codec doesn’t save much space (there is little redundancy in the data, and few gaps between individual files).

Uncompressed frame sequences, on the other hand, can definitely benefit from LTO’s hardware data compression. Regardless of compression, I wouldn’t count on using the full capacity of each cartridge. Due to the way the drives are formatted, and the way storage vendors measure data, I have only been able to copy 2.2TB of data from Windows onto my 2.5TB LTO-6 cartridges. So keep that in mind when estimating real-world capacity, like with any other data storage medium.

Choosing the ‘Right’ Version to Use
So which generation of LTO is the best option? That depends on how much data you are trying to store. Since most media files that need to be archived these days are compressed, either as camera source footage or final deliverables, I will be calculating based on the uncompressed capacities. VFX houses using DPX frames, or vendors using DCDMs might benefit from calculating based on the compressed capacities.

Prices are always changing, especially for the drives, but these are the numbers as of summer 2018. On the lowest end, we have LTO-5 drives available online for $600-$800, which will probably store 1-1.2TB of data on a $15 tape. So if you have less than 10TB of data to back up at a time, that might be a cost-effective option. Any version lower than LTO-5 doesn't support the partitioning required for LTFS, and is too small to be useful in modern workflows anyway.

As I mentioned earlier, I spent $1,600 on an LTO-6 drive last year, and while that price is still about the same, LTO-7 and LTO-8 drives have come down in cost since then. My LTO-6 drive stores about 2.2TB of data per $23 tape. That allowed me to back up 40TB of Red footage onto 20 tapes in 90 hours, or an entire week. Now I am looking at using the same drive to ingest 250TB of footage from a production in China, but that would take well over a month, so LTO-6 is not the right solution for that project. But the finished deliverables will probably be a similar 10TB set of DPX and TIFF files, so LTO-6 will still be relevant for that application.

I see prices as low as $2,200 for LTO-7 drives, so they aren’t much more expensive than LTO-6 drives at this point, but the 6TB tapes are. LTO-7 switched to a different tape material, which increased the price of the media. At $63 they are just over $10 per TB, but that is higher than the two previous generations.

LTO-8 drives are available for as low as $2,600 and store up to 12TB on a single $160 tape. LTO-8 drives can also write up to 9TB onto a properly formatted LTO-7 tape, in a system called "LTO-7 Type M." This is probably the cheapest cost-per-TB approach at the moment, since 9TB on a $63 tape is $7/TB.
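
To compare generations on an equal footing, here is a small Python sketch of the cost-per-TB math, using the summer 2018 tape prices and real-world capacities quoted above, plus the effective throughput implied by my 40TB-in-90-hours LTO-6 backup; the dictionary layout and function names are just for illustration.

```python
def cost_per_tb(tape_price_usd, usable_tb):
    """Media cost per terabyte of stored data for a given tape option."""
    return tape_price_usd / usable_tb

# Tape prices and usable capacities quoted above (summer 2018 figures)
tape_options = {
    "LTO-6":        (23, 2.2),    # ~$23 tape, ~2.2TB real-world capacity
    "LTO-7":        (63, 6.0),    # ~$63 tape, 6TB
    "LTO-7 Type M": (63, 9.0),    # LTO-7 tape formatted for 9TB in an LTO-8 drive
    "LTO-8":        (160, 12.0),  # ~$160 tape, 12TB
}
for name, (price, tb) in tape_options.items():
    print(f"{name}: ${cost_per_tb(price, tb):.2f}/TB")

# Effective throughput implied by backing up 40TB onto LTO-6 in 90 hours
effective_MBps = 40_000_000 / (90 * 3600)
print(f"~{effective_MBps:.0f} MB/s sustained to tape")  # roughly 123 MB/s
```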

Compatibility Between Generations
One other consideration is backwards compatibility. What will it take to read your tapes back in the future? The standard for LTO has been that drives can write the previous generation tapes and read tapes from two generations back.

So if you invested in an LTO-2 drive and have tons of tapes, they will still work when you upgrade to an LTO-4 drive. You can then copy them to newer cartridges with the same hardware at a 4:1 ratio since the capacity will have doubled twice. The designers probably figured that after two generations (about five years) most data will have been restored at some point, or be irrelevant (the difference between backups and archives).

If you need your media archived longer than that, it would probably be wise to transfer it to fresh media of a newer generation to ensure it is readable in the future. The other issue is transfer if you are using LTO cartridges to move data from one place to another. You must use the same generation of tape and be within one generation to go both ways. If I want to send data to someone who has an LTO-5 drive, I have to use an LTO-5 tape, but I can copy the data to the tape with my LTO-6 drive (and be subject to the LTO-5 capacity and performance limits). If they then sent that LTO-5 tape to someone with an LTO-7 drive, they would be able to read the data, but wouldn’t be able to write to the tape. The only exception to this is that the LTO-8 drives won’t read LTO-6 tapes (of course, because I have a bunch of LTO-6 tapes now, right?).
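
Those interchange rules are easy to trip over, so here is a hedged Python sketch of them as described in the paragraphs above (read back two generations, write back one, with the LTO-8/LTO-6 exception); it is just a summary for illustration, not an official compatibility matrix, and the function names are my own.

```python
def can_read(drive_gen, tape_gen):
    """An LTO drive can generally read tapes up to two generations older,
    except that LTO-8 drives cannot read LTO-6 tapes."""
    if drive_gen == 8 and tape_gen == 6:
        return False
    return 0 <= drive_gen - tape_gen <= 2

def can_write(drive_gen, tape_gen):
    """Drives can write their own generation of tape and the previous one."""
    return 0 <= drive_gen - tape_gen <= 1

print(can_write(6, 5), can_read(7, 5), can_write(7, 5))  # True True False
print(can_read(8, 6))                                    # False (the noted exception)
```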

So for my next 250TB project, I have to choose between a new LTO-7 drive with backwards compatibility to my existing gear or an LTO-8 drive that can fit 50% more data on a $63 cartridge, and use the more expensive 12TB ones as well. Owning both LTO-6 and LTO-8 drives would allow me to read or write to any LTFS cartridge (until LTO-9 is released), but the two drives couldn’t exchange tapes with each other.

Automated Backup Software & Media Management
I have just been using HPE’s free StoreOpen Utility to operate my internal LTO drive and track what files I copy to which tapes. There are obviously much more expensive LTO-based products, both in hardware with robotic tape libraries and in software with media and asset management programs and automated file backup solutions.

I am really just exploring the minimum investment that needs to be made to take advantage of the benefits of LTO tape, for manually archiving your media files and backing up your projects. The possibilities are endless, but the threshold to start using LTO is much lower than it used to be, especially with the release of LTFS support.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Review: HP DreamColor Z31x studio display for cinema 4K

By Mike McCarthy

Not long ago, HP sent me their newest high-end monitor to review, and I was eager to dig in. The DreamColor Z31x studio display is a 31-inch true 4K color-critical reference monitor. It has many new features that set it apart from its predecessors, which I have examined and will present here in as much depth as I can.

It is challenging to communicate the nuances of color quality through writing or any other form on the Internet, as some things can only be truly appreciated firsthand. But I will attempt to communicate the experience of using the new DreamColor as best I can.

First, we will start with a little context…

Some DreamColor History
HP revolutionized the world of color-critical displays with the release of the first DreamColor in June 2008. The LP2480zx was a 24-inch 1920×1200 display that had built-in color processing with profiles for standard color spaces and the ability to calibrate it to refine those profiles as the monitor aged. It was not the first display with any of these capabilities, but the first one that was affordable, by at least an order of magnitude.

It became very popular in the film industry, both sitting on desks in post facilities — as it was designed — and out in the field as a live camera monitor, which it was not designed for. It had a true 10-bit IPS panel and the ability to reproduce incredible detail in the darks. It could only display 10-bit sources from the brand-new DisplayPort input or the HDMI port, and the color gamut remapping only worked for non-interlaced RGB sources.

So many people using the DreamColor as a “video monitor” instead of a “computer monitor” weren’t even using the color engine — they were just taking advantage of the high-quality panel. It wasn’t just the color engine but the whole package, including the price, that led to its overwhelming success. This was helped by the lack of better options, even at much higher price points, since this was the period after CRT production ended but before OLED panels had reached the market. This was similar to (and in the same timeframe as) Canon’s 5D Mark II HDSLR revolutionizing the world of independent filmmaking. The combination gave content creators amazing tools for moving into HD production at affordable price points.

It took six years for HP to release an update to the original model DreamColor in the form of the Z27x and Z24x. These had the same color engine but different panel technology. They never had the same impact on the industry as the original, because the panels didn’t “wow” people, and the competition was starting to catch up. Dell has PremierColor and Samsung and BenQ have models featuring color accuracy as well. The Z27x could display 4K sources by scaling them to its native 2560×1440 resolution, while the Z24x’s resolution was decreased to 1920×1080 with a panel that was even less impressive.

Fast forward a few more years, and the Z24x was updated to Gen2, and the Z32x was released with UHD resolution. This was four times the resolution of the original DreamColor and at half the price. But with lots of competition in the market, I don’t think it has had the reach of the original DreamColor, and the industry has matured to the point where people aren’t hooking them up to 4K cameras, because there are other options better suited to that environment, specifically battery-powered OLED units.

DreamColor at 4K
Fast forward a bit and HP has released the Z31x DreamColor studio display. The big feature that this unit brings to the table is true cinema 4K resolution. The label 4K gets thrown around a lot these days, but most “4K” products are actually UHD resolution, at 3840×2160, instead of the full 4096×2160. This means that true 4K content is scaled to fit the UHD screen, or in the case of Sony TVs, cropped off the sides. When doing color-critical work, you need to be able to see every pixel, with no scaling, which could hide issues. So the Z31x’s 4096×2160 native resolution will be an important feature for anyone working on modern feature films, from editing and VFX to grading and QC.

The 10-bit 4K Panel
The true 10-bit IPS panel is the cornerstone of what makes a DreamColor such a good monitor. IPS monitor prices have fallen dramatically since they were first introduced over a decade ago, and some of that is the natural progression of technology, but some of that has come at the expense of quality. Most displays offering 10-bit color are accomplishing that by flickering the pixels of an 8-bit panel in an attempt to fill in the remaining gradations with a technique called frame rate control (FRC). And cheaper panels are as low as 6-bit color with FRC to make them close to 8-bit. There are a variety of other ways to reduce cost as well, such as cheaper materials and lower-quality backlights.
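The difference those bit depths make is easy to quantify; FRC dithering is an attempt to fake the missing steps by rapidly alternating between the levels a panel can actually produce. A quick sketch of the tonal gradations at each bit depth:

```python
# Tonal gradations per channel at common panel bit depths.
for bits in (6, 8, 10):
    levels = 2 ** bits
    print(f"{bits:>2}-bit: {levels:>4} levels per channel, "
          f"{levels ** 3:,} total colors")
# 6-bit:   64 levels (~262 thousand colors)
# 8-bit:  256 levels (~16.7 million colors)
# 10-bit: 1024 levels (~1.07 billion colors)
```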

HP claims that the underlying architecture of this panel returns to the quality of the original IPS panel designs, but then adds the technological advances developed since then, without cutting any corners in the process. In order to fully take advantage of the 10-bit panel, you need to feed it 10-bit source content, which is easier than it used to be but not a foregone conclusion. Make sure you select 10-bit output color in your GPU settings.

In addition to a true 10-bit color display, it also natively refreshes at the rate of the source image, from 48Hz to 60Hz, because displaying every frame at the right time is as important as displaying it in the right color. They say that the darker blacks are achieved by better crystal alignment in the LCD (Liquid Crystal Display) blocking out the backlight more fully. This also gives a wider viewing angle, since washing out the blacks is usually the main issue with off-axis viewing. I can move about 45 degrees off center, vertically or horizontally, without seeing any shift in the picture brightness or color. Past that I start to see the mid levels getting darker.

Speaking of brighter and darker, the backlight gives the display a native brightness of 250 nits. That is over twice the brightness needed to display SDR content, but this is not an HDR display. It can be adjusted anywhere from 48 to 250 nits, depending on the usage requirements and environment. It is not designed to be the brightest display available; it is aiming to be the most accurate.

Much effort was put into the front surface to get the proper balance of reducing glare and reflections as much as possible. I can’t independently verify some of their other claims without a microscope and more knowledge than I currently have, but I can easily see that the matte surface produces fewer reflections and less glare from the surrounding environment than other monitors, allowing you to better see the image on the screen. That is one of the most apparent strengths of the monitor, obviously visible at first glance.

Color Calibration
The other new headline feature is an integrated colorimeter for display calibration and verification, located in the top of the bezel. It can swing down and measure the color parameters of the true 10-bit IPS panel, to adjust the color space profiles, allowing the monitor to more accurately reproduce colors. This is a fully automatic feature, independent of any software or configuration on the host computer system. It can be controlled from the display’s menu interface, and the settings will persist between multiple systems. This can be used to create new color profiles, or optimize the included ones for DCI P3, BT.709, BT.2020, sRGB and Adobe RGB. It also includes some low-blue-light modes for use as an interface monitor, but this negates its color-accurate functionality. It can also input and output color profiles and all other configuration settings through USB and its network connection.

The integrated color processor also supports using external colorimeters and spectroradiometers to calibrate the display, and even allows the integrated XYZ colorimeter itself to be calibrated by those external devices. And this is all accomplished internally in the display, independent of using any software on the workstation side. The supported external devices currently include:
– Klein Instruments: K10, K10-A (colorimeters)
– Photo Research: PR-655, PR-670, PR-680, PR-730, PR-740, PR-788 (spectroradiometers)
– Konica Minolta: CA-310 (colorimeter)
– X-Rite: i1Pro 2 (spectrophotometer), i1Display (colorimeter)
– Colorimetry Research: CR-250 (spectroradiometer)

Inputs and Ports
There are five main display inputs on the monitor: two DisplayPort 1.2, two HDMI 2.0 and one DisplayPort over USB-C. All support HDCP and full 4K resolution at up to 60 frames per second. It also has a 1/8-inch sound jack and a variety of USB options. There are four USB 3.0 ports that are shared via KVM switching technology between the USB-C host connection and a separate USB-B port to a host system. The KVM switching is controlled by another dedicated USB keyboard port, giving the monitor direct access to the keystrokes. There are two more USB ports that connect to the integrated DreamColor hardware engine, for connecting external calibration instruments, and for loading settings from USB devices.

My only complaint is that while the many USB ports are well labeled, the video ports are not. I can tell which ports are HDMI without labels, but what I really need to know is which one the display views as HDMI1 and which is HDMI2. The Video Input Menu doesn’t tell you which inputs are active, which is another oversight, given all of the other features they added to ease the process of sharing the display between multiple inputs. So I recommend labeling them yourself.

Full-Screen Monitoring Features
I expect the Z31x will most frequently be used as a dedicated full-resolution playback monitor, and HP has developed a bunch of new features that are very useful and applicable for that use case. The Z31x can overlay mattes (with variable opacity) for Flat and Scope cinema aspect ratios (1.85 and 2.39). It also can display onscreen markers for those sizes, as well as 16×9 or 4×3, including action and title safe, with further options for center and thirds markers in various colors. The markers can be further customized with HP’s StudioCal.XML files. I created a preset that gives you 2.76:1 aspect ratio markers that you are welcome to download and use or modify. These customized XMLs are easy to create and are loaded automatically when you insert a USB stick containing them into the color engine port.
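For anyone curious what a marker like that works out to geometrically, the arithmetic is straightforward. This sketch only computes the letterbox geometry on the Z31x’s 4096×2160 panel; the actual marker definition lives in HP’s StudioCal.XML format, whose schema I won’t reproduce here.

```python
# Letterbox geometry for aspect-ratio markers on the Z31x's 4096x2160 panel.
def marker_geometry(panel_w, panel_h, aspect):
    """Image height and matte bar size for a marker spanning the full panel width."""
    image_h = round(panel_w / aspect)
    bar = (panel_h - image_h) // 2
    return image_h, bar

# Ratios wider than the panel's native 1.896:1 are matted top and bottom;
# narrower ratios (like 1.85 Flat or 4x3) would be matted on the sides instead.
for aspect in (2.39, 2.76):
    image_h, bar = marker_geometry(4096, 2160, aspect)
    print(f"{aspect}:1 -> {image_h} active rows, ~{bar}-row mattes top and bottom")
# 2.76:1 works out to roughly 1484 active rows with ~338-row mattes.
```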

The display also gives users full control over the picture scaling, and has a unique 2:1 pixel scaling for reviewing 2K and HD images at pixel-for-pixel accuracy. It also offers compensation for video levels and overscan and controls for de-interlacing, cadence detection, panel overdrive and blue-channel-only output. You can even control the function of each bezel button, and their color and brightness. These image control features will definitely be significant to professional users in the film and video space. Combined with the accurate reproduction of color, resolution and frame rate, this makes for an ideal display for monitoring nearly any film or video content at the highest level of precision.

Interface Display Features
Most people won’t be using this as an interface monitor, due to the price and because the existing Z32x should suffice when not dealing with film content at full resolution. Even more than the original DreamColor, I expect it will primarily be used as a dedicated full-screen playback monitor and users will have other displays for their user interface and controls. That said, HP has included some amazing interface and sharing functionality in the monitor, integrating a KVM switch for controlling two systems on any of the five available inputs. They also have picture-in-picture and split screen modes that are both usable and useful. HD or 2K input can be displayed at full resolution over any corner of the 4K master shot.

The split view supports two full-resolution 2048×2160 inputs side by side and from separate sources. That resolution has been added as a default preset for the OS to use in that mode, but it is probably only worth configuring for extended use. (You won’t be flipping between full screen and split very easily in that mode.) The integrated KVM is even more useful in these configurations. It can also scale any other input sizes in either mode but at a decrease in visual fidelity.

HP has included every option that I could imagine needing for sharing a display between two systems. The only problem is that I need that functionality on my “other” monitor for the application UI, not on my color-critical review monitor. When sharing a monitor like this, I would just want to be able to switch between inputs easily to always view them at full screen and full resolution. On a related note, I would recommend using DisplayPort over HDMI anytime you have a choice between the two, as HDMI 2.0 is pickier about 18Gb cables, which can prevent you from sending RGB input and cause other issues.

Other Functionality
The monitor has an RJ-45 port allowing it to be configured over the network. Normally, I would consider this to be overkill, but with so many features to control and so many sub-menus to navigate through, this is actually more useful than it would be on any other display. I found myself wishing it came with a remote control as I was doing my various tests, until I realized the network configuration options would offer even better functionality than a remote control would have. I should have configured that feature first, as it would have made the rest of the tests much easier to execute. It offers simple HTTP access to the controls, with a variety of security options.

I also had some issues when using the monitor on a switched power outlet on my SmartUPS battery backup system, so I would recommend using an un-switched outlet whenever possible. The display will go to sleep automatically when the source feed is shut off, so power saving should be less of an issue than with other peripherals.

Pricing and Options
The DreamColor Z31x is expected to retail for $4,000 in the US market. If that is a bit out of your price range, the other option is the new Z27x G2 for half of that price. While I have not tested it myself, I have been assured that the newly updated 27-inch model has all of the same processing functionality, just in a smaller form-factor, with a lower-resolution panel. The 2560×1440 panel is still 10-bit, with all of the same color and frame rate options, just at a lower resolution. They even plan to support scaling 4K inputs in the next firmware update, similar to the original Z27x.

The new DreamColor studio displays are top-quality monitors, and probably the most accurate SDR monitors in their price range. It is worth noting that with a native brightness of 250 nits, this is not an HDR display. While HDR is an important consideration when selecting a forward-looking display solution, there is still a need for accurate monitoring in SDR, regardless of whether your content is HDR compatible. And the Z31x would be my first choice for monitoring full 4K images in SDR, regardless of the color space you are working in.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Testing large format camera workflows

By Mike McCarthy

In the last few months, we have seen the release of the Red Monstro, Sony Venice, Arri Alexa LF and Canon C700 FF, all of which have larger or full-frame sensors. Full frame refers to the DSLR terminology, with full frame being equivalent to the entire 35mm film area — the way that it was used horizontally in still cameras. All SLRs used to be full frame with 35mm film, so there was no need for the term until manufacturers started saving money on digital image sensors by making them smaller than 35mm film exposures. Super35mm motion picture cameras on the other hand ran the film vertically, resulting in a smaller exposure area per frame, but this was still much larger than most video imagers until the last decade, with 2/3-inch chips being considered premium imagers. The options have grown a lot since then.

L-R: 1st AC Ben Brady, DP Michael Svitak and Mike McCarthy on the monitor.

Most of the top-end cinema cameras released over the last few years have advertised their Super35mm sensors as a huge selling point, as that allows use of any existing S35 lens on the camera. These S35 cameras include the Epic, Helium and Gemini from Red, Sony’s F5 and F55, Panasonic’s VaricamLT, Arri’s Alexa and Canon’s C100-500. On the top end, 65mm cameras like the Alexa65 have sensors twice as wide as Super35 cameras, but very limited lens options to cover a sensor that large. Full frame falls somewhere in between and allows, among other things, use of any 35mm still film lenses. In the world of film, this was referred to as Vista Vision, but the first widely used full-frame digital video camera was Canon’s 5D MkII, the first serious HDSLR. That format has suddenly surged in popularity, and thanks to this I recently had the opportunity to be involved in a test shoot with a number of these new cameras.

Keslow Camera was generous enough to give DP Michael Svitak and myself access to pretty much all their full-frame cameras and lenses for the day in order to test the cameras, workflows and lens options for this new format. We also had the assistance of first AC Ben Brady to help us put all that gear to use, and Mike’s daughter Florendia as our model.

First off was the Red Monstro, which, while technically not the full 24mm height of true full frame, uses the same size lenses due to the width of its 17×9 sensor. It offers the highest resolution of the group at 8K. It records compressed RAW to R3D files, with options for ProRes and DNxHR up to 4K, all saved to Red mags. Like the rest of the group, smaller portions of the sensor can be used at lower resolution to pair with smaller lenses. The Red Helium sensor has the same resolution but in a much smaller Super35 size, allowing a wider selection of lenses to be used. But larger pixels allow more light sensitivity, with individual pixels up to 5 microns wide on the Monstro and Dragon, compared to Helium’s 3.65-micron pixels.

Next up was Sony’s new Venice camera with a 6K full-frame sensor, allowing 4K S35 recording as well. It records XAVC to SxS cards or compressed RAW in the X-OCN format with the optional AXS-R7 external recorder, which we used. It is worth noting that both full-frame recording and integrated anamorphic support require additional special licenses from Sony, but Keslow provided us with a camera that had all of that functionality enabled. With a 36x24mm 6K sensor, the pixels are 5.9 microns, and footage shot at 4K in the S35 mode should be similar to shooting with the F55.

We unexpectedly had the opportunity to shoot on Arri’s new AlexaLF (Large Format) camera. At 4.5K, this had the lowest resolution, but that also means the largest sensor pixels at 8.25 microns, which can increase sensitivity. It records ArriRaw or ProRes to Codex XR capture drives with its integrated recorder.

Another new option is the Canon C700 FF with a 5.9K full-frame sensor recording RAW, ProRes, or XAVC to CFast cards or Codex Drives. That gives it 6-micron pixels, similar to the Sony Venice. But we did not have the opportunity to test that camera this time around; maybe in the future.
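The pixel-pitch figures quoted for each camera fall straight out of dividing the sensor width by the horizontal photosite count. Here is a rough sketch; the sensor widths and photosite counts are approximate, published ballpark figures rather than anything measured, so treat the output as illustrative.

```python
# Approximate pixel pitch = sensor width / horizontal photosite count.
# Widths and photosite counts are ballpark published figures, for illustration only.
sensors = {
    "Red Monstro (8K full frame)": {"width_mm": 40.96, "h_pixels": 8192},
    "Red Helium (8K Super35)":     {"width_mm": 29.90, "h_pixels": 8192},
    "Sony Venice (6K full frame)": {"width_mm": 36.00, "h_pixels": 6048},
    "Arri Alexa LF (4.5K)":        {"width_mm": 36.70, "h_pixels": 4448},
}

for name, s in sensors.items():
    pitch_um = s["width_mm"] * 1000 / s["h_pixels"]
    print(f"{name}: ~{pitch_um:.2f} micron pixels")
# Larger photosites gather more light per pixel, which is where the sensitivity
# advantage of the lower-resolution large-format sensors comes from.
```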

One more factor in all of this is the rising popularity of anamorphic lenses. All of these cameras support modes that use the part of the sensor covered by anamorphic lenses and can desqueeze the image for live monitoring and preview. In the digital world, anamorphic essentially cuts your overall resolution in half, until the unlikely event that we start seeing anamorphic projectors or cameras with rectangular sensor pixels. But the prevailing attitude appears to be, “We have lots of extra resolution available so it doesn’t really matter if we lose some to anamorphic conversion.”
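To put numbers on that “cuts your resolution in half” point: with a 2x anamorphic lens, a 2.39:1 deliverable only needs a roughly 1.2:1 region of the sensor, and the horizontal stretch in post doubles the pixel count without adding any real detail. A small sketch of the arithmetic, using an illustrative 2160-line capture:

```python
# Effective resolution of a 2x anamorphic capture versus its desqueezed output.
squeeze = 2.0
deliverable_aspect = 2.39
sensor_h = 2160  # illustrative capture height

# The sensor region needed is the deliverable aspect divided by the squeeze factor.
capture_w = round(sensor_h * deliverable_aspect / squeeze)  # ~2581 photosites wide
desqueezed_w = round(capture_w * squeeze)                   # ~5162 delivered pixels

print(f"Captured:  {capture_w} x {sensor_h}")
print(f"Delivered: {desqueezed_w} x {sensor_h}")
print(f"Real horizontal detail is {capture_w / desqueezed_w:.0%} of the delivered width")
# Half of the delivered horizontal pixels are interpolated by the desqueeze,
# which is why starting with lots of extra resolution helps.
```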

Post Production
So what does this mean for post? In theory, sensor size has no direct effect on the recorded files (besides their content), but resolution does. We also have a number of new recording formats to deal with, and then there are the anamorphic images to handle during finishing.

Ever since I got my hands on one of Dell’s new UP3218K monitors with an 8K screen, I have been collecting 8K assets to display on there. When I first started discussing this shoot with DP Michael Svitak, I was primarily interested in getting some more 8K footage to use to test out new 8K monitors, editing systems and software as it got released. I was anticipating getting Red footage, which I knew I could playback and process using my existing software and hardware.

The other cameras and lens options were added as the plan expanded, and by the time we got to Keslow Camera, they had filled a room with lenses and gear for us to test with. I also had a Dell 8K display connected to my ingest system, and the new 4K DreamColor monitor as well. This allowed me to view the recorded footage in the highest resolution possible.

Most editing programs, including Premiere Pro and Resolve, can handle anamorphic footage without issue, but new camera formats can be a bigger challenge. Any RAW file requires info about the sensor pattern in order to debayer it properly, and new compression formats are even more work. Sony’s new compressed RAW format for Venice, called X-OCN, is supported in the newest 12.1 release of Premiere Pro, so I didn’t expect that to be a problem. Its other recording option is XAVC, which should work as well. The Alexa, on the other hand, uses ArriRaw files, which have been supported in Premiere for years, but each new camera shoots a slightly different “flavor” of the file based on the unique properties of that sensor. Shooting ProRes instead would virtually guarantee compatibility but at the expense of the RAW properties. (Maybe someday ProResRAW will offer the best of both worlds.) The Alexa also has the challenge of recording to Codex drives that can only be offloaded in OS X or Linux.

Once I had all of the files on my system, after using a MacBook Pro to offload the media cards, I tried to bring them into Premiere. The Red files came in just fine but didn’t play back smoothly above 1/4 resolution. They played smoothly in RedCineX with my Red Rocket-X enabled, and they export respectably fast in AME (a five-minute 8K anamorphic sequence to UHD H.265 in 10 minutes), but for some reason Premiere Pro isn’t able to get smooth playback when using the Red Rocket-X. Next I tried the X-OCN files from the Venice camera, which imported without issue. They played smoothly on my machine but looked like they were locked to half or quarter res, regardless of what settings I used, even in the exports. I am currently working with Adobe to get to the bottom of that, because they are able to play back my files at full quality, while all my systems have the same issue. Lastly, I tried to import the Arri files from the AlexaLF, but Adobe doesn’t support that new variation of ArriRaw yet. I would anticipate that will happen soon, since it shouldn’t be too difficult to add that new version to the existing support.

I ended up converting the files I needed to DNxHR in DaVinci Resolve so I could edit them in Premiere, and I put together a short video showing off the various lenses we tested with. Eventually, I need to learn how to use Resolve more efficiently, but the type of work I usually do lends itself to the way Premiere is designed — inter-cutting and nesting sequences with many different resolutions and aspect ratios. Here is a short clip demonstrating some of the lenses we tested with:

This is a web video, so even at UHD it is not meant to be an analysis of the RAW image quality, but instead a demonstration of the field of view and overall feel with various lenses and camera settings. The combination of the larger sensors and the anamorphic lenses leads to an extremely wide field of view. The table was only about 10 feet from the camera, and we can usually see all the way around it. We also discovered that when recording anamorphic on the Alexa LF, we were recording a wider image than was displaying on the monitor output. You can see in the frame grab below that the live display visible on the right side of the image isn’t showing the full content that got recorded, which is why we didn’t notice that we were recording with the wrong settings, resulting in so much vignetting from the lens.

We only discovered this after the fact, from this shot, so we didn’t get the opportunity to track down the issue to see if it was the result of a setting in the camera or in the monitor. This is why we test things before a shoot, but we didn’t “test” before our camera test, so these things happen.

We learned a lot from the process, and hopefully some of those lessons are conveyed here. A big thanks to Brad Wilson and the rest of the guys at Keslow Camera for their gear and support of this adventure. Hopefully, it will help people better prepare to shoot and post with this new generation of cameras.

Main Image: DP Michael Svitak


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

postPerspective names NAB Impact Award MVPs and winners

NAB is a bear. Anyone who has attended this show can attest to that. But through all the clutter, postPerspective sought out the best of the best for our Impact Awards. So we turned to a panel of esteemed industry pros (to whom we are very grateful!) to cast their votes on what they thought would be most impactful to their day-to-day workflows, and those of their colleagues.

In addition to our Impact Award winners, this year we are also celebrating two pieces of technology that not only caused a big buzz around the show, but are also bringing things a step further in terms of technology and workflow: Blackmagic’s DaVinci Resolve 15 and Apple’s ProRes RAW.

With ProRes RAW, Apple has introduced a new, high-quality video recording codec that has already been adopted by three competing camera vendors — Sony, Canon and Panasonic. According to Mike McCarthy, one of our NAB bloggers and regular contributors, “ProRes RAW has the potential to dramatically change future workflows if it becomes even more widely supported. The applications of RAW imaging in producing HDR content make the timing of this release optimal to encourage vendors to support it, as they know their customers are struggling to figure out simpler solutions to HDR production issues.”

Fairlight’s audio tools are now embedded in the new Resolve 15.

With Resolve 15, Blackmagic has launched the product further into a wide range of post workflows, and they haven’t raised the price. This standalone app — which comes in a free version — provides color grading, editing, compositing and even audio post, thanks to the DAW Fairlight, which is now built into the product.

These two technologies are Impact Award winners, but our judges felt they stood out enough to be called postPerspective Impact Award MVPs.

Our other Impact Award winners are:

• Adobe for Creative Cloud

• Arri for the Alexa LF

• Codex for Codex One Workflow and ColorSynth

• FilmLight for Baselight 5

• Flanders Scientific for the XM650U monitor

• Frame.io for the All New Frame.io

• Shift for their new Shift Platform

• Sony for their 8K CLED display

In a sea of awards surrounding NAB, the postPerspective Impact Awards stand out, and are worth waiting for, because they are voted on by working post professionals.

Flanders Scientific’s XM650U monitor.

“All of these technologies from NAB are very worthy recipients of our postPerspective Impact Awards,” says Randi Altman, postPerspective’s founder and editor-in-chief. “These awards celebrate companies that push the boundaries of technology to produce tools that actually have an impact on workflows as well as the ability to make users’ working lives easier and their projects better. This year we have honored 10 different products that span the production and post pipeline.

“We’re very proud of the fact that companies don’t ‘submit’ for our awards,” continues Altman. “We’ve tapped real-world users to vote for the Impact Awards, and they have determined what could be most impactful to their day-to-day work. We feel it makes our awards quite special.”

With our Impact Awards, postPerspective is also hoping to help those who weren’t at the show, or who were unable to see it all, with a starting point for their research into new gear that might be right for their workflows.

postPerspective Impact Awards are next scheduled to celebrate innovative product and technology launches at SIGGRAPH 2018.

NAB Day 2 thoughts: AJA, Sharp, QNAP

By Mike McCarthy

During my second day walking the show floor at NAB, I was able to follow up a bit more on a few technologies that I found intriguing the day before.

AJA released a few new products and updates at the show. Their Kumo SDI switchers now have options supporting 12G SDI, but their Kona cards still do not. The new Kona 1 is a single channel of 3G SDI in and out, presumably to replace the aging Kona LHe since analog is being phased out in many places.

There is also a new Kona HDMI, which just has four dedicated HDMI inputs for streaming and switching. This will probably be a hit with people capturing and streaming competitive video gaming. Besides a bunch of firmware updates to existing products, they are showing off the next step in their partnership with ColorFront in the form of a 1RU HDR image analyzer. This is not a product I need personally, but I know it will have an important role to fill as larger broadcast organizations move into HDR production and workflows.

Sharp had an entire booth dedicated to 8K video technologies and products. They were showing off 8Kp120 playback on what I assume is a prototype system and display. They also had 8K broadcast-style cameras on display in operation, outputting Quad 12G SDI that eventually fed an 8K TV with Quad HDMI. They also had a large curved video wall, composed of eight individual 2Kx5K panels. It obviously had large seams, but it had a more immersive feel than the LED-based block walls I see elsewhere.

I was pleasantly surprised to discover that NAS vendor QNAP has released a pair of 10GbE switches, with both SFP+ and RJ45 ports. I was quoted a price under $600, but I am not sure if that was for the eight- or 12-port version. Either way, that is a good deal for users looking to move into 10GbE, with three to 10 clients — two clients can just direct connect. It also supports the new NBASE-T standard that connects at 2.5Gb or 5Gb instead of 10Gb, depending on the cables and NICs involved in the link. It is of course compatible with 1Gb and 100Mb connections as well.

On a related note, the release of 25GbE PCIe NICs allows direct connections between two systems to be much faster, for not much more cost than previous 10GbE options. This is significant for media production workflows, as uncompressed 4K requires slightly more bandwidth than 10GbE provides. I also learned all sorts of things about the relationship between 10GbE and its quad-channel variant 40GbE, and how the newest implementations use 25GbE lanes instead, allowing 100GbE when four channels are combined.
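The claim that uncompressed 4K outgrows a 10GbE link is easy to check with back-of-envelope math. This sketch assumes 10-bit 4:2:2 video at 60fps (the frame rate and chroma subsampling are my assumptions for the example); add SDI blanking or network overhead and you are firmly into 12G-SDI or 25GbE territory.

```python
# Back-of-envelope uncompressed video bandwidth (active picture only).
def video_gbps(width, height, fps, bits_per_sample, samples_per_pixel):
    bits_per_frame = width * height * bits_per_sample * samples_per_pixel
    return bits_per_frame * fps / 1e9

# 10-bit 4:2:2 carries two samples per pixel (luma plus alternating chroma).
uhd = video_gbps(3840, 2160, 60, 10, 2)
dci = video_gbps(4096, 2160, 60, 10, 2)
print(f"UHD 4Kp60 10-bit 4:2:2: ~{uhd:.1f} Gb/s")  # ~10.0 Gb/s
print(f"DCI 4Kp60 10-bit 4:2:2: ~{dci:.1f} Gb/s")  # ~10.6 Gb/s
# Both essentially saturate a single 10GbE link before any overhead is considered.
```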

I didn’t previously know that 40GbE and 100GbE ports on switches could be broken into four independent connections with just a splitter cable, which offers some very interesting infrastructure design options — especially as facilities move towards IP video workflows, and SDI over IP implementations and products.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

NAB First Thoughts: Fusion in Resolve, ProRes RAW, more

By Mike McCarthy

These are my notes from the first day I spent browsing the NAB Show floor this year in Las Vegas. When I walked into the South Lower Hall, Blackmagic was the first thing I saw. And, as usual, they had a number of new products this year. The headline item is the next version of DaVinci Resolve, which now integrates the functionality of their Fusion visual effects editor within the program. While I have never felt Resolve to be a very intuitive program for my own work, it is a solution I recommend to others who are on a tight budget, as it offers the most functionality for the price, especially in the free version.

Blackmagic Pocket Cinema Camera

The Blackmagic Pocket Cinema Camera 4K looks more like a “normal” MFT DSLR camera, although it is clearly designed for video instead of stills. Recording full 4K resolution in RAW or ProRes to SD or CFast cards, it has a mini-XLR input with phantom power and uses the same LP-E6 battery as my Canon DSLR. It uses the same camera software as the Ursa line of devices and includes a copy of Resolve Studio… for $1,300. If I was going to be shooting more live-action video anytime soon, this might make a decent replacement for my 70D, moving up to 4K and HDR workflows. I am not as familiar with the Panasonic cameras that it closely competes with in the Micro Four Thirds space.

AMD Radeon

Among other smaller items, Blackmagic’s new UpDownCross HD MiniConverter will be useful outside of broadcast for manipulating HDMI signals from computers or devices that have less control over their outputs. (I am looking at you, Mac users.) For $155, it will help interface with projectors and other video equipment. At $65, the bi-directional MicroConverter will be a cheaper and simpler option for basic SDI support.

AMD was showing off 8K editing in Premiere Pro, the result of an optimization by Adobe that uses the 2TB SSD storage in AMD’s Radeon Pro SSG graphics card to cache rendered frames at full resolution for smooth playback. This change is currently only applicable to one graphics card, so it will be interesting to see if Adobe did this because it expects to see more GPUs with integrated SSDs hit the market in the future.
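To put that 2TB of on-card storage in perspective, fully rendered 8K frames eat space quickly. Here is a rough sketch assuming 10-bit RGB frames; the actual cache format Adobe uses is not something I know the details of, so the numbers are purely illustrative.

```python
# How much 8K footage fits in a 2TB on-card frame cache (rough estimate).
width, height = 8192, 4320
bits_per_pixel = 30  # assuming 10-bit RGB; the real cache format may differ
frame_bytes = width * height * bits_per_pixel / 8  # ~133 MB per frame

frames = 2e12 / frame_bytes
print(f"~{frame_bytes / 1e6:.0f} MB per frame, ~{frames:,.0f} frames cached")
print(f"~{frames / 24 / 60:.1f} minutes of 8K at 24fps")
# Roughly ten minutes of full-resolution 8K, enough to scrub a long sequence smoothly.
```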

Sony is showing crystal light emitting diode (CLED) technology in the form of a massive ZRD video wall of incredible imagery. The clarity and brightness were truly breathtaking, but obviously my photo, rendered for the web, hardly captures the essence of what they were demonstrating.

Like nearly everyone else at the show, Sony is also pushing HDR in the form of Hybrid Log Gamma, which they are developing into many of their products. They also had an array of their tiny RX0 cameras on display with this backpack rig from Radiant Images.

ProRes RAW
At a higher level, one of the most interesting things I have seen at the show is the release of ProRes RAW. While currently limited to external recorders connected to cameras from Sony, Panasonic and Canon, and only supported in FCP-X, it has the potential to dramatically change future workflows if it becomes more widely supported. Many people confuse RAW image recording with the log gamma look, or other low-contrast visual interpretations, but at its core RAW imaging is a single-channel image format paired with a particular bayer color pattern specific to the sensor it was recorded with.

Recording this way gives access to the “source” data before it has been processed to improve visual interpretation — in the form of debayering and adding a gamma curve to reverse engineer the response pattern of the human eye, compared to mechanical light sensors. This provides more flexibility and processing options during post, and it reduces the amount of data to store, even before the RAW stream is compressed, if it is at all. There are lots of other compressed RAW formats available; the only thing ProRes actually brings to the picture is widespread acceptance and trust in the compression quality. Existing compressed RAW formats include R3D, CinemaDNG, CineformRAW and Canon CRM files.

None of those caught on as a widespread multi-vendor format, but this ProRes RAW is already supported by systems from three competing camera vendors. And the applications of RAW imaging in producing HDR content make the timing of this release optimal to encourage vendors to support it, as they know their customers are struggling to figure out simpler solutions to HDR production issues.
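A quick sketch of why RAW shrinks the data before any compression is applied: the sensor records one value per photosite, while a debayered image carries three channels per pixel. The bit depths below (16-bit RAW samples, 10-bit RGB after debayering) are illustrative choices for the math, not any specific camera’s format.

```python
# Uncompressed data per frame: single-channel RAW versus debayered RGB (illustrative).
width, height = 4096, 2160

raw_bits_per_photosite = 16   # illustrative container size for the RAW samples
rgb_bits_per_pixel = 3 * 10   # 10-bit RGB after debayering

raw_mb = width * height * raw_bits_per_photosite / 8 / 1e6  # ~17.7 MB
rgb_mb = width * height * rgb_bits_per_pixel / 8 / 1e6      # ~33.2 MB

print(f"RAW frame: ~{raw_mb:.1f} MB, debayered RGB frame: ~{rgb_mb:.1f} MB")
print(f"RAW is ~{raw_mb / rgb_mb:.0%} of the RGB size before compression")
# Roughly half the data, even before the RAW stream itself is compressed.
```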

There is no technical reason that ProRes RAW couldn’t be implemented on future Arri, Red or BMD cameras, which are all currently capable of recording ProRes and RAW data (but not the combination, yet). And since RAW is inherently a playback-only format (you can’t alter a RAW image without debayering it), I anticipate we will see support in other applications, unless Apple wants to sacrifice the format in an attempt to increase NLE market share.

So it will be interesting to see what other companies and products support the format in the future, and hopefully it will make life easier for people shooting and producing HDR content.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.