
VR at NAB 2018: A Parisian’s perspective

By Alexandre Regeffe

Even though my cab driver from the airport to my hotel offered these words of wisdom — “What happens in Vegas, stays in Vegas” — I’ve decided not to listen to him and instead share the things that impressed me in the VR world at NAB 2018.

Back in September of 2017, I shared with you my thoughts on the VR offerings at the IBC show in Amsterdam. In case you don’t remember my story, I’m a French guy who jumped into the VR stuff three years ago and started a cinematic VR production company called Neotopy with a friend. Three years is like a century in VR. Indeed, this medium is constantly evolving, both technically and financially.

So what has become of VR today? Lots of different things. VR is a big bag where people throw AR, MR, 360, LBE, 180 and 3D. And from all of that, XR (Extended Reality) was born, which means everything.

Insta360 Titan

But if this blurred concept leads to some misunderstanding, is it really good for consumers? Even we pros find it difficult to explain what exactly VR is at the moment.

While at NAB, I saw a presentation from Nick Bicanic during which he used the term “frameless media.” And, thank you, Nick, because I think that is exactly what’s in this big bag called VR… or XR. Today, we consume a lot of content through a frame, which is our TV, computer, smartphone or cinema screen. VR allows us to go beyond the frame, and this is a very important shift for cinematographers and content creators.

But enough concepts and ideas, let us start this journey on the NAB show floor! My first stop was the VR pavilion, also called the “immersive storytelling pavilion” this year.

My next stop was to see SGO Mistika. For over a year, the SGO team has been delivering incredible stitching software with Mistika VR. In my opinion, there is a “before” and an “after” this tool. Thanks to its optical flow capabilities, you can achieve seamless stitching 99% of the time, even in very difficult shooting situations. The latest version of the software added features like stabilization, keyframe capabilities, more camera presets and easy integration with Kandao and Insta360 camera profiles. VR pros used Mistika’s booth as sort of a base camp, meeting the development team directly.

A few steps from Mistika was Insta360, with a large, yellow booth. This Chinese company is a success story with its consumer product, the Insta360 One, a small 360 camera for the masses. But I was more interested in the Insta360 Pro, their 8K stereoscopic 3D360 flagship camera used by many content creators.

At the show, Insta360’s big announcement was the Titan, a premium version of the Insta360 Pro offering better lenses and sensors. It will be available later this year. Oh, and there was the lightfield camera prototype, the company’s first step into the volumetric capture world.

Another interesting camera manufacturer at the show was HumanEyes Technologies, presenting its Vuze+. With this affordable 3D360 camera you can dive into stereoscopic 360 content and learn the basics of this technology. Side note: The Vuze+ was chosen by National Geographic to shoot some stunning sequences aboard the International Space Station.

Kandao Obsidian

My favorite VR camera company, Kandao, was at NAB showing new features for its Obsidian R and S cameras. One of the best is its 6DoF capability. With this technology, you can generate a depth map directly from the camera in Kandao Studio, the stitching software, which comes free when you buy an Obsidian. With the combination of a 360 stitched image and a depth map, you can “walk” into your movie. It’s an awesome technique for better immersion. For me, this was by far the best innovation in VR technology presented on the show floor.

The live capabilities of the Obsidian cameras have also been improved, with dedicated Kandao Live software that allows you to live stream 4K stereoscopic 360 with optical flow stitching on the fly! And, of course, do not forget their new Qoocam camera. With this three-lens-equipped little stick, you can shoot either VR 180 stereoscopic or 360 monoscopic, while using depth map technology to refocus or replace the background in post — all with a simple click. Thanks to all these innovations, Kandao is now a top player in the cinematic VR industry.

One Kandao competitor is ZCam. They were there with a couple of new products. The first was the ZCam V1, a 3D360 camera with a tiny form factor. It’s very interesting for shooting scenes where things are very close to the camera, as it keeps good stereoscopy even on nearby objects — a major issue with most VR cameras and rigs. The second was the small E2 — while it’s not really a VR camera, it can be used as an underwater rig, for example.

ZCam K1 Pro

The ZCam product range is really impressive and squarely targeted at professionals, from the ZCam S1 to the ZCam V1 Pro. Important note: Take a look at their K1 Pro, a VR 180 camera, if you want to produce high-end content for the Google VR180 ecosystem.

Another VR camera at NAB was Samsung’s 360 Round, offering stereoscopic capabilities. This relatively compact device comes with a proprietary software suite for stitching and viewing 360 shots. Thanks to its IP65 rating, you can use this camera outdoors in difficult weather conditions, like rain, dust or snow. It was great to see live streaming of 4K 3D360 operating on the show floor, using several Round cameras combined with powerful Next Computing hardware.

VR Post
Adobe Creative Cloud 2018 remains the must-have tool to get through VR post production without losing your mind. Numerous 360-specific functionalities have been added over the last year, after Adobe bought the Mettle Skybox suite. The most impressive feature is that you can now stay in your 360 environment for editing. You just put on your Oculus Rift headset, manipulate your Premiere timeline with the Touch controllers and proceed to edit your shots. Think of it as a Minority Report-style editing interface! I am sure we can expect more amazing VR tools from Adobe this year.

Google’s Lightfield technology

Mettle was at the Dell booth showing their new Adobe CC 360 plugin, called Flux. After an impressive Mantra release last year, Flux is now available for VR artists, allowing them to do 3D volumetric fractals and to create entire futuristic worlds. It was awesome to see the results in a headset!

Distributing VR
So once you have produced your cinematic VR content, how can you distribute it? One option is to use the Liquid Cinema platform. They were at NAB with a major update and some new features, including seamless transitions between a “flat” video and a 360 video. As a content creator you can also manage your 360 movies in a very smart CMS linked to your app and instantly add language versions, thumbnails, geoblocking, etc. Another exciting thing is built-in 6DoF capability right in the editor with a compatible headset — allowing you to walk through your titles, graphics and more!

I can’t leave without mentioning Voysys for live-streaming VR; Kodak PixPro and its new cameras; Google’s next move into lightfield technology; Bonsai’s launch of a new version of the Excalibur rig; and many other great manufacturers, software editors and partners.

See you next time, Sin City.

NAB: Imagine Products and StorageDNA enhance LTO and LTFS

By Jonathan S. Abrams

That’s right. We are still talking NAB. There was a lot to cover!

So, the first appointment I booked for NAB Show 2018, both in terms of my show schedule (10am Monday) and the vendors I was in contact with, was with StorageDNA’s Jeff Krueger, VP of worldwide sales. Weeks later, I found out that StorageDNA was collaborating with Imagine Products on myLTOdna, so I extended my appointment. Doug Hynes, senior director of business development for StorageDNA, and Michelle Maddox, marketing director of Imagine Products, joined me to discuss what they had ready for the show.

The introduction of LTFS during NAB 2010 allowed LTO tape to be accessed as if it were a hard drive. Since LTO tape is linear, executing multiple operations at once and treating it like a hard drive results in performance falling off a cliff. It can also cause the drive to engage in shoeshining, or shuttling the tape back and forth over the same section.

Imagine Products’ main screen.

Eight years later, these performance and operation issues have been addressed by StorageDNA’s creation of HyperTape, which is their enhanced Linear File Transfer System that is part of Imagine Products’ myLTOdna application. My first question was, “Is HyperTape yet another tape format?” Fortunately for me and other users, the answer is “No.”

What is HyperTape? It is a workflow powered by dnaLTFS. The word “enhanced” in the description of HyperTape as an enhanced Linear File Transfer System refers to middleware in the myLTOdna application for Mac OS. There are three commands that can be executed to put an LTO drive into read-only, write-only or training mode. Putting the LTO drive into an “only” mode allows it to achieve up to 300MB/s of throughput. This is where the “Hyper” in HyperTape comes from. These modes can also be engaged from the command line.

Training mode allows for analyzing the files stored on an LTO tape and then storing that information in a Random Access Database (RAD). The creation of the RAD can be automated using Imagine Products’ PrimeTranscoder. Otherwise, each file on the tape must be opened in order to train myLTOdna and create a RAD.
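The RAD’s format is proprietary, but conceptually it is an index that records where each file lives on the tape so a later read can seek straight to the data instead of scanning. Here is a toy sketch of that idea in Python; the names and structure are hypothetical, not StorageDNA’s actual database:

```python
# Conceptual sketch only: a toy index mapping file paths to their position
# on a linear tape, so a read can seek directly to the data instead of
# scanning the whole tape. The real RAD format is proprietary.
from dataclasses import dataclass

@dataclass
class TapeExtent:
    start_block: int   # first tape block holding the file's data
    block_count: int   # number of contiguous blocks

def build_index(catalog):
    """catalog: iterable of (path, start_block, block_count) tuples,
    e.g. gathered by scanning ("training") over the tape once."""
    return {path: TapeExtent(start, count) for path, start, count in catalog}

def locate(index, path):
    """Return where on tape a file lives, without touching the tape."""
    extent = index.get(path)
    if extent is None:
        raise FileNotFoundError(path)
    return extent

if __name__ == "__main__":
    idx = build_index([("A001/clip_0001.mov", 1024, 880),
                       ("A001/clip_0002.mov", 1904, 912)])
    print(locate(idx, "A001/clip_0002.mov"))
```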

As for shoeshining, or shuttling of the tape back-and-forth over the same section, this is avoided by intelligently writing files to LTO tape. This intelligence is proprietary and is built into the back-end of the software. The result is that you can load a clip in Avid’s Media Composer, Blackmagic’s DaVinci Resolve or Adobe’s Premiere Pro and then load a subclip from that content into your project. You still should not load a clip from tape and just press play. Remember, this is LTO tape you are reading from.

The target customer for myLTOdna is a DIT with camera masters who wants to reduce how much time it takes to back up their footage. Previously, DITs would transfer the camera card’s contents to a hard drive using an application such as Imagine Products’ ShotPut Pro. Once the footage had been transferred to a hard drive, it could then be transferred to LTO tape. Using myLTOdna in write-only mode allows a DIT to bypass the hard drive and go straight from the camera card to an LTO tape. Because the target customer is already using ShotPut Pro, the UI for myLTOdna was designed to be comfortable and not difficult to use or understand.

The licensing for dnaLTFS is tied to the serial number of an LTO drive. StorageDNA’s Krueger explained that “dnaLTFS is the drive license that works with standalone Mac LTO drives today.” Purchasing a license for dnaLTFS allows the user to later upgrade to StorageDNA’s DNAevolution M Series product if they need automation and scheduling features, without having to purchase another drive license if the same LTO drive is used.

Krueger went on to say, “We will have (dnaLTFS) integrated into our DNAevolution product in the future.” DNAevolution’s cost of entry is $5,000. A single LTO drive license starts at $1,250. Licensing is perpetual, and updates are available without a support contract. myLTOdna, like ShotPut Pro and PrimeTranscoder, is a one-time purchase (perpetual license). It will phone home on first launch. Remote support is available for $250 per year.

I also envision myLTOdna being useful outside of the DIT market. Indeed, this was the thinking when the collaboration between Imagine Products and StorageDNA began. If you do not mind doing manual work and want to keep your costs low, myLTOdna is for you. If you later need automation and can budget for the efficiencies that you get with it, then DNAevolution is what you can upgrade to.


Jonathan S. Abrams is the Chief Technical Engineer at Nutmeg, a creative marketing, production and post resource, located in New York City.

postPerspective names NAB Impact Award MVPs and winners

NAB is a bear. Anyone who has attended this show can attest to that. But through all the clutter, postPerspective set out to find the best of the best for our Impact Awards. So we turned to a panel of esteemed industry pros (to whom we are very grateful!) to cast their votes on what they thought would be most impactful to their day-to-day workflows, and those of their colleagues.

In addition to our Impact Award winners, this year we are also celebrating two pieces of technology that not only caused a big buzz around the show, but are also bringing things a step further in terms of technology and workflow: Blackmagic’s DaVinci Resolve 15 and Apple’s ProRes RAW.

With ProRes RAW, Apple has introduced a new, high-quality video recording codec that has already been adopted by three competing camera vendors — Sony, Canon and Panasonic. According to Mike McCarthy, one of our NAB bloggers and regular contributors, “ProRes RAW has the potential to dramatically change future workflows if it becomes even more widely supported. The applications of RAW imaging in producing HDR content make the timing of this release optimal to encourage vendors to support it, as they know their customers are struggling to figure out simpler solutions to HDR production issues.”

Fairlight’s audio tools are now embedded in the new Resolve 15.

With Resolve 15, Blackmagic has launched the product further into a wide range of post workflows, and they haven’t raised the price. This standalone app — which comes in a free version — provides color grading, editing, compositing and even audio post, thanks to the DAW Fairlight, which is now built into the product.

These two technologies are Impact Award winners, but our judges felt they stood out enough to be called postPerspective Impact Award MVPs.

Our other Impact Award winners are:

• Adobe for Creative Cloud

• Arri for the Alexa LF

• Codex for Codex One Workflow and ColorSynth

• FilmLight for Baselight 5

• Flanders Scientific for the XM650U monitor

• Frame.io for the All New Frame.io

• Shift for their new Shift Platform

• Sony for their 8K CLED display

In a sea of awards surrounding NAB, the postPerspective Impact Awards stand out, and are worth waiting for, because they are voted on by working post professionals.

Flanders Scientific’s XM650U monitor.

“All of these technologies from NAB are very worthy recipients of our postPerspective Impact Awards,” says Randi Altman, postPerspective’s founder and editor-in-chief. “These awards celebrate companies that push the boundaries of technology to produce tools that actually have an impact on workflows as well as the ability to make users’ working lives easier and their projects better. This year we have honored 10 different products that span the production and post pipeline.

“We’re very proud of the fact that companies don’t ‘submit’ for our awards,” continues Altman. “We’ve tapped real-world users to vote for the Impact Awards, and they have determined what could be most impactful to their day-to-day work. We feel it makes our awards quite special.”

With our Impact Awards, postPerspective is also hoping to help those who weren’t at the show, or who were unable to see it all, with a starting point for their research into new gear that might be right for their workflows.

postPerspective Impact Awards are next scheduled to celebrate innovative product and technology launches at SIGGRAPH 2018.

High-performance flash storage at NAB 2018

By Tom Coughlin

After years of watching the development of flash memory-based storage for media and entertainment applications, especially for post, it finally appears that these products are getting some traction. This is driven by the decreasing cost of flash memory as well as the increase in 4K-to-16K workflows with high frame rates and multi-camera video projects. The performance demanded of working storage to support multiple UHD raw video streams makes high-performance storage attractive. Examples of 8K workflows were everywhere at the 2018 NAB Show.

Flash memory is the clear leader in professional video camera media, rising from 19% in 2009 to 66% in 2015, then 54% in 2016 and 59% in 2017. The 2017 media and entertainment professional survey results are shown below.

Flash memory is believed to have accounted for about 3.1% of the storage capacity used in M&E applications in 2016, but that share will grow in coming years. Overall, revenues for flash memory in M&E should increase by more than 50% in the next few years as flash prices go down and it becomes a more standard primary storage for many applications.

At the 2018 NAB Show, and at the NAB ShowStoppers, there were several products geared for this market, and in discussions with vendors it appears that there is some real traction for solid-state memory in some post applications, in addition to cameras and content distribution. This includes solid-state storage systems built with SAS, SATA and the newer NVMe interface. Let’s look at some of these products and developments.

Flash-Based Storage Systems
Excelero reports that its NVMe software-defined block storage solution, with its low latency and high bandwidth, improves the interactive editing process and enables customers to stream high-resolution video without dropping frames. Technicolor has said that it achieved 99.8% of local NVMe storage server performance across the network in an initial use of Excelero’s NVMesh. Below is the layout of the Pixit Media/Excelero demonstration for 8K+ workflows at the NAB Show.

“The IT infrastructure required to feed dozens of workstations of 4K files at 24fps is mindboggling — and that doesn’t even consider what storage demands we’ll face with 8K or even 16K formats,” says Amir Bemanian, engineering director at Technicolor. “It’s imperative that we can scale to future film standards today. Now, with innovations like the shared NVMe storage such as Excelero provides, Technicolor can enjoy a hardware-agnostic approach, enabling flexibility for tomorrow while not sacrificing performance.”

Excelero was showcasing 16K post production workflows with the Quantum StorNext storage and data management platform and Intel on the Technicolor project, and at the Mellanox booth with its 100Gb Ethernet switch.

Storbyte, a company based in Washington, DC, was showing its Eco Flash servers at the NAB Show. The product featured hot-swappable, accessible flash storage bays and redundant hot-swappable server controllers. It uses the company’s Hydra Dispersed Algorithmic Modeling (HDAM), which allows it to avoid a flash translation layer, garbage collection and dirty-block management, resulting in less performance overhead. Their Data Remapping Accelerator Core (DRACO) is said to offer up to a 4X performance increase over conventional flash architectures, maintaining peak performance even at 100% drive capacity and throughout the drive’s life, thus eliminating the write cliff and other problems that flash memory is subject to.

DDN was showing its ExaScaler DGX solution, which combines a DDN ExaScaler ES14KX high-performance all-flash array with a single Nvidia DGX-1 GPU server (initially announced at the 2018 GPU Technology Conference). The combination achieved up to 33GB/s of throughput. The company was touting it as a way to accelerate machine learning, reducing the load times of large datasets to seconds for faster training. According to DDN, the combination also allows massive ingest rates and cost-effective capacity scaling, and achieved more than 250,000 random-read 4K IOPS. In addition to HDD-based storage, DDN offers hybrid HDD/SSD as well as all-flash array products. The new DDN SFA200NV all-flash platform was also on display at the 2018 NAB Show.

Dell EMC was showing its Isilon F800 all-flash scale-out NAS for creative applications. According to the company, the Isilon all-flash array gives visual effects artists and editors the power to work with multiple streams of uncompressed, full-aperture 4K material, enabling collaborative, global post and VFX pipelines for episodic and feature projects.

 

Dell EMC said this allows a true scale-out architecture with high concurrency and super-fast all-flash network-attached storage with low latency for high-throughput and random-access workloads. The company was demonstrating 4K editing of uncompressed DPX files with Adobe Premiere using a shared Isilon F800 all-flash array. They were also showing 4K and UHD workflows with Blackmagic’s DaVinci Resolve.

NetApp had a focus on solid-state storage for media workflows in its “Lunch and Learn” sessions, co-hosted by Advanced Systems Group (ASG). The sessions discussed how NVMe and Storage Class Memory (SCM) are reshaping the storage industry. NetApp provides SSD-based E-Series products that are used in the media and entertainment industry.

Promise Technology had its own NVMe SSD-based products. The company had data sheets on two NVMe fabric products. One was an HA storage appliance in a 2RU form factor (NVF-9000) with 24 NVMe drive slots and 100GbE ports, offering up to 15M IOPS and 40GB/s throughput, plus many other enterprise features. The company said that its fabric allows servers to connect to a pool of storage nodes as if they had local NVMe SSDs. Promise’s NVMe Intelligent Storage is a 1U appliance (NVF-7000) with multiple 100GbE connectors offering up to 5M IOPS and 20GB/s throughput. Both products offer RAID redundancy and end-to-end RDMA memory access.

Qumulo was showing its P-Series NVMe all-flash solution. The P-Series combines Qumulo File Fabric (QF2) software with high-speed NVMe, Intel Skylake SP processors, high-bandwidth Intel SSDs and 100GbE networking. It offers 16GB/s in a minimum four-node configuration (4GB/s per node). The P-Series nodes come in 23TB and 92TB sizes. According to Qumulo, QF2 provides realtime visibility and control regardless of the size of the file system, realtime capacity quotas, continuous replication, support for both SMB and NFS protocols, complete programmability with a REST API and fast rebuild times. Qumulo says the P-Series can run on-premises or in the cloud and can create a data fabric that interconnects every QF2 cluster, whether it is all-flash, hybrid SSD/HDD or running on EC2 instances in AWS.

AIC was at the show with its J2024-04 2U 24-bay NVMe all-flash array using a Broadcom PCIe switch. The product includes dual hot-swap redundant 1.3kW power supplies. AIC was also showing this AFA providing a storage software fabric platform with EXTEN smart NICs using Broadcom chips, as well as an NVMe JBOF.

Companies such as LumaForge were showing various hierarchical storage options, including flash memory, as shown in the image below.

Some other solid-state products included the use of two SATA SSDs for performance improvements in the SoftIron HyperDrive Ceph-based object storage appliance. Scale Logic has a hybrid SSD SAN/NAS product called Genesis Unlimited, which can support multiple 4K streams with a combination of HDDs and SSDs. Another NVMe offering was the RAIDIX NVMEXP software RAID engine for building NVMe-based arrays, offering 4M IOPS and 30GB/s per 1U with RAID levels 5, 6 and 7.3. Nexsan has all-flash versions of its Unity storage products. Pure Storage had a small booth in the back of the South Hall lower level showing its flash array products. Spectra Logic was showing new developments in its flash-based BlackPearl product, but we will cover that in another blog.

External Flash Storage Products
Other World Computing (OWC) was showing its solid-state and HDD-based products. They had a line-up of Thunderbolt 3 storage products, including the ThunderBlade and the Envoy Pro EX (VE) with Thunderbolt 3. The ThunderBlade uses a combination of M.2 SSDs to achieve transfer speeds up to 2.8 GB/s read and 2.45 GB/s write (pretty symmetrical R/W) with 1TB to 8TB storage capacity. It is fanless and has a dimmable LED so it won’t interfere with production work. OWC’s mobile bus-powered SSD product, Envoy Pro EX (VE) with Thunderbolt 3 provides sustained data rates up to 2.6 GB/s read and 1.6 GB/s write. This small 1TB to 2TB drive can be carried in a backpack or coat pocket.

Western Digital and Seagate were also showing external SSDs. Shown below is the G-Drive Mobile SSD-R, introduced late in 2017.

Memory Cards and SSDs
Samsung was at NAB showing its 860 EVO 2.5-inch SSDs. These SATA SSDs provide up to 4TB capacity and 550MB/s sequential read and 520MB/s sequential write speeds for media workstation applications. The product was also shown in use in all-flash arrays, as pictured below.

ProGrade was showing its line of professional memory cards for high-end digital cameras. These included its CFexpress 1.0 memory card with 1TB capacity and 1.4GB/s read data transfer speed, as well as burst write speeds greater than 1GB/s. This new CompactFlash standard is a successor to both the CFast and XQD formats. The product uses two lanes of PCIe, includes NVMe support and is interoperable with the XQD form factor. ProGrade also announced its V90 premium line of SDXC UHS-II memory cards with sustained read speeds of up to 250MB/s and sustained write speeds up to 200MB/s.

2018 Creative Storage Conference
For those who love storage, the 12th Annual Creative Storage Conference (CS 2018) will be held on June 7 at the Double Tree Hotel West Los Angeles in Culver City. This event brings together digital storage providers, equipment and software manufacturers and professional media and entertainment end users to explore the conference theme: “Enabling Immersive Content: Storage Takes Off.”

Also, my company, Coughlin Associates, is conducting a survey of digital storage requirements and practices for media and entertainment professionals, with results presented at the 2018 Creative Storage Conference. M&E professionals can participate in the survey through this link. Those who complete the survey, with their contact information, will receive a free full pass to the conference.

Our main image: Seagate products in an editing session, including products in a Pelican case for field work. 


Tom Coughlin is president of Coughlin Associates, a digital storage analyst and  technology consultant. He has over 35 years in the data storage industry. He is also the founder of the Annual Storage Visions Conference and the Creative Storage Conference.

 

NAB 2018: My key takeaways

By Twain Richardson

I traveled to NAB this year to check out gear, software, technology and storage. Here are my top takeaways.

Promise Atlas S8+
First up is storage and the Promise Atlas S8+, a network-attached storage solution for small groups that features easy and fast NAS connectivity over Thunderbolt 3 and 10Gb Ethernet.

The Thunderbolt 3 version of the Atlas S8+ offers two Thunderbolt 3 ports, four 1Gb Ethernet ports, five USB 3.0 ports and one HDMI output. The 10GBase-T version swaps in two 10Gb/s Ethernet ports for the Thunderbolt 3 connections. It can be configured with up to 112TB. The unit comes empty, so you will have to buy hard drives for it. The Atlas S8+ will be available later this year.

Lumaforge

Lumaforge Jellyfish Tower
The Jellyfish is designed for one thing and one thing only: collaborative video workflow. That means high bandwidth, low latency and no dropped frames. It features a direct connection, and you don’t need a 10GbE switch.

The great thing about this unit is that it runs quiet, and I mean very quiet. You could place it under your desk and you wouldn’t hear it running. It comes with two 10GbE ports and one 1GbE port. It can be configured for more ports and goes up to 200TB. The unit starts at $27,000 and is available now.

G-Drive Mobile Pro SSD
The G-Drive Mobile Pro SSD is blazing-fast storage with data transfer rates of up to 2800MB/s. It was said that you could transfer as much as a terabyte of media in seven minutes or less. That’s fast. Very fast.
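As a quick sanity check of that claim — assuming decimal units (1TB = 1,000,000MB), that the quoted 2800MB/s is sustained and ignoring any overhead — the arithmetic comes out to roughly six minutes:

```python
# Rough check of the "1TB in about seven minutes" claim, assuming decimal
# units and a sustained 2800MB/s with no protocol or filesystem overhead.
capacity_mb = 1_000_000
rate_mb_per_s = 2800
seconds = capacity_mb / rate_mb_per_s
print(f"{seconds:.0f} s (~{seconds / 60:.1f} minutes)")  # ~357 s, ~6 minutes
```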

It provides up to three-meter drop protection, comes with a single Thunderbolt 3 port and is bus powered. It also features a 1,000-pound crush-proof rating, which makes it ideal for use in the field. It will be available in May with a capacity of 500GB; 1TB and 2TB versions will be available later this year.

OWC Thunderblade
Designed to be rugged and dependable as well as blazing fast, the ThunderBlade has a sleek design and comes with a custom-fit ballistic hard-shell case. With capacities of up to 8TB and data transfer rates of up to 2800MB/s, this unit is ideal for on-set workflows. The unit is not bus powered, but you can connect two ThunderBlades to reach speeds of up to 3800MB/s. Now that’s fast.

OWC Thunderblade

It starts at $1,199 for the 1TB and is available now for purchase.

OWC Mercury Helios FX External Expansion Chassis
Add the power of a high-performance GPU to your Mac or PC via Thunderbolt 3. Performance is plug-and-play, and upgrades are easy. The unit is quiet and runs cool, making it a great addition to your environment.

It starts at $319 and is available now.

Flanders XM650U
This display is beautiful, absolutely beautiful.

The XM650U is a professional reference monitor designed for color-critical monitoring of 4K, UHD, and HD signals. It features the latest large-format OLED panel technology, offering outstanding black levels and overall picture performance. The monitor also features the ability to provide a realtime downscaled HD resolution output.

The FSI booth was showcasing the display playing HD, UHD and UHD HDR content, demonstrating how versatile the device is.

The monitor goes for $12,995 and is available for purchase now.

DaVinci Resolve 15
Version 15 could arguably be the biggest update yet to Resolve. It combines editing, color correction, audio and now visual effects in one software tool with the addition of Fusion. Other additions include ADR tools in Fairlight and a sound library. The color and edit pages have additions such as a LUT browser, shared grades, stacked timelines, closed captioning tools and more.

You can get DR15 for free — yes, free — with some restrictions to the software, or you can purchase DR15 Studio for $299. It’s available as a beta at the moment.

Those were my top takeaways from NAB 2018. It was a great show, and I look forward to NAB 2019.


Twain Richardson is a co-founder of Frame of Reference, a boutique post production company located on the beautiful island of Jamaica. Follow the studio and Twain on Twitter: @forpostprod @twainrichardson

NAB 2018: A closer look at Firefly Cinema’s suite of products

By Molly Hill

Firefly Cinema, a French company that produces a full set of post production tools, premiered Version 7 of its products at NAB 2018. I visited with co-founder Philippe Reinaudo and head of business development Morgan Angove at the Flanders Scientific booth. They were knowledgeable and friendly, and they helped me to better understand their software.

Firefly’s suite includes FirePlay, FireDay, FirePost and the brand-new FireVision. All the products share the same database and Éclair color management, making for a smooth and complete workflow. However, Reinaudo says their programs were designed with specific UI/UXs to better support each product’s purpose.

Here is how they break down:
FirePlay: This is an on-set media player that supports almost any format or file. The player is free to use, but there’s a paid option to include live color grading.

FireDay: Firefly Cinema’s dailies software includes a render tree for multiple versions and supports parallel processing.

FirePost: This is Firefly Cinema’s proprietary color grading software. One of its features was a set of “digital filters,” which were effects with adjustable parameters (not just pre-set LUTs). I was also excited to see the inclusion of curve controls similar to Adobe Lightroom’s Vibrance setting, which increases the saturation of just the more muted colors.

FireVision: This new product is a cloud-based review platform, with smooth integration into FirePost. Not only do tags and comments automatically move between FirePost and FireVision, but if you make a grading change in the former and hit render, the version in FireVision automatically updates. While other products such as Frame.io have this feature, Firefly Cinema offers all of these in the same package. The process was simple and impressive.

One of the downsides of their software package is its lack of support for HDR, but Reinaudo says that’s a work in progress. I believe this will likely begin with ÉclairColor HDR, as Reinaudo and his co-founder Luc Geunard are both former Éclair employees. It’s also interesting that they have products for every step after shooting except audio and editing, but perhaps given the popularity of Avid Media Composer, Adobe Premiere and Avid Pro Tools, those are less of a priority for a young company.

Overall, their set of products was professional, comprehensive and smooth to operate, and I look forward to seeing what comes next for Firefly Cinema.


Molly Hill is a motion picture scientist and color nerd, soon-to-be based out of San Francisco. You can follow her on Twitter @mollymh4.

NAB 2018: How Fortium’s MediaSeal protects your content

By Jonathan Abrams

Having previously used Fortium‘s MediaSeal, and seeing it as the best solution for protecting content, I set up a meeting with the company’s CEO, Mathew Gilliat-Smith, at NAB 2018. He talked with me about the product’s history and use cases, and he demonstrated the system in action.

Fortium’s MediaSeal was created at the request of NBCUniversal in 2014, so it was a product born out of need. NBCUniversal did not want any unencrypted files to be in use on sound stages. The solution was to create a product that works on any file residing on any file system and that easily fits existing workflows. The use of encryption on the files would eliminate human error and theft as methods of obtaining usable content.

MediaSeal’s decryptor application works on Mac OS, Linux and Windows (oh my!). The decryptor application runs at the file level of the OS. This is where the objective of easily fitting an existing workflow is achieved. By running on the file level of the OS, any file can be handed off to any application. The application being used to open a file has no idea that the file it is opening has been encrypted.

Authentication is the process of proving who you are to the decryptor application. This can be done three ways. The simplest way is to only use a password. But if this is the only method that is used, anyone with the password can decrypt the file. This is important in terms of protection because nothing prevents the person with the password from sharing both the file and the decryptor password with someone else. “But this is clearly a lot better than having sensitive files sitting unprotected and vulnerable,” explained Gilliat-Smith during my demo.

The second and more secure method of authenticating with the decryptor application is to use an iLok license. Even if a user shares the decryptor password, the user would need an iLok with the appropriate asset attached to their computer in order to decrypt the file.

The third and most secure method of authenticating with the decryptor application is to use a key server. This can be hosted either locally or on Amazon Web Services (AWS). “Authentication on AWS is secure following MPAA guidelines,” said Gilliat-Smith. The key server has an address book of authorized users and allows the content owner to dictate who can access the protected content and when. With the password and the iLok license combined, this gives the person protecting their content great control. A user would need to know the decryption password, have the iLok license and be authorized by the key server in order to access the protected file.
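To make that layering concrete, here is a minimal conceptual sketch of combining the three factors described above. The function and parameter names are hypothetical, not Fortium’s API, and a real product would use a salted key-derivation function rather than a bare hash:

```python
# Conceptual sketch only: layering the three authentication factors.
# Names are hypothetical, not Fortium's API; a production system would use
# a salted KDF (e.g. scrypt/PBKDF2) instead of bare SHA-256.
import hashlib

def password_ok(password: str, expected_sha256_hex: str) -> bool:
    return hashlib.sha256(password.encode()).hexdigest() == expected_sha256_hex

def may_decrypt(password: str, expected_sha256_hex: str,
                ilok_asset_present: bool, key_server_allows: bool) -> bool:
    """Grant access only if every configured factor passes: something you
    know (password), something you have (iLok asset) and central
    authorization (key server)."""
    return (password_ok(password, expected_sha256_hex)
            and ilok_asset_present
            and key_server_allows)
```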

Once a file is decrypted, the decryptor application sends access logs to a key server. These log entries include file copy and export/save operations. Can a file be saved out of encryption while it is in a decrypted state? Yes it can. The operation will be logged with the key server. A rogue user will have the content they seek, though the owners of the content will know that the security has been circumvented. There is no such thing as perfect security. This scenario shows the balance between a strong level of security, where the user has to provide up to three authentication levels for access, and usability, where the OS has no idea that an encrypted file is being decrypted for access.

During the demonstration, the iLok with the decryption license was removed from the computer (Windows OS). Within seconds, a yellow window with black text appeared and access to the encrypted asset was revoked. MediaSeal also works with iLok licenses assigned to a machine instead of a physical iLok. This would make transferring the asset more difficult. Each distributed decryptor asset is unique.

For content providers looking to encrypt their assets, the process is as simple as right-clicking a file and selecting encrypt. Those looking to encrypt multiple files can choose to encrypt a folder recursively. If content is added to a watch folder, it is encrypted without user intervention. Encryption can also be nested. This allows the content provider to send a folder of files to users and allow one set of users access to some files while allowing a second set of users access to additional files. “MediaSeal uses AES (Advanced Encryption Standard) encryption, which is tested by NGS Secure and ISE,” said Gilliat-Smith. He went on to explain that “Fortium has a system for monitoring the relatively easy steps of getting users onboard and helping them out as needed.”
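MediaSeal’s own implementation is proprietary, but as a generic, hedged illustration of what AES encryption of a file at rest involves, here is a minimal sketch using Python’s cryptography package (my choice of library, not something MediaSeal uses):

```python
# Generic illustration of AES (AES-GCM) file encryption at rest using the
# 'cryptography' package; this is NOT MediaSeal's implementation.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_file(src_path: str, dst_path: str, key: bytes) -> None:
    nonce = os.urandom(12)                       # unique per file
    with open(src_path, "rb") as f:
        plaintext = f.read()
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    with open(dst_path, "wb") as f:
        f.write(nonce + ciphertext)              # store nonce alongside data

def decrypt_file(src_path: str, key: bytes) -> bytes:
    with open(src_path, "rb") as f:
        blob = f.read()
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)
    encrypt_file("clip.mov", "clip.mov.enc", key)
```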

MediaSeal can also be integrated with Aspera Faspex. The use of MediaSeal would allow a vendor to meet MPAA DS 11.4, which is to encrypt content at rest and in motion using a scalable approach where full file system encryption (such as FileVault 2 on Mac OS) is not desirable. Content providers who want their key server on premises can set up an MPAA-approved system with firewalls and two proxy servers. Vendors have a similar setup when the content provider uses a key server.

While there are many use cases for MediaSeal, the one use case we discussed was localization. If a content provider needs multiple language versions of their content, they can distribute the mix-minus language to localization vendors and assign each vendor a unique decryptor key. If the content provider uses all three authentication methods (password, iLok, key server), they can control the duration of the localization vendor’s access.

My own personal experience with MediaSeal was as simple as one could hope for. I downloaded an iLok license to the iLok being used to decrypt the content, and Avid’s Pro Tools worked with the decrypted asset as if it were any other file.

Fortium’s MediaSeal achieves the directive that NBCUniversal issued in 2014 with aplomb. It is my hope that more content providers who trust vendors with their content adopt this system because it allows the work to flow, and that benefits everyone involved in the creative process.


Jonathan S. Abrams is the chief technical engineer at Nutmeg, a New York City-based creative marketing, production and post studio.

Colorfront supports HDR, UHD, partners again with AJA

By Molly Hill

Colorfront released new products and updated current product support at NAB 2018, expanding its partnership with AJA. Both companies had demos of the new HDR Image Analyzer for UHD, HDR and WCG analysis. It can handle 4K, HDR and 60fps in realtime, and shows information in various view modes, including parade, pixel picker, color gamut and audio.

Other software updates include support for new cameras in On-Set Dailies and Express Dailies, as well as the inclusion of HDR analysis tools. QC Player and Transkoder 2018 were also released, with the latter now optimized for HDR and UHD.

Colorfront also demonstrated its tone-mapping capabilities (SDR/HDR) right in the Transkoder software, without the FS-HDR hardware (which is meant more for broadcast). Static (one light) or dynamic (per shot) mapping is available in either direction. Customization is available for different color gamuts, as well as peak brightness on a sliding scale, so it’s not limited to a pre-set LUT. Even just the static mapping for SDR-to-HDR looked great, with mostly faithful color reproduction.

The only issues were some slight hue shifts from blue to green, and clipping in some of the highlights in the HDR version, despite detail being available in the original SDR. Overall, it’s an impressive system that can save time and money for low-budget films when there isn’t the budget to hire a colorist to do a second pass.
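Colorfront’s mapping algorithm is its own, but as a minimal sketch of one common static approach — decode the SDR gamma, place SDR reference white around 203 nits and encode with the SMPTE ST 2084 (PQ) curve — the math looks roughly like this (the 2.4 power is a simplification of BT.1886, and the white level is an assumption):

```python
# Minimal sketch of a static SDR-to-HDR (PQ) mapping; not Colorfront's
# algorithm. Decode the SDR gamma, scale SDR reference white to ~203 nits,
# then encode with the SMPTE ST 2084 (PQ) curve.
def bt1886_to_linear(v, gamma=2.4):
    """SDR code value (0..1) to relative linear light (0..1), simplified."""
    return max(v, 0.0) ** gamma

def pq_encode(nits):
    """Absolute luminance in nits (0..10000) to a PQ code value (0..1)."""
    m1, m2 = 0.1593017578125, 78.84375
    c1, c2, c3 = 0.8359375, 18.8515625, 18.6875
    y = max(nits, 0.0) / 10000.0
    return ((c1 + c2 * y ** m1) / (1.0 + c3 * y ** m1)) ** m2

def sdr_to_pq(code, sdr_white_nits=203.0):
    return pq_encode(bt1886_to_linear(code) * sdr_white_nits)

print(round(sdr_to_pq(1.0), 3))   # SDR white lands around 0.58 in PQ
```

A dynamic (per shot) mapping would adjust parameters like the white level per shot instead of using one fixed curve.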

Samsung’s 360 Round for 3D video

Samsung showed an enhanced Samsung 360 Round camera solution at NAB, with updates to its live streaming and post production software. The new solution gives professional video creators the tools they need — from capture to post — to tell immersive 360-degree and 3D stories for film and broadcast.

“At Samsung, we’ve been innovating in the VR technology space for many years, including introducing the 360 Round camera with its ruggedized design, superior low light and live streaming capabilities late last year,” says Eric McCarty of Samsung Electronics America.

The Samsung 360 Round delivers realtime 3D video to PCs using the 360 Round’s bundled software, and video creators can now view live video on their mobile devices using the 360 Round live preview app. In addition, the live preview app allows creators to remotely control the camera settings, via Wi-Fi router, from afar. The updated 360 Round PC software now provides dual-monitor support, which allows the editor to make adjustments and show the results on a separate monitor dedicated to the director.

Limiting luminance levels to 16-135, along with noise reduction and sharpness adjustments and a hardware IR filter, makes it possible to get a clear shot in almost no light. The 360 Round also offers advanced stabilization software and the ability to color-correct on the fly, with an intuitive, easy-to-use histogram. In addition, users can set up profiles for each shot and save the camera settings, cutting down on the time required to prep each shot.

The 360 Round comes with Samsung’s advanced Stitching software, which weaves together video from each of the 360 Round’s 17 lenses. Creators can stitch, preview and broadcast in one step on a PC without the need for additional software. The 360 Round also enables fine-tuning of seamlines during a live production, such as moving them away from objects in realtime and calibrating individual stitchlines to fix misalignments. In addition, a new local warping feature allows for individual seamline calibrations in post, without requiring a global adjustment to all seamlines, giving creators quick and easy, fine-grain control of the final visuals.

The 360 Round delivers realtime 4K x 4K (3D) streaming with minimal latency. SDI capture card support enables live streaming through multiple cameras and broadcast equipment with no additional encoding/decoding required. The newest update further streamlines the switching workflow for live productions with audio over SDI, making events less complex to produce (one producer can manage both audio and video switching) and providing a single switching source as the production transitions from camera to camera.

Additional new features:

  • Ability to record, stream and save RAW files simultaneously, making the process of creating dailies and managing live productions easier. Creators can now save the RAW files to make further improvements to live production recordings and create a higher quality post version to distribute as VOD.
  • Live streaming support for HLS over HTTP, which adds another transport streaming protocol in addition to the RTMP and RTSP protocols. HLS over HTTP eliminates the need to modify some restrictive enterprise firewall policies and is a more resilient protocol in unreliable networks.
  • Ability to upload directly (via 360 Round software) to a Samsung VR creator account, as well as to Facebook and YouTube, once the files are exported.

NAB Day 2 thoughts: AJA, Sharp, QNAP

By Mike McCarthy

During my second day walking the show floor at NAB, I was able to follow up a bit more on a few technologies that I found intriguing the day before.

AJA released a few new products and updates at the show. Their Kumo SDI switchers now have options supporting 12G SDI, but their Kona cards still do not. The new Kona 1 is a single channel of 3G SDI in and out, presumably to replace the aging Kona LHe since analog is being phased out in many places.

There is also a new Kona HDMI, which just has four dedicated HDMI inputs for streaming and switching. This will probably be a hit with people capturing and streaming competitive video gaming. Besides a bunch of firmware updates to existing products, they are showing off the next step in their partnership with Colorfront in the form of a 1RU HDR image analyzer. This is not a product I need personally, but I know it will have an important role to fill as larger broadcast organizations move into HDR production and workflows.

Sharp had an entire booth dedicated to 8K video technologies and products. They were showing off 8Kp120 playback on what I assume is a prototype system and display. They also had 8K broadcast-style cameras on display and in operation, outputting quad 12G-SDI that eventually fed an 8K TV with quad HDMI. They also had a large curved video wall, composed of eight individual 2Kx5K panels. It obviously had large seams, but it had a more immersive feel than the LED-based block walls I see elsewhere.

I was pleasantly surprised to discover that NAS vendor QNAP has released a pair of 10GbE switches, with both SFP+ and RJ45 ports. I was quoted a price under $600, but I am not sure if that was for the eight- or 12-port version. Either way, that is a good deal for users looking to move into 10GbE, with three to 10 clients — two clients can just direct connect. It also supports the new NBASE-T standard that connects at 2.5Gb or 5Gb instead of 10Gb, depending on the cables and NICs involved in the link. It is of course compatible with 1Gb and 100Mb connections as well.

On a related note, the release of 25GbE PCIe NICs allows direct connections between two systems to be much faster, for not much more cost than previous 10GbE options. This is significant for media production workflows, as uncompressed 4K requires slightly more bandwidth than 10GbE provides. I also learned all sorts of things about the relationship between 10GbE and its quad-channel variant 40GbE; with the newest implementations the per-channel rate is 25GbE, allowing 100GbE when four channels are combined.
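A quick back-of-the-envelope calculation shows why 10GbE is marginal for uncompressed 4K. The assumptions here are mine: UHD 3840x2160, 10-bit 4:2:2 (20 bits per pixel on average), active pixels only, and no blanking, audio or network overhead, which in practice push the real requirement higher:

```python
# Rough bandwidth estimate for uncompressed video, active pixels only.
def video_gbps(width, height, fps, bits_per_pixel):
    return width * height * fps * bits_per_pixel / 1e9

print(round(video_gbps(3840, 2160, 60, 20), 2))  # ~9.95 Gb/s before overhead
print(round(video_gbps(3840, 2160, 30, 20), 2))  # ~4.98 Gb/s
```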

I didn’t previously know that 40GbE and 100GbE ports on switches could be broken into four independent connections with just a splitter cable, which offers some very interesting infrastructure design options — especially as facilities move toward IP video workflows, and SDI-over-IP implementations and products.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

NAB First Thoughts: Fusion in Resolve, ProRes RAW, more

By Mike McCarthy

These are my notes from the first day I spent browsing the NAB Show floor this year in Las Vegas. When I walked into the South Lower Hall, Blackmagic was the first thing I saw. And, as usual, they had a number of new products this year. The headline item is the next version of DaVinci Resolve, which now integrates the functionality of their Fusion visual effects editor within the program. While I have never felt Resolve to be a very intuitive program for my own work, it is a solution I recommend to others who are on a tight budget, as it offers the most functionality for the price, especially in the free version.

Blackmagic Pocket Cinema Camera

The Blackmagic Pocket Cinema Camera 4K looks more like a “normal” MFT DSLR camera, although it is clearly designed for video instead of stills. Recording full 4K resolution in RAW or ProRes to SD or CFast cards, it has a mini-XLR input with phantom power and uses the same LP-E6 battery as my Canon DSLR. It uses the same camera software as the Ursa line of devices and includes a copy of Resolve Studio… for $1,300. If I were going to be shooting more live-action video anytime soon, this might make a decent replacement for my 70D, moving up to 4K and HDR workflows. I am not as familiar with the Panasonic cameras that it closely competes with in the Micro Four Thirds space.

AMD Radeon

Among other smaller items, Blackmagic’s new UpDownCross HD MiniConverter will be useful outside of broadcast for manipulating HDMI signals from computers or devices that have less control over their outputs. (I am looking at you, Mac users.) For $155, it will help interface with projectors and other video equipment. At $65, the bi-directional MicroConverter will be a cheaper and simpler option for basic SDI support.

AMD was showing off 8K editing in Premiere Pro, the result of an optimization by Adobe that uses the 2TB SSD storage in AMD’s Radeon Pro SSG graphics card to cache rendered frames at full resolution for smooth playback. This change is currently only applicable to one graphics card, so it will be interesting to see if Adobe did this because it expects to see more GPUs with integrated SSDs hit the market in the future.

Sony is showing its Crystal LED technology in the form of a massive ZRD video wall of incredible imagery. The clarity and brightness were truly breathtaking, but obviously a photo from my camera, rendered for the web, hardly captures the essence of what they were demonstrating.

Like nearly everyone else at the show, Sony is also pushing HDR in the form of Hybrid Log Gamma, which it is building into many of its products. It also had an array of its tiny RX0 cameras on display with this backpack rig from Radiant Images.

ProRes RAW
At a higher level, one of the most interesting things I have seen at the show is the release of ProRes RAW. While currently limited to external recorders connected to cameras from Sony, Panasonic and Canon, and only supported in FCP-X, it has the potential to dramatically change future workflows if it becomes more widely supported. Many people confuse RAW image recording with the log gamma look, or other low-contrast visual interpretations, but at its core RAW imaging is a single-channel image format paired with a particular Bayer color pattern specific to the sensor it was recorded with.

This decreases the amount of data to store (or compress) and gives access to the “source” before it has been processed to improve visual interpretation — in the form of debayering and adding a gamma curve that reverse engineers the response pattern of the human eye, as compared to mechanical light sensors. This provides more flexibility and processing options during post, and it reduces the amount of data to store even before the RAW data is compressed, if at all. There are lots of other compressed RAW formats available; the only thing ProRes actually brings to the picture is widespread acceptance and trust in the compression quality. Existing compressed RAW formats include R3D, CinemaDNG, Cineform RAW and Canon CRM files.
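To put rough numbers on that data-reduction point, here is a simple sketch comparing an uncompressed single-channel Bayer frame to a fully debayered RGB frame; the resolution and bit depth are arbitrary example values, not any vendor’s specification:

```python
# Simple arithmetic behind the RAW data-size argument: a Bayer sensor stores
# one sample per photosite, while a debayered RGB frame carries three
# channels per pixel. Figures are uncompressed, per frame, 12-bit samples.
def megabytes(width, height, channels, bits_per_sample):
    return width * height * channels * bits_per_sample / 8 / 1e6

w, h, bits = 4096, 2160, 12
print(round(megabytes(w, h, 1, bits), 1))  # RAW (Bayer): ~13.3 MB/frame
print(round(megabytes(w, h, 3, bits), 1))  # debayered RGB: ~39.8 MB/frame
```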

None of those caught on as a widespread multi-vendor format, but this ProRes RAW is already supported by systems from three competing camera vendors. And the applications of RAW imaging in producing HDR content make the timing of this release optimal to encourage vendors to support it, as they know their customers are struggling to figure out simpler solutions to HDR production issues.

There is no technical reason that ProRes RAW couldn’t be implemented on future Arri, Red or BMD cameras, which are all currently capable of recording ProRes and RAW data (but not the combination, yet). And since RAW is inherently a playback-only format (you can’t alter a RAW image without debayering it), I anticipate we will see support in other applications, unless Apple wants to sacrifice the format in an attempt to increase NLE market share.

So it will be interesting to see what other companies and products support the format in the future, and hopefully it will make life easier for people shooting and producing HDR content.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

NAB: AJA intros HDR Image Analyzer, Kona 1, Kona HDMI

AJA Video Systems is exhibiting a tech preview of its new waveform, histogram, vectorscope and nit-level HDR monitoring solution at NAB. The HDR Image Analyzer simplifies monitoring and analysis of 4K/UltraHD/2K/HD, HDR and WCG content in production, post, quality control and mastering. AJA has also announced two new Kona cards, as well as Desktop Software v14.2. Kona HDMI is a PCIe card for multi-channel HD and single-channel 4K HDMI capture for live production, streaming, gaming, VR and post production. Kona 1 is a PCIe card for single-channel HD/SD 3G-SDI capture and playback. Desktop Software v14.2 adds support for Kona 1 and Kona HDMI, plus new improvements for AJA Kona, Io and T-TAP products.

HDR Image Analyzer
A waveform, histogram, vectorscope and Nit level HDR monitoring solution, the HDR Image Analyzer combines AJA’s video and audio I/O with HDR analysis tools from Colorfront in a compact 1RU chassis. The HDR Image Analyzer is a flexible solution for monitoring and analyzing HDR formats including Perceptual Quantizer, Hybrid Log Gamma and Rec.2020 for 4K/UltraHD workflows.

The HDR Image Analyzer is the second technology collaboration between AJA and Colorfront, following the integration of Colorfront Engine into AJA’s FS-HDR. Colorfront has exclusively licensed its Colorfront HDR Image Analyzer software to AJA for the HDR Image Analyzer.

Key features include:

— Precise, high-quality UltraHD UI for native-resolution picture display
— Advanced out-of-gamut and out-of-brightness detection with error intolerance
— Support for SDR (Rec.709), ST2084/PQ and HLG analysis
— CIE graph, Vectorscope, Waveform, Histogram
— Out-of-gamut false color mode to easily spot out-of-gamut/out-of-brightness pixels (a simple version of this kind of check is sketched after this list)
— Data analyzer with pixel picker
— Up to 4K/UltraHD 60p over 4x 3G-SDI inputs
— SDI auto-signal detection
— File-based error logging with timecode
— Display and color processing look up table (LUT) support
— Line mode to focus a region of interest onto a single horizontal or vertical line
— Loop-through output to broadcast monitors
— Still store
— Nit levels and phase metering
— Built-in support for color spaces from ARRI, Canon, Panasonic, RED and Sony
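AJA’s implementation is of course its own, but as a rough conceptual illustration of what an out-of-gamut check involves, one can convert a linear-light Rec.2020 pixel to Rec.709 primaries and flag components that land outside the 0-1 range; the matrix below is the standard BT.2020-to-BT.709 conversion, and the tolerance parameter is an assumption:

```python
# Rough conceptual illustration of out-of-gamut detection (not AJA's code):
# convert a linear-light Rec.2020 pixel to Rec.709 primaries and flag it
# when any component falls outside [0, 1].
BT2020_TO_BT709 = (
    ( 1.6605, -0.5876, -0.0728),
    (-0.1246,  1.1329, -0.0083),
    (-0.0182, -0.1006,  1.1187),
)

def rec2020_to_rec709(rgb):
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in BT2020_TO_BT709)

def out_of_709_gamut(rgb2020, tol=0.0):
    return any(c < -tol or c > 1.0 + tol for c in rec2020_to_rec709(rgb2020))

print(out_of_709_gamut((0.2, 0.8, 0.1)))  # saturated green -> True
```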

“As 4K/UltraHD, HDR/WCG productions become more common, quality control is key to ensuring a pristine picture for audiences, and our new HDR Image Analyzer gives professionals an affordable and versatile set of tools to monitor and analyze HDR productions from start to finish, allowing them to deliver more engaging visuals for viewers,” says AJA president Nick Rashby.

Adds Aron Jazberenyi, managing director of Colorfront, “Colorfront’s comprehensive UHD HDR software toolset optimizes the superlative performance of AJA video and audio I/O hardware, to deliver a powerful new solution for the critical task of HDR quality control.”

HDR Image Analyzer is being demonstrated as a technology preview only at NAB 2018.

Kona HDMI
An HDMI video capture solution, Kona HDMI supports a range of workflows, including live streaming, events, production, broadcast, editorial, VFX, vlogging, video game capture/streaming and more. Kona HDMI is highly flexible, designed for four simultaneous channels of HD capture with popular streaming and switching applications including Telestream Wirecast and vMix.

Additionally, Kona HDMI offers capture of one channel of UltraHD up to 60p over HDMI 2.0, using AJA Control Room software, for file compatibility with most NLE and effects packages. It is also compatible with other popular third-party solutions for live streaming, projection mapping and VR workflows. Developers can use the platform to build multi-channel HDMI ingest systems and leverage V4L2 compatibility on Linux. Features include four full-size HDMI ports; the ability to easily switch between one channel of UltraHD or four channels of 2K/HD; and embedded HDMI audio in, with up to eight embedded channels per input.

Kona 1
Designed for broadcast, post production and ProAV, as well as OEM developers, Kona 1 is a cost-efficient single-channel 3G-SDI 2K/HD 60p I/O PCIe card. Kona 1 offers serial control and reference/LTC, and features standard application plug-ins, as well as AJA SDK support. Kona 1 supports 3G-SDI capture, monitoring and/or playback with software applications from AJA, Adobe, Avid, Apple, Telestream and more. Kona 1 enables simultaneous monitoring during capture (pass-through) and includes: full-size SDI ports supporting 3G-SDI formats, embedded 16-channel SDI audio in/out, Genlock with reference/LTC input and RS-422.

Desktop Software v14.2
Desktop Software v14.2 introduces support for Kona HDMI and Kona 1, as well as a new SMPTE ST 2110 IP video mode for Kona IP, with support for AJA Control Room, Adobe Premiere Pro CC (part of the Adobe Creative Cloud) and Avid Media Composer. The free software update also brings 10GigE support for 2K/HD video and audio over IP (uncompressed SMPTE 2022-6/7) to the new Thunderbolt 3-equipped Io IP and Avid DNxIP, as well as additional enhancements to other Kona, Io and T-TAP products, including HDR capture with Io 4K Plus. Io 4K Plus and DNxIV users also benefit from a new feature that allows all eight analog audio channels to be configured as outputs, inputs or a 4-in/4-out mode for full 7.1 ingest/monitoring, or for stereo I/O plus VO and discrete tracks.

“Speed, compatibility and reliability are key to delivering high-quality video I/O for our customers. Kona HDMI and Kona 1 give video professionals and enthusiasts new options to work more efficiently using their favorite tools, and with the reliability and support AJA products offer,” says Rashby.

Kona HDMI will be available this June for $895, and Kona 1 will be available in May for $595. Both are available for pre-order now. Desktop Software v14.2 will also be available in May, as a free download from AJA’s support page.

CatDV MAM expands support for enterprise workflows

Square Box Systems has introduced several enhancements geared to larger-scale enterprise use of its flagship CatDV media asset management (MAM) solution. These include expanded customization capabilities for tailored MAM workflows, new enhancements for cloud and hybrid installations, and expanded support for micro-services and distributed deployments.

CatDV can now operate seamlessly in hybrid IT environments consisting of both on-premises and cloud-based resources, enabling transparent management and movement of content across NAS, SAN, cloud or object storage tiers.

New customization features include enhanced JavaScript support and an all-new custom user interface toolkit. Both the desktop and web versions of CatDV and the system’s Worker automation engine now support JavaScript, and the user interface toolkit enables customers to build completely new user experiences for every CatDV component. Recent CatDV customizations built on these APIs include a document analyzer that can extract text from PDFs, photos and Microsoft Office documents for indexing by CatDV, and a tool for uploading assets to YouTube.
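To give a flavor of what that scripting support enables, below is a minimal sketch of the kind of metadata-enrichment step a Worker job might run. It is an illustration only, built around a hypothetical clip object with getField/setField accessors rather than CatDV’s documented Worker API, so check Square Box Systems’ developer documentation for the real bindings.

// Hypothetical Worker-style step (illustrative API, not CatDV's documented bindings):
// flag clips that need review and pull camera metadata out of the clip name.
function processClip(clip) {
  var name = clip.getField("Name") || "";
  var duration = Number(clip.getField("Duration")) || 0;

  // Flag very short clips so an operator can confirm they are not partial ingests.
  if (duration < 2) {
    clip.setField("QC Status", "Needs review: shorter than 2 seconds");
  }

  // Normalize a naming convention, e.g. "CAM_A_0001" -> camera and take metadata.
  var match = name.match(/^CAM_([A-Z])_(\d+)/);
  if (match) {
    clip.setField("Camera", match[1]);
    clip.setField("Take", String(parseInt(match[2], 10)));
  }
  return clip;
}

In a real deployment, the same JavaScript hooks could drive tools like the document analyzer or YouTube uploader mentioned above; the point is that the automation logic lives in a script the customer controls.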

CatDV’s new cloud/hybrid enhancements include integration with file acceleration tools from Aspera, as well as extended support for AWS S3 archiving, including KMS encryption and Glacier with configurable expedited restores. CatDV has also built an all-new AWS deployment template with proxy playback from S3, and now includes support for Backblaze B2 archive and Contigo object storage.
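For readers unfamiliar with the Glacier side of that feature, an expedited restore is simply an S3 RestoreObject request with the Expedited tier set. The snippet below shows what that call looks like with the AWS SDK for JavaScript (v2); the bucket and object names are placeholders, and it illustrates the underlying AWS request rather than CatDV’s own archive plug-in configuration.

// Request an expedited restore of an archived object from Glacier via S3.
// Requires the AWS SDK for JavaScript v2 (npm install aws-sdk).
const AWS = require("aws-sdk");
const s3 = new AWS.S3({ region: "us-east-1" });

const params = {
  Bucket: "example-archive-bucket",    // placeholder bucket name
  Key: "masters/interview_0234.mov",   // placeholder object key
  RestoreRequest: {
    Days: 2,                           // how long the restored copy stays retrievable
    GlacierJobParameters: { Tier: "Expedited" } // Expedited | Standard | Bulk
  }
};

s3.restoreObject(params, (err, data) => {
  if (err) console.error("Restore request failed:", err);
  else console.log("Restore initiated:", data);
});

Expedited restores typically complete in minutes rather than hours, which is what makes them attractive for MAM-driven recalls of individual assets.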

In addition, the latest version of CatDV now supports deployment of server plug-in components on separate servers. Examples include data movers for archive plug-ins such as Black Pearl, S3, Azure, and B2.

EditShare intros software-only Flow MAM, more at NAB

During NAB 2018, EditShare launched a new standalone version of its Flow MAM software, designed for non-EditShare storage environments such as Avid NEXIS, StorageDNA and Amazon S3. Flow adds an intelligent media management layer to existing storage infrastructure, one that can manage millions of assets across multiple storage tiers in different locations.

EditShare will spotlight the new Flow version as well as a new family of solutions in its QScan Automated Quality Control (AQC) software line, offering cost-effective compliance and delivery-check capabilities and integration across production, post and delivery. In addition, EditShare will unveil its new XStream EFS auditing dashboard, aligned with Motion Picture Association of America (MPAA) best practices to promote security on its media-engineered EFS storage platform.

The Flow suite of apps helps users manage content and associated metadata from ingest through to archive. At the core of Flow are workflow engines that enable collaboration through ingest, search, review, logging, editing and delivery, and a workflow automation engine for automating tasks such as transcoding and delivery. Flow users are able to review content remotely and also edit content on a timeline with voiceover and effects from anywhere in the world.

Along with over 500 software updates, the latest version of Flow features a redesigned and unified UI across web-based and desktop apps. Flow also has new capabilities for remotely viewing Avid Media Composer or Adobe Premiere edits in a web browser; range markers for enhanced logging and review capabilities; and new software licensing with a customer portal and license management tools. A new integration with EditShare’s QScan AQC software makes AQC available at any stage of the post workflow.

Flow caters to the increased demand for remote post workflows by enabling full remote access to content, as well as integration with leading NLEs such as Avid Media Composer and Adobe Premiere. Comments James Richings, EditShare managing director, “We are seeing a huge demand from users to interact and collaborate with each other from different locations. The ability to work from anywhere without incurring the time and cost of physically moving content around is becoming much more desirable. With a simple setup, Flow helps these users track their assets, automate workflows and collaborate from anywhere in the world. We are also introducing a new pay-as-you-go model, making asset management affordable for even the smallest of teams.”

Flow will be available through worldwide authorized sales partners and distributors by the end of May, with monthly pricing starting at $19 per user.

Atomos at NAB offering ProRes RAW recorders

Atomos is at this year’s NAB showing support for ProRes RAW, a new format from Apple that combines the performance of ProRes with the flexibility of RAW video. The ProRes RAW update will be available free for the Atomos Shogun Inferno and Sumo 19 devices.

Atomos devices are currently the only monitor recorders to offer ProRes RAW, with realtime recording from the sensor output of Panasonic, Sony and Canon cameras.

The new upgrade brings ProRes RAW and ProRes RAW HQ recording, monitoring, playback and tag editing to all owners of an Atomos Shogun Inferno or Sumo 19 device. Once installed, it will allow the capture of RAW images in up to 12-bit RGB — direct from many of our industry’s most advanced cameras onto affordable SSD media. ProRes RAW files can be imported directly into Final Cut Pro 10.4.1 for high-performance editing, color grading and finishing on Mac laptop and desktop systems.
Eight popular cine cameras with a RAW output — including the Panasonic AU-EVA1, Varicam LT, Sony FS5/FS7 and Canon C300 Mark II/C500 — will be supported, with more to follow.

With this ProRes RAW support, filmmakers can work easily with RAW – whether they are shooting episodic TV, commercials, documentaries, indie films or social events.

Shooting ProRes RAW preserves maximum dynamic range, with a 12-bit depth and wide color gamut — essential for HDR finishing. The new format, which is available in two compression levels — ProRes RAW and ProRes RAW HQ — preserves image quality with low data rates and file sizes much smaller than uncompressed RAW.

With ProRes RAW, Atomos recorders offer increased flexibility in captured frame rates and resolutions. They can record ProRes RAW at up to 2K 240 frames per second, or 4K at up to 120 frames per second. Higher resolutions, such as 5.7K from the Panasonic AU-EVA1, are also supported.

Atomos’ operating system, AtomOS 9, gives users filming tools that let them work efficiently and creatively with ProRes RAW on portable devices. Fast connections in and out and advanced HDR screen processing mean every pixel is accurately and instantly available for on-set creative playback and review. Pull the SSD out and dock it to your Mac over Thunderbolt 3 or USB-C 3.1 for immediate, super-fast post production.

Download the AtomOS 9 update for Shogun Inferno and Sumo 19 at www.atomos.com/firmware.

NAB: Adobe’s spring updates for Creative Cloud

By Brady Betzel

Adobe has had a tradition of releasing Creative Cloud updates prior to NAB, and this year is no different. The company has been focused on improving existing workflows and adding new features, some based on Adobe’s Sensei technology, as well as improved VR enhancements.

In this release, Adobe has announced a handful of Premiere Pro CC updates. While I personally don’t think that they are game changing, many users will appreciate the direction Adobe is going. If you are color correcting, Adobe has added the Shot Match function that allows you to match color between two shots. Powered by Adobe’s Sensei technology, Shot Match analyzes one image and tries to apply the same look to another image. Included in this update is the long-requested split screen to compare before and after color corrections.

Motion graphic templates have been improved with new adjustments like 2D position, rotation and scale. Automatic audio ducking has been included in this release as well. You can find this feature in the Essential Sound panel, and once applied it will essentially dip the music in your scene based on dialogue waveforms that you identify.

Still inside of Adobe Premiere Pro CC, but also applicable in After Effects, is Adobe’s enhanced Immersive Environment. This update is for people who use VR headsets to edit and/or process VFX. Team Project workflows have been updated with better version tracking and indicators of who is using bins and sequences in realtime.

New Timecode Panel
Overall, while these updates are helpful, none are barn burners. The thing that does have me excited is the new Timecode Panel — it’s the biggest new update to the Premiere Pro CC app. For years now, editors have been clamoring for more than just one timecode view. You can view sequence timecodes, source media timecodes from the clips on the different video layers in your timeline, and you can even view the same sequence timecode in a different frame rate (great for editing those 23.98 shows to a 29.97/59.94 clock!). And one of my unexpected favorites is the clip name in the timecode window.

I was testing this feature in a pre-release version of Premiere Pro, and it was a little wonky. First, I couldn’t dock the timecode window. While I could add lines and access the different menus, my changes wouldn’t apply to the row I had selected. In addition, I could only right-click and try to change the first row of contents, but it would choose a random row to change. I am assuming the final release has this all fixed. If the wonkiness gets flushed out, this is a phenomenal (and necessary) addition to Premiere Pro.

Codecs, Master Property, Puppet Tool, more
There have also been some codec compatibility updates, specifically Sony X-OCN raw (Venice), Canon Cinema RAW Light (C200) and Red IPP2.

After Effects CC has also been updated with Master Property controls. Adobe said it best during their announcement: “Add layer properties, such as position, color or text, in the Essential Graphics panel and control them in the parent composition’s timeline. Use Master Property to push individual values to all versions of the composition or pull selected changes back to the master.”

The Puppet Tool has been given some love with a new Advanced Puppet Engine, which improves mesh and starch workflows for animating static objects. The Add Grain, Remove Grain and Match Grain effects are now multi-threaded, and enhanced disk caching and project management improvements have been added.

My favorite update for After Effects CC is the addition of data-driven graphics. You can drop a CSV or JSON data file and pick-whip data to layer properties to control them. In addition, you can drag and drop data right onto your comp to use the actual numerical value. Data-driven graphics is a definite game changer for After Effects.
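As a rough illustration of how that hookup works, here is a tiny example pairing a data file with an expression on a text layer. It assumes the JSON-as-footage expression access (footage().sourceData) Adobe describes for this release; the file and field names are placeholders, not anything shipped with After Effects.

// Example data file (stats.json) imported into the After Effects project:
// {
//   "matches": [
//     { "team": "Home", "score": 3 },
//     { "team": "Away", "score": 1 }
//   ]
// }

// Expression applied to a text layer's Source Text property. footage("stats.json")
// references the imported JSON item; sourceData exposes the parsed contents.
var data = footage("stats.json").sourceData;
var home = data.matches[0];
home.team + ": " + home.score;   // renders "Home: 3"

The same pick-whip approach applies to CSV files, with the added convenience that updating the source data file updates every property driven by it.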

Audition
While Adobe Audition is an audio mixing application, it has some updates that will directly help anyone looking to mix their edit in Audition. In the past, to get audio to a mixing program like Audition, Pro Tools or Fairlight you would have to export an AAF (or if you are old like me possibly an OMF). In the latest Audition update you can simply open your Premiere Pro projects directly into Audition, re-link video and audio and begin mixing.

I asked Adobe whether you could go back and forth between Audition and Premiere, but it seems like it is a one-way trip. They must be expecting you to export individual audio stems once done in Audition for final output. In the future, I would love to see back-and-forth capabilities between apps like Premiere Pro and Audition, much like the Fairlight tab in Blackmagic’s Resolve. There are some other updates, like larger tracks and under-the-hood improvements, which you can read more about at https://theblog.adobe.com/creative-cloud/.

Adobe Character Animator has some cool updates too, like improvements to character building, but I am not too involved with Character Animator, so you should definitely read about things like the Trigger improvements on Adobe’s blog.

Summing Up
In the end, it is great to see Adobe moving forward on updates to its Creative Cloud video offerings. Data-driven animation inside of After Effects is a game-changer. Shot color matching in Premiere Pro is a nice step toward a professional color correction application. Importing Premiere Pro projects directly into Audition is definitely a workflow improvement.

I do have a wishlist though: I would love for Premiere Pro to concentrate on tried-and-true solutions before adding fancy updates like audio ducking. For example, I often hear people complain about how hard it is to export a QuickTime out of Premiere with either stereo or mono/discrete tracks. You need to set up the sequence correctly from the jump, adjust the pan on the tracks, as well as adjust the audio settings and export settings. Doesn’t sound streamlined to me.

In addition, while shot color matching is great, let’s get an Adobe SpeedGrade-style view tab into Premiere Pro so it works like a professional color correction app… maybe Lumetri Pro? I know if the color correction setup was improved I would be way more apt to stay inside of Premiere Pro to finish something instead of going to an app like Resolve.

Finally, consolidating and transcoding used clips with handles is hit or miss inside of Premiere Pro. Can we get a rock-solid consolidate and transcode feature inside of Premiere Pro? Regardless of some of the few negatives, Premiere Pro is an industry staple and it works very well.

Check out Adobe’s NAB 2018 update video playlist for details on each and every update.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

B&H expands its NAB footprint to target multiple workflows

By Randi Altman

In a short time, many in our industry will be making the pilgrimage to Las Vegas for NAB. They will come (if they are smart) with their comfy shoes, Chapstick and the NAB Show app and plot a course for the most efficient way to see all they need to see.

NAB is a big show that spans a large footprint, and typically companies showing their wares need to pick a hall — Central, South Lower, South Upper or North. This year, however, The Studio-B&H made some pros’ lives a bit easier by adding a booth in South Lower in addition to their usual presence in Central Hall.

B&H’s business and services have grown, so it made perfect sense to Michel Suissa, managing director at The Studio-B&H, to grow their NAB presence to include many of the digital workflows the company has been servicing.

We reached out to Suissa to find out more.

This year B&H and its Studio division are in the South Lower. Why was it important for you guys to have a presence in both the Central and South Halls this year?
The Central Hall has been our home for a long time and it remains our home with our largest footprint, but we felt we needed to have a presence in South Hall as well.

Production and post workflows merge and converge constantly, and we need to be knowledgeable in both. The simple fact is that we serve all segments of our industry, not just image acquisition and camera equipment. Our presence in image- and data-centric workflows has grown by leaps and bounds.

This world is a familiar one for you personally.
That’s true. The post and VFX worlds are very dear to me. I was an editor, Flame artist and colorist for 25 years. This background certainly plays a role in expanding our reach and services to these communities. The Studio-B&H team is part of a company-wide effort to grow our presence in these markets. From a business standpoint, the South Hall attendees are also our customers, and we needed to show we are here to assist and support them.

What kind of workflows should people expect to see at both your NAB locations?
At the South Hall, we will show a whole range of solutions to show the breadth and diversity of what we have to offer. That includes VR post workflow, color grading, animation and VFX, editing and high-performance Flash storage.

In addition to the new booth in South Hall, we have two in Central. One is for B&H’s main product offerings, including our camera shootout, which is a pillar of our NAB presence.

This Studio-B&H booth features a digital cinema and broadcast acquisition technology showcase, including hybrid SDI/IP switching, 4K studio cameras, a gyro-stabilized camera car, the most recent full-frame cinema cameras, and our lightweight cable cam, the DynamiCam.

Our other Central Hall location is where our corporate team can discuss all business opportunities with new and existing B2B customers.

How has The Studio-B&H changed along with the industry over the past year or two?
We have changed quite a bit. With our services and tools, we have reinvented our image, going from equipment provider to solution provider.

Our services now range from system design to installation and deployment. One of the more notable recent examples is our collaboration with HBO Sports on World Championship Boxing. The Studio-B&H team was instrumental in deploying our DynamiCam system to cover several live fights in different venues and integrating with NEP’s mobile production team. This is part of an entirely new type of service — something the company had never offered its customers before. It is a true game-changer for our presence in the media and entertainment industry.

What do you expect the “big thing” to be at NAB this year?
That’s hard to say. Markets are in transition with a number of new technology advancements: machine learning and AI, cloud-based environments, momentum for the IP transition, AR/VR, etc.

On the acquisition side, full frame/large sensor cameras have captured a lot of attention. And, of course, HDR will be everywhere. It’s almost not a novelty anymore. If you’re not taking advantage of HDR, you are living in the past.