By Tom Coughlin
At the 2020 HPA Retreat, attendees witnessed an active production of the short The Lost Lederhosen. This film used the Unreal gaming engine to provide impressive graphical details, along with several cameras and an ACES workflow, with much production work done in the cloud. Many of the companies and studios participating in the retreat played a role in the film’s production, and the shooting and post were part of the ongoing presentations and panels on the first official day of the conference. Tuesday’s sessions ended with Joachim Zell from Efilm and Josh Pines from Technicolor showing the completed video.
Shooting The Lost Lederhosen – director Steve Shaw is at the far right.
As you can imagine, several digital storage products were needed for The Lost Lederhosen. In checking out the production rig in the back of the conference room, I saw some G-Tech modular storage units and was told that there was an Isilon storage system on the other side of the wall — a giveaway because of the noise from the fans in the system. In one of the sessions on that first day, it was reported that 5TB of total footage was shot with 500GB left after conforming using Avid Media Composer with AAF. Editing was done in the cloud with Avid Nexis 30TB storage online. During dailies AWS CLI was used to push files to S3 for a common storage location. Pixmover from Pixspan was used to move data to and from LA, along with AWS S3 storage in the San Francisco Bay area.
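To make the dailies push concrete, here is a minimal sketch of how local camera-card files might be mapped to S3 object keys for a common storage location of the kind described above. The bucket name and folder layout are hypothetical, not the production’s actual configuration; the transfer itself would be done with the AWS CLI (e.g., `aws s3 cp` or `aws s3 sync`) or an SDK.

```python
from pathlib import Path

def s3_key_for(local_path: Path, shoot_day: str) -> str:
    """Build an S3 object key for a dailies file.

    Hypothetical layout: dailies/<shoot_day>/<camera card>/<filename>.
    """
    card = local_path.parent.name  # e.g. "A001"
    return f"dailies/{shoot_day}/{card}/{local_path.name}"

# With the AWS CLI, the equivalent push to a shared bucket would look like:
#   aws s3 cp ./A001/clip0001.mxf \
#       s3://example-dailies-bucket/dailies/day01/A001/clip0001.mxf

print(s3_key_for(Path("A001/clip0001.mxf"), "day01"))
# dailies/day01/A001/clip0001.mxf
```

A consistent key scheme like this is what lets editorial, VFX and color all resolve the same clip from the same shared bucket.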
Colorfront supported the cloud-based live production of the HPA video and demonstrated its 2020 Express Dailies, which was used to do all the dailies and deliverables, as well as Transkoder, which was used to do all the VFX pulls. Frame.io was used to move content from cameras to the cloud. A Mac Pro was feeding dual Apple 32-inch Retina Pro XDR displays showing 6K HDR content. Colorfront was displaying Transkoder 2020 running on a Supermicro workstation with four Nvidia GeForce RTX 2080 Ti GPUs and an AJA Kona 5 video card outputting to an 85-inch Sony Z9G HDR monitor and an AJA HDR Image Analyzer 12G for video analytic monitoring.
Metadata for video content was an important element in the HPA presentations, which included the ASC MHL (media hash list) that hashes files and folders in a standardized way, with essential file metadata in an XML human-readable format. The ASC MHL is used from data capture and offloading through backup and archiving, and it is an important element in restoring content. The ASC MHL is available on GitHub (https://github.com/ascmitc/mhl) and is still a work in progress.
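The core idea can be illustrated with a short sketch. The code below is a simplification, not the actual ASC MHL implementation or schema: it hashes a file in chunks and emits a minimal XML-style record of the kind an MHL generation tool produces. The real specification on GitHub defines the exact format and supports hash algorithms such as xxHash as well as MD5.

```python
import hashlib
from pathlib import Path

def hash_file(path: Path) -> str:
    """Return the MD5 hex digest of a file, read in 1MB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def mhl_record(path: Path) -> str:
    """Emit a simplified, MHL-inspired XML entry (not the real ASC MHL schema)."""
    return (
        "<hash>\n"
        f"  <path>{path}</path>\n"
        f"  <size>{path.stat().st_size}</size>\n"
        f"  <md5>{hash_file(path)}</md5>\n"
        "</hash>"
    )
```

Re-running a generator like this after each copy, and comparing against the recorded hashes, is how an MHL chain verifies that offloads, backups and restores are bit-identical.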
The following day, Tech Retreat main conference producer Mark Schubin said that film hasn’t died yet and that Kodak had received orders from Disney, NBCUniversal, Paramount, Sony and Warner Bros. for motion picture film stock. He talked about what might be the world’s smallest camera, a small endoscopy image chip with 200×200 resolution. And he mentioned Microsoft’s Project Silica proof of concept — a 7.5cm x 7.5cm glass plate storing the 75.6GB Superman movie — as a possible long-term storage media.
The MovieLabs white paper released in August 2019, “The Evolution of Media Creation,” was referenced in several talks during the HPA retreat. The paper, created in cooperation with the major film studios, suggests a path to the future of moviemaking, and that path is in the cloud. You can read it here: https://movielabs.com/production-technology
During the SMPTE 2110 IP update, it was said that most new video trucks for the UK’s NEP are built for 2110 IP compliance. There are a total of 12 IP-enabled trucks, six IP control rooms and multiple IP flypacks (backpack IP video gear). In a panel organized by the Digital Production Partnership, the DPP’s Mark Harrison gave a presentation that included information on on-site and cloud storage for M&E applications. He spoke about the 2020 report from the DPP and 10 case studies from the M&E industry of companies that have all adopted cloud-led production for different reasons. We will look at the digital storage needs for three of these case studies.
It was reported that COPA90 is doing high-volume global content management with a cloud production hub and AI, using the Veritone Digital Media Hub and IBM Cloud Storage.
France TV is doing fast turnaround of high-end drama using cloud-based metadata enrichment with AWS, Azure, a private cloud and local storage before going into Avid Nexis storage, Avid Interplay and Media Composer.
UK’s Jellyfish Pictures is reportedly doing secure distributed high-volume virtualized production using Azure public cloud and a private cloud with PixStor storage.
Distributed Content Delivery
Eluvio’s Michelle Munson gave an update on the company’s distributed content delivery service, and during a demo at the company’s booth, she told me that Eluvio’s approach keeps the master copy for distribution in cold storage, with the published serviceable content inherently streamable. By reusing distributed parts of content within the network, there is a considerable shrink in requirements for storage. In effect, the fabric replaces a hot storage tier, reducing higher-performance storage and network bandwidth requirements.
In her presentation, Munson said that Eluvio eliminates the need for cloud microservices for content distribution. The blockchain-based network system provides an inherent security model that makes it possible to serve audiences directly over public internet to enable a content fabric. This is not a cloud or a CDN, but rather a data distribution and storage protocol. Rendering is done at the consumer endpoint, allowing consumers to play content just in time with low latency, and monetization happens through secure transactions. MGM is deploying Eluvio’s technology for worldwide content distribution, and some other major media players are also working with the technology.
There are five key principles in the Eluvio content fabric. First, there is no movement of the master copy; a mezzanine copy is used for all servicing. Second, a file-based interface is used for upload and download with underlying objects. Third, streaming and servicing are accomplished from the source in a JIT manner. Fourth, it uses a trustless encryption model over open networks, and fifth, access control and rights management are built in.
Best Practices for Cloud-Based Workflows
MediAnswers’ Chris Lennon and PBS’ Renard Jenkins (who subsequently started work as VP, content transmission, at WarnerMedia) spoke about the right way to do cloud-based workflows, which included local as well as cloud content copies. They gave three principles for survival. First, IT is not IP, and a network should be designed around media use and minimizing packet loss. Second, build or find cloud-native solutions rather than “lift and shift.” Third, linear workflows lead to nonlinear problems.
Universal and the Cloud
Universal’s Annie Chang spoke about tools for the next generation of production, including the use of cloud-based tools such as temporary production storage and an active archive for production assets. She went on to detail future cloud workflows wherein content goes from the camera directly to the cloud (or, if on film, from a digital intermediate post house to the cloud). Editing, dailies distribution and EDL are all done in the cloud, as is final archiving.
Chang said that the move to a mostly cloud-based workflow is already starting at Universal. She reported that DreamWorks Animation (DWA) has built a cloud-native platform that creates workspaces for its artists. Assets are related to each other, and workflows can be kicked off through microservices. She wondered if Universal could repurpose the DWA platform for live-action, VFX assets and workflows.
Chang discussed an experiment wherein Universal took one shot from Fast & Furious Presents: Hobbs & Shaw (including reference photos, LIDAR scans and camera raw) and demonstrated a VFX pull on premises at DWA while also testing in a public cloud. When Universal ran the content from the cloud and showed it to Universal VFX execs and the VFX producer from Hobbs & Shaw, Chang was told that this was something they had wanted for a decade. Universal is developing the platform this year and plans to test it on a full production in 2021. The company has 10 concurrent projects and is coordinating with multiple industry efforts, including ACES, USC ETC and MovieLabs.
There was much discussion of the next developments for ACES (Academy Color Encoding System), particularly the implementation of ACES 1.2 and the development of ACES 2.0. A panel at the retreat suggested that practical image-matching problems with the current version of ACES could be solved by using AMF (ACES Metadata File). But some image-matching problems are not ACES-related, being tied instead to the source of the image and the format used for comparison. ACES 2.0 development is underway and plans to address these and other issues with the current version of ACES.
The digital storage exhibitors at the HPA Retreat included Cloudian (local object storage), which demonstrated with AWS, Azure, Google and other cloud storage services. Quantum had an exhibit that focused on its media and storage solutions, such as the StorNext Workflow Storage Platform, F-Series NVMe storage, Xcellis high-performance workflow storage appliances and its object storage and tape archive solutions. (Note that Quantum recently acquired Western Digital’s ActiveScale object storage.)
Racktop was advertising its Brickstor all-flash or hybrid HDD/SSD CyberConverged data storage offering, which supports FIPS 140-2 and AES-256 for encryption and compliance. Rohde & Schwarz was demoing IMF-based workflows with its Spycer Node media storage.
Scale Logic featured its Atavium data management and orchestration solution. According to the product literature, data entering Atavium is identified, tagged and classified, and can be searched via metadata or tags whether the data is on premises or in the cloud. Tasks can be automated using a combination of metadata and tags, while a set of APIs, along with scheduler and application integration, determines the placement of data to reflect the needs of the workflow. Local storage includes nearline HDDs as well as NVMe flash, with DRAM used as a read-ahead cache. The system will work with Spectra Logic’s Black Pearl and integrates with asset management systems.
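As a rough illustration of tag-driven placement of this kind, the sketch below routes a file to a storage tier based on its metadata tags. The tag names, tier names and policy are hypothetical, invented for illustration; they are not Atavium’s actual API or rule set.

```python
# Hypothetical tag-driven placement policy, loosely inspired by the
# metadata/tag-based orchestration described above; not Atavium's API.
# Rules are checked in order, so hotter tiers take priority.
TIER_RULES = [
    ("active-project", "nvme-flash"),   # hot editorial assets
    ("dailies", "nearline-hdd"),        # recent but less latency-sensitive
    ("archive", "cloud-object"),        # cold, long-term retention
]

def place(tags: set[str]) -> str:
    """Return the first matching tier for a file's tags; default to nearline HDD."""
    for tag, tier in TIER_RULES:
        if tag in tags:
            return tier
    return "nearline-hdd"

print(place({"active-project", "scene-12"}))  # nvme-flash
print(place({"archive"}))                     # cloud-object
```

Ordering the rules from hottest to coldest tier means a file tagged both “active-project” and “archive” stays on fast storage until the hot tag is removed.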
Seagate Technology was showing storage products, including its Lyve Drive Shuttle for physical data delivery using e-ink and protective cases for shipping storage devices. The company had flyers out on its Seagate Exos modular storage for capacity and the Seagate Nytro modular storage for performance. Pixit Media was partnering with Seagate on its software-defined storage solution.
StorageDNA was showing its analytics-driven data management platform (DNAfabric) that provides data visibility services, including storage capacity and cost as well as data mobility services. Tiger Technology was showing its Tiger Bridge and shared an exhibit space with Nexsan NAS products. Western Digital was showing various G-Tech products, including its G-Speed Shuttle storage systems as well as desktop and mobile HDD and SSD storage devices.
Tom Coughlin is a digital storage analyst and business and technology consultant. His Coughlin Associates consults and publishes books and market and technology reports (such as the annual Digital Storage in Media and Entertainment Report). He is currently working on his 2020 Digital Storage in Media and Entertainment Survey; feel free to participate: https://www.surveymonkey.com/r/MWXL22N