
NAB 2019: Storage for M&E workflows

By Tom Coughlin

Storage is a vital element in modern post production, since that’s where the video content lives. Let’s look at trends in media post production storage and products shown at the 2019 NAB show. First let’s look at general post production storage architectures and storage trends.

My company produces the yearly “Digital Storage in Media and Entertainment Report,” so we are keeping an eye on storage all year round. The image to the right is a schematic from our 2018 report — it shows a nonlinear editing station with optional connections to shared online (or realtime) storage via a SAN or NAS (or even a cloud-based object storage system) and a host bus adapter (HBA or xGbE card). I hope this gives you some good background for what’s to come.

Our 2018 report also includes data from our annual Digital Storage in Media and Entertainment Professional Survey. The report shows that annual demand for storage capacity is expected to exceed 110 exabytes by 2023. In 2018, 48% of survey respondents said that they used cloud-based storage for editing and post production, and 56% said that they have 1TB or more of storage capacity in the cloud. In 2018, Internet distribution was the most popular way to view proxies.

All of this indicates that M&E pros will continue to use multiple types of digital storage to enable their workflows, with significant growth in the use of cloud storage for collaborative and field projects. With that in mind, let’s dig into some of the storage offerings that were on display at NAB 2019.

Workflow Storage
Dell Technologies said that significant developments in its work with VMware unlock the value of virtualization for applications and tools to automate many critical M&E workflows and operations. Dell EMC and VMware said that they are about to unveil the recipe book for making virtualization a reality for the M&E industry.

Qumulo announced an expansion of its cloud-native file storage offerings. The company introduced two new products — CloudStudio and CloudContinuity — as well as support for Qumulo’s cloud-native, distributed hybrid file system on the Google Cloud Platform (GCP). Qumulo has partnered with Google to support Qumulo’s hybrid cloud file system on GCP and on the Google Cloud Platform Marketplace. Enterprises will be able to take advantage of the elastic compute resources, operational agility and advanced services that Google’s public cloud offers. With the addition of GCP, Qumulo is able to provide multi-cloud platform support, making it easy for users to store, manage and access their data, workloads and applications in both Amazon Web Services (AWS) and GCP. Qumulo also enables data replication between clouds for migration or multi-copy requirements.

M&E companies of any size can scale production into the public cloud with CloudStudio, which securely moves traditionally on-prem workspaces, including desktops, applications and data, to the public cloud on both the AWS and GCP platforms. Qumulo’s file storage software is the same whether on-prem or in the cloud, making the transition seamless and easy and eliminating the need to reconfigure applications or retrain users.

CloudContinuity enables users to automatically replicate their data from an on-prem Qumulo cluster to a Qumulo instance running in the cloud. Should a primary on-prem storage system experience a catastrophic failure, customers can redirect users and applications to the Qumulo cloud, where they will have access to all of their data immediately. CloudContinuity also enables quick, automated fail-back to an on-prem cluster in disaster recovery scenarios.

Quantum announced its VS-Series, designed for surveillance and industrial IoT applications. The VS-Series is available in a broad range of server choices, suitable for deployments with fewer than 10 cameras up to the largest environments with thousands of cameras. Using the VS-Series, security pros can efficiently record and store surveillance footage and run an entire security infrastructure on a single platform.

Quantum’s VS-Series architecture is based on the Quantum Cloud Storage Platform (CSP), a new software-defined storage platform specifically designed for storing machine and sensor-generated data. Like storage technologies used in the cloud, the Quantum CSP is software-defined and can be deployed on bare metal, as a virtual machine, or as part of a hyperconverged infrastructure. Unlike other software-defined storage technologies, the Quantum CSP was designed specifically for video and other forms of high-resolution content — engineered for extremely low latency, maximizing the streaming performance of large files to storage.

The Quantum Cloud Storage Platform allows high-speed video recording with optimal camera density and can host and run certified VMS management applications, recording servers and other building control servers on a single platform.

Quantum says that the VS-Series product line is being offered in a variety of deployment options, including software-only, mini-tower and 1U, 2U and 4U hyperconverged servers.

Key VS-Series attributes:
– Supports high camera density and software architecture that enables users to run their entire security infrastructure on a single hyperconverged platform.
– Offers a software-defined platform with the broadest range of deployment options. Many appliances can scale out for more cameras or scale up for increased retention.
– Comes pre-installed with certified VMS applications and can be installed and configured in minutes.
- Offers a fault-tolerant design to minimize hardware and software issues, which is meant to virtually eliminate downtime.

Quantum was also showing its R-3000 at NAB. This box was designed for in-vehicle data capture for developing driver assistance and autonomous driving systems. This NAS box includes storage modules of 60TB with HDDs and 23TB or 46TB using SSDs. It works off 12 volt power and features two 10 GbE ports.

Arrow Distribution bundled NetApp storage appliances with Axle AI software. The three solutions offered are the VM100, VM200 and VM400 with 100TB, 200TB and 400TB, respectively, with 10GbE network interfaces and NetApp’s FAS architecture. Each configuration also includes an Intel-based application server running a five-user version of Axle AI 2019. The software includes a browser front-end that allows multiple users to tag, catalog and search their media files, as well as a range of AI-driven options for automatically cataloging and discovering specific visual and audio attributes within those files.

Avid Nexis|Cloudspaces

Avid Nexis|Cloudspaces is a storage as a service (SaaS) offering for post, news and sports teams, enabling them to store and park media and projects not currently in production in the cloud, leveraging Microsoft Azure. This frees up local Nexis storage space for production work. The company is offering all Avid Nexis users a limited-time free offer of 2TB of Microsoft Azure storage that is auto-provisioned for easy setup and can scale as needed. Avid Nexis manages these Cloudspaces alongside local workspaces, allowing unified content management.

DDP was showing a rack with hybrid SSD/HDD storage that the company says provides 24/7/365 reliable operation with zero interruptions and a transparent failover setup. DDP has redesigned its GUI to provide faster operation and easier use.

Facilis displayed its new Hub shared storage line developed specifically for media production workflows. Built as an entirely new platform, Facilis Hub represents the evolution of the Facilis shared file system with the block-level virtualization and multi-connectivity performance required in shared creative environments. This solution offers both block-mode Fibre Channel and Ethernet connectivity simultaneously, allowing connection through either method with the same permissions, user accounts and desktop appearance.

Facilis’ Object Cloud is an integrated disk-caching system for cloud and LTO backup and archive that includes up to 100TB of cloud storage for one low yearly cost. A native Facilis virtual volume can display cloud, tape and spinning disk data in the same directory structure, on the client desktop. Every Facilis Hub shared storage server comes with unlimited seats of the Facilis FastTracker asset tracking application. The Object Cloud software and storage package is available for most Facilis servers running version 7.2 or higher.

Facilis also showed specific product updates. The Facilis Hub 8 has 1GB/s data rates through standard dual-port 10GbE and options for 40GbE and Fibre Channel connectivity, with 32TB, 48TB and 64TB capacities. The Facilis Hub 16 model offers 2GB/s speed with 16 HDDs and 64TB, 96TB and 128TB capacities. The company’s Hub Hybrid 16 model integrates SSDs into a high-capacity HDD-based storage system, offering performance of 3GB/s and 4GB/s. With two or more Hub 16 or Hub 32 servers attached through 32Gb Fibre Channel controllers, Facilis Hub One configurations can be fully redundant, with multi-server bandwidth aggregated into a single point of network connectivity. The Hub One starts at 128TB and scales to 1PB.

Pixit Media announced the launch of PixStor 5, the latest version of its leading scale-out data-driven storage platform. According to the company, “PixStor 5 is an enterprise-class scale-out NAS platform delivering guaranteed 99% performance for all types of workflow and a single global namespace across multiple storage tiers — from on-prem to the cloud.”

New PixStor 5 highlights include:


– Secure container services — This new feature offers multi-tenancy from a single storage fabric. PixStor 5 enables creative studios to deploy secure media environments without crippling productivity and creativity, and aligns with TPN security accreditation standards to attract A-list clients.
– Cloud workflow flexibility — PixStor 5 expands your workflows cost-effectively into the cloud with fully automated, seamless deployment to cloud marketplaces, enabling hybrid workflows for burst render and cloud-first workflows for global collaboration. PixStor 5 will soon be available in the Google Cloud Platform Marketplace, followed shortly by AWS and Azure.
– Enhanced search capabilities — Using machine learning and artificial intelligence cloud-based tools to drive powerful media indexing and search capabilities, users can perform fast, easy and accurate content searches across their entire global namespace.
– Deep granular analytics — With single-pane-of-glass management and user-friendly dashboards, PixStor 5 allows a holistic view of the entire filesystem and delivers business-relevant metrics to reinforce storage strategies.

GB Labs launched new software, features and updates to its FastNAS and Space, Echo and Vault ranges at NAB. The Space, Echo and Vault ranges got intelligent new software features, including the Mosaic asset organizer and the latest Analytics Center, along with brand-new Core.4 and Core.4 Lite software. The new Core software is also now included in the FastNAS product range.

GB Labs

Mosaic software, which already features on the FastNAS range, could be compared to a MAM. It is an asset organizer that can automatically scour all in-built metadata and integrate with AI tagging systems to give users the power to find what they’re looking for without having to manually enter any metadata.

Analytics Center gives users visibility into their network so that they can see how they’re using their data, providing a better understanding of individual or system-wide use, with suggestions on how to optimize their systems more quickly and at a lower cost.

The new Core.4 software for both ranges builds on GB Labs’ current Core.3 OS, offering a high-performance custom OS built specifically to serve media files. It delivers stable performance for every user and gets the best from the least amount of disk, which saves power.

EditShare’s flagship EFS enterprise scale-out storage solution was on display. It was developed for large-scale media organizations and supports hundreds of users simultaneously, with embedded tools for sharing media and collaborating across departments, across sites and around the world.

EditShare was showcasing advancements in its EFS File Auditing technology, the industry’s only realtime auditing platform designed to manage, monitor and secure your media from inception to delivery. EFS File Auditing keeps track of all digital assets and captures every digital footprint that a file takes throughout its life cycle, including copying, modifying and deleting of any content within a project.
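To make the idea of file auditing concrete, here is a minimal sketch, assuming Python and the open-source watchdog library, of a watcher that logs every footprint a file leaves in a watched media space. It is purely illustrative; EditShare’s EFS File Auditing is a proprietary, storage-integrated system, and the watched path here is hypothetical.

```python
# Illustrative only: log create/modify/move/delete events for media files.
import logging
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

logging.basicConfig(filename="media_audit.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

class AuditHandler(FileSystemEventHandler):
    """Record every footprint a file leaves during its life cycle."""
    def on_created(self, event):
        logging.info("CREATED  %s", event.src_path)
    def on_modified(self, event):
        logging.info("MODIFIED %s", event.src_path)
    def on_moved(self, event):
        logging.info("MOVED    %s -> %s", event.src_path, event.dest_path)
    def on_deleted(self, event):
        logging.info("DELETED  %s", event.src_path)

observer = Observer()
observer.schedule(AuditHandler(), "/mnt/media_space", recursive=True)  # hypothetical path
observer.start()
observer.join()  # block until Ctrl-C (or observer.stop()) ends the audit
```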

Storbyte introduced its eco-friendly SBJ-496 at the 2019 NAB show. According to the company, this product is a new design in high-capacity disk systems for long-term management of digital media content with enterprise-class availability and data services. Ideal for large archive libraries, the SBJ-496 requires little to no electricity to maintain data, and its environmentally friendly green design allows unrestricted air flow, generates minimal heat and saves on cooling expenses.

EcoFlash SBS-448

The new EcoFlash SBS-448, for digital content creation and streaming, is an efficient solid-state storage array that can deliver over 20GB of data per second. According to Storbyte, the EcoFlash SBS-448 consumes less than half the electrical power of comparable arrays and produces far less heat, and its patented design extends its lifespan significantly, resulting in a total operating cost per terabyte that is three to five times lower.

NGD Systems was showing its computational storage product with several system partners at NAB, including at the EchoStreams booth for its 1U platforms. NGD said that its M.2 and upcoming EDSFF form factors can be used in dense and performance-optimized solutions within the EchoStreams 1U server and canister system. In addition to providing data analytics and realtime analysis capture, the combination of NGD Systems products and EchoStreams 1U platforms allows for deployment at the extreme edge, for onsite video acquisition and post processing.

OpenDrives was showcasing its Atlas software platform and product family of shared storage solutions. Its NAB demo was built on a single Summit system, including the OmniDrive media accelerator, powered by NVMe, to significantly boost editorial, transcoding, color grading and visual effects shared workflows. OpenDrives is moving to a 2U form factor in its manufacturing, streamlining systems without sacrificing performance.

iXsystems said that its TrueNAS enterprise storage appliances deliver a perfect range of features and scalability for next-gen M&E workflows. AIC had an exhibit showing several enterprise storage systems, including some with NGD Systems computational storage SSDs. Promise Technology said that its VTrak NAS has been optimized for video application environments. Sony was offering PCIe SSD data storage servers. Other companies showing workflow storage products included Asustor, elements, PAC Storage and Rocstor.

Conclusions
The media and entertainment industry has unique requirements for storage to support modern digital workflows. A number of large and small companies have come up with a variety of local and cloud-based approaches to provide storage for post production applications. The NAB show is one of the world’s largest forums for such products and a great place to learn about what the digital storage and memory industry has to offer media and entertainment professionals.


Tom Coughlin, president of Coughlin Associates, is a digital storage analyst and business/technology consultant. He is active with SMPTE, SNIA and the IEEE (he is president of IEEE-USA and active in the CES, where he is chairman of the Future Directions Committee) and other pro organizations.

NAB 2019: postPerspective Impact Award winners

postPerspective has announced the winners of our Impact Awards from NAB 2019. Seeking to recognize debut products with real-world applications, the postPerspective Impact Awards are voted on by an anonymous judging body made up of respected industry artists and pros (to whom we are very grateful). It’s working pros who are going to be using these new tools — so we let them make the call.

It was fun watching the user ballots come in and discovering which products most impressed our panel of post and production pros. There are no entrance fees for our awards. All that is needed is the ability to impress our voters with products that have the potential to make their workdays easier and their turnarounds faster.

We are grateful for our panel of judges, which grew even larger this year. NAB is exhausting for all, so their willingness to share their product picks and takeaways from the show isn’t taken for granted. These men and women truly care about our industry and sharing information that helps their fellow pros succeed.

To be successful, you can’t operate in a vacuum. We have found that companies who listen to their users, and make changes/additions accordingly, are the ones who get the respect and business of working pros. They aren’t providing tools they think are needed; they are actively asking for feedback. So, congratulations to our winners and keep listening to what your users are telling you — good or bad — because it makes a difference.

The Impact Award winners from NAB 2019 are:

• Adobe for Creative Cloud and After Effects
• Arraiy for DeepTrack with The Future Group’s Pixotope
• ARRI for the Alexa Mini LF
• Avid for Media Composer
• Blackmagic Design for DaVinci Resolve 16
• Frame.io
• HP for the Z6/Z8 workstations
• OpenDrives for Apex, Summit, Ridgeview and Atlas

(All winning products reflect the latest version of the product, as shown at NAB.)

Our judges also provided quotes on specific projects and trends that they expect will have an impact on their workflows.

Said one, “I was struck by the predicted impact of 5G. Verizon is planning to have 5G in 30 cities by end of year. The improved performance could reach 20x speeds. This will enable more leverage using cloud technology.

“Also, AI/ML is said to be the single most transformative technology in our lifetime. Impact will be felt across the board, from personal assistants, medical technology, eliminating repetitive tasks, etc. We already employ AI technology in our post production workflow, which has saved tens of thousands of dollars in the last six months alone.”

Another echoed those thoughts on AI and the cloud as well: “AI is growing up faster than anyone can reasonably productize. It will likely be able to do more than first thought. Post in the cloud may actually start to take hold this year.”

We hope that postPerspective’s Impact Awards give those who weren’t at the show, or who were unable to see it all, a starting point for their research into new gear that might be right for their workflows. Another way to catch up? Watch our extensive video coverage of NAB.

Cobalt Digital’s card-based solution for 4K/HDR conversions

Cobalt Digital was at NAB showing card-based solutions for openGear frames for 4K and HDR workflows. Cobalt’s 9904-UDX-4K up/down/cross converter and image processor offers an economical SDR-to-HDR and HDR-to-SDR conversion for 4K.

John Stevens, director of engineering at Burbank post house The Foundation, calls it “a Swiss Army knife” for a post facility.

The 9904-UDX-4K upconverts 12G/6G/3G/HD/SD to either UHD1 3840×2160 square division multiplex (SDM) or two-sample interleave (2SI) quad 3G-SDI-based formats, or it can output SMPTE ST 2082 12G-SDI for single-wire 4K transport. With both 12G-SDI and quad 3G-SDI inputs, the 9904-UDX-4K can downconvert 12G and quad UHD. The 9904-UDX-4K provides an HDMI 2.0 output for economical 4K video monitoring and offers numerous options, including SDR-to-HDR conversion and color correction.
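The difference between the two quad-link mappings is easy to miss. The short sketch below (illustrative only, not Cobalt code) shows how a single UHD frame splits four ways in each mode: SDM sends four spatial quadrants, while 2SI sends four full-frame, quarter-resolution pictures built from alternating sample pairs.

```python
# Illustrative only: splitting one 3840x2160 frame into four 1920x1080
# sub-images for quad 3G-SDI transport, in both mappings.
import numpy as np

frame = np.arange(2160 * 3840).reshape(2160, 3840)  # stand-in UHD frame

# Square division multiplex (SDM): each link carries one spatial quadrant.
sdm = [frame[:1080, :1920], frame[:1080, 1920:],
       frame[1080:, :1920], frame[1080:, 1920:]]

# Two-sample interleave (2SI): each link gets every other line and alternating
# PAIRS of horizontal samples, i.e. a full-frame quarter-resolution picture.
pairs = frame.reshape(2160, 1920, 2)     # group horizontal samples in twos
two_si = [pairs[r::2, c::2].reshape(1080, 1920)
          for r in (0, 1) for c in (0, 1)]

assert all(link.shape == (1080, 1920) for link in sdm + two_si)
```

The practical upshot: a lost or misrouted SDM link blanks a quadrant, while each 2SI link remains a complete, viewable picture.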

The 9904-UDX-4K-IP model offers the same functionality as the 9904-UDX-4K SDI-based model, plus it provides dual 10GigE ports to support the emerging uncompressed video/audio/data-over-IP standards.

The 9904-UDX-4K-DSP model provides the same functionality as the 9904-UDX-4K model and also offers a DSP-based platform that supports multiple audio DSP options, including Dolby realtime loudness leveling (automatic loudness processing), Dolby E/D/D+ encode/decode and Linear Acoustic Upmax automatic upmixing. Embedded audio and metadata are properly delayed and re-embedded to match any video processing delay, with full adjustment available for audio/video offset.

The product’s high-density openGear design allows for up to five 9904-UDX-4K cards to be installed in one 2RU openGear frame. Card control/monitoring is available via the DashBoard user interface, integrated HTML5 web interface, SNMP or Cobalt’s RESTful-based Reflex protocol.

“I have been looking for a de-embedder that will work with SMPTE ST-2048 raster sizes — specifically 2048×1080 and 4096×2160,” explains Stevens. “The reason this is important is Netflix deliverables require these rasters. We use all embedded audio and I need to de-embed for monitoring. The same Cobalt Digital card will take almost every SDI input from quad link to 12G and output HDMI. There are other converters that will do some of the same things, but I haven’t seen anything that does what this product does.”

NAB 2019: An engineer’s perspective

By John Ferder

Last week I attended my 22nd NAB, and I’ve got the Ross lapel pin to prove it! This was a unique NAB for me. I attended my first 20 NABs with my former employer, and most of those had me setting up the booth visits for the entire contingent of my co-workers and making sure that the vendors knew we were at each booth and were ready to go. Thursday was my “free day” to go wandering and looking at the equipment, cables, connectors, test gear, etc., that I was looking for.

This year, I’m part of a new project, so I went with a shopping list and a rough schedule with the vendors we needed to see. While I didn’t get everywhere I wanted to go, the three days were very full and very rewarding.

Beck Video IP panel

Sessions and Panels
I also got the opportunity to attend the technical sessions on Saturday and Sunday. I spent my time at the BEITC in the North Hall and the SMPTE Future of Cinema Conference in the South Hall. Beck TV gave an interesting presentation on constructing IP-based facilities of the future. While SMPTE ST 2110 has been completed and issued, there are still implementation issues, as NMOS is still being developed. Today’s systems are, and for the time being will remain, hybrid facilities. The decision to be made is whether the facility will be built on an IP routing switcher core with gateways to SDI, or on an SDI routing switcher core with gateways to IP.

Although more expensive, building around an IP core would be more efficient and future-proof. Fiber infrastructure design, test equipment and finding engineers who are proficient in both IP and broadcast (the “Purple Squirrels”) are large challenges as well.

A lot of attention was also paid to cloud production and distribution, both in the BEITC and the FoCC. One such presentation, at the FoCC, was on VFX in the cloud with an eye toward the development of 5G. Nathaniel Bonini of BeBop Technology reported that BeBop has a new virtual studio partnership with Avid, and that the cloud allows tasks to be performed in a “massively parallel” way. He expects that 5G mobile technology will facilitate virtualization of the network.

VFX in the Cloud panel

Ralf Schaefer, of the Fraunhofer Heinrich-Hertz Institute, expressed his belief that all devices will be attached to the cloud via 5G, resulting in no cables and no mobile storage media. 5G for AR/VR distribution will render the scene in the network and transmit it directly to the viewer. Denise Muyco of StratusCore provided a link to a virtual workplace: https://bit.ly/2RW2Vxz. She felt that 5G would assist in the speed of the collaboration process between artist and client, making it nearly “friction-free.” While there are always security concerns, 5G would also help the prosumer creators to provide more content.

Chris Healer of The Molecule stated that 5G should help to compress VFX and production workflows, enable cloud computing to work better and perhaps provide realtime feedback for better scene shots, showing live composites of VFX renders to production crews in remote locations.

The Floor
I was very impressed with a number of manufacturers this year. Ross Video demonstrated new capabilities of Inception and OverDrive. Ross also showed its new Furio SkyDolly three-wheel rail camera system. In addition, 12G single-link capability was announced for Acuity, Ultrix and other products.

ARRI AMIRA (Photo by Cotch Diaz)

ARRI showed a cinematic multicam system built using the AMIRA camera with a DTS FCA fiber camera adapter back and a base station controllable by Sony RCP1500 or Skaarhoj RCP. The Sony panel will make broadcast-centric people comfortable, but I was very impressed with the versatility of the Skaarhoj RCP. The system is available using either EF, PL or B4 mount lenses.

During the show, I learned from one of the manufacturers that one of my favorite OLED evaluation monitors is going to be discontinued. This was bad news for the new project I’ve embarked on. Then we came across the Plura booth in the North Hall. Plura was showing a new OLED monitor, the PRM-224-3G. It is a 24.5-inch diagonal OLED featuring two 3G/HD/SD-SDI and three analog inputs, built-in waveform monitors and vectorscopes, LKFS audio measurement, PQ and HLG, 10-bit color depth, 608/708 closed caption monitoring and more, for a very attractive price.

Sony showed the new HDC-3100/3500 3xCMOS HD cameras with global shutter. These have an upgrade program to UHD/HDR with an optional processor board and signal format software, and a 12G-SDI extension kit as well. There is an optional single-mode fiber connector kit to extend the maximum distance between camera and CCU to 10 kilometers. The CCUs work with the established 1000/1500 series of remote control panels and master setup units.

Sony’s HDC-3100/3500 3xCMOS HD camera

Canon showed its new line of 4K UHD lenses. One of my favorite lenses has been the HJ14ex4.3B HD wide-angle portable lens, which I have installed in many of the studios I’ve worked in. Canon showed the CJ14ex4.3B at NAB, and I was even more impressed with it. The 96.3-degree horizontal angle of view is stunning, and the minimization of chromatic aberration is carried over and perhaps improved from the HJ version. It features correction data that support the BT.2020 wide color gamut. It works with the existing zoom and focus demand controllers for earlier lenses, so it’s easily integrated into existing facilities.

Foot Traffic
The official total of registered attendees was 91,460, down from 92,912 in 2018. The Evertz booth was actually easy to walk through at 10 a.m. on Monday, which I found surprising given the breadth of interesting new products and technologies Evertz had to show this year. The South Hall had the big crowds, but Wednesday seemed emptier than usual, almost like a Thursday.

The NAB announced that next year’s exhibition will begin on Sunday and end on Wednesday. That change might boost overall attendance, but I wonder how adversely it will affect the attendance at the conference sessions themselves.

I still enjoy attending NAB every year, seeing the new technologies and meeting with colleagues and former co-workers and clients. I hope that next year’s NAB will be even better than this year’s.

Main Image: Barbie Leung.


John Ferder is the principal engineer at John Ferder Engineer, currently Secretary/Treasurer of SMPTE, an SMPTE Fellow, and a member of IEEE. Contact him at john@johnferderengineer.com.

NAB 2019: A cinematographer’s perspective

By Barbie Leung

As an emerging cinematographer, I always wanted to attend an NAB show, and this year I had my chance. I found that no amount of research can prepare you for the sheer size of the show floor, not to mention the backrooms, panels and after-hours parties. As a camera operator as well as a cinematographer who is invested in the post production and exhibition end of the spectrum, I found it absolutely impossible to see everything I wanted to or catch up with all the colleagues and vendors I wanted to. This show is a massive and draining ride.

Panasonic EVA1

There was a lot of buzz in the ether about 5G technology. The consensus seems to be that fast, accurate 5G will be the tipping point in implementing a lot of the tech that’s been talked about for years but hasn’t quite taken off yet, including the feasibility of autonomous vehicles and 8K streaming stateside.

It’s hard to deny the arrival of 8K technology while staring at the detail and textures on an 80-inch Sharp 8K professional display. Every roof tile, every wave in the ocean is rendered in rich, stunning detail.

In response to the resolution race, on the image capture end of things, ARRI had already announced and started taking orders for the Alexa Mini LF — its long-awaited entry into the large format game — in the week before NAB.

Predictably, at NAB we saw many lens manufacturers highlighting full-frame coverage. Canon introduced its Sumire Prime lenses, while Fujinon announced the Premista 28-100mm T2.9 full-format zoom.

Sumire Prime lenses

Camera folks, including many ASC members, are embracing large format capture for sure, but some insist the appeal lies not so much in the increased resolution, but rather in the depth and overall image quality.

Meanwhile, back in 35mm sensor land, Panasonic continues its energetic push of the EVA1 camera. Aside from presentations at its booth emphasizing “cinematic” images from this compact 5.7K camera, Panasonic has done a subtle but not-too-subtle job of disseminating the EVA1 throughout the trade show floor. If you’re at the Atomos booth, you’ll find director/cinematographers like Elle Schneider presenting work shot to Atomos recorders with the EVA1 balanced on a Ronin-S, and if you stop by Tiffen you’ll find an EVA1 being flown next to the Alexa Mini.

I found a ton of motion control at the show, from Shotover’s new compact B1 gyro-stabilized camera system to the affable folks at Arizona-based Defy, who showed off their Dactylcam Pro, an addictively smooth-to-operate cable-suspension rig. The Bolt high-speed Cinebot showed off robotic arms complete with a spinning hologram.

Garrett Brown at the Tiffen booth.

All this new gimbal technology is an ever-evolving game changer. Steadicam inventor Garrett Brown was on hand at the Tiffen booth to show the new M2 sled, which has motors elegantly built into the base. He enthusiastically heralded that camera operators can go faster and more “dangerously” than ever. There was so much motion control that it vied for attention alongside all the talk of 5G, 8K and LED lighting.

Some veterans of the show have expressed that this year’s show felt “less exciting” than shows of the past eight to 10 years. There were fewer big product launch announcements, perhaps due to past years where companies have been unable to fulfill the rush of post-NAB orders for new products for 12 or even 18 months. Vendors have been more conservative with what to hype, more careful with what to promise.

For a new attendee like me, there was more than enough new tech to explore. Above all else, NAB is really about the people you meet. The tech will be new next year, but the relationships you start and build at NAB are meant to last a career.

Main Image: ARRI’s Alexa Mini LF.


Barbie Leung is a New York-based cinematographer and camera operator working in independent film and branded content. Her work has played Sundance, the Tribeca Film Festival and Outfest. You can follow her on Instagram at @barbieleungdp.

Colorfront at NAB with 8K HDR, product updates

Colorfront, which makes on-set dailies and transcoding systems, has rolled out new 8K HDR capabilities and updates across its product lines. The company has also deepened its technology partnership with AJA and entered into a new collaboration with Pomfort to bring more efficient color and HDR management on-set.

Colorfront Transkoder is a post workflow tool for handling UHD, HDR camera, color and editorial/deliverables formats, with recent customers such as Sky, Pixelogic, The Picture Shop and Hulu. With a new HDR GUI, Colorfront’s Transkoder 2019 performs the realtime decompression/de-Bayer/playback of Red and Panavision DXL2 8K R3D material displayed on a Samsung 82-inch Q900R QLED 8K Smart TV in HDR and in full 8K resolution (7680 x 4320). The de-Bayering process is optimized through Nvidia GeForce RTX graphics cards with Turing GPU architecture (also available on Colorfront On-Set Dailies 2019), with 8K video output (up to 60p) using AJA Kona 5 video cards.

“8K TV sets are becoming bigger, as well as more affordable, and people are genuinely awestruck when they see 8K camera footage presented on an 8K HDR display,” said Aron Jaszberenyi, managing director, Colorfront. “We are actively working with several companies around the world originating 8K HDR content. Transkoder’s new 8K capabilities — across on-set, post and mastering — demonstrate that 8K HDR is perfectly accessible to an even wider range of content creators.”

Powered by a re-engineered version of Colorfront Engine and featuring the HDR GUI and 8K HDR workflow, Transkoder 2019 supports camera/editorial formats including Apple ProRes RAW, Blackmagic RAW, ARRI Alexa LF/Alexa Mini LF and Codex HDE (High Density Encoding).

Transkoder 2019’s mastering toolset has been further expanded to support Dolby Vision 4.0 as well as Dolby Atmos for the home with IMF and Immersive Audio Bitstream capabilities. The new Subtitle Engine 2.0 supports CineCanvas and IMSC 1.1 rendering for preservation of content, timing, layout and styling. Transkoder can now also package multiple subtitle language tracks into the timeline of an IMP. Further features support fast and efficient audio QC, including solo/mute of individual tracks on the timeline, and a new render strategy for IMF packages enabling independent audio and video rendering.

Colorfront also showed the latest versions of its On-Set Dailies and Express Dailies products for motion pictures and episodic TV production. On-Set Dailies and Express Dailies both now support ProRes RAW, Blackmagic RAW, ARRI Alexa LF/Alexa Mini LF and Codex HDE. As with Transkoder 2019, the new version of On-Set Dailies supports real-time 8K HDR workflows to support a set-to-post pipeline from HDR playback through QC and rendering of HDR deliverables.

In addition, AJA Video Systems has released v3.0 firmware for its FS-HDR realtime HDR/WCG converter and frame synchronizer. The update introduces enhanced coloring tools together with several other improvements for broadcast, on-set, post and pro AV HDR production developed by Colorfront.

A new, integrated Colorfront Engine Film Mode offers an ACES-based grading and look creation toolset with ASC Color Decision List (CDL) controls, built-in LOOK selection including film emulation looks, and variable Output Mastering Nit Levels for PQ, HLG Extended and P3 colorspace clamp.

Since launching in 2018, FS-HDR has been used on a wide range of TV and live outside broadcast productions, as well as motion pictures including Paramount Pictures’ Top Gun: Maverick, shot by Claudio Miranda, ASC.

Colorfront licensed its HDR Image Analyzer software to AJA for AJA’s HDR Image Analyzer in 2018. A new version of AJA HDR Image Analyzer is set for release during Q3 2019.

Finally, Colorfront and Pomfort have teamed up to integrate their respective HDR-capable on-set systems. This collaboration, harnessing Colorfront Engine, will include live CDL reading in ACES pipelines between Colorfront On-Set/Express Dailies and Pomfort LiveGrade Pro, giving motion picture productions better control of HDR images while simplifying their on-set color workflows and dailies processes.

AWS at NAB with a variety of partners, cloud workflows

During NAB 2019, Amazon Web Services (AWS) showcased advances for content creation, media supply chains and content distribution that improve agility and enhance quality across video workflows. Demonstrations included enhanced live and on-demand video workflows, such as next-gen transcoding, studio in the cloud, content protection, low latency and personalization. The company also highlighted cloud-based machine learning capabilities for content redaction, highlight creation, video clipping, live subtitling and metadata extraction.

AWS was joined by 12 technology partners in showing solutions that help users create, protect, distribute and monetize streaming video content. More than 60 Amazon Partner members across the show floor demonstrated media solutions built on AWS and interoperable with AWS services to deliver scalable video workflows.

Here are some workflows highlighted:
• Studio in the cloud – Users can deploy a creative studio in the cloud for visual effects, animation and editing workloads. They can scale rendering, virtual workstations and data storage globally with AWS Thinkbox Deadline, Amazon Elastic Compute Cloud (EC2) instances and AWS Cloud storage options such as Amazon Simple Storage Service (Amazon S3), Amazon FSx and more.
• Next-generation transcoding – AWS Elemental MediaConvert spotlighted advanced features for file-based video processing. Support for IMF inputs and CMAF output simplifies video delivery, and integrated Quality-Defined Variable Bitrate (QVBR) rate control enables high-quality video while lowering bitrates, storage and bandwidth requirements (see the configuration sketch after this list).
• Cloud DVR services – AWS Elemental MediaPackage enables an end-to-end cloud DVR workflow that lets content providers deliver DVR-like experiences, such as catch-up and start-over functionality for viewing on mobile and other over-the-top (OTT) devices.

AWS also highlighted intelligent workflows and automated capabilities:
• Media-to-cloud migration – Media asset management tools integrate with AWS Elemental MediaConvert, Amazon S3 and Amazon CloudFront to accelerate migration of large-scale video archives into the cloud. Built-in metadata tools improve search and management for massive media archives.
• Smart language workflows – AWS Elemental Media Services and Amazon Machine Learning work together to automate realtime transcription, caption creation and multi-language subtitling and dubbing, as well as creation of video clips based on caption text.
• Deep media archive – The new Amazon S3 Glacier Deep Archive storage class is a low-cost cloud storage offering that enables customers to eliminate digital tape from their media infrastructures. It is ideally suited to cold media archives and to second copy and disaster recovery needs.

Quantum offers new F-Series NVMe storage arrays

During the NAB show, Quantum introduced its new F-Series NVMe storage arrays designed for performance, availability and reliability. Using non-volatile memory express (NVMe) Flash drives for ultra-fast reads and writes, the series supports massively parallel processing and is intended for studio editing, rendering and other performance-intensive workloads using large unstructured datasets.

Incorporating the latest Remote Direct Memory Access (RDMA) networking technology, the F-Series provides direct access between workstations and the NVMe storage devices, resulting in predictable and fast network performance. By combining these hardware features with the new Quantum Cloud Storage Platform and the StorNext file system, the F-Series offers end-to-end storage capabilities for post houses, broadcasters and others working in rich media environments, such as visual effects rendering.

The first product in the F-Series is the Quantum F2000, a 2U dual-node server with two hot-swappable compute canisters and up to 24 dual-ported NVMe drives. Each compute canister can access all 24 NVMe drives and includes processing power, memory and connectivity specifically designed for high performance and availability.

The F-Series is based on the Quantum Cloud Storage Platform, a software-defined block storage stack tuned specifically for video and video-like data. The platform eliminates data services unrelated to video while enhancing data protection, offering networking flexibility and providing block interfaces.

According to Quantum, the F-Series is as much as five times faster than traditional Flash storage/networking, delivering extremely low latency and hundreds of thousands of IOPS per chassis. The series allows users to reduce infrastructure costs by moving from Fibre Channel to Ethernet IP-based infrastructures. Additionally, users leveraging a large number of HDDs or SSDs to meet their performance requirements can gain back racks of data center space.

The F-Series is the first product line based on the Quantum Cloud Storage Platform.

HP shows off new HP Z6 and Z8 G4 workstations at NAB

HP was at NAB demoing its new HP Z6 and Z8 G4 workstations, which feature Intel Xeon scalable processors and Intel Optane DC persistent memory technology to eliminate the barrier between memory and storage for compute-intensive workflows, including machine learning, multimedia and VFX. The new workstations offer accelerated performance with a processor architecture that allows users to work faster and more efficiently.

Intel Optane DC allows users to improve system performance by moving large datasets closer to the CPU so data can be accessed, processed and analyzed in realtime and in a more affordable way. Because the memory is persistent, there is no data loss after a power cycle or application closure. Once applications are written to take advantage of this new technology, users will benefit from accelerated workflows and little or no downtime.

Targeting 8K video editing in realtime and for rendering workflows, the HP Z6 G4 workstation is equipped with two next-generation Intel Xeon processors providing up to 48 total processor cores in one system, Nvidia and AMD graphics and 384GB of memory. Users can install professional-grade storage hardware without using standard PCIe slots, offering the ability to upgrade over time.

Powered by up to 56 processing cores and up to 3TB of high-speed memory, the HP Z8 G4 workstation can run complex 3D simulations, supporting VFX workflows and handling advanced machine learning algorithms. It is certified for some of the most-used software apps, including Autodesk Flame and DaVinci Resolve.

HP’s Remote Graphics Software (RGS), included with all HP Z workstations, enables remote workstation access from any Windows, Linux or Mac device.

Avid is collaborating with HP to test RGS with Media Composer|Cloud VM.

The HP Z6 G4 workstation with new Intel Xeon processors is available now for the base price of $2,372. The HP Z8 G4 workstation starts at $2,981.

AI and deep learning at NAB 2019

By Tim Nagle

If you’ve been there, you know. Attending NAB can be both exciting and a chore. The vast show floor spreads across three massive halls and several hotels, and it will challenge even the most comfortable shoes. With an engineering background and my daily position as a Flame artist, I am definitely a gear-head, but I feel I can hardly claim that title at these events.

Here are some of my takeaways from the show this year…

Tim Nagle

8K
Having listened to the rumor mill, this year’s event promised to be exciting. And for me, it did not disappoint. First impressions: 8K infrastructure is clearly the goal of the manufacturers. Massive data rates and more Ks are becoming the norm. Everybody seemed to have an 8K workflow announcement. As a Flame artist, I’m not exactly looking forward to working on 8K plates. Sure, it is a glorious number of pixels, but the challenges are very real. While this may be the hot topic of the show, the fact that it is on the horizon further solidifies the need for the industry at large to have a solid 4K infrastructure. Hey, maybe we can even stop delivering SD content soon? All kidding aside, the systems and infrastructure elements being designed are quite impressive. Seeing storage solutions that can read and write at these astronomical speeds is just jaw dropping.
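Some quick arithmetic shows why those data rates are so daunting. The sketch below uses my own back-of-the-envelope figures, not any vendor's spec, to compare uncompressed 10-bit 4:2:2 video at 60p across HD, 4K and 8K rasters.

```python
# Back-of-the-envelope bandwidth for uncompressed 10-bit 4:2:2 video at 60p.
def uncompressed_gbps(width, height, fps=60, bits=10, samples_per_pixel=2):
    # 4:2:2 carries two samples per pixel (Y plus alternating Cb/Cr).
    return width * height * fps * bits * samples_per_pixel / 1e9

for name, w, h in (("HD", 1920, 1080), ("4K UHD", 3840, 2160), ("8K", 7680, 4320)):
    print(f"{name}: {uncompressed_gbps(w, h):.1f} Gbps")
# HD: 2.5 Gbps, 4K UHD: 10.0 Gbps, 8K: 39.8 Gbps -- every step quadruples the load
```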

Young Attendees
Attendance remained relatively stable this year, but what I did notice was a lot of young faces making their way around the halls. It seemed like high school and university students were able to take advantage of interfacing with manufacturers, as well as some great educational sessions. This is exciting, as I really enjoy watching young creatives get the opportunity to express themselves in their work and make the rest of us think a little differently.

Blackmagic Resolve 16

AI/Deep Learning
Speaking of the future, AI and deep learning algorithms are being implemented into many parts of our industry, and this is definitely something to watch for. The possibilities to increase productivity are real, but these technologies are still relatively new and need time to mature. Some of the post apps taking advantage of these algorithms come from Blackmagic, Autodesk and Adobe.

At the show, Blackmagic announced their Neural Engine AI processing, which is integrated into DaVinci Resolve 16 for facial recognition, speed warp estimation and object removal, to name just a few. These features will add to the productivity of this software, further claiming its place among the usual suspects for more than just color correction.

Flame 2020

The Autodesk Flame team has implemented deep learning into their app as well. It promises really impressive uses for retouching and relighting, as well as creating depth maps of scenes. Autodesk demoed a shot of a woman on the beach, with no real key light possibility and very flat, diffused lighting in general. With a few nodes, they were able to relight her face to create a sense of depth and lighting direction. This same technique can be used for skin retouching as well, which is very useful in my everyday work.

Adobe has also been working on their implementation of AI with the integration of Sensei. In After Effects, the content-aware algorithms will help to re-texture surfaces, remove objects and edge blend when there isn’t a lot of texture to pull from. Watching a demo artist move through a few shots, removing cars and people from plates with relative ease and decent results, was impressive.

These demos have all made their way online, and I encourage everyone to watch. Seeing where we are headed is quite exciting. We are on our way to these tools being very accurate and useful in everyday situations, but they are all very much a work in progress. Good news, we still have jobs. The robots haven’t replaced us yet.


Tim Nagle is a Flame artist at Dallas-based Lucky Post.

NAB 2019: First impressions

By Mike McCarthy

There are always a slew of new product announcements during the week of NAB, and this year was no different. As a Premiere editor, the developments from Adobe are usually the ones most relevant to my work and life. Similar to last year, Adobe was able to get their software updates released a week before NAB, instead of for eventual release months later.

The biggest new feature in the Adobe Creative Cloud apps is After Effects’ new “Content Aware Fill” for video. This will use AI to generate image data to automatically replace a masked area of video, based on surrounding pixels and surrounding frames. This functionality has been available in Photoshop for a while, but the challenge of bringing that to video is not just processing lots of frames but keeping the replaced area looking consistent across the changing frames so it doesn’t stand out over time.
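Adobe hasn't published its implementation, but a crude classical analogue makes the problem clear. The sketch below inpaints each frame independently with OpenCV; played back, the filled region "boils" from frame to frame, which is exactly the temporal-consistency problem an AI approach has to solve. Filenames are placeholders.

```python
# Crude per-frame analogue using classical OpenCV inpainting; the mask image
# must match the frame size (white = area to fill).
import cv2

cap = cv2.VideoCapture("shot.mp4")
mask = cv2.imread("remove_area.png", cv2.IMREAD_GRAYSCALE)

filled = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Synthesize the masked region from surrounding pixels in THIS frame only.
    filled.append(cv2.inpaint(frame, mask, inpaintRadius=3,
                              flags=cv2.INPAINT_TELEA))
cap.release()
# Nothing above looks at neighboring frames, so the fill flickers over time --
# the consistency problem that sits on top of spatial synthesis.
```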

The other key part to this process is mask tracking, since masking the desired area is the first step in that process. Certain advances have been made here, but based on tech demos I saw at Adobe Max, more is still to come, and that is what will truly unlock the power of AI that they are trying to tap here. To be honest, I have been a bit skeptical of how much AI will impact film production workflows, since AI-powered editing has been terrible, but AI-powered VFX work seems much more promising.

Adobe’s other apps got new features as well, with Premiere Pro adding Free-Form bins for visually sorting through assets in the project panel. This affects me less, as I do more polishing than initial assembly when I’m using Premiere. They also improved playback performance for Red files, acceleration with multiple GPUs and certain 10-bit codecs. Character Animator got a better puppet rigging system, and Audition got AI-powered auto-ducking tools for automated track mixing.

Blackmagic
Elsewhere, Blackmagic announced a new version of Resolve, as expected. Blackmagic RAW is supported on a number of new products, but I am not holding my breath to use it in Adobe apps anytime soon, similar to ProRes RAW. (I am just happy to have regular ProRes output available on my PC now.) They also announced a new 8K HyperDeck product that records quad 12G-SDI to HEVC files. While I don’t think that 8K will replace 4K television or cinema delivery anytime soon, there are legitimate markets that need 8K resolution assets. Surround video and VR would be one, as would live background screening instead of greenscreening for composite shots. No image replacement in post, as it is capturing in-camera, and your foreground objects are accurately “lit” by the screens. I expect my next major feature will be produced with that method, but the resolution wasn’t there for the director to use that technology for the one I am working on now (enter 8K…).

AJA
AJA was showing off the new Ki Pro Go, which records up to four separate HD inputs to H.264 on USB drives. I assume this is intended for dedicated ISO recording of every channel of a live-switched event or any other multicam shoot. Each channel can record up to 1080p60 at 10-bit color to H.264 files in MP4 or MOV, at up to 25 Mb/s.
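For a sense of the storage budget, a quick back-of-the-envelope calculation (mine, not AJA's) for four ISO channels at that stated 25 Mb/s ceiling:

```python
# Rough storage math for four simultaneous H.264 ISO recordings at 25 Mb/s.
channels, megabits_per_sec, seconds_per_hour = 4, 25, 3600
gb_per_hour = channels * megabits_per_sec / 8 * seconds_per_hour / 1000
print(f"~{gb_per_hour:.0f} GB per hour across all four recordings")  # ~45 GB
```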

HP
HP had one of their existing Z8 workstations on display, demonstrating the possibilities that will be available once Intel releases their upcoming DIMM-based Optane persistent memory technology to the market. I have loosely followed the Optane story for quite a while, but had not envisioned this impacting my workflow at all in the near future due to software limitations. But HP claims that there will be options to treat Optane just like system memory (increasing capacity at the expense of speed) or as SSD drive space (with DIMM slots having much lower latency to the CPU than any other option). So I will be looking forward to testing it out once it becomes available.
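The "drive space" mode is conceptually similar to mapping a file on a persistent-memory-backed (DAX) filesystem and using plain loads and stores instead of read/write calls. Here is a minimal sketch of that idea in Python; the mount point is hypothetical, and production code would more likely use Intel's PMDK libraries.

```python
# Illustrative sketch: treat a pre-sized file on a hypothetical
# persistent-memory (DAX) mount as directly addressable memory.
import mmap

with open("/mnt/pmem0/scratch.bin", "r+b") as f:   # hypothetical DAX mount
    buf = mmap.mmap(f.fileno(), 0)      # map the whole file into memory
    buf[0:11] = b"hello pmem!"          # plain stores, no write() syscalls
    buf.flush()                         # make the change durable
    buf.close()
```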

Dell
Dell was showing off their relatively new 49-inch double-wide curved display. The 4919DW has a resolution of 5120×1440, making it equivalent to two 27-inch QHD displays side by side. I find that 32:9 aspect ratio to be a bit much for my tastes, with 21:9 being my preference, but I am sure there are many users who will want the extra width.

Digital Anarchy
I also had a chat with the people at Digital Anarchy about their Premiere Pro-integrated Transcriptive audio transcription engine. Having spent the last three months editing a movie that is split between English and Mandarin dialogue, needing to be fully subtitled in both directions, I can see the value in their tool-set. It harnesses the power of AI-powered transcription engines online and integrates the results back into your Premiere sequence, creating an accurate script as you edit the processed clips. In my case, I would still have to handle the translations separately once I had the Mandarin text, but this would allow our non-Mandarin speaking team members to edit the Mandarin assets in the movie. And it will be even more useful when it comes to creating explicit closed captioning and subtitles, which we have been doing manually on our current project. I may post further info on that product once I have had a chance to test it out myself.
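The general pattern behind tools like this is straightforward, even though Transcriptive's own format is proprietary: take timed segments from any speech-to-text engine and emit caption files rather than typing them by hand. Here is a minimal sketch with made-up segment data:

```python
# Turn timed transcript segments into an .srt subtitle file.
def srt_time(seconds):
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{int(h):02}:{int(m):02}:{int(s):02},{int(s % 1 * 1000):03}"

segments = [  # (start_sec, end_sec, text) from any transcription engine
    (0.0, 2.4, "Welcome back to the cutting room."),
    (2.4, 5.1, "Let's look at yesterday's selects."),
]

with open("dialogue.srt", "w", encoding="utf-8") as f:
    for i, (start, end, text) in enumerate(segments, 1):
        f.write(f"{i}\n{srt_time(start)} --> {srt_time(end)}\n{text}\n\n")
```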

Summing Up
There were three halls of other products to look through and check out, but overall, I was a bit underwhelmed at the lack of true innovation I found at the show this year.

Full disclosure, I was only able to attend for the first two days of the exhibition, so I may have overlooked something significant. But based on what I did see, there isn’t much else that I am excited to try out or that I expect to have much of a serious impact on how I do my various jobs.

It feels like most of the new things we are seeing are merely commoditized versions of products that may originally have been truly innovative when they were initially released, but now are just slightly more fleshed out versions over time.

There seems to be much less pioneering of truly new technology and more repackaging of existing technologies into other products. I used to come to NAB to see all the flashy new technologies and products, but now it feels like the main thing I am doing there is a series of annual face-to-face meetings, and that’s not necessarily a bad thing.

Until next year…


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Sony’s NAB updates — a cinematographer’s perspective

By Daniel Rodriguez

With its NAB offerings, Sony once again showed that they have a firm presence in nearly every stage of production, be it motion picture, broadcast media or short form. The company continues to keep up to date with the current demands while simultaneously preparing for the inevitable wave of change that seems to come faster and faster each year. While the introduction of new hardware was kept to a short list this year, many improvements to existing hardware and software were released to ensure Sony products — both new and existing — still have a firm presence in the future.

The ability to easily access, manipulate, share and stream media has always been a priority for Sony. This year at NAB, Sony continued to demonstrate its IP Live, SR Live, XDCAM Air and Media Backbone Hive platforms, which give users the opportunity to manage media all over the globe. IP Live enables remote production, keeping the core processing hardware in one place while letting users access it from anywhere. This extends to 4K and HDR/SDR streaming as well, which is where SR Live comes into play. SR Live allows a native 4K HDR signal to be processed into full HD and regular SDR signals, and a core improvement is the ability to adjust the curves during a live broadcast for any issues that may arise in converting HDR signals to SDR.
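For readers curious what "adjusting the curves" involves mathematically, the sketch below decodes a PQ (SMPTE ST 2084) code value to light using the published EOTF, then applies a toy highlight knee to fit SDR. The EOTF constants come from the standard; the knee is purely illustrative and is not Sony's SR Live processing.

```python
# PQ decode per SMPTE ST 2084, followed by a toy highlight-compression knee.
def pq_to_nits(e):
    """Decode a normalized PQ code value (0..1) to absolute light in nits."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    p = e ** (1 / m2)
    return 10000 * (max(p - c1, 0) / (c2 - c3 * p)) ** (1 / m1)

def nits_to_sdr(nits, knee=80.0, hdr_peak=1000.0):
    """Toy SDR mapping: linear below the knee, compress highlights above it."""
    if nits <= knee:
        return nits / 100.0                      # SDR reference white ~100 nits
    squeezed = (nits - knee) * (100.0 - knee) / (hdr_peak - knee)
    return min((knee + squeezed) / 100.0, 1.0)

print(nits_to_sdr(pq_to_nits(0.58)))             # mid-high PQ code -> ~0.83 SDR
```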

For other media, including XDCAM-based cameras, XDCAM Air allows for the wireless transfer and streaming of most media through QoS services, and turns almost any easily accessible camera with wireless capabilities into a streaming tool.

Media Backbone Hive allows users to access their media anywhere they want. Rather than just being an elaborate cloud service, Media Backbone Hive allows internal Adobe Cloud-based editing, accepts nearly every file type, allows a user to embed metadata and makes searching simple with keywords and phrases that are spoken in the media itself.

For the broadcast market, Sony introduced the Sony HDC-5500 4K HDR three-CMOS sensor camcorder which they are calling their “flagship” camera in this market. Offering 4K HDR and high frame rates, the camera also offers a global shutter — which is essential for dealing with strobing from lights — and can now capture fast action without the infamous rolling shutter blur. The camera allows for 4K output over 12G SDI, allowing for 4K monitoring and HDR, and as these outputs continue to be the norm, the introduction of the HDC-5500 will surely be a hit with users, especially with the addition of global shutter.

Sony is very much a company that likes to focus on the longevity of their previous releases… cameras especially. Sony’s FS7 is a camera that has excelled in its field since its introduction in 2014, and to this day is an extremely popular choice for short form, narrative and broadcast media. Like other Sony camera bodies, the FS7 allows for modular builds and add-ons, and this is where the new CBK-FS7BK ENG Build-Up Kit comes in. Sporting a shoulder mount and ENG viewfinder, the kit includes an extension in the back that allows for two wireless audio inputs, RAW output, streaming and file transfer via Wireless LAN or 4G/LTE connection, as well as QoS streaming (only through XDCAM Air) and timecode input. This CBK-FS7BK ENG Build-Up Kit turns the FS7 into an even more well-rounded workhorse.

The Venice is Sony’s flagship cinema camera, replacing the F65, which remains a brilliant and popular camera — it popped up as recently as last year’s Annihilation. The Venice takes a leap further in entering the full-frame, VistaVision market. Boasting top-of-the-line specs and a smaller, more modular build than the F65, the camera isn’t exactly a new release — it came out in November 2017 — but Sony has secured longevity for its flagship at a time when other camera manufacturers are just releasing their own VistaVision-sensored cameras and smaller alternatives.

Sony recently released a firmware update for the Venice that adds X-OCN XT — the company’s highest-quality form of compressed 16-bit RAW — and two new imager modes, allowing the camera to sample 5.7K 16:9 in full frame and 6K 2.39:1 in full width, as well as 4K output over 6G/12G-SDI and wireless remote control with the CBK-WA02. Since the Venice is small enough to be placed on harder-to-reach mounts, wireless control is quickly becoming a feature many camera assistants need. New anamorphic desqueeze modes for 1.25x, 1.3x, 1.5x and 1.8x have also been added, which is huge, since older anamorphic designs are constantly being revisited and new ones created — such as the Technovision 1.5x, made famous by Vittorio Storaro on Apocalypse Now (1979), and the Cooke Full Frame Anamorphic 1.8x. With VistaVision full frame now an easily accessible way of filming, new approaches to lensing are becoming common, and anamorphic systems are no longer limited to 1.3x and 2x squeezes. It’s reassuring to see Sony look out for storytellers who may want to employ less common anamorphic desqueeze ratios.
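
For those keeping score on those desqueeze factors, the delivered aspect ratio is simply the captured area’s native aspect multiplied by the anamorphic squeeze. A quick back-of-the-envelope sketch in Python (the sensor dimensions here are illustrative, not exact Venice imager modes):

    def delivered_aspect(sensor_w, sensor_h, squeeze):
        # Final aspect ratio after desqueeze: native aspect x anamorphic factor.
        return (sensor_w / sensor_h) * squeeze

    # A 3:2 full-frame area with a 1.5x lens (Technovision-style) lands near 2.25:1,
    # while a 6:5 crop with a classic 2x anamorphic yields the familiar 2.4:1.
    print(round(delivered_aspect(36, 24, 1.5), 2))  # 2.25
    print(round(delivered_aspect(24, 20, 2.0), 2))  # 2.4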

As larger resolutions and higher frame rates become the norm, Sony has introduced the new SxS Pro X cards. A follow-up to the hugely successful SxS Pro+ cards, these new cards boast an incredible transfer speed of 10Gbps (1,250MB/s) in 120GB and 240GB capacities. This is a huge step up from the previous SxS Pro+ cards, which offered a read speed of 3.5Gbps and a write speed of 2.8Gbps. Probably the most exciting part of the launch is the corresponding SBAC-T40 card reader, which promises to offload a full 240GB card in about 3.5 minutes.
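
That offload figure is easy to sanity-check: at 10Gbps, a 240GB card moves in a little over three minutes, so the quoted 3.5 minutes leaves sensible headroom for overhead. A quick check in Python:

    def offload_minutes(capacity_gb, link_gbps):
        # Best-case transfer time: capacity in gigabytes over a link in gigabits/s.
        return capacity_gb * 8 / link_gbps / 60

    print(round(offload_minutes(240, 10), 1))  # ~3.2 minutes, in line with
                                               # the quoted 3.5-minute figure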

Sony’s newest addition to the Venice camera is the Rialto extension system. Using the Venice’s modular build, the Rialto is a hardware extension that allows you to remove the main body’s sensor and install it into a smaller body unit which is then tethered either nine or 18 feet by cable back to the main body. Very reminiscent of the design of ARRI’s Alexa M unit, the Rialto goes further by being an extension of its main system rather than a singular system, which may bring its own issues. The Rialto allows users to reach spots where it may otherwise prove difficult using the actual Venice body. Its lightweight design allows users to mount it nearly anywhere. Where other camera bodies that are designed to be smaller end up heavy when outfitted with accessories such as batteries and wireless transmitters, the Rialto can easily be rigged to aerials, handhelds, and Steadicams. Though some may question why you wouldn’t just get a smaller body from another camera company, the big thing to consider is that the Rialto isn’t a solution to the size of the Venice body — which is already very small, especially compared to the previous F65 — but simply another tool to get the most out of the Venice system, especially considering you’re not sacrificing anything as far as features or frame rates. The Rialto is currently being used on James Cameron’s Avatar sequels, as its smaller body allows him to employ two simultaneously for true 3D recording whilst giving all the options of the Venice system.

With innovations in broadcast and motion picture production, there is a constant drive to push boundaries and make capture and distribution instant. By building a huge network for distribution, streaming, capture and storage, Sony has not only secured its position as the powerhouse it already is, but also ensured its presence in an ever-changing future.


Daniel Rodriguez is a New York-based director and cinematographer. Having spent years working for companies such as Light Iron, Panavision and ARRI Rental, he currently works as a freelance cinematographer, filming narrative and commercial work throughout the five boroughs.


NAB 2019: Maxon acquires Redshift Rendering Technologies

Maxon, maker of Cinema 4D, has purchased Redshift Rendering Technologies, developers of the Redshift rendering engine. Redshift is a flexible GPU-accelerated renderer targeting high-end production, offering an extensive suite of features that makes rendering complicated 3D projects faster. It is available as a plugin for Maxon’s Cinema 4D and other industry-standard 3D applications.

“Rendering can be the most time-consuming and demanding aspect of 3D content creation,” said David McGavran, CEO of Maxon. “Redshift’s speed and efficiency combined with Cinema 4D’s responsive workflow make it a perfect match for our portfolio.”

“We’ve always admired Maxon and the Cinema 4D community, and are thrilled to be a part of it,” said Nicolas Burtnyk, co-founder/CEO, Redshift. “We are looking forward to working closely with Maxon, collaborating on seamless integration of Redshift into Cinema 4D and continuing to push the boundaries of what’s possible with production-ready GPU rendering.”

Redshift is used by post companies, including Technicolor, Digital Domain, Encore Hollywood and Blizzard. Redshift has been used for VFX and motion graphics on projects such as Black Panther, Aquaman, Captain Marvel, Rampage, American Gods, Gotham, The Expanse and more.

Avid offers rebuilt engine and embraces cloud, ACES, AI, more

By Daniel Restuccio

During its Avid Connect conference just prior to NAB, Avid announced a Media Composer upgrade, support for ACES color standard and additional upgrades to a number of its toolsets, apps and services, including Avid Nexis.

The chief news from Avid is that Media Composer, its flagship video editing system, has been significantly retooled, sporting a new user interface, a rebuilt engine and additional built-in audio, visual effects, color grading and delivery features.

In a pre-NAB interview with postPerspective, Avid president/CEO Jeff Rosica said, “We’re really trying to leapfrog and jump ahead to where the creative tools need to go.”

Avid asked itself what it needed to do “to help production and post production really innovate.” Rosica pointed to TV shows and films and how complex they’re getting. “That means they’re dealing with more media, more elements, and with so many more decisions just in the program itself. Let alone the fact that the (TV or film) project may have to have 20 different variants just to go out the door.”


The new paneled user interface simplifies the workspace, with redesigned bins for finding media faster and task-based workspaces that show only what the user wants and needs to see.

Dave Colantuoni, VP of product management at Avid, said they spent the most time studying the way editors manage and organize bins and content within Media Composer. “Some of our editors use 20, 30, 40 bins at a time. We’ve really spent a lot of time so that we can provide an advantage to you in how you approach organizing your media.”

Avid is also offering more efficient workflow solutions. Users, without leaving Media Composer, can work in 8K, 16K or HDR thanks to the newly built-in 32-bit full float color pipeline. Additionally, Avid continues to work with OTT content providers to help establish future industry standards.

“We’re trying to give as much creative power to the creative people as we can, and bring them new ways to deal with things,” said Rosica. “We’re also trying to help the workflow side. We’re trying to help make sure production doesn’t have to do more with less, or sometimes more with the same budget. Cloud (computing) allows us to bring a lot of new capabilities to the products, and we’re going to be cloud powering a lot of our products… more than you’ve seen before.”

The new Media Composer engine is now native OP1A, can handle more video and audio streams, offers Live Timeline and background rendering, and a distributed processing add-on option to shorten turnaround times and speed up post production.

“This is something our competitors do pretty well,” explained Colantuoni. “And we have different instances of OP1A working among the different Avid workflows. Until now, we’ve never had it working natively inside of Media Composer. That’s super-important because a lot of capabilities started in OP1A, and we can now keep it pristine through the pipeline.”

Said Rosica, “We are also bringing the ability to do distributed rendering. An editor no longer has to render or transcode on their machine. They can perform those tasks in a distributed or centralized render farm environment. That allows this work to get done behind the scenes. This is an Avid-supplied solution, so it will be very powerful and reliable. Users will be able to do background rendering, as well as distributed rendering, and move things off their machine to other centralized machines. That’s going to be very helpful for a lot of post workflows.”

Avid had previously offered three main flavors of Media Composer: Media Composer First, the free version; Media Composer; and Media Composer Ultimate. Now they are also offering a new Enterprise version.

For the first time, large production teams can customize the interface for any role in the organization, whether the user is a craft editor, assistant, logger or journalist. It also offers stronger security controls to lock down content, reducing the chances of unauthorized leaks of sensitive media. Enterprise also integrates with Editorial Management 2019.

“The new fourth tier at the top is what we are calling the Enterprise Edition, or Enterprise. That word doesn’t necessarily mean broadcast,” said Rosica. “It means for business deployment. This is for post houses and production companies, broadcast and even studios. It lets the business, the enterprise, the production company or the post house literally customize interfaces and workspaces to the job role or to the user.”

Nexis Cloudspaces
Avid also announced Avid Nexis|Cloudspaces. Instead of resorting to NAS or external drives for media storage, editorial teams can use Cloudspaces to offload projects and assets not currently in production. Cloudspaces extends Avid Nexis storage directly to Microsoft Azure.

“Avid Nexis|Cloudspaces brings the power of the cloud to Avid Nexis, giving organizations a cost-effective and more efficient way to extend Avid Nexis storage to the cloud for reliable backup and media parking,” said Dana Ruzicka, chief product officer/senior VP at Avid. “Working with Microsoft, we are offering all Avid Nexis users a limited-time free offer of 2TB of Microsoft Azure storage that is auto-provisioned for easy setup and as much capacity as you need, when you need it.”

ACES
The Academy Color Encoding System (ACES) team also announced that Avid is now part of the ACES Logo Program, as the first Product Partner in the new Editorial Finishing product category. ACES is a free, open, device-independent color management and image interchange system, and it serves as the global standard for color management, digital image interchange and archiving. Avid will implement ACES in conformance with the logo program specifications, ensuring a consistent, high-quality ACES color-managed video creation workflow.
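
As a concrete illustration of what an ACES-managed pipeline does under the hood, here is a minimal sketch using the open-source OpenColorIO library (1.x-style Python bindings) and one of the Academy’s published ACES configs. The config path is a placeholder, the colorspace names follow the ACES 1.0.3 OCIO config, and none of this reflects Avid’s actual implementation.

    import PyOpenColorIO as OCIO

    # Load an ACES OCIO config (path is a placeholder for a local install).
    config = OCIO.Config.CreateFromFile("/path/to/aces_1.0.3/config.ocio")

    # Build a processor from the ACES2065-1 interchange space to a Rec.709 output.
    processor = config.getProcessor("ACES - ACES2065-1", "Output - Rec.709")

    # Apply the transform to one scene-linear RGB pixel (18% gray).
    print(processor.applyRGB([0.18, 0.18, 0.18]))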

“We’re pleased to welcome Avid to the ACES logo program,” said Andy Maltz, managing director of the ACES Council. “Avid’s participation not only benefits editors that need their editing systems to accurately manage color, but also the broader ACES end-user community through expanded adoption of ACES standards and best practices.”

What’s Next?
“We’ve already talked about how you can deploy Media Composer or other tools in a virtualized environment, or how you can use these kind of cloud environments to extend or advance production,” said Rosica. “We also see that these things are going to allow us to impact workloads. You’ll see us continue to power our MediaCentral platform, editorial management of MediaCentral, and even things like Media Composer with AI to help them get to the job faster. We can help automate functions, automate environments and use cloud technologies to allow people to collaborate better, to share better, to just power their workloads. You’re going to see a lot from us over time.”

Autodesk’s Flame 2020 features machine learning tools

Autodesk’s new Flame 2020 offers a new machine-learning-powered feature set with a host of new capabilities for Flame artists working in VFX, color grading, look development or finishing. This latest update will be showcased at the upcoming NAB Show.

Advancements in computer vision, photogrammetry and machine learning have made it possible to extract motion vectors, Z depth and 3D normals based on software analysis of digital stills or image sequences. The Flame 2020 release adds built-in machine learning analysis algorithms to isolate and modify common objects in moving footage, dramatically accelerating VFX and compositing workflows.

New creative tools include:
· Z-Depth Map Generator — Enables Z-depth map extraction using machine learning analysis, reclaiming scene depth from live-action footage. This allows artists doing color grading or look development to quickly analyze a shot and apply effects accurately based on distance from camera (see the sketch after this list).
· Human Face Normal Map Generator — Since all human faces have common recognizable features (relative distance between the eyes, the nose, the location of the mouth), machine learning algorithms can be trained to find these patterns. This tool can be used to simplify accurate color adjustment, relighting and digital cosmetic/beauty retouching.
· Refraction — With this feature, a 3D object can now refract, distorting background objects based on its surface material characteristics. To achieve convincing transparency through glass, ice, windshields and more, the index of refraction can be set to an accurate approximation of real-world light bending.
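
To make the Z-depth idea concrete, here is a minimal sketch in plain NumPy — not Flame’s implementation — of how a per-pixel depth map can gate a grade so an adjustment lands only on, say, the background of a shot. The near, far and gain parameters are hypothetical controls.

    import numpy as np

    def grade_by_depth(image, depth, near=0.0, far=0.4, gain=0.6):
        # `image` is an HxWx3 float RGB frame; `depth` is HxW, normalized so
        # 0 = at camera and 1 = far away. Pixels outside [near, far] form a
        # boolean matte, and only those pixels receive the gain adjustment.
        background = (depth < near) | (depth > far)
        graded = image.copy()
        graded[background] *= gain  # pull the background down
        return graded

    # Toy 2x2 frame: left column near camera, right column far away.
    img = np.ones((2, 2, 3))
    z = np.array([[0.1, 0.9], [0.2, 0.8]])
    print(grade_by_depth(img, z)[..., 0])  # far pixels drop to 0.6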

Productivity updates include:
· Automatic Background Reactor — Triggered immediately after a shot is modified, this mode sends jobs to process. Accelerated, automated background rendering allows Flame artists to keep projects moving while using GPU and system capacity to the fullest. This feature is available on Linux only and can function on a single GPU.
· Simpler UX in Core Areas — A new expanded full-width UX layout for MasterGrade, Image surface and several Map user interfaces is now available, making key tools easier to discover and access.
· Manager for Action, Image, Gmask — A simplified list schematic view, Manager makes it easier to add, organize and adjust video layers and objects in the 3D environment.
· OpenFX Support — Flame, Flare and Flame Assist version 2020 now include comprehensive support for industry-standard OpenFX creative plugins, both in Batch/BFX nodes and on the Flame timeline.
· Cryptomatte Support — Available in Flame and Flare, support for the open-source Cryptomatte rendering technique offers a new way to pack alpha channels for every object in a 3D rendered scene.

For single-user licenses, Linux customers can now opt for monthly, yearly and three-year single-user licensing options. Customers with an existing Mac-only single-user license can transfer their license to run Flame on Linux.

Flame, Flare, Flame Assist and Lustre 2020 will be available on April 16, 2019, at no additional cost to customers with a current Flame Family 2019 subscription. Pricing details can be found on the Autodesk website.

Atomos’ new Shogun 7: HDR monitor, recorder, switcher

The new Atomos Shogun 7 is a seven-inch HDR monitor, recorder and switcher that offers an all-new 1500-nit, daylight-viewable, 1920×1200 panel with a 1,000,000:1 contrast ratio and 15+ stops of dynamic range displayed. It also offers ProRes RAW recording and realtime Dolby Vision output. Shogun 7 will be available in June 2019, priced at $1,499.

The Atomos screen uses a combination of advanced LED and LCD technologies that together offer deeper, better blacks, which the company says rival OLED screens, “but with the much higher brightness and vivid color performance of top-end LCDs.”

A new 360-zone backlight is combined with this new screen technology and controlled by the Dynamic AtomHDR engine to show millions of shades of brightness and color. It allows Shogun 7 to display 15+ stops of real dynamic range on-screen. The panel, says Atomos, is also incredibly accurate, with ultra-wide color and 105% of DCI-P3 covered, allowing for the same on-screen dynamic range, palette of colors and shades that your camera sensor sees.
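
Those two specs are consistent with each other: dynamic range in stops is the base-2 logarithm of the contrast ratio, so a 1,000,000:1 panel has headroom for roughly 20 stops, comfortably covering the 15+ stops Atomos claims to display. A one-line check in Python:

    import math

    def stops_from_contrast(ratio):
        # Photographic stops = log2(brightest level / darkest level).
        return math.log2(ratio)

    print(round(stops_from_contrast(1_000_000), 1))  # ~19.9 stops of headroom
    print(round(stops_from_contrast(2 ** 15)))       # 15 stops needs only ~32,768:1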

Atomos and Dolby have teamed up to create Dolby Vision HDR “live” — a tool that allows you to see HDR live on-set and carry your creative intent from the camera through into HDR post. Dolby has optimized its target-display HDR processing algorithm, which Atomos runs inside the Shogun 7. It brings realtime, automatic frame-by-frame analysis of the Log or RAW video and processes it for optimal HDR viewing on a Dolby Vision-capable TV or monitor over HDMI. Connect Shogun 7 to a Dolby Vision TV and AtomOS 10 automatically analyzes the image, queries the TV and applies the right color and brightness profiles for the maximum HDR experience on that display.

Shogun 7 records images at up to 5.7Kp30, 4Kp120 or 2Kp240 slow motion from compatible cameras, in RAW/Log or HLG/PQ over SDI/HDMI. Footage is stored directly to AtomX SSDmini or approved off-the-shelf SATA SSD drives. There are recording options for Apple ProRes RAW and ProRes, Avid DNx and Adobe CinemaDNG RAW codecs. Shogun 7 has four SDI inputs plus an HDMI 2.0 input, with both 12G-SDI and HDMI 2.0 outputs. It can record ProRes RAW at up to 5.7Kp30, 4Kp120 DCI/UHD and 2Kp240 DCI/HD, depending on the camera’s capabilities. Also, 10-bit 4:2:2 ProRes or DNxHR recording is available up to 4Kp60 or 2Kp240. The four SDI inputs enable the connection of most quad-link, dual-link or single-link SDI cinema cameras. Pixels are preserved with data rates of up to 1.8Gb/s.
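
At that 1.8Gb/s peak it is straightforward to estimate how long a given drive will last; the 1TB capacity below is an illustrative figure, not a specific Atomos-approved SSD:

    def record_minutes(capacity_tb, rate_gbps):
        # Recording time: capacity in terabytes at a sustained rate in gigabits/s.
        return capacity_tb * 1e12 * 8 / (rate_gbps * 1e9) / 60

    print(round(record_minutes(1, 1.8)))  # ~74 minutes of worst-case RAW on a 1TB drive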

In terms of audio, Shogun 7 eliminates the need for a separate audio recorder. Users can add 48V stereo mics via an optional balanced XLR breakout cable, or select mic or line input levels, plus record up to 12 channels of 24/96 digital audio from HDMI or SDI. Monitoring selected stereo tracks is via the 3.5mm headphone jack. There are dedicated audio meters, gain controls and adjustments for frame delay.

Shogun 7 features the latest version of the AtomOS 10 touchscreen interface, first seen on the Ninja V. The new body of Shogun 7 has a Ninja V-like exterior with ARRI anti-rotation mounting points on the top and bottom of the unit to ensure secure mounting.

AtomOS 10 on Shogun 7 has the full range of monitoring tools, including waveform, vectorscope, false color, zebras, RGB parade, focus peaking, pixel-to-pixel magnification, audio level meters and blue-only for noise analysis.

Shogun 7 can also be used as a portable touchscreen-controlled multi-camera switcher with asynchronous quad-ISO recording. Users can switch up to four 1080p60 SDI streams, record each plus the program output as a separate ISO, then deliver ready-for-edit recordings with marked cut-points in XML metadata straight to your NLE. The current Sumo19 HDR production monitor-recorder will also gain the same functionality in a free firmware update.

There is asynchronous switching, plus genlock in and out to connect to existing AV infrastructure. Once the recording is over, users can import the XML file into an NLE, and the timeline populates with all the edits in place. XLR audio from a separate mixer or audio board is recorded within each ISO, alongside two embedded channels of digital audio from the original source. The program stream always records the analog audio feed, as well as a second track that switches between the digital audio inputs to match the switched feed.

IDEA launches to create specs for next-gen immersive media

The Immersive Digital Experiences Alliance (IDEA) will launch at NAB 2019 with the goal of creating a suite of royalty-free specifications that address all immersive media formats, including emerging light field technology.

Founding members — including CableLabs, Light Field Lab, Otoy and Visby — created IDEA to serve as an alliance of like-minded technology, infrastructure and creative innovators working to facilitate the development of an end-to-end ecosystem for the capture, distribution and display of immersive media.

Such a unified ecosystem must support all displays, including highly anticipated light field panels. Recognizing that the essential launch point would be to create a common media format specification that can be deployed on commercial networks, IDEA has already begun work on the new Immersive Technology Media Format (ITMF).

ITMF will serve as an interchange and distribution format that will enable high-quality conveyance of complex image scenes, including six-degrees-of-freedom (6DoF), to an immersive display for viewing. Moreover, ITMF will enable the support of immersive experience applications including gaming, VR and AR, on top of commercial networks.

Recognized for its potential to deliver an immersive, true-to-life experience, light field media can be regarded as the richest and densest form of visual media. It thereby sets the highest bar for the features ITMF will need to support and for the new media-aware processing capabilities commercial networks must deliver.

Jon Karafin, CEO/co-founder of Light Field Lab, explains that “a light field is a representation describing light rays flowing in every direction through a point in space. New technologies are now enabling the capture and display of this effect, heralding new opportunities for entertainment programming, sports coverage and education. However, until now, there has been no common media format for the storage, editing, transmission or archiving of these immersive images.”

“We’re working on specifications and tools for a variety of immersive displays — AR, VR, stereoscopic 3D and light field technology, with light field being the pinnacle of immersive experiences,” says Dr. Arianne Hinds, Immersive Media Strategist at CableLabs. “As a display-agnostic format, ITMF will provide near-term benefits for today’s screen technology, including VR and AR headsets and stereoscopic displays, with even greater benefits when light field panels hit the market. If light field technology works half as well as early testing suggests, it will be a game-changer, and the cable industry will be there to help support distribution of light field images with the 10G platform.”

Starting with Otoy’s ORBX scene graph format, a well-established data structure widely used in advanced computer animation and computer games, IDEA will provide extensions to expand the capabilities of ORBX for light field photographic camera arrays, live events and other applications. Further specifications will include network streaming for ITMF and transcoding of ITMF for specific displays, archiving, and other applications. IDEA will preserve backwards-compatibility on the existing ORBX format.

IDEA anticipates releasing an initial draft of the ITMF specification in 2019. The alliance is also planning an educational seminar, to be held in Los Angeles this summer, explaining the requirements for immersive media and the benefits of the ITMF approach.

Photo credit: Light Field Lab, Inc. All rights reserved. Future Vision concept art of a room-scale holographic display.