
Category Archives: NAB

Review: Avid Media Composer Symphony 2018 v.12

By Brady Betzel

In February of 2018, we saw a seismic shift in the leadership at Avid. Chief executive officer Louis Hernandez Jr. was removed and subsequently replaced by Jeff Rosica. Once Rosica was installed, I think everyone who was worried Avid was about to be liquidated to the highest bidder breathed a sigh of temporary relief. Still unsure whether new leadership was going to right a tilting ship, I immediately wanted to see a new action plan from Avid, specifically on where Media Composer and Symphony were going.

Media Composer with Symphony

Not long afterward, I was happily reading how Avid was taking lessons from its past transgressions and listening to its clients. I heard Avid was taking tours around the industry and listening to what customers and artists needed from them. Personally, I was asking myself if Media Composer with Symphony would ever be the finishing tool that Avid DS was. I’m happy to say, it’s starting to look that way.

It appears from the outside that Rosica is indeed the breath of fresh air Avid needed. At NAB 2019, Avid teased the next iteration of Media Composer, version 2019, with an overhauled interface and improvements such as a 32-bit float color pipeline complete with ACES color management and a way to deliver IMF packages; a new engine with distributed processing; and a whole new product called Media Composer|Enterprise, all of which will really help sell this new Media Composer. But the 2019 update isn’t here quite yet, so until then I took a deep dive into Media Composer 2018 v12, which has many features editors, assistants and even colorists have been asking for: a new Avid Titler, shape-based color correction (with the Symphony option), new multicam features and more.

Titling
As an online editor who uses Avid Media Composer with the Symphony option about 60% of the time, titling is always a tricky subject. Avid has gone through some rough seas when dealing with how to fix the leaky hole known as the Avid Title Tool. The classic Avid Title Tool was basic but worked. However, if you aligned something in the Title Tool interface to the Title Safe zones, it might jump around once you closed the Title Tool interface. Fonts wouldn’t always stay the same when working across PC and MacOS platforms. The list goes on, and it is excruciatingly annoying.

Titler

Let’s take a look at some Avid history: In 2002, Avid tried to appease creators and introduced what was, at the time, a Windows-only titler: Avid Marquee. While Marquee was well-intentioned, it was extremely difficult to understand if you weren’t interested in 3D lighting, alignment and all sorts of motion graphics stuff that not all editors want to spend time learning. So most people didn’t use it, and if they did, it took a little while for anyone taking over the project to figure out what was done.

In December of 2014, Avid leaned on the NewBlue Titler, which would work in projects higher than 1920×1080 resolution. Unfortunately, many editors ran into very long renders at the end, and a lot bailed on it. Most decided to go out of house and create titles in Adobe Photoshop and Adobe After Effects. While this all relates to my experience, I assume others feel the same.

In Avid Media Composer 2018, the company has introduced the Avid Titler, which in the Tools menu is labeled Avid Titler +. It works like an effect rather than as a rendered piece of media with separate Alpha and Fill layers, as in the traditional Avid Title Tool. This method is similar to how NewBlue or Marquee functioned. However, the Avid Titler lets you type directly on the record monitor; adding a title is as easy as marking an in and out point and clicking the T+ button on the timeline.

You can specify things like kerning, shadow, outlines, underlines, boxes, backgrounds and more. One thing I found peculiar was that under Face, the rotation settings rotate individual letters and not the entire word by default. I reached out to Avid and they are looking into making the entire word rotation option the default in the mini toolbar of Avid Titler. So stay tuned.

Also, you can map your fast forward and rewind buttons to “Go To Next/Previous Event.” This lets you jump not only to the next edit in the timeline but also to the next/previous keyframe when in the Effect Editor. Typically, you click on the scrub line in the record window and then use those shortcuts to jump to the next keyframe; in the Avid Titler, it would just start typing in the text box. Furthermore, when I wanted to jump out of Effect Editor mode and back into Edit mode, I usually hit “y,” but that did not get me out of Effects mode (Avid did mention it is working on updates to the Avid Titler that would solve this issue). The new Avid Titler definitely has some bugs and needed improvements, and they are being addressed, but it’s a decent start toward a modern title editor.

Shape-based color correction

Color
If you want advanced color correction built into Media Composer, then you are going to want the Symphony option. Media Composer with the Symphony option allows for more detailed color correction using secondary color corrections as well as some of the newer updates, including shape-based color correction. Before Resolve and Baselight became more affordable, Symphony was the gold standard for color correction on a budget (and even not on a budget since it works so well in the same timeline the editors use). But what we are really here for is the 2018 v.12 update of Shapes.

With the Symphony option, you can now draw specific regions on the footage for your color correction to affect. It essentially works like a layer-based system such as Adobe Photoshop. You can draw shapes with the same familiar tools you’re used to from the Paint or AniMatte effects and then apply your brightness, saturation or hue swings in those areas only. On the color correction page, you can access all of these tools on the right-hand side, including softening, alpha view, serial mode and more.

When using the new shape-based tools, you must point the drop-down menu to “CC Effect.” From there you can add a bunch of shapes on top of each other, and they will play in realtime. If you want to lay a base correction down, you can specify it in the shape-based sidebar, then click a shape and dial in the specific areas to your or your client’s taste. You can check the “Serial Mode” box to have all corrections interact with one another, or uncheck it to keep each color correction a little more isolated — a really great option to keep in mind when correcting. Unfortunately, tracking a shape can only be done in the Effect Editor, so you need to kind of jump out of color correction mode, track, and then go back. It’s not the end of the world, but it would be infinitely better if you could track efficiently inside of the color correction window. Avid could even take it further by integrating a planar tracker like Mocha Pro.

Shape-based color correction

The new shape-based corrector also has an alpha view mode identified by the infinity symbol. I love this! I often find myself making mattes in the Paint tool, but it can now be done right in the color correction tool. The Symphony option is an amazing addition to Media Composer if you need to go further than simple color correction but not dive into a full color correction app like Baselight or Resolve. In fact, for many projects you won’t need much more than what Symphony can do. Maybe a +10 on the contrast, +5 on the brightness and +120 on the saturation and BAM a finished masterpiece. Kind of kidding, but wait until you see it work.

Multicam
The final update I want to cover is multicam editing and improvements to editing group clips. I cannot emphasize enough how much time this would have saved me as an assistant editor back in the prehistoric Media Composer days… I mean, we had dongles, and I even dabbled in the Meridian box. Literally days of grouping and regrouping could have been avoided with the Edit Group feature. But I did make a living fixing groups that were created incorrectly, so I guess this update is a Catch-22. Anyway, you can now edit groups in Media Composer by creating a group, right-clicking on it and selecting Edit Group. The group opens in the Record Monitor as a sequence, where you can move, nudge and even add cameras to a previously created group. Once you are finished, you can update the group and refresh any sequences that used it. One caveat: with mixed-frame-rate groups, Avid says committing that sequence might produce undesirable effects.

Editing workspace

Cost of Entry
How much does Media Composer cost these days? While you can still buy it outright, it seems more practical to go monthly since you automatically get updates, but it can still be a little tricky. Do you need PhraseFind and/or ScriptSync? Do you need the Symphony option? Do you need to access shared storage? There are multiple options depending on your needs. If you want everything, then Media Composer Ultimate for $49 per month is what you want. If you want Media Composer and just one add-on, like Symphony, it will cost $19 per month plus $199 per year for the Symphony option. If you want to test the waters before jumping in, you can always try Media Composer First.
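For those weighing the options, the arithmetic is simple. Here’s a quick sketch (in Python) comparing a year of Media Composer plus the Symphony add-on against a year of Ultimate, using only the prices quoted above:

```python
# Annualized cost comparison using the subscription prices above:
# Media Composer at $19/month plus the Symphony option at $199/year,
# versus Media Composer | Ultimate at $49/month.

def annual_cost(monthly, yearly_addons=0):
    """Total cost for one year: 12 monthly payments plus any yearly add-ons."""
    return monthly * 12 + yearly_addons

mc_plus_symphony = annual_cost(19, yearly_addons=199)  # 19*12 + 199 = 427
ultimate = annual_cost(49)                             # 49*12 = 588

print(f"MC + Symphony: ${mc_plus_symphony}/year")
print(f"Ultimate:      ${ultimate}/year")
print(f"Difference:    ${ultimate - mc_plus_symphony}/year")
```

At these prices, the à la carte route saves $161 per year if Symphony is the only add-on you need; Ultimate may come out ahead once you start stacking options like PhraseFind and ScriptSync.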

For a good breakdown of the Media Composer pricing structure, check out the page from KeyCode Media (a certified reseller). Additionally, www.freddylinks.com is a great resource chock-full of everything else Avid, written by Avid technical support specialist Fredrik Liljeblad out of Sweden.

Group editing

Summing Up
In the end, I have used Media Composer with Symphony for over 15 years, and it is the most reliable nonlinear editor I have used for supporting multiple editors in a shared network environment. While Adobe Premiere Pro, Apple Final Cut Pro X and Blackmagic Resolve are offering fancy new features and collaboration modes, Avid always seems to hold stable when I need it the most. These new improvements, a UI overhaul (set to debut in May), new leadership from Rosica and the confidence of his faithful employees all seem to be paying off and getting Avid back on the track it should have always been on.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

NAB 2019: Storage for M&E workflows

By Tom Coughlin

Storage is a vital element in modern post production, since that’s where the video content lives. Let’s look at trends in media post production storage and products shown at the 2019 NAB show. First let’s look at general post production storage architectures and storage trends.

My company produces the yearly “Digital Storage in Media and Entertainment Report,” so we keep an eye on storage all year round. The image to the right is a schematic from our 2018 report — a nonlinear editing station showing optional connections to shared online (or realtime) storage via a SAN or NAS (or even a cloud-based object storage system) and a host bus adapter (HBA or xGbE card). I hope this gives you some good background for what’s to come.

Our 2018 report also includes data from our annual Digital Storage in Media and Entertainment Professional Survey. The report projects that annual storage capacity demand will exceed 110 exabytes by 2023. In 2018, 48% of responding survey participants said they used cloud-based storage for editing and post production, and 56% said they have 1TB or more of storage capacity in the cloud. In 2018, Internet distribution was the most popular way to view proxies.

All of this proves that M&E pros will continue to use multiple types of digital storage to enable their workflows, with significant growth in the use of cloud storage for collaborative and field projects. With that in mind, let’s dig into some of the storage offerings that were on display at NAB 2019.

Workflow Storage
Dell Technologies said that significant developments in its work with VMware unlock the value of virtualization for applications and tools to automate many critical M&E workflows and operations. Dell EMC and VMware said that they are about to unveil the recipe book for making virtualization a reality for the M&E industry.

Qumulo announced an expansion of its cloud-native file storage offerings. The company introduced two new products — CloudStudio and CloudContinuity — as well as support for Qumulo’s cloud-native, distributed hybrid file system on the Google Cloud Platform (GCP). Qumulo has partnered with Google to support Qumulo’s hybrid cloud file system on GCP and on the Google Cloud Platform Marketplace. Enterprises will be able to take advantage of the elastic compute resources, operational agility and advanced services that Google’s public cloud offers. With the addition of the Google Cloud Platform, Qumulo is able to provide multi-cloud platform support, making it easy for users to store, manage and access their data, workloads and applications in both Amazon Web Services (AWS) and GCP. Qumulo also enables data replication between clouds for migration or multi-copy requirements.

M&E companies of any size can scale production into the public cloud with CloudStudio, which securely moves traditionally on-prem workspaces, including desktops, applications and data, to the public cloud on both the AWS and GCP platforms. Qumulo’s file storage software is the same whether on-prem or in the cloud, making the transition seamless and easy and eliminating the need to reconfigure applications or retrain users.

CloudContinuity enables users to automatically replicate their data from an on-prem Qumulo cluster to a Qumulo instance running in the cloud. Should a primary on-prem storage system experience a catastrophic failure, customers can redirect users and applications to the Qumulo cloud, where they will have access to all of their data immediately. CloudContinuity also enables quick, automated fail-back to an on-prem cluster in disaster recovery scenarios.

Quantum announced its VS-Series, designed for surveillance and industrial IoT applications. The VS-Series is available in a broad range of server choices, suitable for deployments with fewer than 10 cameras up to the largest environments with thousands of cameras. Using the VS-Series, security pros can efficiently record and store surveillance footage and run an entire security infrastructure on a single platform.

Quantum’s VS-Series architecture is based on the Quantum Cloud Storage Platform (CSP), a new software-defined storage platform specifically designed for storing machine- and sensor-generated data. Like storage technologies used in the cloud, the Quantum CSP is software-defined and can be deployed on bare metal, as a virtual machine or as part of a hyperconverged infrastructure. Unlike other software-defined storage technologies, the Quantum CSP was designed specifically for video and other forms of high-resolution content — engineered for extremely low latency, maximizing the streaming performance of large files to storage.

The Quantum Cloud Storage Platform allows high-speed video recording with optimal camera density and can host and run certified VMS management applications, recording servers and other building control servers on a single platform.

Quantum says that the VS-Series product line is being offered in a variety of deployment options, including software-only, mini-tower, and 1U, 2U and 4U hyperconverged servers.

Key VS-Series attributes:
– Supports high camera density and software architecture that enables users to run their entire security infrastructure on a single hyperconverged platform.
– Offers a software-defined platform with the broadest range of deployment options. Many appliances can scale out for more cameras or scale up for increased retention.
– Comes pre-installed with certified VMS applications and can be installed and configured in minutes.
- Offers a fault-tolerant design that minimizes hardware and software issues and is meant to virtually eliminate downtime.

Quantum was also showing its R-3000 at NAB. This box was designed for in-vehicle data capture for developing driver-assistance and autonomous-driving systems. This NAS box includes storage modules of 60TB using HDDs and 23TB or 46TB using SSDs. It runs off 12-volt power and features two 10GbE ports.

Arrow Distribution bundled NetApp storage appliances with Axle AI software. The three solutions offered are the VM100, VM200 and VM400 with 100TB, 200TB and 400TB, respectively, with 10GbE network interfaces and NetApp’s FAS architecture. Each configuration also includes an Intel-based application server running a five-user version of Axle AI 2019. The software includes a browser front-end that allows multiple users to tag, catalog and search their media files, as well as a range of AI-driven options for automatically cataloging and discovering specific visual and audio attributes within those files.

Avid Nexis|Cloudspaces

Avid Nexis|Cloudspaces is a storage as a service (SaaS) offering for post, news and sports teams, enabling them to store and park media and projects not currently in production in the cloud, leveraging Microsoft Azure. This frees up local Nexis storage space for production work. The company is offering all Avid Nexis users a limited-time free offer of 2TB of Microsoft Azure storage that is auto-provisioned for easy setup and can scale as needed. Avid Nexis manages these Cloudspaces alongside local workspaces, allowing unified content management.

DDP was showing a rack with hybrid SSD/HDD storage that the company says provides 24/7/365 reliable operation with zero interruptions and a transparent failover setup. DDP has redesigned its GUI to provide faster operation and easier use.

Facilis displayed its new Hub shared storage line developed specifically for media production workflows. Built as an entirely new platform, Facilis Hub represents the evolution of the Facilis shared file system with the block-level virtualization and multi-connectivity performance required in shared creative environments. This solution offers both block-mode Fibre Channel and Ethernet connectivity simultaneously, allowing connection through either method with the same permissions, user accounts and desktop appearance.

Facilis’ Object Cloud is an integrated disk-caching system for cloud and LTO backup and archive that includes up to 100TB of cloud storage for one low yearly cost. A native Facilis virtual volume can display cloud, tape and spinning disk data in the same directory structure, on the client desktop. Every Facilis Hub shared storage server comes with unlimited seats of the Facilis FastTracker asset tracking application. The Object Cloud software and storage package is available for most Facilis servers running version 7.2 or higher.

Facilis also had particular product updates. The Facilis 8 has 1GB/s data rates through standard dual-port 10GbE, with options for 40GbE and Fibre Channel connectivity, in 32TB, 48TB and 64TB capacities. The Facilis Hub 16 model offers 2GB/s speeds with 16 HDDs in 64TB, 96TB and 128TB capacities. The company’s Hub Hybrid 16 model integrates SSDs into a high-capacity HDD-based storage system, offering performance of 3GB/s and 4GB/s. With two or more Hub 16 or Hub 32 servers attached through 32Gb Fibre Channel controllers, Facilis Hub One configurations can be fully redundant, with multi-server bandwidth aggregated into a single point of network connectivity. The Hub One starts at 128GB and scales to 1PB.
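To put aggregate throughput figures like these in context, here’s a rough, hypothetical sizing sketch showing how many concurrent playback streams fit in a given data rate. The codec bitrates below are approximate assumptions on my part, not Facilis specifications:

```python
# Rough sizing sketch: how many concurrent playback streams fit in a
# shared-storage server's aggregate bandwidth. Bitrates are approximate
# assumptions for common intermediate codecs, not vendor specs.

CODEC_MBPS = {                      # megabits per second (approximate)
    "ProRes 422 HQ 1080p25": 184,
    "ProRes 422 HQ UHD 25p": 734,
    "DNxHR HQX UHD 25p": 746,
}

def max_streams(server_gbytes_per_sec, codec, headroom=0.8):
    """Streams that fit in the given GB/s, keeping 20% headroom by default."""
    available_mbps = server_gbytes_per_sec * 8000 * headroom  # GB/s -> Mb/s
    return int(available_mbps // CODEC_MBPS[codec])

# A 2GB/s system (like the Hub 16 figure quoted above):
print(max_streams(2, "ProRes 422 HQ UHD 25p"))
```

The headroom factor matters in practice: shared storage that runs flat-out at its rated speed tends to drop frames the moment another editor starts a render, so sizing to 80% or so of the advertised rate is the conservative call.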

Pixit Media announced the launch of PixStor 5, the latest version of its leading scale-out data-driven storage platform. According to the company, “PixStor 5 is an enterprise-class scale-out NAS platform delivering guaranteed 99% performance for all types of workflow and a single global namespace across multiple storage tiers — from on-prem to the cloud.”

New PixStor 5 highlights include:

PixStor 5

– Secure container services — This new feature offers multi-tenancy from a single storage fabric. PixStor 5 enables creative studios to deploy secure media environments without crippling productivity and creativity and aligns with TPN security accreditation standards to attract A-list clients.
– Cloud workflow flexibility — PixStor 5 expands your workflows cost-effectively into the cloud with fully automated seamless deployment to cloud marketplaces, enabling hybrid workflows for burst render and cloud-first workflows for global collaboration. PixStor 5 will soon be available in the Google Cloud Platform Marketplace, followed shortly by AWS and Azure.
– Enhanced search capabilities — Using machine learning and artificial intelligence cloud-based tools to drive powerful media indexing and search capabilities, users can perform fast, easy and accurate content searches across their entire global namespace.
– Deep granular analytics — With single-pane-of-glass management and user-friendly dashboards, PixStor 5 allows a holistic view of the entire filesystem and delivers business-relevant metrics to reinforce storage strategies.

GB Labs launched new software, features and updates to its FastNAS and Space, Echo and Vault ranges at NAB. The Space, Echo and Vault ranges got intelligent new software features, including the Mosaic asset organizer and the latest Analytics Center, along with brand-new Core.4 and Core.4 Lite software. The new Core software is also now included in the FastNAS product range.

GB Labs

Mosaic software, which already features on the FastNAS range, could be compared to a MAM. It is an asset organizer that can automatically scour all in-built metadata and integrate with AI tagging systems to give users the power to find what they’re looking for without having to manually enter any metadata.

Analytics Center gives users visibility into their network so they can see how they’re using their data, offering a better understanding of individual or system-wide use, along with suggestions on how to optimize their systems more quickly and at a lower cost.

The new Core.4 software for both ranges builds on GB Labs’ current Core.3 OS, offering a high-performance custom OS built specifically to serve media files. It delivers stable performance for every user and gets the most from the least amount of disk, which saves power.

EditShare’s flagship EFS enterprise scale-out storage solution was on display. It was developed for large-scale media organizations and supports hundreds of users simultaneously, with embedded tools for sharing media and collaborating across departments, across sites and around the world.

EditShare was showcasing advancements in its EFS File Auditing technology, the industry’s only realtime auditing platform designed to manage, monitor and secure your media from inception to delivery. EFS File Auditing keeps track of all digital assets and captures every digital footprint that a file takes throughout its life cycle, including copying, modifying and deleting of any content within a project.

Storbyte introduced its eco-friendly SBJ-496 at the 2019 NAB show. According to the company, this product is a new design in high-capacity disk systems for long-term management of digital media content with enterprise-class availability and data services. Ideal for large archive libraries, the SBJ-496 requires little to no electricity to maintain data, and its environmentally friendly green design allows unrestricted air flow, generates minimal heat and saves on cooling expenses.

EcoFlash

The new EcoFlash SBS-448, for digital content creation and streaming, is an efficient solid-state storage array that can deliver over 20GB of data per second. The company says the EcoFlash SBS-448 consumes less than half the electrical power of comparable arrays and produces far less heat. Its patented design extends its lifespan significantly, resulting in a total operating cost per terabyte that Storbyte claims is three to five times lower.

NGD Systems was showing its computational storage product with several system partners at NAB, including at the Echostreams booth for its 1U platforms. NGD said that its M.2 and upcoming EDSFF form factors can be used in dense and performance-optimized solutions within the EchoStreams 1U server and canister system. In addition to providing data analytics and realtime analysis capture, the combination of NGD Systems products and EchoStreams 1U platforms allow for deployment at the extreme edge for use in onsite video acquisition and post processing at the edge.

OpenDrives was showcasing its Atlas software platform and product family of shared storage solutions. Its NAB demo was built on a single Summit system, including the NVMe-powered OmniDrive media accelerator, which significantly boosts editorial, transcoding, color grading and visual effects shared workflows. OpenDrives is moving to a 2U form factor in its manufacturing, streamlining systems without sacrificing performance.

iX Systems said that its TrueNAS enterprise storage appliances deliver a range of features and scalability well suited to next-gen M&E workflows. AIC had an exhibit showing several enterprise storage systems, including some with NGD Systems computational storage SSDs. Promise Technology said that its VTrak NAS has been optimized for video application environments. Sony was offering PCIe SSD data storage servers. Other companies showing workflow storage products included Asustor, elements, PAC Storage and Rocstor.

Conclusions
The media and entertainment industry has unique requirements for storage to support modern digital workflows. A number of large and small companies have come up with a variety of local and cloud-based approaches to provide storage for post production applications. The NAB show is one of the world’s largest forums for such products and a great place to learn about what the digital storage and memory industry has to offer media and entertainment professionals.


Tom Coughlin, president of Coughlin Associates, is a digital storage analyst and business/technology consultant. He is active with SMPTE, SNIA and the IEEE (he is president of IEEE-USA), as well as the CES, where he is chairman of the Future Directions Committee, and other pro organizations.


NAB 2019: postPerspective Impact Award winners

postPerspective has announced the winners of our Impact Awards from NAB 2019. Seeking to recognize debut products with real-world applications, the postPerspective Impact Awards are voted on by an anonymous judging body made up of respected industry artists and pros (to whom we are very grateful). It’s working pros who are going to be using these new tools — so we let them make the call.

It was fun watching the user ballots come in and discovering which products most impressed our panel of post and production pros. There are no entrance fees for our awards. All that is needed is the ability to impress our voters with products that have the potential to make their workdays easier and their turnarounds faster.

We are grateful for our panel of judges, which grew even larger this year. NAB is exhausting for all, so their willingness to share their product picks and takeaways from the show isn’t taken for granted. These men and women truly care about our industry and sharing information that helps their fellow pros succeed.

To be successful, you can’t operate in a vacuum. We have found that companies who listen to their users, and make changes/additions accordingly, are the ones who get the respect and business of working pros. They aren’t providing tools they think are needed; they are actively asking for feedback. So, congratulations to our winners and keep listening to what your users are telling you — good or bad — because it makes a difference.

The Impact Award winners from NAB 2019 are:

• Adobe for Creative Cloud and After Effects
• Arraiy for DeepTrack with The Future Group’s Pixotope
• ARRI for the Alexa Mini LF
• Avid for Media Composer
• Blackmagic Design for DaVinci Resolve 16
• Frame.io
• HP for the Z6/Z8 workstations
• OpenDrives for Apex, Summit, Ridgeview and Atlas

(All winning products reflect the latest version of the product, as shown at NAB.)

Our judges also provided quotes on specific projects and trends that they expect will have an impact on their workflows.

Said one, “I was struck by the predicted impact of 5G. Verizon is planning to have 5G in 30 cities by end of year. The improved performance could reach 20x speeds. This will enable more leverage using cloud technology.

“Also, AI/ML is said to be the single most transformative technology in our lifetime. Impact will be felt across the board, from personal assistants, medical technology, eliminating repetitive tasks, etc. We already employ AI technology in our post production workflow, which has saved tens of thousands of dollars in the last six months alone.”

Another echoed those thoughts on AI and the cloud as well: “AI is growing up faster than anyone can reasonably productize. It will likely be able to do more than first thought. Post in the cloud may actually start to take hold this year.”

We hope that postPerspective’s Impact Awards give those who weren’t at the show, or who were unable to see it all, a starting point for their research into new gear that might be right for their workflows. Another way to catch up? Watch our extensive video coverage of NAB.


Cobalt Digital’s card-based solution for 4K/HDR conversions

Cobalt Digital was at NAB showing card-based solutions for openGear frames targeting 4K and HDR workflows. Cobalt’s 9904-UDX-4K up/down/cross converter and image processor offers economical SDR-to-HDR and HDR-to-SDR conversion for 4K.

John Stevens, director of engineering at Burbank post house The Foundation, calls it “a Swiss Army knife” for a post facility.

The 9904-UDX-4K upconverts 12G/6G/3G/HD/SD to either UHD1 3840×2160 square division multiplex (SDM) or two-sample interleave (2SI) quad 3G-SDI-based formats, or it can output SMPTE ST 2082 12G-SDI for single-wire 4K transport. With both 12G-SDI and quad 3G-SDI inputs, the 9904-UDX-4K can downconvert 12G and quad UHD. The 9904-UDX-4K provides an HDMI 2.0 output for economical 4K video monitoring and offers numerous options, including SDR-to-HDR conversion and color correction.
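If you’re wondering why UHD needs either quad 3G-SDI links or a single 12G-SDI wire, some back-of-the-envelope math makes it clear. This simplified sketch counts only active pixels for 10-bit 4:2:2 video; real SDI interfaces add blanking overhead, which is why 2160p60 lands on a nominal 11.88Gb/s “12G” link:

```python
# Back-of-the-envelope check on UHD transport: uncompressed 10-bit 4:2:2
# video carries 20 bits per pixel (10-bit luma plus 10 bits of shared
# chroma per pixel). This counts active picture only, ignoring blanking.

def active_video_gbps(width, height, fps, bits_per_pixel=20):
    """Approximate active-picture data rate in Gb/s for 10-bit 4:2:2."""
    return width * height * fps * bits_per_pixel / 1e9

uhd_60 = active_video_gbps(3840, 2160, 60)  # ~9.95 Gb/s -> needs 12G-SDI
hd_60  = active_video_gbps(1920, 1080, 60)  # ~2.49 Gb/s -> fits 3G-SDI

print(f"UHD 2160p60: {uhd_60:.2f} Gb/s")
print(f"HD  1080p60: {hd_60:.2f} Gb/s")
```

At roughly 2.5Gb/s per 1080p60 picture, four 3G-SDI links — each carrying one quadrant (SDM) or interleaved samples (2SI) — add up to the same payload a single 12G-SDI wire carries, which is why the card accepts either flavor.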

The 9904-UDX-4K-IP model offers the same functionality as the SDI-based 9904-UDX-4K, plus dual 10GigE ports to support the emerging uncompressed video/audio/data-over-IP standards.

The 9904-UDX-4K-DSP model provides the same functionality as the 9904-UDX-4K model and also offers a DSP-based platform that supports multiple audio DSP options, including Dolby realtime loudness leveling (automatic loudness processing), Dolby E/D/D+ encode/decode and Linear Acoustic Upmax automatic upmixing. Embedded audio and metadata are properly delayed and re-embedded to match any video processing delay, with full adjustment available for audio/video offset.

The product’s high-density openGear design allows for up to five 9904-UDX-4K cards to be installed in one 2RU openGear frame. Card control/monitoring is available via the DashBoard user interface, integrated HTML5 web interface, SNMP or Cobalt’s RESTful-based Reflex protocol.

“I have been looking for a de-embedder that will work with SMPTE ST-2048 raster sizes — specifically 2048×1080 and 4096×2160,” explains Stevens. “The reason this is important is Netflix deliverables require these rasters. We use all embedded audio and I need to de-embed for monitoring. The same Cobalt Digital card will take almost every SDI input from quad link to 12G and output HDMI. There are other converters that will do some of the same things, but I haven’t seen anything that does what this product does.”


NAB 2019: An engineer’s perspective

By John Ferder

Last week I attended my 22nd NAB, and I’ve got the Ross lapel pin to prove it! This was a unique NAB for me. I attended my first 20 NABs with my former employer, and most of those had me setting up the booth visits for the entire contingent of my co-workers and making sure that the vendors knew we were at each booth and were ready to go. Thursday was my “free day” to go wandering and looking at the equipment, cables, connectors, test gear, etc., that I was looking for.

This year, I’m part of a new project, so I went with a shopping list and a rough schedule with the vendors we needed to see. While I didn’t get everywhere I wanted to go, the three days were very full and very rewarding.

Beck Video IP panel

Sessions and Panels
I also got the opportunity to attend the technical sessions on Saturday and Sunday. I spent my time at the BEITC in the North Hall and the SMPTE Future of Cinema Conference in the South Hall. Beck TV gave an interesting presentation on constructing IP-based facilities of the future. While SMPTE ST 2110 has been completed and issued, there are still implementation issues, as NMOS is still being developed. Today's systems are, and for the time being will remain, hybrid facilities. The decision to be made is whether the facility will be built on an IP routing switcher core with gateways to SDI, or on an SDI routing switcher core with gateways to IP.

Although more expensive, building around an IP core would be more efficient and future-proof. Fiber infrastructure design, test equipment and finding engineers who are proficient in both IP and broadcast (the “Purple Squirrels”) are large challenges as well.

A lot of attention was also paid to cloud production and distribution, both in the BEITC and the FoCC. One such presentation, at the FoCC, was on VFX in the cloud with an eye toward the development of 5G. Nathaniel Bonini of BeBop Technology reported that BeBop has a new virtual studio partnership with Avid, and that the cloud allows tasks to be performed in a “massively parallel” way. He expects that 5G mobile technology will facilitate virtualization of the network.

VFX in the Cloud panel

Ralf Schaefer of the Fraunhofer Heinrich Hertz Institute expressed his belief that all devices will be attached to the cloud via 5G, resulting in no cables and no mobile storage media. For AR/VR distribution, 5G will render the scene in the network and transmit it directly to the viewer. Denise Muyco of StratusCore provided a link to a virtual workplace: https://bit.ly/2RW2Vxz. She felt that 5G would speed the collaboration process between artist and client, making it nearly “friction-free.” While there are always security concerns, 5G would also help prosumer creators provide more content.

Chris Healer of The Molecule stated that 5G should help to compress VFX and production workflows, enable cloud computing to work better and perhaps provide realtime feedback for more perfect scene shots, showing line composites of VR renders to production crews in remote locations.

The Floor
I was very impressed with a number of manufacturers this year. Ross Video demonstrated new capabilities of Inception and OverDrive. Ross also showed its new Furio SkyDolly three-wheel rail camera system. In addition, 12G single-link capability was announced for Acuity, Ultrix and other products.

ARRI AMIRA (Photo by Cotch Diaz)

ARRI showed a cinematic multicam system built using the AMIRA camera with a DTS FCA fiber camera adapter back and a base station controllable by Sony RCP1500 or Skaarhoj RCP. The Sony panel will make broadcast-centric people comfortable, but I was very impressed with the versatility of the Skaarhoj RCP. The system is available using either EF, PL, or B4 mount lenses.

During the show, I learned from one of the manufacturers that one of my favorite OLED evaluation monitors is going to be discontinued. This was bad news for the new project I've embarked on. Then we came across the Plura booth in the North Hall. Plura was showing a new OLED monitor, the PRM-224-3G. It is a 24.5-inch diagonal OLED featuring two 3G/HD/SD-SDI and three analog inputs, built-in waveform monitors and vectorscopes, LKFS audio measurement, PQ and HLG support, 10-bit color depth, 608/708 closed caption monitoring and more, all for a very attractive price.

Sony showed the new HDC-3100/3500 3xCMOS HD cameras with global shutter. These have an upgrade program to UHD/HDR with an optional processor board and signal format software, as well as a 12G-SDI extension kit. There is an optional single-mode fiber connector kit to extend the maximum distance between camera and CCU to 10 kilometers. The CCUs work with the established 1000/1500 series of remote control panels and master setup units.

Sony’s HDC-3100/3500 3xCMOS HD camera

Canon showed its new line of 4K UHD lenses. One of my favorite lenses has been the HJ14ex4.3B HD wide-angle portable lens, which I have installed in many of the studios I've worked in. Canon showed the CJ14ex4.3B at NAB, and I was even more impressed with it. The 96.3-degree horizontal angle of view is stunning, and the minimization of chromatic aberration is carried over and perhaps improved from the HJ version. It features correction data that support the BT.2020 wide color gamut. It works with the existing zoom and focus demand controllers for earlier lenses, so it's easily integrated into existing facilities.

Foot Traffic
The official total of registered attendees was 91,460, down from 92,912 in 2018. The Evertz booth was actually easy to walk through at 10 a.m. on Monday, which I found surprising given the breadth of interesting new products and technologies Evertz had to show this year. The South Hall had the big crowds, but Wednesday seemed emptier than usual, almost like a Thursday.

The NAB announced that next year’s exhibition will begin on Sunday and end on Wednesday. That change might boost overall attendance, but I wonder how adversely it will affect the attendance at the conference sessions themselves.

I still enjoy attending NAB every year, seeing the new technologies and meeting with colleagues and former co-workers and clients. I hope that next year’s NAB will be even better than this year’s.

Main Image: Barbie Leung.


John Ferder is the principal engineer at John Ferder Engineer, currently Secretary/Treasurer of SMPTE, an SMPTE Fellow, and a member of IEEE. Contact him at john@johnferderengineer.com.


NAB 2019: A cinematographer’s perspective

By Barbie Leung

As an emerging cinematographer, I always wanted to attend an NAB show, and this year I had my chance. I found that no amount of research can prepare you for the sheer size of the show floor, not to mention the backrooms, panels and after-hours parties. As a camera operator as well as a cinematographer who is invested in the post production and exhibition end of the spectrum, I found it absolutely impossible to see everything I wanted to or catch up with all the colleagues and vendors I wanted to. This show is a massive and draining ride.

Panasonic EVA1

There was a lot of buzz in the ether about 5G technology. The consensus seems to be that 5G, fast and accurate, will be the tipping point in implementing a lot of the tech that's been talked about for years but hasn't quite taken off yet, including autonomous vehicles and 8K streaming stateside.

It’s hard to deny the arrival of 8K technology while staring at the detail and textures on an 80-inch Sharp 8K professional display. Every roof tile, every wave in the ocean is rendered in rich, stunning detail.

In response to the resolution race, on the image capture end of things, Arri had already announced and started taking orders for the Alexa Mini LF — its long-awaited entry into the large format game — in the week before NAB.

Predictably, at NAB we saw many lens manufacturers highlighting full-frame coverage. Canon introduced its Sumire Prime lenses, while Fujinon announced the Premista 28-100mm T2.9 full-format zoom.

Sumire Prime lenses

Camera folks, including many ASC members, are embracing large format capture for sure, but some insist the appeal lies not so much in the increased resolution, but rather in the depth and overall image quality.

Meanwhile, back in 35mm sensor land, Panasonic continues its energetic push of the EVA1 camera. Aside from presentations at its booth emphasizing “cinematic” images from this compact 5.7K camera, the company has done a subtle but not-too-subtle job of disseminating the EVA1 throughout the trade show floor. If you're at the Atomos booth, you'll find director/cinematographers like Elle Schneider presenting work shot on the EVA1 with Atomos recorders, the camera balanced on a Ronin-S, and if you stop by Tiffen, you'll find an EVA1 being flown next to the Alexa Mini.

There was a ton of motion control at the show, from Shotover's new compact B1 gyro-stabilized camera system to the affable folks at Arizona-based Defy, who showed off their Dactylcam Pro, an addictively smooth-to-operate cable-suspension rig. The Bolt high-speed Cinebot's robotic arms came complete with a spinning hologram.

Garrett Brown at the Tiffen booth.

All this new gimbal technology is an ever-evolving game changer. Steadicam inventor Garrett Brown was on hand at the Tiffen booth to show the new M2 sled, which has motors elegantly built into the base. He enthusiastically heralded that camera operators can go faster and more “dangerously” than ever. There was so much motion control that it vied for attention alongside all the talk of 5G, 8K and LED lighting.

Some veterans of the show said this year's edition felt “less exciting” than shows of the past eight to 10 years. There were fewer big product launch announcements, perhaps because in past years companies were unable to fulfill the rush of post-NAB orders for new products for 12 or even 18 months. Vendors have become more conservative with what they hype, more careful with what they promise.

For a new attendee like me, there was more than enough new tech to explore. Above all else, NAB is really about the people you meet. The tech will be new next year, but the relationships you start and build at NAB are meant to last a career.

Main Image: ARRI’s Alexa Mini LF.


Barbie Leung is a New York-based cinematographer and camera operator working in independent film and branded content. Her work has played Sundance, the Tribeca Film Festival and Outfest. You can follow her on Instagram at @barbieleungdp.


Colorfront at NAB with 8K HDR, product updates

Colorfront, which makes on-set dailies and transcoding systems, has rolled out new 8K HDR capabilities and updates across its product lines. The company has also deepened its technology partnership with AJA and entered into a new collaboration with Pomfort to bring more efficient color and HDR management on-set.

Colorfront Transkoder is a post workflow tool for handling UHD, HDR camera, color and editorial/deliverables formats, with recent customers such as Sky, Pixelogic, The Picture Shop and Hulu. With a new HDR GUI, Colorfront's Transkoder 2019 performs the realtime decompression/de-Bayer/playback of Red and Panavision DXL2 8K R3D material displayed on a Samsung 82-inch Q900R QLED 8K Smart TV in HDR and in full 8K resolution (7680×4320). The de-Bayering process is optimized through Nvidia GeForce RTX graphics cards with Turing GPU architecture (also available on Colorfront On-Set Dailies 2019), with 8K video output (up to 60p) using AJA Kona 5 video cards.
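De-Bayering is the step that turns the sensor's single-channel color mosaic into full RGB. As a rough illustration of the half-resolution approach often used for fast dailies playback, the sketch below collapses each 2×2 RGGB cell into one RGB pixel (this is my own simplified example assuming an RGGB pattern; a real R3D de-Bayer is far more sophisticated):

```python
import numpy as np

def half_res_debayer(raw):
    """Collapse an RGGB Bayer mosaic to half-resolution RGB:
    one output pixel per 2x2 sensor cell (R, mean of the two Gs, B)."""
    r = raw[0::2, 0::2]                              # red sites
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0    # average both greens
    b = raw[1::2, 1::2]                              # blue sites
    return np.dstack([r, g, b])
```

Trading resolution for speed this way is why half-res de-Bayer settings are popular for realtime dailies review, while mastering passes use slower, full-resolution demosaicing.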

“8K TV sets are becoming bigger, as well as more affordable, and people are genuinely awestruck when they see 8K camera footage presented on an 8K HDR display,” said Aron Jaszberenyi, managing director, Colorfront. “We are actively working with several companies around the world originating 8K HDR content. Transkoder’s new 8K capabilities — across on-set, post and mastering — demonstrate that 8K HDR is perfectly accessible to an even wider range of content creators.”

Powered by a re-engineered version of Colorfront Engine and featuring the HDR GUI and 8K HDR workflow, Transkoder 2019 supports camera/editorial formats including Apple ProRes RAW, Blackmagic RAW, ARRI Alexa LF/Alexa Mini LF and Codex HDE (High Density Encoding).

Transkoder 2019’s mastering toolset has been further expanded to support Dolby Vision 4.0 as well as Dolby Atmos for the home with IMF and Immersive Audio Bitstream capabilities. The new Subtitle Engine 2.0 supports CineCanvas and IMSC 1.1 rendering for preservation of content, timing, layout and styling. Transkoder can now also package multiple subtitle language tracks into the timeline of an IMP. Further features support fast and efficient audio QC, including solo/mute of individual tracks on the timeline, and a new render strategy for IMF packages enabling independent audio and video rendering.

Colorfront also showed the latest versions of its On-Set Dailies and Express Dailies products for motion pictures and episodic TV production. On-Set Dailies and Express Dailies both now support ProRes RAW, Blackmagic RAW, ARRI Alexa LF/Alexa Mini LF and Codex HDE. As with Transkoder 2019, the new version of On-Set Dailies supports real-time 8K HDR workflows to support a set-to-post pipeline from HDR playback through QC and rendering of HDR deliverables.

In addition, AJA Video Systems has released v3.0 firmware for its FS-HDR realtime HDR/WCG converter and frame synchronizer. The update introduces enhanced coloring tools together with several other improvements for broadcast, on-set, post and pro AV HDR production developed by Colorfront.

A new, integrated Colorfront Engine Film Mode offers an ACES-based grading and look creation toolset with ASC Color Decision List (CDL) controls, built-in LOOK selection including film emulation looks, and variable Output Mastering Nit Levels for PQ, HLG Extended and P3 colorspace clamp.

Since launching in 2018, FS-HDR has been used on a wide range of TV and live outside broadcast productions, as well as motion pictures including Paramount Pictures’ Top Gun: Maverick, shot by Claudio Miranda, ASC.

Colorfront licensed its HDR Image Analyzer software to AJA for AJA’s HDR Image Analyzer in 2018. A new version of AJA HDR Image Analyzer is set for release during Q3 2019.

Finally, Colorfront and Pomfort have teamed up to integrate their respective HDR-capable on-set systems. This collaboration, harnessing Colorfront Engine, will include live CDL reading in ACES pipelines between Colorfront On-Set/Express Dailies and Pomfort LiveGrade Pro, giving motion picture productions better control of HDR images while simplifying their on-set color workflows and dailies processes.


AWS at NAB with a variety of partners, cloud workflows

During NAB 2019, Amazon Web Services (AWS) showcased advances for content creation, media supply chains and content distribution that improve agility and enhance quality across video workflows. Demonstrations included enhanced live and on-demand video workflows, such as next-gen transcoding, studio in the cloud, content protection, low latency and personalization. The company also highlighted cloud-based machine learning capabilities for content redaction, highlight creation, video clipping, live subtitling and metadata extraction.

AWS was joined by 12 technology partners in showing solutions that help users create, protect, distribute and monetize streaming video content. More than 60 Amazon Partner members across the show floor demonstrated media solutions built on AWS and interoperable with AWS services to deliver scalable video workflows.

Here are some workflows highlighted:
• Studio in the cloud – Users can deploy a creative studio in the cloud for visual effects, animation and editing workloads. They can scale rendering, virtual workstations and data storage globally with AWS Thinkbox Deadline, Amazon Elastic Compute Cloud (EC2) instances and AWS Cloud storage options such as Amazon Simple Storage Service (Amazon S3), Amazon FSx and more.
• Next-generation transcoding – AWS Elemental MediaConvert spotlighted advanced features for file-based video processing. Support for IMF inputs and CMAF output simplifies video delivery, and integrated Quality-Defined Variable Bitrate (QVBR) rate control enables high-quality video while lowering bitrates, storage and bandwidth requirements.
• Cloud DVR services – AWS Elemental MediaPackage enables an end-to-end cloud DVR workflow that lets content providers deliver DVR-like experiences, such as catch-up and start-over functionality for viewing on mobile and other over-the-top (OTT) devices.
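For context on the QVBR mode mentioned above: instead of fixing an average bitrate, QVBR targets a perceptual quality level and caps the peak bitrate. A hypothetical fragment of a MediaConvert H.264 codec-settings block is sketched below (field names reflect my understanding of the MediaConvert job schema; verify against the current API documentation before use):

```python
# Hypothetical MediaConvert H264Settings fragment showing QVBR rate control.
# This would sit under VideoDescription.CodecSettings.H264Settings in a
# job submitted via the MediaConvert API.
qvbr_h264_settings = {
    "RateControlMode": "QVBR",     # quality-defined variable bitrate
    "QvbrSettings": {
        "QvbrQualityLevel": 7,     # target quality, 1 (lowest) to 10
    },
    "MaxBitrate": 8_000_000,       # hard ceiling in bits per second
}
```

The encoder spends bits only where the content needs them, which is how QVBR lowers average bitrate, storage and bandwidth at a consistent visual quality.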

AWS also highlighted intelligent workflows and automated capabilities:
• Media-to-cloud migration – Media asset management tools integrate with AWS Elemental MediaConvert, Amazon S3 and Amazon CloudFront to accelerate migration of large-scale video archives into the cloud. Built-in metadata tools improve search and management for massive media archives.
• Smart language workflows – AWS Elemental Media Services and Amazon Machine Learning work together to automate realtime transcription, caption creation and multi-language subtitling and dubbing, as well as creation of video clips based on caption text.
• Deep media archive – The new Amazon S3 Glacier Deep Archive storage class is a low-cost cloud storage offering that enables customers to eliminate digital tape from their media infrastructures. It is ideally suited to cold media archives and to second copy and disaster recovery needs.


Quantum offers new F-Series NVMe storage arrays

During the NAB show, Quantum introduced its new F-Series NVMe storage arrays designed for performance, availability and reliability. Using non-volatile memory express (NVMe) Flash drives for ultra-fast reads and writes, the series supports massive parallel processing and is intended for studio editing, rendering and other performance-intensive workloads using large unstructured datasets.

Incorporating the latest Remote Direct Memory Access (RDMA) networking technology, the F-Series provides direct access between workstations and the NVMe storage devices, resulting in predictable and fast network performance. By combining these hardware features with the new Quantum Cloud Storage Platform and the StorNext file system, the F-Series offers end-to-end storage capabilities for post houses, broadcasters and others working in rich media environments, such as visual effects rendering.

The first product in the F-Series is the Quantum F2000, a 2U dual-node server with two hot-swappable compute canisters and up to 24 dual-ported NVMe drives. Each compute canister can access all 24 NVMe drives and includes processing power, memory and connectivity specifically designed for high performance and availability.

The F-Series is based on the Quantum Cloud Storage Platform, a software-defined block storage stack tuned specifically for video and video-like data. The platform eliminates data services unrelated to video while enhancing data protection, offering networking flexibility and providing block interfaces.

According to Quantum, the F-Series is as much as five times faster than traditional Flash storage/networking, delivering extremely low latency and hundreds of thousands of IOPs per chassis. The series allows users to reduce infrastructure costs by moving from Fibre Channel to Ethernet IP-based infrastructures. Additionally, users leveraging a large number of HDDs or SSDs to meet their performance requirements can gain back racks of data center space.

The F-Series is the first product line based on the Quantum Cloud Storage Platform.

HP shows off new HP Z6 and Z8 G4 workstations at NAB

HP was at NAB demoing its new HP Z6 and Z8 G4 workstations, which feature Intel Xeon Scalable processors and Intel Optane DC persistent memory technology to eliminate the barrier between memory and storage for compute-intensive workflows, including machine learning, multimedia and VFX. The new workstations offer accelerated performance with a processor architecture that allows users to work faster and more efficiently.

Intel Optane DC allows users to improve system performance by moving large datasets closer to the CPU, where they can be accessed, processed and analyzed in realtime and in a more affordable way. Because the memory is persistent, data survives a power cycle or application closure. Once applications are written to take advantage of this new technology, users will benefit from accelerated workflows and little or no downtime.

Targeting 8K video editing in realtime and for rendering workflows, the HP Z6 G4 workstation is equipped with two next-generation Intel Xeon processors providing up to 48 total processor cores in one system, Nvidia and AMD graphics and 384GB of memory. Users can install professional-grade storage hardware without using standard PCIe slots, offering the ability to upgrade over time.

Powered by up to 56 processing cores and up to 3TB of high-speed memory, the HP Z8 G4 workstation can run complex 3D simulations, supporting VFX workflows and handling advanced machine learning algorithms. It is certified for some of the most-used software apps, including Autodesk Flame and DaVinci Resolve.

HP’s Remote Graphics Software (RGS), included with all HP Z workstations, enables remote workstation access from any Windows, Linux or Mac device.

Avid is collaborating with HP to test RGS with Media Composer|Cloud VM.

The HP Z6 G4 workstation with new Intel Xeon processors is available now for the base price of $2,372. The HP Z8 G4 workstation starts at $2,981.

AI and deep learning at NAB 2019

By Tim Nagle

If you’ve been there, you know. Attending NAB can be both exciting and a chore. The vast show floor spreads across three massive halls and several hotels, and it will challenge even the most comfortable shoes. With an engineering background and my daily position as a Flame artist, I am definitely a gear-head, but I feel I can hardly claim that title at these events.

Here are some of my takeaways from the show this year…

Tim Nagle

8K
If the rumor mill was to be believed, this year's event promised to be exciting. And for me, it did not disappoint. First impressions: 8K infrastructure is clearly the goal of the manufacturers. Massive data rates and more Ks are becoming the norm. Everybody seemed to have an 8K workflow announcement. As a Flame artist, I'm not exactly looking forward to working on 8K plates. Sure, it is a glorious number of pixels, but the challenges are very real. While this may be the hot topic of the show, the fact that it is on the horizon further solidifies the need for the industry at large to have a solid 4K infrastructure. Hey, maybe we can even stop delivering SD content soon? All kidding aside, the systems and infrastructure elements being designed are quite impressive. Seeing storage solutions that can read and write at these astronomical speeds is just jaw-dropping.

Young Attendees
Attendance remained relatively stable this year, but what I did notice was a lot of young faces making their way around the halls. It seemed like high school and university students were able to take advantage of interfacing with manufacturers, as well as some great educational sessions. This is exciting, as I really enjoy watching young creatives get the opportunity to express themselves in their work and make the rest of us think a little differently.

Blackmagic Resolve 16

AI/Deep Learning
Speaking of the future, AI and deep learning algorithms are being implemented into many parts of our industry, and this is definitely something to watch for. The possibilities to increase productivity are real, but these technologies are still relatively new and need time to mature. Some of the post apps taking advantage of these algorithms come from Blackmagic, Autodesk and Adobe.

At the show, Blackmagic announced their Neural Engine AI processing, which is integrated into DaVinci Resolve 16 for facial recognition, speed warp estimation and object removal, to name just a few. These features will add to the productivity of this software, further claiming its place among the usual suspects for more than just color correction.

Flame 2020

The Autodesk Flame team has implemented deep learning into its app as well. It promises really impressive uses for retouching and relighting, as well as for creating depth maps of scenes. Autodesk demoed a shot of a woman on a beach with no real key-light possibility and very flat, diffuse lighting. With a few nodes, they were able to relight her face to create a sense of depth and lighting direction. The same technique can be used for skin retouching as well, which is very useful in my everyday work.

Adobe has also been working on their implementation of AI with the integration of Sensei. In After Effects, the content-aware algorithms will help to re-texture surfaces, remove objects and edge blend when there isn’t a lot of texture to pull from. Watching a demo artist move through a few shots, removing cars and people from plates with relative ease and decent results, was impressive.

These demos have all made their way online, and I encourage everyone to watch. Seeing where we are headed is quite exciting. We are on our way to these tools being very accurate and useful in everyday situations, but they are all very much a work in progress. Good news, we still have jobs. The robots haven’t replaced us yet.


Tim Nagle is a Flame artist at Dallas-based Lucky Post.

NAB 2019: First impressions

By Mike McCarthy

There are always a slew of new product announcements during the week of NAB, and this year was no different. As a Premiere editor, the developments from Adobe are usually the ones most relevant to my work and life. Similar to last year, Adobe released its software updates a week before NAB, instead of announcing them for eventual release months later.

The biggest new feature in the Adobe Creative Cloud apps is After Effects’ new “Content Aware Fill” for video. This will use AI to generate image data to automatically replace a masked area of video, based on surrounding pixels and surrounding frames. This functionality has been available in Photoshop for a while, but the challenge of bringing that to video is not just processing lots of frames but keeping the replaced area looking consistent across the changing frames so it doesn’t stand out over time.
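A toy illustration of why the temporal dimension is the hard part: the naive approach below (my own sketch, emphatically not Adobe's algorithm) fills each masked pixel from the most recent frame where that pixel was visible. It produces frozen, ghosted patches the moment the background changes, which is exactly the flicker-and-drift problem a production-grade content-aware fill has to solve.

```python
import numpy as np

def temporal_fill(frames, masks):
    """Naive hole filling: for each frame, replace masked pixels with the
    value from the most recent frame where that pixel was unmasked.
    frames: list of float 2D arrays; masks: list of boolean 2D arrays
    (True = pixel to remove)."""
    filled = [f.astype(float).copy() for f in frames]
    last_good = frames[0].astype(float).copy()  # running store of known pixels
    for f, m in zip(filled, masks):
        f[m] = last_good[m]        # borrow stale pixels for the hole
        last_good[~m] = f[~m]      # refresh the store where pixels are visible
    return filled
```

Because the borrowed pixels can be many frames old, the fill visibly "sticks" while the surrounding image moves; real systems add spatial synthesis and motion compensation to keep the patch consistent over time.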

The other key part to this process is mask tracking, since masking the desired area is the first step in that process. Certain advances have been made here, but based on tech demos I saw at Adobe Max, more is still to come, and that is what will truly unlock the power of AI that they are trying to tap here. To be honest, I have been a bit skeptical of how much AI will impact film production workflows, since AI-powered editing has been terrible, but AI-powered VFX work seems much more promising.

Adobe’s other apps got new features as well, with Premiere Pro adding Free-Form bins for visually sorting through assets in the project panel. This affects me less, as I do more polishing than initial assembly when I’m using Premiere. They also improved playback performance for Red files, acceleration with multiple GPUs and certain 10-bit codecs. Character Animator got a better puppet rigging system, and Audition got AI-powered auto-ducking tools for automated track mixing.

Blackmagic
Elsewhere, Blackmagic announced a new version of Resolve, as expected. Blackmagic RAW is supported on a number of new products, but I am not holding my breath to use it in Adobe apps anytime soon, similar to ProRes RAW. (I am just happy to have regular ProRes output available on my PC now.) They also announced a new 8K HyperDeck product that records quad 12G-SDI to HEVC files. While I don't think that 8K will replace 4K television or cinema delivery anytime soon, there are legitimate markets that need 8K resolution assets. Surround video and VR would be one, as would live background screening instead of greenscreening for composite shots: no image replacement in post, as it is captured in-camera, and your foreground objects are accurately “lit” by the screens. I expect my next major feature will be produced with that method, but the resolution wasn't there for the director to use that technology for the one I am working on now (enter 8K…).

AJA
AJA was showing off the new Ki Pro Go, which records up to four separate HD inputs to H.264 on USB drives. I assume this is intended for dedicated ISO recording of every channel of a live-switched event or any other multicam shoot. Each channel can record up to 1080p60 at 10-bit color to H.264 files in MP4 or MOV at bitrates up to 25Mb/s.

HP
HP had one of their existing Z8 workstations on display, demonstrating the possibilities that will be available once Intel releases their upcoming DIMM-based Optane persistent memory technology to the market. I have loosely followed the Optane story for quite a while, but had not envisioned this impacting my workflow at all in the near future due to software limitations. But HP claims that there will be options to treat Optane just like system memory (increasing capacity at the expense of speed) or as SSD drive space (with DIMM slots having much lower latency to the CPU than any other option). So I will be looking forward to testing it out once it becomes available.

Dell
Dell was showing off their relatively new 49-inch double-wide curved display. The 4919DW has a resolution of 5120×1440, making it equivalent to two 27-inch QHD displays side by side. I find that 32:9 aspect ratio to be a bit much for my tastes, with 21:9 being my preference, but I am sure there are many users who will want the extra width.

Digital Anarchy
I also had a chat with the people at Digital Anarchy about their Premiere Pro-integrated Transcriptive audio transcription engine. Having spent the last three months editing a movie that is split between English and Mandarin dialogue, needing to be fully subtitled in both directions, I can see the value in their tool-set. It harnesses the power of AI-powered transcription engines online and integrates the results back into your Premiere sequence, creating an accurate script as you edit the processed clips. In my case, I would still have to handle the translations separately once I had the Mandarin text, but this would allow our non-Mandarin speaking team members to edit the Mandarin assets in the movie. And it will be even more useful when it comes to creating explicit closed captioning and subtitles, which we have been doing manually on our current project. I may post further info on that product once I have had a chance to test it out myself.

Summing Up
There were three halls of other products to look through and check out, but overall, I was a bit underwhelmed at the lack of true innovation I found at the show this year.

Full disclosure, I was only able to attend for the first two days of the exhibition, so I may have overlooked something significant. But based on what I did see, there isn’t much else that I am excited to try out or that I expect to have much of a serious impact on how I do my various jobs.

It feels like most of the new things we are seeing are merely commoditized versions of products that were truly innovative when first released and have since been only incrementally fleshed out.

There seems to be much less pioneering of truly new technology and more repackaging of existing technologies into other products. I used to come to NAB to see all the flashy new technologies and products, but now it feels like the main thing I am doing there is a series of annual face-to-face meetings, and that’s not necessarily a bad thing.

Until next year…


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

AJA intros Ki Pro Go, Corvid 44 12G and more at NAB

AJA was at NAB this year showing the new Ki Pro Go H.264 multichannel HD/SD recorder/player, as well as 14 openGear converter cards featuring DashBoard software support, two new IP video transmitters that bridge HDMI and 3G-SDI signals to SMPTE ST 2110 and the Corvid 44 12G I/O card for AJA Developers. AJA also introduced updates featuring improvements for its FS-HDR HDR/WCG converter, desktop and mobile I/O products, AJA Control Room software, HDR Image Analyzer and the Helo recorder/streamer.

Ki Pro Go is a genlock-free, multichannel H.264 HD and SD recorder/player with a flexible architecture. This portable device allows users to record up to four channels of pristine HD and SD content from SDI and HDMI sources to off-the-shelf USB media via 4x USB 3.0 ports, with a fifth port for redundant recording. The Ki Pro Go will be available in June for $3,995.

An FS-HDR v3.0 firmware update features enhanced coloring tools and support for multichannel Dynamic LUTs, plus other improvements. The release includes a new integrated Colorfront Engine Film Mode offering a rich grading and look-creation toolset with optional ACES colorspace, ASC color decision list controls and built-in look selection. It’s available in June as a free update.

Developed with Colorfront, the HDR Image Analyzer v1.1 firmware update features several new enhancements, including a new web UI that simplifies remote configuration and control from multiple machines, with updates over Ethernet offering the ability to download logs and screenshots. New remote desktop support provides facility-friendly control from desktops, laptops and tablets on any operating system. The update also adds new HDR monitoring and analysis tools. It’s available soon as a free update.

The Desktop Software v15.2 update offers new features and performance enhancements for AJA Kona and Io products. It adds support for Apple ProRes capture and playback across Windows, Linux and macOS in AJA Control Room, at up to 8K resolutions, while also adding new IP SMPTE ST 2110 workflows using AJA Io IP and updates for Kona IP, including ST 2110-40 ANC support. The free Desktop Software update will be available in May.

The Helo v4.0 firmware update introduces new features that allow users to customize their streaming service and improve monitoring and control. AV Mute makes it easy to personalize the viewing experience with custom service branding when muting audio and video streams, while Event Logging enables encoder activity monitoring for simpler troubleshooting. It’s available in May as a free update.

The new openGear converter cards combine the capabilities of AJA’s mini converters with openGear’s high-density architecture and support for DashBoard, enabling industry-standard configuration, monitoring and control in broadcast and live event environments over a PC or local network on Windows, macOS or Linux. New models include re-clocking SDI distribution amplifiers, single-mode 3G-SDI fiber converters plus multi-mode variants, and an SDI audio embedder/disembedder. The openGear cards are available now, with pricing dependent upon the model.

AJA’s new IPT-10G2-HDMI and IPT-10G2-SDI mini converters are single-channel IP video transmitters for bridging traditional HDMI and 3G-SDI signals to SMPTE ST 2110 for IP-based workflows. Both models feature dual 10 GigE SFP+ ports for facilities using SMPTE ST 2022-7 for redundancy in critical distribution and monitoring. They will be available soon for $1,295.

The Corvid 44 12G is an 8-lane PCIe 3.0 video and audio I/O card featuring support for 12G-SDI I/O in a low-profile design for workstations and servers and 8K/UltraHD2/4K/UltraHD high frame rate, deep color and HDR workflows. Corvid 44 12G also facilitates multichannel 12G-SDI I/O, enabling either 8K or multiple 4K streams of input or output. It is compatible across macOS, Windows and Linux and used in high-performance applications for imaging, post, broadcast and virtual production. Corvid 44 12G cards will be available soon.

Sony’s NAB updates — a cinematographer’s perspective

By Daniel Rodriguez

With its NAB offerings, Sony once again showed that they have a firm presence in nearly every stage of production, be it motion picture, broadcast media or short form. The company continues to keep up to date with the current demands while simultaneously preparing for the inevitable wave of change that seems to come faster and faster each year. While the introduction of new hardware was kept to a short list this year, many improvements to existing hardware and software were released to ensure Sony products — both new and existing — still have a firm presence in the future.

The ability to easily access, manipulate, share and stream media has always been a priority for Sony. This year at NAB, Sony continued to demonstrate its IP Live, SR Live, XDCAM Air and Media Backbone Hive platforms, which let users manage media all over the globe. IP Live enables remote production, keeping the core processing hardware in a central location while making it accessible from anywhere. This extends to 4K and HDR/SDR streaming as well, which is where SR Live comes into play. SR Live allows a native 4K HDR signal to be processed into full HD and standard SDR signals, and a core improvement is the ability to adjust the conversion curves during a live broadcast to address any issues that arise in converting HDR signals to SDR.

For other media, including XDCAM-based cameras, XDCAM Air allows for the wireless transfer and streaming of most media through QoS services, and turns almost any easily accessible camera with wireless capabilities into a streaming tool.

Media Backbone Hive allows users to access their media anywhere they want. Rather than just being an elaborate cloud service, Media Backbone Hive allows internal Adobe Cloud-based editing, accepts nearly every file type, allows a user to embed metadata and makes searching simple with keywords and phrases that are spoken in the media itself.

For the broadcast market, Sony introduced the HDC-5500, a 4K HDR three-CMOS-sensor camera it is calling its “flagship” for this market. Offering 4K HDR and high frame rates, the camera also features a global shutter — essential for dealing with strobing from lights — so it can capture fast action without the infamous rolling-shutter smear. It delivers 4K output over 12G-SDI, allowing for 4K and HDR monitoring, and as these outputs become the norm, the HDC-5500 will surely be a hit with users, especially with the addition of a global shutter.

Sony is very much a company that likes to focus on the longevity of their previous releases… cameras especially. Sony’s FS7 is a camera that has excelled in its field since its introduction in 2014, and to this day is an extremely popular choice for short form, narrative and broadcast media. Like other Sony camera bodies, the FS7 allows for modular builds and add-ons, and this is where the new CBK-FS7BK ENG Build-Up Kit comes in. Sporting a shoulder mount and ENG viewfinder, the kit includes an extension in the back that allows for two wireless audio inputs, RAW output, streaming and file transfer via Wireless LAN or 4G/LTE connection, as well as QoS streaming (only through XDCAM Air) and timecode input. This CBK-FS7BK ENG Build-Up Kit turns the FS7 into an even more well-rounded workhorse.

The Venice is Sony’s flagship cinema camera, replacing the F65 — still a brilliant and popular camera that popped up as recently as last year’s Annihilation. The Venice takes a leap further by entering the full-frame, VistaVision market. Boasting top-of-the-line specs and a smaller, more modular build than the F65, the camera isn’t exactly a new release — it came out in November 2017 — but Sony has secured the longevity of its flagship at a time when other camera manufacturers are just releasing their own VistaVision-sensored cameras and smaller alternatives.

Sony recently released a firmware update for the Venice that adds X-OCN XT — its highest-quality form of compressed 16-bit RAW — along with two new imager modes that let the camera sample 5.7K 16:9 in full frame and 6K 2.39:1 at full width, as well as 4K output over 6G/12G-SDI and wireless remote control with the CBK-WA02. Since the Venice is small enough to be placed on harder-to-reach mounts, wireless control is quickly becoming a feature many camera assistants need. New anamorphic desqueeze modes for 1.25x, 1.3x, 1.5x and 1.8x have also been added, which is huge, since older and newer lenses are constantly being created and revisited — such as the Technovision 1.5x, made famous by Vittorio Storaro on Apocalypse Now (1979), and the Cooke Full Frame Anamorphic 1.8x. With VistaVision full frame now an easily accessible way of filming, new forms of lensing are becoming common, so anamorphic systems are no longer limited to 1.3x and 2x. It’s reassuring to see Sony look out for storytellers who may want to employ less common anamorphic desqueeze sizes.
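To make the desqueeze arithmetic concrete, here is a quick illustrative sketch (my own, not anything from Sony’s firmware): the delivered aspect ratio of anamorphic footage is simply the capture area’s aspect ratio multiplied by the lens’s squeeze factor, which is what a desqueeze mode applies on the monitor.

```python
# Illustrative sketch of anamorphic desqueeze math (not Sony's implementation).

def desqueezed_aspect(sensor_w: float, sensor_h: float, squeeze: float) -> float:
    """Final (desqueezed) aspect ratio for a given capture area and squeeze factor."""
    return (sensor_w / sensor_h) * squeeze

# A 4:3 capture area with a classic 2x anamorphic yields the familiar scope frame:
print(round(desqueezed_aspect(4, 3, 2.0), 2))   # → 2.67
# The same area with a 1.5x lens like the Technovision gives 2:1:
print(round(desqueezed_aspect(4, 3, 1.5), 2))   # → 2.0
```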

As larger resolutions and higher frame rates become the norm, Sony has introduced the new Sony SxS Pro X cards. A follow-up to the hugely successful Sony SxS Pro+ cards, these new cards boast an incredible transfer speed of 10Gbps (1,250MB/s) in 120GB and 240GB capacities. This is a huge step up from the previous SxS Pro+ cards, which offered a read speed of 3.5Gbps and a write speed of 2.8Gbps. Probably the most exciting part of the new cards is the corresponding SBAC-T40 card reader, which can offload a full 240GB card in 3.5 minutes.
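As a back-of-the-envelope check of those numbers (assuming decimal gigabytes and an ideal, sustained transfer rate — real-world offloads carry some overhead):

```python
# Back-of-the-envelope check of the quoted SxS Pro X / SBAC-T40 figures.
# Assumes decimal units (1 GB = 1e9 bytes) and an ideal, sustained link speed.

def offload_minutes(card_gb: float, link_gbps: float) -> float:
    """Minutes to offload a card at a given link speed in gigabits per second."""
    card_bits = card_gb * 1e9 * 8            # card capacity in bits
    seconds = card_bits / (link_gbps * 1e9)  # ideal transfer time in seconds
    return seconds / 60

# A full 240GB card at the cards' 10Gbps interface speed:
print(round(offload_minutes(240, 10), 2))  # → 3.2, close to the quoted 3.5 minutes
```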

Sony’s newest addition to the Venice camera is the Rialto extension system. Taking advantage of the Venice’s modular build, the Rialto is a hardware extension that lets you remove the main body’s sensor block and install it in a much smaller unit, tethered back to the main body by a nine- or 18-foot cable. Very reminiscent of ARRI’s Alexa M, the Rialto goes further by being an extension of its main system rather than a standalone system, which may bring its own issues. The Rialto lets users reach spots that would otherwise prove difficult with the actual Venice body, and its lightweight design means it can be mounted nearly anywhere. Where other camera bodies designed to be smaller end up heavy once outfitted with accessories such as batteries and wireless transmitters, the Rialto can easily be rigged for aerials, handheld work and Steadicams. Though some may question why you wouldn’t just get a smaller body from another camera company, the big thing to consider is that the Rialto isn’t a solution to the size of the Venice body — which is already very small, especially compared to the F65 — but simply another tool to get the most out of the Venice system, especially considering you’re not sacrificing anything in features or frame rates. The Rialto is currently being used on James Cameron’s Avatar sequels, as its smaller body allows him to employ two simultaneously for true 3D recording while giving all the options of the Venice system.

With innovations in broadcast and motion picture production, there is a constant drive to push boundaries and make capture/distribution instant. Creating a huge network for distribution, streaming, capture, and storage has secured Sony not only as the powerhouse that it already is, but also ensures its presence in the ever-changing future.


Daniel Rodriguez is a New York-based director and cinematographer. Having spent years working for such companies as Light Iron, Panavision and ARRI Rental, he currently works as a freelance cinematographer, filming narrative and commercial work throughout the five boroughs.


NAB 2019: Maxon acquires Redshift Rendering Technologies

Maxon, makers of Cinema 4D, has purchased Redshift Rendering Technologies, developers of the Redshift rendering engine. Redshift is a flexible GPU-accelerated renderer targeting high-end production. Redshift offers an extensive suite of features that makes rendering complicated 3D projects faster. Redshift is available as a plugin for Maxon’s Cinema 4D and other industry-standard 3D applications.

“Rendering can be the most time-consuming and demanding aspect of 3D content creation,” said David McGavran, CEO of Maxon. “Redshift’s speed and efficiency combined with Cinema 4D’s responsive workflow make it a perfect match for our portfolio.”

“We’ve always admired Maxon and the Cinema 4D community, and are thrilled to be a part of it,” said Nicolas Burtnyk, co-founder/CEO, Redshift. “We are looking forward to working closely with Maxon, collaborating on seamless integration of Redshift into Cinema 4D and continuing to push the boundaries of what’s possible with production-ready GPU rendering.”

Redshift is used by post companies, including Technicolor, Digital Domain, Encore Hollywood and Blizzard. Redshift has been used for VFX and motion graphics on projects such as Black Panther, Aquaman, Captain Marvel, Rampage, American Gods, Gotham, The Expanse and more.

Facilis Launches Hub shared storage line

Facilis Technology rolled out its new Hub Shared Storage line for media production workflows during the NAB show. Facilis Hub includes new hardware and an integrated disk-caching system for cloud and LTO backup and archive designed to provide block-level virtualization and multi-connectivity performance.

“Hub Shared Storage is an all-new product based on our Hub Server that launched in 2017. It’s the answer to our customers’ requests for a more compact server chassis, lower-cost hybrid (SSD and HDD) options and integrated cloud and LTO archive features,” says Jim McKenna, VP of sales and marketing at Facilis. “We deliver all of this with new, more powerful hardware, new drive capacity options and a new look to both the system and software interface.”

The Facilis shared storage network allows both block-mode Fibre Channel and Ethernet connectivity simultaneously with the ability to connect through either method with the same permissions, user accounts and desktop appearance. This expands user access, connection resiliency and network permissions. The system can be configured as a direct-attached drive or segmented into various-sized volumes that carry individual permissions for read and write access.

Facilis Object Cloud
Object Cloud is an integrated disk-caching system for cloud and LTO backup and archive that includes up to 100TB of cloud storage for an annual fee. The Facilis Virtual Volume can display cloud, tape and spinning disk data in the same directory structure on the client desktop.

“A big problem for our customers is managing multiple interfaces for the various locations of their data. With Object Cloud, files in multiple locations reside in the same directory structure and are tracked by our FastTracker asset tracking in the same database as any active media asset,” says McKenna. “Object Cloud uses Object Storage technology to virtualize a Facilis volume with cloud and LTO locations. This gives access to files that exist entirely on disk, in the Cloud or on LTO, or even partially on disk and partially in the cloud.”

Every Facilis Hub Shared Storage server comes with unlimited seats in the Facilis FastTracker asset tracking application. The Object Cloud Software and Storage package is available for most Facilis servers running version 7.2 or higher.

Blackmagic’s Resolve 16: speedy cut page, Resolve Editor Keyboard, more

Blackmagic was at NAB with Resolve 16, which in addition to dozens of new features includes a new editing tab focused on speed. While Resolve still has its usual robust editing offerings, this particular cut page is designed for those working on short-form projects and on tight deadlines. Think of having a client behind you watching you cut something together, or maybe showing your director a rough cut. You get in, you edit and you go — it’s speedy, like editing triage.

For those who don’t want to edit this way, no worries, you don’t have to use this new tab. Just ignore it and move on. It’s an option, and only an option. That’s another theme with Resolve 16 — if you don’t want to see the Fairlight tab, turn it off. You want to see something in a different way, turn it on.

Blackmagic also introduced the DaVinci Resolve Editor Keyboard, a new premium keyboard for Resolve that helps improve the speed of editing. It allows the use of two hands while editing, so transport control and selecting clips can be done while performing edits. The Resolve Editor Keyboard will be available in August for $995.

The keyboard combined with the new cut page is designed to further speed up editing. This alternate edit page lets users import, edit, trim, add transitions, titles, automatically match color, mix audio and more. Whether you’re delivering for broadcast or for YouTube, the cut page allows editors to do all things in one place. Plus, the regular edit page is still available, so customers can switch between edit and cut pages to change editing styles right in the middle of a job.

“The new cut page in DaVinci Resolve 16 helps television commercial and other high-end editors meet super tight deadlines on fast turn-around projects,” says Grant Petty, Blackmagic CEO. “We’ve designed a whole new high-performance, nonlinear workflow. The cut page is all about power and speed. Plus, editors that need to work on more complex projects can still use the regular edit page. DaVinci Resolve 16 gives different editors the choice to work the way they want.”

The cut page is reminiscent of how editors used to work in the days of tape, where finding a clip was easy because customers could just spool up and down the tape to see their media and select shots. Today, finding the right clip in a bin with hundreds of files can be slow. With source tape, users no longer have to hunt through bins to find the clip they need. They can click on the source tape button and all of the clips in their bin appear in the viewer as a single long “tape.” This makes it easy to scrub through all of the shots, find the parts they want and quickly edit them to the timeline. Blackmagic calls it an “old-fashioned” concept that’s been modernized to help editors find the shots they need fast.

The new cut page features a dual timeline so editors don’t have to zoom in or out. The upper timeline shows users the entire program, while the lower timeline shows the current work area. Both timelines are fully functional, allowing editors to move and trim clips in whichever timeline is most convenient.

Also new is the DaVinci Neural Engine, which uses deep neural networks and machine learning to power new features such as speed warp motion estimation for retiming, super scale for up-scaling footage, auto color and color matching, facial recognition and more. The DaVinci Neural Engine is entirely cross-platform and uses the latest GPU innovations for AI and deep learning. It provides simple tools to solve complex, repetitive and time-consuming problems. For example, it enables facial recognition to automatically sort and organize clips into bins based on the people in each shot.

DaVinci Resolve 16 also features new adjustment clips that let users apply effects and grades to clips on the timeline below; quick export, which can be used to upload projects to YouTube, Vimeo and Frame.io from anywhere in the application; and new GPU-accelerated scopes providing more technical monitoring options than before. Sharing your work on social channels, or for collaboration via Frame.io, is now simple because it’s integrated into Resolve 16 Studio.

DaVinci Resolve 16 Studio features improvements to existing ResolveFX, along with several new plugins that editors and colorists will like. There are new ResolveFX plugins for adding vignettes, drop shadows, removing objects, adding analog noise and damage, chromatic aberration, stylizing video and more. There are also improvements to the scanline, beauty, face refinement, blanking fill, warper, dead pixel fixer and colorspace transformation plugins. Plus, users can now view and edit ResolveFX keyframes from the timeline curve editor on the edit page or from the keyframe panel on the color page.

Here are all the updates within Resolve 16:

• DaVinci Neural Engine for AI and deep learning features
• Dual timeline to edit and trim without zooming and scrolling
• Source tape to review all clips as if they were a single tape
• Trim interface to view both sides of an edit and trim
• Intelligent edit modes to auto-sync clips and edit
• Timeline review playback speed based on clip length
• Built-in tools for retime, stabilization and transform
• Render and upload directly to YouTube and Vimeo
• Direct media import via buttons
• Scalable interface for working on laptop screens
• Create projects with different frame rates and resolutions
• Apply effects to multiple clips at the same time
• DaVinci Neural Engine detects faces and auto-creates bins
• Frame rate conversions and motion estimation
• Cut and edit page image stabilization
• Curve editor ease in and out controls
• Tape-style audio scrubbing with pitch correction
• Re-encode only changed files for faster rendering
• Collaborate remotely with Frame.io integration
• Improved GPU performance for Fusion 3D operations
• Cross platform GPU accelerated tools
• Accelerated mask operations including B-Spline and bitmap
• Improved planar and tracker performance
• Faster user and smart cache
• GPU-accelerated scopes with advanced technical monitoring
• Custom and HSL curves now feature histogram overlay
• DaVinci Neural Engine auto color and shot match
• Synchronize SDI output to viewer zoom
• Mix and master immersive 3D audio
• Elastic wave audio alignment and retiming
• Bus tracks with automation on timeline
• Foley sampler, frequency analyzer, dialog processor, FairlightFX
• 500 royalty-free Foley sound effects
• Share markers and notes in collaboration workflows
• Individual user cache for collaborative projects
• Resolve FX plugins with timeline and keyframes

Avid offers rebuilt engine and embraces cloud, ACES, AI, more

By Daniel Restuccio

During its Avid Connect conference just prior to NAB, Avid announced a Media Composer upgrade, support for ACES color standard and additional upgrades to a number of its toolsets, apps and services, including Avid Nexis.

The chief news from Avid is that Media Composer, its flagship video editing system, has been significantly retooled: sporting a new user interface, rebuilt engine, and additional built-in audio, visual effects, color grading and delivery features.

In a pre-interview with postPerspective, Avid president/CEO Jeff Rosica said, “We’re really trying to leap frog and jump ahead to where the creative tools need to go.”

Avid asked itself what it needed to do “to help production and post production really innovate.” Rosica pointed to TV shows and films and how complex they’re getting. “That means they’re dealing with more media, more elements, and with so many more decisions just in the program itself. Let alone the fact that the (TV or film) project may have to have 20 different variants just to go out the door.”

Jeff Rosica

The new paneled user interface simplifies the workspace, offering redesigned bins for finding media faster, as well as task-based workspaces that show only what the user wants and needs to see.

Dave Colantuoni, VP of product management at Avid, said they spent the most time studying the way editors manage and organize bins and content within Media Composer. “Some of our editors use 20, 30, 40 bins at a time. We’ve really spent a lot of time so that we can provide an advantage to you in how you approach organizing your media.”

Avid is also offering more efficient workflow solutions. Users, without leaving Media Composer, can work in 8K, 16K or HDR thanks to the newly built-in 32-bit full float color pipeline. Additionally, Avid continues to work with OTT content providers to help establish future industry standards.

“We’re trying to give as much creative power to the creative people as we can, and bring them new ways to deal with things,” said Rosica. “We’re also trying to help the workflow side. We’re trying to help make sure production doesn’t have to do more with less, or sometimes more with the same budget. Cloud (computing) allows us to bring a lot of new capabilities to the products, and we’re going to be cloud powering a lot of our products… more than you’ve seen before.”

The new Media Composer engine is now native OP1A, can handle more video and audio streams, offers Live Timeline and background rendering, and a distributed processing add-on option to shorten turnaround times and speed up post production.

“This is something our competitors do pretty well,” explained Colantuoni. “And we have different instances of OP1A working among the different Avid workflows. Until now, we’ve never had it working natively inside of Media Composer. That’s super-important because a lot of capabilities started in OP1A, and we can now keep it pristine through the pipeline.”

Said Rosica, “We are also bringing the ability to do distributive rendering. An editor no longer has to render or transcode on their machine. They can perform those tasks in a distributed or centralized render farm environment. That allows this work to get done behind the scenes. This is actually an Avid-supplied solution, so it will be very powerful and reliable. Users will be able to do background rendering, as well as distributive rendering and move things off the machine to other centralized machines. That’s going to be very helpful for a lot of post workflows.”

Avid had previously offered three main flavors of Media Composer: Media Composer First, the free version; Media Composer; and Media Composer Ultimate. Now they are also offering a new Enterprise version.

For the first time, large production teams can customize the interface for any role in the organization, whether the user is a craft editor, assistant, logger or journalist. It also offers unparalleled security to lock down content, reducing the chances of unauthorized leaks of sensitive media. Enterprise also integrates with Editorial Management 2019.

“The new fourth tier at the top is what we are calling the Enterprise Edition or Enterprise. That word doesn’t necessarily mean broadcast,” says Rosica. “It means for business deployment. This is for post houses and production companies, broadcast, and even studios. This lets the business, or the enterprise, or production, or post house literally customize interfaces and customize workspaces to the job role or to the user.”

Nexis Cloudspaces
Avid also announced Avid Nexis|Cloudspaces. Instead of resorting to NAS or external drives for media storage, Avid Nexis|Cloudspaces lets editorial offload projects and assets not currently in production. Cloudspaces extends Avid Nexis storage directly to Microsoft Azure.

“Avid Nexis|Cloudspaces brings the power of the cloud to Avid Nexis, giving organizations a cost-effective and more efficient way to extend Avid Nexis storage to the cloud for reliable backup and media parking,” said Dana Ruzicka, chief product officer/senior VP at Avid. “Working with Microsoft, we are offering all Avid Nexis users a limited-time free offer of 2TB of Microsoft Azure storage that is auto-provisioned for easy setup and as much capacity as you need, when you need it.”

ACES
The Academy Color Encoding System (ACES) team also announced that Avid has joined the ACES Logo Program as the first Product Partner in the new Editorial Finishing product category. ACES is a free, open, device-independent color management and image interchange system — the global standard for color management, digital image interchange and archiving. Avid will implement ACES in conformance with the logo program’s specifications, ensuring a consistent, high-quality ACES color-managed video creation workflow.

“We’re pleased to welcome Avid to the ACES logo program,” said Andy Maltz, managing director of the ACES Council. “Avid’s participation not only benefits editors that need their editing systems to accurately manage color, but also the broader ACES end-user community through expanded adoption of ACES standards and best practices.”

What’s Next?
“We’ve already talked about how you can deploy Media Composer or other tools in a virtualized environment, or how you can use these kind of cloud environments to extend or advance production,” said Rosica. “We also see that these things are going to allow us to impact workloads. You’ll see us continue to power our MediaCentral platform, editorial management of MediaCentral, and even things like Media Composer with AI to help them get to the job faster. We can help automate functions, automate environments and use cloud technologies to allow people to collaborate better, to share better, to just power their workloads. You’re going to see a lot from us over time.”

Dell updates Precision 7000 Series workstation line

Dell has updated its Precision 7920 and 7820 towers and Precision 7920 rack workstations to target the media and entertainment industry. Enhancements include processing of large data workloads, AI capabilities, hot-swappable drives, a tool-less external power supply and a flexible 2U rack form factor that boosts cooling, noise reduction and space savings.

Both the Dell Precision 7920 and 7820 towers will be available with the new 2nd Gen Intel Xeon Scalable processors and Nvidia Quadro RTX graphic options to deliver enhanced performance for applications with large datasets, including enhancements for artificial intelligence and machine learning workloads. All Precision workstations come equipped with the Dell Precision Optimizer. The Dell Precision Optimizer Premium is available at an additional cost. This feature uses AI-based technology to tune the workstation based on how it is being used.

In addition, the Precision workstations now feature a multichannel thermal design for advanced cooling and acoustics. An externally accessible tool-less power supply and FlexBays for lockable, hot-swappable drives are also included.

For users needing high-security, remotely accessible 1:1 workstation performance, the updated Dell Precision 7920 rack workstation delivers the same performance and scalability of the Dell Precision 7920 tower in a 2U rack form factor. This rack workstation is targeted to OEMs and users who need to locate their compute resources and valuable data in central environments. This option can save space and help reduce noise and heat, while providing secure remote access to external employees and contractors.

Configuration options will include the recently announced 2nd Gen Intel Xeon Scalable processors, built for advanced workstation professionals, with up to 28 cores, 56 threads and 3TB DDR4 RDIMM per socket. The workstations will also support Intel Deep Learning Boost, a new set of Intel AVX-512 instructions.

The Precision 7000 Series workstations will be available in May with high-performance storage capacity options, including up to 120TB/96TB of Enterprise SATA HDD and up to 16TB of PCIe NVMe SSDs.

Video: Machine learning with Digital Domain’s Doug Roble

Just prior to NAB, postPerspective’s Randi Altman caught up with Digital Domain’s senior director of software R&D, Doug Roble, to talk machine learning.

Roble is on a panel on the Monday of NAB 2019 called “Influencers in AI: Companies Accelerating the Future.” It’s being moderated by Google’s technical director for media, Jeff Kember, and features Roble along with Autodesk’s Evan Atherton, Nvidia’s Rick Champagne, Warner Bros.’ Greg Gewickey and Story Tech/Television Academy’s Lori Schwartz.

In our conversation, Roble talks about how Digital Domain has been using machine learning in visual effects for a couple of years. He points to Avengers: Infinity War and the character Thanos, which the studio worked on.

A lot of that character’s facial motion was done with a variety of machine learning techniques. Since then, Digital Domain has pushed that technology further, taking the machine learning aspect and putting it on realtime digital humans — including Doug Roble.

Watch our conversation and find out more…

Atomos’ new Shogun 7: HDR monitor, recorder, switcher

The new Atomos Shogun 7 is a seven-inch HDR monitor, recorder and switcher that offers an all-new 1500-nit, daylight-viewable, 1920×1200 panel with a 1,000,000:1 contrast ratio and 15+ stops of dynamic range displayed. It also offers ProRes RAW recording and realtime Dolby Vision output. Shogun 7 will be available in June 2019, priced at $1,499.

The Atomos screen uses a combination of advanced LED and LCD technologies, which together offer deeper, better blacks that the company says rival OLED screens, "but with the much higher brightness and vivid color performance of top-end LCDs."

A new 360-zone backlight is combined with this new screen technology and controlled by the Dynamic AtomHDR engine to show millions of shades of brightness and color. It allows Shogun 7 to display 15+ stops of real dynamic range on-screen. The panel, says Atomos, is also incredibly accurate, with ultra-wide color and 105% of DCI-P3 covered, allowing for the same on-screen dynamic range, palette of colors and shades that your camera sensor sees.
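Those two figures are consistent with each other: a stop is a doubling of luminance, so a panel's contrast ratio sets a theoretical ceiling of log2(contrast) stops, and the quoted 15+ stops sits comfortably under that ceiling. A quick check in plain arithmetic:

```python
import math

# A stop is a doubling of luminance, so the maximum dynamic range a
# panel's contrast ratio could theoretically span is log2(contrast).
def contrast_to_stops(contrast_ratio: float) -> float:
    return math.log2(contrast_ratio)

# Shogun 7's quoted 1,000,000:1 contrast ratio
print(round(contrast_to_stops(1_000_000), 1))  # -> 19.9
```

The quoted "15+ stops displayed" is below this ~20-stop theoretical bound, which is what you would expect once backlight zoning and panel losses are accounted for.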

Atomos and Dolby have teamed up to create Dolby Vision HDR "live" — a tool that allows you to see HDR live on-set and carry your creative intent from the camera through into HDR post. Dolby has optimized its target display HDR processing algorithm, which Atomos has running inside the Shogun 7. It brings realtime automatic frame-by-frame analysis of the Log or RAW video and processes it for optimal HDR viewing on a Dolby Vision-capable TV or monitor over HDMI. Connect Shogun 7 to the Dolby Vision TV and AtomOS 10 automatically analyzes the image, queries the TV and applies the right color and brightness profiles for the maximum HDR experience on the display.

Shogun 7 records images up to 5.7Kp30, 4Kp120 or 2Kp240 slow motion from compatible cameras, in RAW/Log or HLG/PQ over SDI/HDMI. Footage is stored directly to AtomX SSDmini or approved off-the-shelf SATA SSD drives. There are recording options for Apple ProRes RAW and ProRes, Avid DNx and Adobe CinemaDNG RAW codecs. Shogun 7 has four SDI inputs plus an HDMI 2.0 input, with both 12G-SDI and HDMI 2.0 outputs. It can record ProRes RAW in up to 5.7Kp30, 4Kp120 DCI/UHD and 2Kp240 DCI/HD, depending on the camera's capabilities. Also, 10-bit 4:2:2 ProRes or DNxHR recording is available up to 4Kp60 or 2Kp240. The four SDI inputs enable the connection of most quad-link, dual-link or single-link SDI cinema cameras. Pixels are preserved with data rates of up to 1.8Gb/s.
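That 1.8Gb/s ceiling lines up with what ProRes 422 HQ demands at the recorder's top resolution and frame rate. As a rough sketch (assuming Apple's nominal ~176 Mb/s figure for 1080p24 HQ and linear scaling with pixel count and frame rate, which is only an approximation of the codec's actual target tables):

```python
# ProRes 422 HQ's target bitrate scales roughly linearly with pixel
# count and frame rate. Baseline assumption: ~176 Mb/s at 1920x1080p24.
BASE_MBPS, BASE_PIXELS, BASE_FPS = 176, 1920 * 1080, 24

def prores_hq_mbps(width: int, height: int, fps: float) -> float:
    return BASE_MBPS * (width * height / BASE_PIXELS) * (fps / BASE_FPS)

# 4K DCI at 60fps, the top ProRes mode Shogun 7 records
rate = prores_hq_mbps(4096, 2160, 60)
print(f"{rate / 1000:.1f} Gb/s")  # -> 1.9 Gb/s, in line with the ~1.8Gb/s spec
```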

In terms of audio, Shogun 7 eliminates the need for a separate audio recorder. Users can add 48V stereo mics via an optional balanced XLR breakout cable, or select mic or line input levels, plus record up to 12 channels of 24/96 digital audio from HDMI or SDI. Monitoring selected stereo tracks is via the 3.5mm headphone jack. There are dedicated audio meters, gain controls and adjustments for frame delay.

Shogun 7 features the latest version of the AtomOS 10 touchscreen interface, first seen on the Ninja V.  The new body of Shogun 7 has a Ninja V-like exterior with ARRI anti-rotation mounting points on the top and bottom of the unit to ensure secure mounting.

AtomOS 10 on Shogun 7 has the full range of monitoring tools, including waveform, vectorscope, false color, zebras, RGB parade, focus peaking, pixel-to-pixel magnification, audio level meters and blue-only for noise analysis.

Shogun 7 can also be used as a portable touchscreen-controlled multi-camera switcher with asynchronous quad-ISO recording. Users can switch up to four 1080p60 SDI streams, record each plus the program output as a separate ISO, then deliver ready-for-edit recordings with marked cut-points in XML metadata straight to your NLE. The current Sumo19 HDR production monitor-recorder will also gain the same functionality in a free firmware update.

There is asynchronous switching, plus genlock in and out to connect to existing AV infrastructure. Once the recording is over, users can import the XML file into an NLE and the timeline populates with all the edits in place. XLR audio from a separate mixer or audio board is recorded within each ISO, alongside two embedded channels of digital audio from the original source. The program stream always records the analog audio feed as well as a second track that switches between the digital audio inputs to match the switched feed.
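As a hedged illustration of that XML round-trip, the sketch below parses switch points from a made-up layout; Atomos' actual schema is not documented here, so the element and attribute names are purely hypothetical:

```python
# Hypothetical sketch of reading a switcher's cut list from XML metadata.
# The <switchlog>/<cut> structure and attribute names are illustrative
# assumptions, not Atomos' real schema.
import xml.etree.ElementTree as ET

SAMPLE = """
<switchlog>
  <cut frame="0" source="iso1"/>
  <cut frame="240" source="iso3"/>
  <cut frame="512" source="iso2"/>
</switchlog>
"""

root = ET.fromstring(SAMPLE)
cuts = [(int(c.get("frame")), c.get("source")) for c in root.iter("cut")]
print(cuts)  # -> [(0, 'iso1'), (240, 'iso3'), (512, 'iso2')]
```

An NLE importer would walk a list like this and place an edit on the program track at each frame number, pointing at the matching ISO clip.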

Adobe’s new Content-Aware fill in AE is magic, plus other CC updates

By Brady Betzel

NAB is just under a week away, and we are here to share some of Adobe's latest Creative Cloud offerings. And there are a few updates worth mentioning, such as a freeform project panel in Premiere Pro, AI-driven Auto Ducking for ambience in Audition and the addition of a Twitch extension for Character Animator. But, in my opinion, the After Effects updates are what this year's release will be remembered for.


Content Aware: Here is the before and after. Our main image is the mask.

There is a new expression editor in After Effects, so us old pseudo-website designers can now feel at home with highlighting, line numbers and more. There are also performance improvements, such as faster project loading times and new deBayering support for Metal on macOS. But the first-prize ribbon goes to Content-Aware Fill for video, powered by Adobe Sensei, the company's AI technology. It's one of those voodoo features that will blow you away when you use it. If you have ever used Mocha Pro by BorisFX, then you have used its similar "Object Removal" tool. Essentially, you draw around the object you want to remove, such as a camera shadow or boom mic, hit the magic button, and the object is removed with a new background in its place. This will save users hours of manual work.

Freeform Project panel in Premiere.

Here are some details on other new features:

● Freeform Project panel in Premiere Pro— Arrange assets visually and save layouts for shot selects, production tasks, brainstorming story ideas, and assembly edits.
● Rulers and Guides—Work with familiar Adobe design tools inside Premiere Pro, making it easier to align titling, animate effects, and ensure consistency across deliverables.
● Punch and Roll in Audition—The new feature provides efficient production workflows in both Waveform and Multitrack for longform recording, including voiceover and audiobook creators.
● Twitch Live-Streaming Triggers with the Character Animator extension — Livestreamed performances are enhanced as audiences engage with characters in real time through on-the-fly costume changes, impromptu dance moves, and signature gestures and poses — a new way to interact, and even monetize, using Bits to trigger actions.
● Auto Ducking for ambient sound in Audition and Premiere Pro — Also powered by Adobe Sensei, Auto Ducking now allows for dynamic adjustments to ambient sounds against spoken dialog. Keyframed adjustments can be manually fine-tuned to retain creative control over a mix.
● Adobe Stock now offers 10 million professional-quality, curated, royalty-free HD and 4K video clips and Motion Graphics templates from leading agencies and independent editors to use for editorial content, establishing shots or filling gaps in a project.
● Premiere Rush, introduced late last year, offers a mobile-to-desktop workflow integrated with Premiere Pro for on-the-go editing and video assembly. Built-in camera functionality in Premiere Rush helps you take pro-quality video on your mobile devices.

The new features for Adobe Creative Cloud are now available with the latest version of Creative Cloud.

Atomos offering Shinobi SDI camera-top monitor

On the heels of its successful Shinobi launch in March, Atomos has introduced the Atomos Shinobi SDI, a super-lightweight, 5-inch HD-SDI and 4K HDMI camera-top monitor. Its color-accurate calibrated display makes it a suitable compact HDR and SDR reference monitor. It targets the professional video creator who uses or owns a variety of cameras and camcorders and needs the flexibility of SDI or HDMI and accurate high-brightness and HDR monitoring, but not external recording capability.

Shinobi SDI features a compact, durable body combined with an ultra-clear, ultra-bright, daylight-viewable 1000-nit display. The anti-reflection, anti-fingerprint screen has a pixel density of 427PPI (pixels per inch) and is factory calibrated for color accuracy, with the option for in-field calibration providing ongoing accuracy. Thanks to the HD-SDI input and output, plus a 4K HDMI input, it can be used in most productions.

This makes Shinobi SDI a useful companion for high-end cinema and production cameras, ENG cameras, handheld camcorders and any other HD-SDI-equipped source.

“Our most requested product in recent times has been a stand-alone SDI monitor. We are thrilled to be bringing the Atomos Shinobi SDI to market for professional video and film creators,” says Jeromy Young, CEO of Atomos.

ARRI’s new Alexa Mini LF offers large-format sensor in small footprint

Offering a large-format sensor in a small form factor, ARRI has introduced its new Alexa Mini LF camera, which combines the compact size and low weight of the Alexa Mini with the large-format Alexa LF sensor. According to the company, it “provides the best overall image quality for large-format shooting” and features three internal motorized FSND filters, 12V power input, extra power outputs, a new Codex Compact Drive and a new MVF-2 high-contrast HD viewfinder.

The new Alexa Mini LF cameras are scheduled to start shipping in mid-2019.

ARRI's large-format camera system, launched in 2018, is based around a 4.5K version of the Alexa sensor, which is twice the size and offers twice the resolution of Alexa cameras in 35 format. This allows for large-format looks while retaining the Alexa sensor's natural colorimetry, pleasing skin tones and low noise, and it is suitable for HDR and wide color gamut workflows.

Alexa Mini LF now joins the existing system elements: the high-speed capable Alexa LF camera; ARRI Signature Prime lenses; LPL lens mount and PL-to-LPL adapter; and Lens Data System LDS-2. The combined feature sets and form factors of ARRI’s two large-format cameras encompass all on-set requirements.

The Alexa Mini LF is built for use in challenging professional conditions. It features a hard-wearing carbon body and a wide temperature range of -4° F to +113° F, and each Alexa Mini LF is put through a rigorous stress test before leaving the ARRI factory and is then supported by ARRI's global service centers.

While Alexa Mini LF is compatible with almost all Alexa Mini accessories, the company says it brings significant enhancements to the Mini camera design. Among them are extra connectors, including regulated 12V and 24V accessory power; a new 6-pin audio connector; built-in microphones; and improved WiFi.

Six user buttons are now in place on the camera’s operating side, and the camera and viewfinder each have their own lock button, while user access to the recording media, and VF and TC connectors, has been made easier.

Alexa Mini LF allows internal recording of MXF/ARRIRAW or MXF/Apple ProRes in a variety of formats and aspect ratios, and features the new Compact Drive recording media from Codex, an ARRI technology partner. This small and lightweight drive offers 1TB of recording capacity. It comes with a USB-C Compact Drive reader that can be used without any extra software or licenses on Mac or Windows computers. In addition, a Compact Drive adapter can be used in any dock that accepts SXR Capture Drives, potentially more than doubling download speeds.

Another development from Codex is Codex High Density Encoding (HDE), which uses sophisticated, lossless encoding to reduce ARRIRAW file sizes by around 40% during downloading or later in the workflow. This lowers storage costs, shortens transfer times and speeds up workflows.

HDE is free for use with Codex Capture or Compact Drives, openly shared and fast: ARRIRAW Open Gate 4.5K can be encoded at 24fps on a modern MacBook Pro.
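To put that 40% in storage terms, here is a back-of-envelope calculation. The 4448×3096 Open Gate dimensions and 12-bit depth below are assumptions for illustration, not figures published by Codex:

```python
# Back-of-envelope storage math for HDE's ~40% size reduction.
# Assumed: ARRIRAW Open Gate 4.5K at 4448x3096, 12 bits/pixel, 24fps.
WIDTH, HEIGHT, BITS, FPS = 4448, 3096, 12, 24

raw_mb_per_sec = WIDTH * HEIGHT * BITS / 8 / 1e6 * FPS
hde_mb_per_sec = raw_mb_per_sec * (1 - 0.40)  # ~40% smaller after HDE

print(f"ARRIRAW: {raw_mb_per_sec * 3600 / 1e6:.2f} TB/hour")  # -> 1.78 TB/hour
print(f"HDE:     {hde_mb_per_sec * 3600 / 1e6:.2f} TB/hour")  # -> 1.07 TB/hour
```

Under these assumptions, HDE saves roughly 700GB per hour of footage, which is where the lower storage costs and shorter transfer times come from.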

ARRI's new MVF-2 viewfinder for the Alexa Mini LF uses the same high-contrast HD OLED display, color science and ARRICAM eyepiece as the Alexa LF's EVF-2 viewfinder, allowing optimal judgment of focus, dynamic range and color on set.

In addition, the MVF-2 features a large, four-inch flip-out monitor that can display the image or the camera control menu. The MVF-2 can be used on either side of the camera and connects via a new CoaXPress VF cable that has a reach of up to 10m for remote camera operations. It features a refined user interface, a built-in eyepiece lens heater for de-fogging and a built-in headphones connector.

FilmLight offers additions to Baselight toolkit

FilmLight will be at NAB showing updates to its Baselight toolkit, including T-Cam v2. This is FilmLight’s new and improved color appearance model, which allows the user to render an image for all formats and device types with confidence of color.

It combines with the Truelight Scene Looks and ARRI Look Library, now implemented within the Baselight software. “T-CAM color handling with the updated Looks toolset produces a cleaner response compared to creative, camera-specific LUTs or film emulations,” says Andrea Chlebak, senior colorist at Deluxe’s Encore in Hollywood. “I know I can push the images for theatrical release in the creative grade and not worry about how that look will translate across the many deliverables.”

FilmLight has added what it calls "a new approach to color grading" with the addition of Texture Blend tools, which allow the colorist to apply any color grading operation dependent on image detail. This gives the colorist fine control over the interaction of color and texture.

Other workflow improvements aimed at speeding the process include enhanced cache management; a new client view that displays a live web-based representation of a scene showing current frame and metadata; and multi-directory conform for a faster and more straightforward conform process.

The latest version of Baselight software also includes per-pixel alpha channels, eliminating the need for additional layer mattes when compositing VFX elements. Tight integration with VFX suppliers, including Foundry Nuke and Autodesk, means that new versions of sequences can be automatically detected, with the colorist able to switch quickly between versions within Baselight.

IDEA launches to create specs for next-gen immersive media

The Immersive Digital Experiences Alliance (IDEA) will launch at NAB 2019 with the goal of creating a suite of royalty-free specifications that address all immersive media formats, including emerging light field technology.

Founding members — including CableLabs, Light Field Lab, Otoy and Visby — created IDEA to serve as an alliance of like-minded technology, infrastructure and creative innovators working to facilitate the development of an end-to-end ecosystem for the capture, distribution and display of immersive media.

Such a unified ecosystem must support all displays, including highly anticipated light field panels. Recognizing that the essential launch point would be to create a common media format specification that can be deployed on commercial networks, IDEA has already begun work on the new Immersive Technology Media Format (ITMF).

ITMF will serve as an interchange and distribution format that will enable high-quality conveyance of complex image scenes, including six-degrees-of-freedom (6DoF), to an immersive display for viewing. Moreover, ITMF will enable the support of immersive experience applications including gaming, VR and AR, on top of commercial networks.

Recognized for its potential to deliver an immersive true-to-life experience, light field media can be regarded as the richest and most dense form of visual media, thereby setting the highest bar for features that the ITMF will need to support and the new media-aware processing capabilities that commercial networks must deliver.

Jon Karafin, CEO/co-founder of Light Field Lab, explains that “a light field is a representation describing light rays flowing in every direction through a point in space. New technologies are now enabling the capture and display of this effect, heralding new opportunities for entertainment programming, sports coverage and education. However, until now, there has been no common media format for the storage, editing, transmission or archiving of these immersive images.”

“We’re working on specifications and tools for a variety of immersive displays — AR, VR, stereoscopic 3D and light field technology, with light field being the pinnacle of immersive experiences,” says Dr. Arianne Hinds, Immersive Media Strategist at CableLabs. “As a display-agnostic format, ITMF will provide near-term benefits for today’s screen technology, including VR and AR headsets and stereoscopic displays, with even greater benefits when light field panels hit the market. If light field technology works half as well as early testing suggests, it will be a game-changer, and the cable industry will be there to help support distribution of light field images with the 10G platform.”

Starting with Otoy’s ORBX scene graph format, a well-established data structure widely used in advanced computer animation and computer games, IDEA will provide extensions to expand the capabilities of ORBX for light field photographic camera arrays, live events and other applications. Further specifications will include network streaming for ITMF and transcoding of ITMF for specific displays, archiving, and other applications. IDEA will preserve backwards-compatibility on the existing ORBX format.

IDEA anticipates releasing an initial draft of the ITMF specification in 2019. The alliance also is planning an educational seminar to explain more about the requirements for immersive media and the benefits of the ITMF approach. The seminar will take place in Los Angeles this summer.

Photo Credit: All Rights Reserved: Light Field Lab. Future Vision concept art of room-scale holographic display from Light Field Lab, Inc.

Arvato to launch VPMS MediaEditor NLE at NAB

First seen as a technology preview at IBC 2018, Arvato’s MediaEditor is a browser-based desktop editor aimed at journalistic editing and content preparation workflows. MediaEditor projects can be easily exported and published in various formats, including square and vertical video, or can be opened in Adobe Premiere with VPMS EditMate for craft editing.

MediaEditor, which features a familiar editing interface, offers simple drag-and-drop transitions and effects, as well as basic color correction. Users can also record voiceovers directly into a sequence, and the system enables automatic mixing of audio tracks for quicker turnaround. Arvato will add motion graphics for captioning and pre-generated graphics in an upcoming version of MediaEditor.

MediaEditor is a part of Arvato Systems’ Video Production Management Suite (VPMS) enterprise MAM solution. Like other products in the suite, it can be independently deployed and scaled, or combined with other products for workflows across the media enterprise. MediaEditor can also be used with Vidispine-based systems, and VPMS and Vidispine clients can access their material through MediaEditor whether on-premise or via the cloud. MediaEditor takes advantage of the advanced VPMS streaming technology allowing users to work anywhere with high-quality, responsive video playback, even on lower-speed connections.

Western Digital adds NVMe to its WD Blue solid state drive

Western Digital has added an NVMe model to its WD Blue solid state drive (SSD) portfolio. The WD Blue SN500 NVMe SSD offers three times the performance of its SATA counterpart and is optimized for multitasking and resource-heavy applications, providing near-instant access to files and programs.

Using the scalable in-house SSD architecture of the WD Black SN750 NVMe SSD, the new WD Blue SN500 NVMe SSD is also built on Western Digital’s 3D NAND technology, firmware and controller, and delivers sequential read and write speeds up to 1,700MB/s and 1,450MB/s respectively (for 500GB model) with efficient power consumption as low as 2.7W.
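Those sequential speeds translate directly into offload time. A quick sketch of the difference (the ~530 MB/s SATA baseline is an assumed typical ceiling for a SATA SSD, not a quoted WD figure):

```python
# Illustrative transfer-time comparison implied by the quoted speeds.
def seconds_to_copy(gigabytes: float, mb_per_s: float) -> float:
    return gigabytes * 1000 / mb_per_s

project_gb = 500  # say, a day's worth of camera card dumps
nvme = seconds_to_copy(project_gb, 1700)  # WD Blue SN500 sequential read
sata = seconds_to_copy(project_gb, 530)   # assumed typical SATA SSD

print(f"NVMe: {nvme / 60:.1f} min, SATA: {sata / 60:.1f} min")
# -> NVMe: 4.9 min, SATA: 15.7 min
```

Sustained real-world rates will be lower than the spec-sheet maximums, but the roughly 3x gap the company claims holds at any scale.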

Targeting evolving workflows, the WD Blue SN500 NVMe SSD offers higher sustained write performance than SATA drives, giving users a performance edge over other technologies on the market today.

"Content transitioning to 4K and 8K means it's a perfect time for video and photo editors, content creators, heavy data users and PC enthusiasts to transition from SATA to NVMe," says Eyal Bek, VP, data center and client computing, Western Digital. "The WD Blue SN500 NVMe SSD will enable customers to build high-performance laptops and PCs with fast speeds and enough capacity in a reliable, rugged and slim form factor."

The WD Blue SN500 NVMe SSD will be available in 250GB and 500GB capacities in a single-sided M.2 2280 PCIe Gen3 x2 form factor. Pricing is $54.99 USD for 250GB (model WDS250G1B0C) and $77.99 USD for 500GB (model WDS500G1B0C).

Blackmagic offers next-gen Ursa Mini Pro camera, other product news

Blackmagic has introduced the Ursa Mini Pro 4.6K G2, a second-generation Ursa Mini Pro camera featuring fully redesigned electronics and a new Super 35mm 4.6K image sensor with 15 stops of dynamic range that combine to support high-frame-rate shooting at up to 300 frames per second.

In addition, the Ursa Mini Pro 4.6K G2 supports Blackmagic RAW and features a new USB-C expansion port for direct recording to external disks. Ursa Mini Pro 4.6K G2 is available now for $5,995 from Blackmagic resellers worldwide.

The new user interface

Key Features:
• Digital film camera with 15 stops of dynamic range
• Super 35mm 4.6K sensor with Blackmagic Design Generation 4 Color Science
• Supports project frame rates up to 60fps and off-speed slow motion recording up to 120fps in 4.6K, 150fps in 4K DCI and 300fps in HD Blackmagic RAW
• Interchangeable lens mount with EF mount included as standard. Optional PL, B4 and F lens mounts available separately
• High-quality 2-, 4- and 6-stop neutral density (ND) filters with IR compensation designed to specifically match the colorimetry and color science of Blackmagic URSA Mini Pro 4.6K G2
• Fully redundant controls including external controls that allow direct access to the most important camera settings such as external power switch, ND filter wheel, ISO, shutter, white balance, record button, audio gain controls, lens and transport control, high frame rate button and more
• Built-in dual C-Fast 2.0 recorders and dual SD/UHS-II card recorders allow unlimited duration recording in high quality
• High-speed USB-C expansion port for recording directly to an external SSD or flash disk
• Lightweight and durable magnesium alloy body
• LCD status display for quickly checking timecode, shutter and lens settings, battery, recording status and audio levels
• Support for Blackmagic RAW files in constant bitrate 3:1, 5:1, 8:1 and 12:1 or constant quality Q0 and Q5 as well as ProRes 4444 XQ, ProRes 4444, ProRes 422 HQ, ProRes 422, ProRes 422 LT, ProRes 422 Proxy recording at 4.6K, 4K, Ultra HD and HD resolutions
• Features all standard connections, including dual XLR mic/line audio inputs with phantom power, 12G-SDI output for monitoring with camera status graphic overlay and separate XLR 4-pin power output for viewfinder power, headphone jack, LANC remote control and standard 4-pin 12V DC power connection
• Built-in high-quality stereo microphones for recording sound
• Offers a four-inch foldout touchscreen for on-set monitoring and menu settings
• Includes full copy of DaVinci Resolve color grading and editing software
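The constant-bitrate ratios in the feature list map to predictable data rates, since Blackmagic RAW compresses relative to the sensor's raw output. A rough sketch (the 4608×2592 sensor dimensions and 12-bit depth are assumptions for illustration, not published Blackmagic figures):

```python
# Approximate Blackmagic RAW data rates at the constant-bitrate ratios.
# Assumed: 4.6K sensor at 4608x2592, 12 bits/pixel.
WIDTH, HEIGHT, BITS = 4608, 2592, 12
uncompressed_mb = WIDTH * HEIGHT * BITS / 8 / 1e6  # MB per frame, ~17.9

for ratio in (3, 5, 8, 12):
    mb_s = uncompressed_mb / ratio * 60  # at the 60fps project maximum
    print(f"{ratio}:1 -> {mb_s:.0f} MB/s at 4.6K 60fps")
```

Under these assumptions, 3:1 lands around 358 MB/s and 12:1 around 90 MB/s, which shows why the higher ratios make dual SD/UHS-II card recording practical while 3:1 leans on CFast or the USB-C SSD path.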

Additional Blackmagic news:
– Blackmagic adds Blackmagic RAW to Blackmagic Pocket Cinema Camera 4K
– Blackmagic intros DeckLink Quad HDMI recorder
– Blackmagic updates DeckLink 8K Pro
– Blackmagic announces long-form recording on Blackmagic Duplicator 4K

NAB NY: A DP’s perspective

By Barbie Leung

At this year’s NAB New York show, my third, I was able to wander the aisles in search of tools that fit into my world of cinematography. Here are just a few things that caught my eye…

Blackmagic, which had a large booth at the entrance to the hall, was giving demos of its Resolve 15, among other tools. Panasonic also had a strong presence mid-floor, with an emphasis on the EVA-1 cameras. As usual, B&H attracted a lot of attention, as did Arri, which brought a couple of Arri Trinity rigs to demo.

During the HDR Video Essentials session, colorist Juan Salvo of TheColourSpace talked about the emerging HDR10+ standard proposed by Samsung and Amazon Video. Also mentioned was the trend of consumer displays getting brighter every year and its impact on content creation and grading. Salvo pointed out the affordability of LG's C7 OLEDs (about 700 nits) for use as client monitors, while Flanders Scientific (which had a booth at the show) remains the expensive standard for grading. It was interesting to note that LG, while being the show's Official Display Partner, was conspicuously absent from the floor.

Many of the panels and presentations unsurprisingly focused on content monetization — how to monetize faster and cheaper. Amazon Web Services' stage sessions emphasized various AWS Elemental technologies, including automating the creation of highlight clips for content like sports videos, using facial recognition algorithms to generate closed captioning, and improving the streaming experience onboard airplanes. The latter will ultimately make content delivery a streamlined enough process for airlines that it would enable advertisers to enter this currently untapped space.

Editor Janis Vogel, a board member of the Blue Collar Post Collective, spoke at the #galsngear “Making Waves” panel, and noted the progression toward remote work in her field. She highlighted the fact that DaVinci Resolve, which had already made it possible for color work to be done remotely, is now also making it possible for editors to collaborate remotely. The ability to work remotely gives professionals the choice to work outside of the expensive-to-live-in major markets, which is highly desirable given that producers are trying to make more and more content while keeping budgets low.

Speaking at the same panel, director of photography/camera operator Selene Richholt spoke to the fact that crews are being squeezed, with content producers either asking production and post pros to provide standard services at substandard rates or to provide more services without additional pay.

On a more exciting note, she cited recent 9×16 projects that she has shot with the camera mounted vertically (as opposed to shooting 16×9 and cropping in) in order to take full advantage of lens properties. She looks forward to the trend of more projects that mix aspect ratios and push aesthetics.

Well, that’s it for this year. I’m already looking forward to next year.

 


Barbie Leung is a New York-based cinematographer and camera operator working in film, music video and branded content. Her work has played Sundance, the Tribeca Film Festival, Outfest and Newfest. She is also the DCP mastering technician at the Tribeca Film Festival.

A Sneak Peek: Avid shows its next-gen Media Composer

By Jonathan Moser

On the weekend of NAB and during Avid Connect, I found myself sitting in a large meeting room with some of the most well-known editors and creatives in the business. To my left was Larry Jordan, Steve Audette was across from me, Chris Bové and Norman Hollyn to my right, and many other luminaries of the post world filled the room. Motion picture, documentary, boutique, commercial and public broadcasting masters were all represented here… as well as sound designers and producers. It was quite humbling for me.

We’d all been asked to an invite-only meeting with the leading product designers and engineers from Avid Technology to see the future of Media Composer… and to do the second thing we editors do best: bitch. We were asked to be as tough, critical and vocal as we could about what we’re about to see. We were asked to give them a thumbs up or thumbs down on their vision and execution of the next generation of Media Composer as they showed us long-needed overhauls and redesigns.

Editors Chris Bové and Avid’s Randy Martens getting ready for the unveil.

What we were shown is the future of the Media Composer, and based on what I saw, its future is bright. You think you’ve heard that before? Maybe, but this time is different. This is not vaporware, smoke and mirrors or empty promises… I assure you, this is the future.

The Avid team, including new Avid CEO Jeff Rosica, was noticeably open and attentive to the assembled audience of seasoned professionals invited to Avid Connect… a far cry from the halcyon days of the '90s and 2000s, when Media Composer ruled the roost and sat complacently on its haunches. Until recently, the Avid corporate culture was viewed by many in the post community as arrogant and tone-deaf to its users' criticisms and requests. This meeting was a far cry from that.

What we were shown was a redefined, reenergized and proactive attitude from Avid. Big corporations aren’t ordinarily so open about such big changes, but this one directly addressed decades of users’ concerns and suggestions.

By the way, this presentation was separate from the new NAB announcements of tiered pricing, new feature rollouts and enhanced interoperability for Media Composer. Avid invited us here not for approval, but for appraisal… for our expertise and feedback and to help steer them in the right direction.

As a life-long Avid user who has often questioned the direction of where the company was headed, I need to say this once more: this time is different.

These are real operational changes that we got to see in an open, informed — and often questioned and critiqued — environment. We editors are a tough crowd, but team Avid was ready, listening, considering and feeding back new ideas. It was an amazingly open and frank give and take from a company that once was shut off from such possibilities.

In her preliminary introduction, Kate Ketcham, manager of Media Composer product management, gave the assembled audience a pretty brutal and honest assessment of Media Composer's past (and oft-repeated) failings and weaknesses — a task usually reserved for us editors to tell Avid, but this time it was Avid telling us what we already knew and what they had come to realize. Pretty amazing.

The scope of her critique showed us that, despite popular opinion, Avid HAS been listening to us all along: they got it. They acknowledged the problems, warts and all, and based on the two-hour presentation shown through screenshots and demos, they’re intent on correcting their mistakes and are actively doing so.

Addressing User Concerns
Before the main innovations were shared, there was an initial concern from the editors that Avid be careful not to “throw out the baby with the bathwater” in its reinvention. Media Composer’s primary strength — as well as one of its most recognized weaknesses among newer editors — has been its consistency of look and feel, as well as its logical operational methodology and dependable media file structural organization. Much was made of one competitor’s historical failure to keep consistency and integrity of the basic and established editing paradigms (such as two-screen layout, track-based editing, reasonably established file structure, etc.) in a new release.

We older editors depend on a certain consistency. Don’t abandon the tried and true, but still “get us into this century” was the refrain from the assembled. The Avid team addressed these concerns clearly and reassuringly — the best, familiar and most trusted elements of Media Composer would stay, but there will now be so much more under the hood. Enough to dynamically attract and compel newer users and adoptees.

The company has spent almost a year doing research, redesign and implementation; this is a true commitment, and they are pledging to do this right. Avid’s difficult and challenging task in reimagining Media Composer was to make the new iteration steadfast, efficient and dependable (something existing users expect), yet innovative, attractive, flexible, workflow-fluid and intuitive enough for the newer users who are used to more contemporary editing and software. It’s a slippery and problematic slope, but one the Avid team seemed to navigate with an understanding of the issues.

As this is still in the development stage, I can’t reveal particulars (I really wish I could because there were a ton), but I can give an overview of the types of implementation they’ve been developing. Also, this initial presentation deals only with one stage of the redesign of Media Composer — the user interface changes — with much more to come within the spectrum of change.

Rebuilding the Engine
I was assured by the Avid design team that most of the decades-old Media Composer code has been completely rewritten, updated and redesigned with current innovations and implementations similar to those of the competition. This is a fully realized redesign.

Flexibility and customization are integrated throughout. There are many UI innovations, tabbed bins, new views and newer, more efficient access to enhanced tools. Media Composer has entirely new windowing and organizational options that go way beyond mere surface look and feel, yet it is much different from the competition’s implementations. You can now customize the UI to incredible lengths. There are new ways of viewing and organizing media, source and clip information, and new, intuitive (and graphical) ways of creating workspaces that get much more usable information to the editor than before.

The Avid team examined weaknesses of the existing Media Composer environment and workflow: clutter and too many choices onscreen at once; screens that resize mysteriously, throwing concentration and creative flow off-base; oft-repeated actions and redundant keystrokes that could be minimized or eliminated altogether; and ways of changing how Media Composer handles screen real estate so the editor sees only what they need, when they need it.

Gone are the windows covering other windows and other things that might slow users down. Avid showed us how attention was paid to making Media Composer more intuitive to new editors by shrinking the learning curve. The ability for more contextual help (without getting in the way of editing) has been addressed.

There are new uses of dynamic thumbnails, color for immediate recognition of active operations and window activation, and different ways of changing modalities — literally changing how we look at timelines and how we find media. You want tabbed bins? You want hover scrubbing? You want customization of workspaces done quickly and efficiently? Avid looked at what we need to see and what we don’t. All of these things have been addressed and integrated. They have addressed the difficulties of handling effect layering, effect creation, visualization and effect management with sleek but understandable solutions. Copying complex multilayered effects will now be a breeze.

Everything we were shown answered long-tolerated problems we’ve had to accept. There were no gimmicks, no glitz, just honesty. There was method to the madness for every new feature, implementation and execution, but after feedback from us, many things were reconsidered or jettisoned. Interruptions from this critical audience were fast and furious: “Why did you do that?” “What about my workflow?” “Those palette choices don’t work for me.” “Why are those tools buried?” This was a synergy and free-flow of information between company and end-users unlike anything I’ve ever seen.

There was no defensiveness from Avid; they listened to each and every critique. I could see they were actively learning from us and that they understood the problems we were pointing out. They were taking notes, asking more questions and adding to their change lists. Editors made suggestions, and those suggestions were added and actively considered. They didn’t want blind acceptance. We were informing them, and it was really amazing to see.

Again, I wish I could be more specific about details and new implementations — all I can say is that they really have listened to the complaints and are addressing them. And there is much more in the works, from media ingest and compatibility to look and feel and overall user experience.

When Jeff Rosica stopped in to observe, talk and listen to the crowd, he explained that while Avid Technology has many irons in the fire, he believes that Media Composer (and Pro Tools) represent the heart of what the company is all about. In fact, since his tenure began, he has redeployed tremendous resources and financial investment to support and nurture this rebirth of Media Composer.

Rosica promised to make sure Avid would not repeat the mistakes made by others several years ago. He vowed to continue to listen to us and to keep what makes Media Composer the dependable powerhouse that it has been.

As the presentation wound down, a commitment was made by the Avid group to continue to elicit our feedback and keep us in the loop throughout all phases of the redevelopment.

In the end, this tough audience came away optimistic. Yeah, some were still skeptical, but others were elated, expectant and heartened. I know I was.

And I don’t drink Kool-Aid. I hate it in fact.

There is much more in development for MC at Avid in terms of AI integration, facial recognition, media ingest, export functionality and much more. This was just a taste of many more things to come, so stand by.

(Special thanks for access to Marianna Montague, David Colantuoni, Tim Claman, Randy Fayan, and Randy Martens of Avid Technology. If I’ve missed anyone, thank you and apologies.)


Jonathan Moser is a six-time Emmy-winning freelance editor/producer based in New York. You can email him at flashcutter@yahoo.com.

VR at NAB 2018: A Parisian’s perspective

By Alexandre Regeffe

Even though my cab driver from the airport to my hotel offered these words of wisdom — “What happens in Vegas, stays in Vegas” — I’ve decided not to listen to him and instead share the things that impressed me in the VR world at NAB 2018.

Back in September of 2017, I shared with you my thoughts on the VR offerings at the IBC show in Amsterdam. In case you don’t remember my story, I’m a French guy who jumped into the VR stuff three years ago and started a cinematic VR production company called Neotopy with a friend. Three years is like a century in VR. Indeed, this medium is constantly evolving, both technically and financially.

So what has become of VR today? Lots of different things. VR is a big bag where people throw AR, MR, 360, LBE, 180 and 3D. And from all of that, XR (Extended Reality) was born, which means everything.

Insta360 Titan

But if this blurred concept leads to some misunderstanding, is it really good for consumers? Even we pros find it difficult to explain what exactly VR is right now.

While at NAB, I saw a presentation from Nick Bicanic during which he used the term “frameless media.” And, thank you, Nick, because I think that is exactly what’s in this big bag called VR… or XR. Today, we consume a lot of content through a frame, which is our TV, computer, smartphone or cinema screen. VR allows us to go beyond the frame, and this is a very important shift for cinematographers and content creators.

But enough concepts and ideas, let us start this journey on the NAB show floor! My first stop was the VR pavilion, also called the “immersive storytelling pavilion” this year.

My next stop was to see SGO Mistika. For over a year, the SGO team has been delivering incredible stitching software with its Mistika VR. In my opinion, there is a “before” and an “after” this tool. Thanks to its optical flow capabilities, you can achieve seamless stitching 99% of the time, even in very difficult shooting situations. The latest version of the software added features like stabilization, keyframe capabilities, more camera presets and easy integration with Kandao and Insta360 camera profiles. VR pros used Mistika’s booth as sort of a base camp, meeting the development team directly.

A few steps from Mistika was Insta360, with a large, yellow booth. This Chinese company is a success story with the consumer product Insta360 One, a small 360 camera for the masses. But I was more interested in the Insta360 Pro, their 8K stereoscopic 3D360 flagship camera used by many content creators.

At the show, Insta360’s big announcement was the Titan, a premium version of the Insta360 Pro offering better lenses and sensors. It will be available later this year. Oh, and there was the lightfield camera prototype, the company’s first step into the volumetric capture world.

Another interesting camera manufacturer at the show was HumanEyes Technologies, presenting its Vuze+. With this affordable 3D360 camera you can dive into stereoscopic 360 content and learn the basics of this technology. Side note: The Vuze+ was chosen by National Geographic to shoot some stunning sequences aboard the International Space Station.

Kandao Obsidian

My favorite VR camera company, Kandao, was at NAB showing new features for its Obsidian R and S cameras. One of the best is its 6DoF capability. With this technology, you can generate a depth map directly in Kandao Studio, the stitching software, which comes free when you buy an Obsidian. With the combination of a 360 stitched image and a depth map, you can “walk” into your movie. It’s an awesome technique for better immersion. For me, this was by far the best innovation in VR technology presented on the show floor.

The live capabilities of Obsidian cameras have been improved, with a dedicated Kandao Live software, which allows you to live stream 4K stereoscopic 360 with optical flow stitching on the fly! And, of course, do not forget their new Qoocam camera. With its three-lens-equipped little stick, you can either do VR 180 stereoscopic or 360 monoscopic, while using depth map technology to refocus or replace the background in post — all with a simple click. Thanks to all these innovations, Kandao is now a top player in the cinematic VR industry.

One Kandao competitor is ZCam, which was there with a couple of new products. The first was the ZCam V1, a 3D360 camera with a tiny form factor. It’s very interesting for shooting scenes where things are very close to the camera, as it keeps good stereoscopy even on nearby objects, which is a major issue with most VR cameras and rigs. The second was the small E2; while it’s not really a VR camera, it can be used as an underwater rig, for example.

ZCam K1 Pro

The ZCam product range is really impressive and completely targeting professionals, from ZCam S1 to ZCam V1 Pro. Important note: take a look at their K1 Pro, a VR 180 camera, if you want to produce high-end content for the Google VR180 ecosystem.

Another VR camera at NAB was Samsung’s 360 Round, offering stereoscopic capabilities. This relatively compact device comes with a proprietary software suite for stitching and viewing 360 shots. Thanks to its IP65 rating, you can use this camera outdoors in difficult weather conditions, like rain, dust or snow. It was great to see live streaming of 4K 3D360 operating on the show floor, using several 360 Round cameras combined with powerful NextComputing hardware.

VR Post
Adobe Creative Cloud 2018 remains the must-have tool for achieving VR post production without losing your mind. Numerous 360-specific features have been added over the last year, after Adobe bought the Mettle Skybox suite. The most impressive is that you can now stay in your 360 environment while editing: you put on your Oculus Rift headset, manipulate your Premiere timeline with the Touch controllers and proceed to edit your shots. Think of it as a Minority Report-style editing interface! I am sure we can expect more amazing VR tools from Adobe this year.

Google’s Lightfield technology

Mettle was at the Dell booth showing their new Adobe CC 360 plugin, called Flux. After an impressive Mantra release last year, Flux is now available for VR artists, allowing them to do 3D volumetric fractals and to create entire futuristic worlds. It was awesome to see the results in a headset!

Distributing VR
So once you have produced your cinematic VR content, how can you distribute it? One option is to use the Liquid Cinema platform. They were at NAB with a major update and some new features, including seamless transitions between a “flat” video and a 360 video. As a content creator you can also manage your 360 movies in a very smart CMS linked to your app and instantly add language versions, thumbnails, geoblocking, etc. Another exciting thing is built-in 6DoF capability right in the editor with a compatible headset — allowing you to walk through your titles, graphics and more!

I can’t leave without mentioning Voysys for live-streaming VR; Kodak PixPro and its new cameras; Google’s next move into lightfield technology; Bonsai’s launch of a new version of the Excalibur rig; and many other great manufacturers, software editors and partners.

See you next time, Sin City.

NAB: Imagine Products and StorageDNA enhance LTO and LTFS

By Jonathan S. Abrams

That’s right. We are still talking NAB. There was a lot to cover!

So, the first appointment I booked for NAB Show 2018, both in terms of my show schedule (10am Monday) and the vendors I was in contact with, was with StorageDNA’s Jeff Krueger, VP of worldwide sales. Weeks later, I found out that StorageDNA was collaborating with Imagine Products on myLTOdna, so I extended my appointment. Doug Hynes, senior director of business development for StorageDNA, and Michelle Maddox, marketing director of Imagine Products, joined me to discuss what they had ready for the show.

The introduction of LTFS during NAB 2010 allowed LTO tape to be accessed as if it were a hard drive. Since LTO tape is linear, executing multiple operations at once and treating it like a hard drive results in performance falling off a cliff. It can also cause the drive to engage in shoeshining, or shuttling of the tape back and forth over the same section.
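To see why treating linear tape like a random-access disk hurts, here is a toy model (my own illustration, not StorageDNA's or anyone's actual implementation) that simply counts how far the tape must travel to serve the same reads sequentially versus interleaved. Real drives also pay ramp-up, ramp-down and reposition time on every direction change, which is what produces shoeshining.

```python
# Toy model: tape-travel cost for sequential vs. interleaved reads.
# Block positions are in arbitrary "meters of tape" (hypothetical numbers).

def travel(positions, start=0):
    """Total distance the head moves to visit positions in the given order."""
    total, head = 0, start
    for p in positions:
        total += abs(p - head)
        head = p
    return total

file_a = [10, 20, 30]      # blocks of file A, laid out near the start
file_b = [500, 510, 520]   # blocks of file B, far down the tape

# Read A completely, then B (what a linear-aware workflow does).
sequential = travel(file_a + file_b)

# Alternate between the two files (what "use it like a disk" produces).
interleaved = travel([10, 500, 20, 510, 30, 520])

print(sequential)    # 520
print(interleaved)   # 2440 -- almost 5x the tape movement for the same data
```

The exact numbers are invented, but the ratio is the point: the same six blocks cost several times more tape motion when access alternates between distant regions, which is the behavior myLTOdna's read-only and write-only modes are designed to avoid.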

Imagine Products’ main screen.

Eight years later, these performance and operation issues have been addressed by StorageDNA’s creation of HyperTape, an enhanced Linear Tape File System (LTFS) workflow that is part of Imagine Products’ myLTOdna application. My first question was, “Is HyperTape yet another tape format?” Fortunately for me and other users, the answer is “No.”

What is HyperTape? It is a workflow powered by dnaLTFS. The word “enhanced” in the description of HyperTape as an enhanced Linear Tape File System refers to middleware in the myLTOdna application for macOS. Three commands put an LTO drive into read-only, write-only or training mode. Putting the LTO drive into an “only” mode allows it to achieve up to 300MB/s of throughput. This is where the “Hyper” in HyperTape comes from. These modes can also be engaged from the command line.

Training mode allows for analyzing the files stored on an LTO tape and then storing that information in a Random Access Database (RAD). The creation of the RAD can be automated using Imagine Products’ PrimeTranscoder. Otherwise, each file on the tape must be opened in order to train myLTOdna and create a RAD.

As for shoeshining, or shuttling of the tape back-and-forth over the same section, this is avoided by intelligently writing files to LTO tape. This intelligence is proprietary and is built into the back-end of the software. The result is that you can load a clip in Avid’s Media Composer, Blackmagic’s DaVinci Resolve or Adobe’s Premiere Pro and then load a subclip from that content into your project. You still should not load a clip from tape and just press play. Remember, this is LTO tape you are reading from.

The target customer for myLTOdna is a DIT with camera masters who wants to reduce how much time it takes to back up their footage. Previously, DITs would transfer the camera card’s contents to a hard drive using an application such as Imagine Products’ ShotPut Pro. Once the footage had been transferred to a hard drive, it could then be transferred to LTO tape. Using myLTOdna in write-only mode allows a DIT to bypass the hard drive and go straight from the camera card to an LTO tape. Because the target customer is already using ShotPut Pro, the UI for myLTOdna was designed to be familiar and easy to use and understand.

The licensing for dnaLTFS is tied to the serial number of an LTO drive. StorageDNA’s Krueger explained that “dnaLTFS is the drive license that works with standalone Mac LTO drives today.” Purchasing a license for dnaLTFS allows the user to later upgrade to StorageDNA’s DNAevolution M Series product if they need automation and scheduling features, without having to purchase another drive license if the same LTO drive is used.

Krueger went on to say, “We will have (dnaLTFS) integrated into our DNAevolution product in the future.” DNAevolution’s cost of entry is $5,000. A single LTO drive license starts at $1,250. Licensing is perpetual, and updates are available without a support contract. myLTOdna, like ShotPut Pro and PrimeTranscoder, is a one-time purchase (perpetual license). It will phone home on first launch. Remote support is available for $250 per year.

I also envision myLTOdna being useful outside of the DIT market. Indeed, this was the thinking when the collaboration between Imagine Products and StorageDNA began. If you do not mind doing manual work and want to keep your costs low, myLTOdna is for you. If you later need automation and can budget for the efficiencies that you get with it, then DNAevolution is what you can upgrade to.


Jonathan S. Abrams is the Chief Technical Engineer at Nutmeg, a creative marketing, production and post resource, located in New York City.

High-performance flash storage at NAB 2018

By Tom Coughlin

After years of watching the development of flash memory-based storage for media and entertainment applications, especially for post, it finally appears that these products are getting some traction. This is driven by the decreasing cost of flash memory as well as the rise of 4K (and up to 16K) workflows with high frame rates and multi-camera video projects. The performance needed to support multiple raw UHD video streams makes high-performance working storage attractive. Examples of 8K workflows were everywhere at the 2018 NAB Show.

Flash memory is the clear leader in professional video camera media, growing from 19% in 2009 to 66% in 2015, with 54% in 2016 and 59% in 2017. The 2017 media and entertainment professional survey results are shown below.

Flash memory capacity used in M&E applications is believed to have been about 3.1% in 2016, but will be larger in coming years. Overall, revenues for flash memory in M&E should increase by more than 50% in the next few years as flash prices go down and it becomes a more standard primary storage for many applications.

At the 2018 NAB Show, and the NAB ShowStoppers, there were several products geared for this market and in discussion with vendors it appears that there is some real traction for solid state memory for some post applications, in addition to cameras and content distribution. This includes solid-state storage systems built with SAS, SATA and the newer NVMe interface. Let’s look at some of these products and developments.

Flash-Based Storage Systems
Excelero reports that its NVMe software-defined block storage solution, with its low latency and high bandwidth, improves the interactive editing process and enables customers to stream high-resolution video without dropping frames. Technicolor has said that it achieved 99.8% of local NVMe storage server performance across the network in an initial use of Excelero’s NVMesh. Below is the layout of the Pixit Media Excelero demonstration for 8K+ workflows at the NAB show.

“The IT infrastructure required to feed dozens of workstations with 4K files at 24fps is mindboggling — and that doesn’t even consider what storage demands we’ll face with 8K or even 16K formats,” says Amir Bemanian, engineering director at Technicolor. “It’s imperative that we can scale to future film standards today. Now, with innovations like the shared NVMe storage such as Excelero provides, Technicolor can enjoy a hardware-agnostic approach, enabling flexibility for tomorrow while not sacrificing performance.”
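Bemanian's "mindboggling" figure is easy to sanity-check with back-of-the-envelope arithmetic. The numbers below are my own assumptions, not Technicolor's: 10-bit RGB DPX frames packed into 4 bytes per pixel, DCI 4K resolution and two dozen seats.

```python
# Rough aggregate bandwidth for uncompressed 4K DPX playback.
width, height = 4096, 2160   # DCI 4K frame (assumed)
bytes_per_pixel = 4          # 10-bit RGB packed into 32 bits, typical of DPX
fps = 24
workstations = 24            # "dozens" of seats (assumed)

per_stream = width * height * bytes_per_pixel * fps   # bytes per second
total = per_stream * workstations

print(round(per_stream / 1e6))    # ~849 MB/s for a single stream
print(round(total / 1e9, 1))      # ~20.4 GB/s aggregate
```

Roughly 850MB/s per seat and over 20GB/s in aggregate, before any headroom for scrubbing or multi-stream timelines, which is why shared NVMe fabrics are attractive here.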

Excelero was showcasing 16K post production workflows with the Quantum StorNext storage and data management platform and Intel on the Technicolor project and at Mellanox with its 100Gb Ethernet switch.

Storbyte, a company based in Washington, DC, was showing its Eco Flash servers at the NAB show. The product featured hot-swappable, accessible flash storage bays and redundant hot-swappable server controllers. It features the company’s Hydra Dispersed Algorithmic Modeling (HDAM), which avoids a flash translation layer, garbage collection and dirty-block management, resulting in less performance overhead. Their Data Remapping Accelerator Core (DRACO) is said to offer up to a 4X performance increase over conventional flash architectures, maintaining peak performance even at 100% drive capacity and life, and thus eliminating the write cliff and other problems flash memory is subject to.

DDN was showing its ExaScaler DGX solution, which combines a DDN ExaScaler ES14KX high-performance all-flash array with a single Nvidia DGX-1 GPU server (initially announced at the 2018 GPU Technology Conference). The combination achieved up to 33GB/s of throughput. The company was touting it as a way to accelerate machine learning, reducing the load times of large datasets to seconds for faster training. According to DDN, the combination also allows massive ingest rates and cost-effective capacity scaling, and it achieved more than 250,000 random-read 4K IOPS. In addition to HDD-based storage, DDN offers hybrid HDD/SSD as well as all-flash array products. The new DDN SFA200NV all-flash platform was on display at the 2018 NAB show.

Dell EMC was showing its Isilon F800 all-flash scale-out NAS for creative applications. According to the company, the Isilon all-flash array gives visual effects artists and editors the power to work with multiple streams of uncompressed, full-aperture 4K material, enabling collaborative, global post and VFX pipelines for episodic and feature projects.

Dell EMC said this allows a true scale-out architecture with high concurrency and super-fast all-flash network-attached storage with low latency for high-throughput and random-access workloads. The company was demonstrating 4K editing of uncompressed DPX files with Adobe Premiere using a shared Isilon F800 all-flash array. They were also showing 4K and UHD workflows with Blackmagic’s DaVinci Resolve.

NetApp had a focus on solid-state storage for media workflows in its “Lunch and Learn” sessions, co-hosted by Advanced Systems Group (ASG). The sessions discussed how NVMe and Storage Class Memory (SCM) are reshaping the storage industry. NetApp provides SSD-based E-Series products that are used in the media and entertainment industry.

Promise Technology had its own NVMe SSD-based products. The company had data sheets on two NVMe fabric products. One was an HA storage appliance in a 2RU form factor (NVF-9000) with 24 NVMe drive slots and 100GbE ports, offering up to 15M IOPS and 40GB/s throughput along with many other enterprise features. The company said that its fabric allows servers to connect to a pool of storage nodes as if they had local NVMe SSDs. Promise’s NVMe Intelligent Storage is a 1U appliance (NVF-7000) with multiple 100GbE connectors offering up to 5M IOPS and 20GB/s throughput. Both products offer RAID redundancy and end-to-end RDMA memory access.

Qumulo was showing its Qumulo P-Series NVMe all-flash solution. The P-Series combines Qumulo File Fabric (QF2) software with high-speed NVMe storage, Intel Skylake SP processors, high-bandwidth Intel SSDs and 100GbE networking. It offers 16GB/s in a minimum four-node configuration (4GB/s per node). The P-Series nodes come in 23TB and 92TB sizes. According to Qumulo, QF2 provides realtime visibility and control regardless of the size of the file system, realtime capacity quotas, continuous replication, support for both SMB and NFS protocols, complete programmability with a REST API and fast rebuild times. Qumulo says the P-Series can run on-premises or in the cloud and can create a data fabric that interconnects every QF2 cluster, whether it is all-flash, hybrid SSD/HDD or running on EC2 instances in AWS.

AIC was at the show with its J2024-04, a 2U 24-bay NVMe all-flash array using a Broadcom PCIe switch. The product includes dual hot-swap redundant 1.3KW power supplies. AIC was also showing this AFA providing a storage software fabric platform with EXTEN smart NICs based on Broadcom chips, as well as an NVMe JBOF.

Companies such as LumaForge were showing various hierarchical storage options, including flash memory, as shown in the image below.

Some other solid-state products included the use of two SATA SSDs for performance improvements for the SoftIron HyperDrive Ceph-based object storage appliance. Scale Logic has a hybrid SSD SAN/NAS product called Genesis Unlimited, which can support multiple 4K streams with a combination of HDDs and SSDs. Another NVMe offering was the RAIDIX NVMEXP software RAID engine for building NVMe-based arrays offering 4M IOPS and 30GB/s per 1U and offering RAID levels 5, 6 and 7.3. Nexsan has all-flash versions of its Unity storage products. Pure Storage had a small booth in the back of the South Hall lower showing their flash array products. Spectra Logic was showing new developments in its flash-based Black Pearl product, but we will cover that in another blog.

External Flash Storage Products
Other World Computing (OWC) was showing its solid-state and HDD-based products. They had a line-up of Thunderbolt 3 storage products, including the ThunderBlade and the Envoy Pro EX (VE) with Thunderbolt 3. The ThunderBlade uses a combination of M.2 SSDs to achieve transfer speeds up to 2.8 GB/s read and 2.45 GB/s write (pretty symmetrical R/W) with 1TB to 8TB storage capacity. It is fanless and has a dimmable LED so it won’t interfere with production work. OWC’s mobile bus-powered SSD product, Envoy Pro EX (VE) with Thunderbolt 3 provides sustained data rates up to 2.6 GB/s read and 1.6 GB/s write. This small 1TB to 2TB drive can be carried in a backpack or coat pocket.

Western Digital and Seagate were showing external SSD drives. Shown below is the G-Drive Mobile SSD-R, introduced in late 2017.

Memory Cards and SSDs
Samsung was at NAB showing its 2.5-inch 860 EVO. These SATA SSDs provide up to 4TB capacity and 550MB/s sequential read and 520MB/s sequential write speeds for media workstation applications. The product was also shown in use in all-flash arrays, as seen below.

ProGrade was showing its line of professional memory cards for high-end digital cameras. These included its CFexpress 1.0 memory card with 1TB capacity, 1.4GB/s read speed and burst write speeds greater than 1GB/s. This new CompactFlash standard is a successor to both the CFast and XQD formats. The product uses two lanes of PCIe, includes NVMe support and is interoperable with the XQD form factor. ProGrade also announced its V90 premium line of SDXC UHS-II memory cards with sustained read speeds of up to 250MB/s and sustained write speeds up to 200MB/s.

2018 Creative Storage Conference
For those who love storage, the 12th Annual Creative Storage Conference (CS 2018) will be held on June 7 at the Double Tree Hotel West Los Angeles in Culver City. This event brings together digital storage providers, equipment and software manufacturers and professional media and entertainment end users to explore the conference theme: “Enabling Immersive Content: Storage Takes Off.”

Also, my company, Coughlin Associates, is conducting a survey of digital storage requirements and practices for media and entertainment professionals, with results presented at the 2018 Creative Storage Conference. M&E professionals can participate in the survey through this link. Those who complete the survey, with their contact information, will receive a free full pass to the conference.

Our main image: Seagate products in an editing session, including products in a Pelican case for field work. 


Tom Coughlin is president of Coughlin Associates, a digital storage analyst and technology consultant. He has over 35 years in the data storage industry. He is also the founder of the Annual Storage Visions Conference and the Creative Storage Conference.

NAB 2018: My key takeaways

By Twain Richardson

I traveled to NAB this year to check out gear, software, technology and storage. Here are my top takeaways.

Promise Atlas S8+
First up is storage and the Promise Atlas S8+, a network-attached storage solution for small groups that features easy and fast NAS connectivity over Thunderbolt 3 and 10Gb Ethernet.

The Thunderbolt 3 version of the Atlas S8+ offers two Thunderbolt 3 ports, four 1Gb Ethernet ports, five USB 3.0 ports and one HDMI output. The 10GBase-T version swaps in two 10Gb/s Ethernet ports for the Thunderbolt 3 connections. It can be configured up to 112TB. The unit comes empty, so you will have to buy hard drives for it. The Atlas S8+ will be available later this year.

Lumaforge

Lumaforge Jellyfish Tower
The Jellyfish is designed for one thing and one thing only: collaborative video workflow. That means high bandwidth, low latency and no dropped frames. It features a direct connection, and you don’t need a 10GbE switch.

The great thing about this unit is that it runs quiet, and I mean very quiet. You could place it under your desk and you wouldn’t hear it running. It comes with two 10GbE ports and one 1GbE port. It can be configured for more ports and goes up to 200TB. The unit starts at $27,000 and is available now.

G-Drive Mobile Pro SSD
The G-Drive Mobile Pro SSD is blazing-fast storage with data transfer rates of up to 2800MB/s. It was said that you could transfer as much as a terabyte of media in seven minutes or less. That’s fast. Very fast.

It provides up to three-meter drop protection, comes with a single Thunderbolt 3 port and is bus powered. It also features a 1,000-pound crush-proof rating, which makes it ideal for use in the field. It will be available in May with a capacity of 500GB; 1TB and 2TB versions will be available later this year.
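That "terabyte in seven minutes or less" claim checks out arithmetically. Here is the quick math (my calculation, assuming the drive sustains its rated 2,800MB/s and using decimal units, as drive makers do):

```python
# Time to move 1TB at the G-Drive Mobile Pro SSD's rated 2800MB/s.
terabyte = 1e12     # bytes (decimal TB, the convention drive vendors use)
rate = 2800e6       # bytes per second

seconds = terabyte / rate
print(round(seconds))           # ~357 seconds
print(round(seconds / 60, 1))   # ~6.0 minutes, comfortably under seven
```

In practice, sustained rates dip below the peak figure, so "seven minutes or less" is a reasonable real-world bound rather than marketing overreach.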

OWC ThunderBlade
Designed to be dependable as well as blazing fast, the ThunderBlade has a rugged, sleek design and comes with a custom-fit ballistic hard-shell case. With capacities of up to 8TB and data transfer rates of up to 2800MB/s, this unit is ideal for on-set workflows. The unit is not bus powered, but you can connect two ThunderBlades to reach speeds of up to 3800MB/s. Now that’s fast.


It starts at $1,199 for the 1TB and is available now for purchase.

OWC Mercury Helios FX External Expansion Chassis
Add the power of a high-performance GPU to your Mac or PC via Thunderbolt 3. Performance is plug-and-play, and upgrades are easy. The unit is quiet and runs cool, making it a great addition to your environment.

It starts at $319 and is available now.

Flanders XM650U
This display is beautiful, absolutely beautiful.

The XM650U is a professional reference monitor designed for color-critical monitoring of 4K, UHD, and HD signals. It features the latest large-format OLED panel technology, offering outstanding black levels and overall picture performance. The monitor also features the ability to provide a realtime downscaled HD resolution output.

The FSI booth was showcasing the display playing HD, UHD, and UHD HDR content, which demonstrates how versatile the device is.

The monitor goes for $12,995 and is available for purchase now.

DaVinci Resolve 15
Version 15 is arguably the biggest update yet to Resolve. It combines editing, color correction, audio and now visual effects in one software tool with the addition of Fusion. Other additions include ADR tools in Fairlight and a sound library. The color and edit pages gain a LUT browser, shared grades, stacked timelines, closed captioning tools and more.

You can get DR15 for free — yes free — with some restrictions to the software and you can purchase DR15 Studio for $299. It’s available as a beta at the moment.

Those were my top takeaways from NAB 2018. It was a great show, and I look forward to NAB 2019.


Twain Richardson is a co-founder of Frame of Reference, a boutique post production company located on the beautiful island of Jamaica. Follow the studio and Twain on Twitter: @forpostprod @twainrichardson

NAB 2018: A closer look at Firefly Cinema’s suite of products

By Molly Hill

Firefly Cinema, a French company that produces a full set of post production tools, premiered Version 7 of its products at NAB 2018. I visited with co-founder Philippe Reinaudo and head of business development Morgan Angove at the Flanders Scientific booth. They were knowledgeable and friendly, and they helped me to better understand their software.

Firefly’s suite includes FirePlay, FireDay, FirePost and the brand-new FireVision. All the products share the same database and Éclair color management, making for a smooth and complete workflow. However, Reinaudo says their programs were designed with specific UI/UXs to better support each product’s purpose.

Here is how they break down:
FirePlay: This is an on-set media player that supports nearly any format or file. The player is free to use, but there’s a paid option that adds live color grading.

FireDay: Firefly Cinema’s dailies software includes a render tree for multiple versions and supports parallel processing.

FirePost: This is Firefly Cinema’s proprietary color grading software. One of its features was a set of “digital filters,” which were effects with adjustable parameters (not just pre-set LUTs). I was also excited to see the inclusion of curve controls similar to Adobe Lightroom’s Vibrance setting, which increases the saturation of just the more muted colors.

FireVision: This new product is a cloud-based review platform, with smooth integration into FirePost. Not only do tags and comments automatically move between FirePost and FireVision, but if you make a grading change in the former and hit render, the version in FireVision automatically updates. While other products such as Frame.io have this feature, Firefly Cinema offers all of these in the same package. The process was simple and impressive.

One of the downsides of their software package is its lack of support for HDR, but Reinaudo says that’s a work in progress. I believe this will likely begin with ÉclairColor HDR, as Reinaudo and his co-founder Luc Geunard are both former Éclair employees. It’s also interesting that they have products for every step after shooting except audio and editing, but perhaps given the popularity of Avid Media Composer, Adobe Premiere and Avid Pro Tools, those are less of a priority for a young company.

Overall, their set of products was professional, comprehensive and smooth to operate, and I look forward to seeing what comes next for Firefly Cinema.


Molly Hill is a motion picture scientist and color nerd, soon-to-be based out of San Francisco. You can follow her on Twitter @mollymh4.

NAB 2018: How Fortium’s MediaSeal protects your content

By Jonathan Abrams

Having previously used Fortium‘s MediaSeal, and seeing it as the best solution for protecting content, I set up a meeting with the company’s CEO, Mathew Gilliat-Smith, at NAB 2018. He talked with me about the product’s history and use cases, and he demonstrated the system in action.

Fortium’s MediaSeal was created at the request of NBCUniversal in 2014, so it was a product born out of need. NBCUniversal did not want any unencrypted files to be in use on sound stages. The solution was to create a product that works on any file residing on any file system and that easily fits existing workflows. The use of encryption on the files would eliminate human error and theft as methods of obtaining usable content.

MediaSeal’s decryptor application works on Mac OS, Linux and Windows (oh my!). The decryptor application runs at the file level of the OS. This is where the objective of easily fitting an existing workflow is achieved. By running on the file level of the OS, any file can be handed off to any application. The application being used to open a file has no idea that the file it is opening has been encrypted.

Authentication is the process of proving who you are to the decryptor application. This can be done three ways. The simplest way is to only use a password. But if this is the only method that is used, anyone with the password can decrypt the file. This is important in terms of protection because nothing prevents the person with the password from sharing both the file and the decryptor password with someone else. “But this is clearly a lot better than having sensitive files sitting unprotected and vulnerable,” explained Gilliat-Smith during my demo.

The second and more secure method of authenticating with the decryptor application is to use an iLok license. Even if a user shares the decryptor password, the user would need an iLok with the appropriate asset attached to their computer in order to decrypt the file.

The third and most secure method of authenticating with the decryptor application is to use a key server. This can be hosted either locally or on Amazon Web Services (AWS). “Authentication on AWS is secure following MPAA guidelines,” said Gilliat-Smith. The key server has an address book of authorized users and allows the content owner to dictate who can access the protected content and when. With the password and the iLok license combined, this gives the person protecting their content great control. A user would need to know the decryption password, have the iLok license and be authorized by the key server in order to access the protected file.
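The layering described above composes simply: every enabled factor must pass before the file is decrypted. Here is a minimal sketch of that logic; the function name and flags are hypothetical illustrations, not Fortium's actual API.

```python
# Hypothetical sketch of MediaSeal-style layered authentication.
def can_decrypt(password_ok, ilok_present, server_authorized,
                use_ilok=True, use_key_server=True):
    """Grant access only if every enabled factor passes.

    password_ok:       user supplied the correct decryptor password
    ilok_present:      an iLok with the right license is attached
    server_authorized: the key server lists this user for this asset
    """
    if not password_ok:
        return False
    if use_ilok and not ilok_present:
        return False
    if use_key_server and not server_authorized:
        return False
    return True

# Password-only mode is weakest: anyone holding the password gets in.
print(can_decrypt(True, False, False, use_ilok=False, use_key_server=False))  # True
# With all three factors enabled, a missing iLok revokes access.
print(can_decrypt(True, False, True))  # False
```

The design point is that each added factor only ever narrows access, which is why the combined mode gives the content owner the most control.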

Once a file is decrypted, the decryptor application sends access logs to a key server. These log entries include file copy and export/save operations. Can a file be saved out of encryption while it is in a decrypted state? Yes it can. The operation will be logged with the key server. A rogue user will have the content they seek, though the owners of the content will know that the security has been circumvented. There is no such thing as perfect security. This scenario shows the balance between a strong level of security, where the user has to provide up to three authentication levels for access, and usability, where the OS has no idea that an encrypted file is being decrypted for access.

During the demonstration, the iLok with the decryption license was removed from the computer (Windows OS). Within seconds, a yellow window with black text appeared and access to the encrypted asset was revoked. MediaSeal also works with iLok licenses assigned to a machine instead of a physical iLok. This would make transferring the asset more difficult. Each distributed decryptor asset is unique.

For content providers looking to encrypt their assets, the process is as simple as right-clicking a file and selecting encrypt. Those looking to encrypt multiple files can choose to encrypt a folder recursively. If content is added to a watch folder, it is encrypted without user intervention. Encryption can also be nested, which allows the content provider to send a folder of files to users and grant one set of users access to some files while allowing a second set access to additional files. “MediaSeal uses AES (Advanced Encryption Standard) encryption, which is tested by NGS Secure and ISE,” said Gilliat-Smith. He went on to explain that “Fortium has a system for monitoring the relatively easy steps of getting users onboard and helping them out as needed.”

MediaSeal can also be integrated with Aspera Faspex. The use of MediaSeal would allow a vendor to meet MPAA DS 11.4, which is to encrypt content at rest and in motion using a scalable approach where full file-system encryption (such as FileVault 2 on Mac OS) is not desirable. Content providers who want their key server on premises can set up an MPAA-approved system with firewalls and two proxy servers. Vendors have a similar setup when the content provider uses a key server.

While there are many use cases for MediaSeal, the one use case we discussed was localization. If a content provider needs multiple language versions of their content, they can distribute the mix-minus language to localization vendors and assign each vendor a unique decryptor key. If the content provider uses all three authentication methods (password, iLok, key server), they can control the duration of the localization vendor’s access.

My own personal experience with MediaSeal was as simple as one could hope for. I downloaded an iLok license to the iLok being used to decrypt the content, and Avid’s Pro Tools worked with the decrypted asset as if it were any other file.

Fortium’s MediaSeal achieves the directive that NBCUniversal issued in 2014 with aplomb. It is my hope that more content providers who trust vendors with their content adopt this system because it allows the work to flow, and that benefits everyone involved in the creative process.


Jonathan S. Abrams is the chief technical engineer at Nutmeg, a New York City-based creative marketing, production and post studio.

Colorfront supports HDR, UHD, partners again with AJA

By Molly Hill

Colorfront released new products and updated current product support as part of NAB 2018, expanding their partnership with AJA. Both companies had demos of the new HDR Image Analyzer for UHD, HDR and WCG analysis. It can handle 4K, HDR and 60fps in realtime and shows information in various view modes including parade, pixel picker, color gamut and audio.

Other software updates include support for new cameras in On-Set Dailies and Express Dailies, as well as the inclusion of HDR analysis tools. QC Player and Transkoder 2018 were also released, with the latter now optimized for HDR and UHD.

Colorfront also demonstrated its tone-mapping capabilities (SDR/HDR) right in the Transkoder software, without the FS-HDR hardware (which is meant more for broadcast). Static (one light) or dynamic (per shot) mapping is available in either direction. Customization is available for different color gamuts, as well as peak brightness on a sliding scale, so it’s not limited to a pre-set LUT. Even just the static mapping for SDR-to-HDR looked great, with mostly faithful color reproduction.
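As a rough illustration of what a static (one-light) map does, a single fixed curve is applied to the entire program rather than adapting per shot. The sketch below follows the common broadcast convention of placing SDR reference white at 203 nits inside the HDR container; the parameters are illustrative and are not Colorfront's actual mapping.

```python
def sdr_to_hdr_nits(code_value, gamma=2.4, white_nits=203.0):
    """Map a normalized SDR code value (0.0-1.0) to linear light in nits
    with one fixed curve: undo display gamma, then scale so SDR reference
    white lands at 203 nits inside the HDR container."""
    return (code_value ** gamma) * white_nits

print(sdr_to_hdr_nits(1.0))  # 203.0: SDR peak sits well below the HDR peak
```

A dynamic (per-shot) map would instead choose the curve shot by shot, which is what helps avoid the highlight clipping noted below.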

The only issues were some slight hue shifts from blue to green, and clipping in some of the highlights in the HDR version, despite detail being available in the original SDR. Overall, it’s an impressive system that can save time and money for low-budget films when there isn’t the budget to hire a colorist to do a second pass.

Samsung’s 360 Round for 3D video

Samsung showed an enhanced Samsung 360 Round camera solution at NAB, with updates to its live streaming and post production software. The new solution gives professional video creators the tools they need — from capture to post — to tell immersive 360-degree and 3D stories for film and broadcast.

“At Samsung, we’ve been innovating in the VR technology space for many years, including introducing the 360 Round camera with its ruggedized design, superior low light and live streaming capabilities late last year,” says Eric McCarty of Samsung Electronics America.

The Samsung 360 Round offers realtime 3D video to PCs using the 360 Round’s bundled software, and video creators can now view live video on their mobile devices using the 360 Round live preview app. The app also allows creators to remotely control the camera settings from afar via a Wi-Fi router. The updated 360 Round PC software now provides dual-monitor support, which allows the editor to make adjustments and show the results on a separate monitor dedicated to the director.

Limiting luminance levels to 16-135, noise reduction and sharpness adjustments, as well as a hardware IR filter make it possible to get a clear shot in almost no light. The 360 Round also offers advanced stabilization software and the ability to color-correct on the fly, with an intuitive, easy-to-use histogram. In addition, users can set up profiles for each shot and save the camera settings, cutting down on the time required to prep each shot.

The 360 Round comes with Samsung’s advanced Stitching software, which weaves together video from each of the 360 Round’s 17 lenses. Creators can stitch, preview and broadcast in one step on a PC without the need for additional software. The 360 Round also enables fine-tuning of seamlines during a live production, such as moving them away from objects in realtime and calibrating individual stitchlines to fix misalignments. In addition, a new local warping feature allows for individual seamline calibrations in post, without requiring a global adjustment to all seamlines, giving creators quick and easy, fine-grain control of the final visuals.

The 360 Round delivers realtime 4K x 4K (3D) streaming with minimal latency. SDI capture card support enables live streaming through multiple cameras and broadcasting equipment with no additional encoding/decoding required. The newest update further streamlines the switching workflow for live productions with audio over SDI, giving producers less complex events (one producer managing audio and video switching) and a single switching source as the production transitions from camera to camera.

Additional new features:

  • Ability to record, stream and save RAW files simultaneously, making the process of creating dailies and managing live productions easier. Creators can now save the RAW files to make further improvements to live production recordings and create a higher quality post version to distribute as VOD.
  • Live streaming support for HLS over HTTP, which adds another transport streaming protocol in addition to the RTMP and RTSP protocols. HLS over HTTP eliminates the need to modify some restrictive enterprise firewall policies and is a more resilient protocol in unreliable networks.
  • Ability to upload direct (via 360 Round software) to Samsung VR creator account, as well as Facebook and YouTube, once the files are exported.

Blackmagic releases Resolve 15, with integrated VFX and motion graphics

Blackmagic has released Resolve 15, a massive update that fully integrates visual effects and motion graphics, making it the first solution to combine professional offline and online editing, color correction, audio post production, multi-user collaboration and visual effects together in one software tool. Resolve 15 adds an entirely new Fusion page with over 250 tools for compositing, paint, particles, animated titles and more. In addition, the solution includes a major update to Fairlight audio, along with over 100 new features and improvements that professional editors and colorists have asked for.

DaVinci Resolve 15 combines four high-end applications into different pages in one single piece of software. The edit page has all the tools professional editors need for both offline and online editing, the color page features advanced color correction tools, the Fairlight audio page is designed specifically for audio post production and the new Fusion page gives visual effects and motion graphics artists everything they need to create feature film-quality effects and animations. A single click moves the user instantly between editing, color, effects and audio, giving individual users creative flexibility to learn and explore different toolsets. The workflow also enables collaboration, which speeds up post by eliminating the need to import, export or translate projects between different software applications or to conform when changes are made. Everything is in the same software application.

The free version of Resolve 15 can be used for professional work and has more features than most paid applications. Resolve 15 Studio, which adds multi-user collaboration, 3D, VR, additional filters and effects, unlimited network rendering and other advanced features such as temporal and spatial noise reduction, is available to own for $299. There are no annual subscription fees or ongoing licensing costs. Resolve 15 Studio costs less than other cloud-based software subscriptions and does not require an internet connection once the software has been activated. That means users won’t lose work in the middle of a job if there is no internet connection.

“DaVinci Resolve 15 is a huge and exciting leap forward for post production because it’s the world’s first solution to combine editing, color, audio and now visual effects into a single software application,” says Grant Petty, CEO of Blackmagic Design. “We’ve listened to the incredible feedback we get from customers and have worked really hard to innovate as quickly as possible. DaVinci Resolve 15 gives customers unlimited creative power to do things they’ve never been able to do before. It’s finally possible to bring teams of editors, colorists, sound engineers and VFX artists together so they can collaborate on the same project at the same time, all in the same software application!”

Resolve 15 Overview

Resolve 15 features an entirely new Fusion page for feature-film-quality visual effects and motion graphics animation. Fusion was previously only available as a standalone application, but it is now built into Resolve 15. The new Fusion page gives customers a true 3D workspace with over 250 tools for compositing, vector paint, particles, keying, rotoscoping, text animation, tracking, stabilization and more. The addition of Fusion to Resolve will be completed over the next 12-18 months, but users can get started using Fusion now to complete nearly all of their visual effects and motion graphics work. The standalone version of Fusion is still available for those who need it.

In addition to bringing Fusion into Resolve 15, Blackmagic has also added support for Apple Metal, multiple GPUs and CUDA acceleration, making Fusion in Resolve faster than ever. To add visual effects or motion graphics, users simply select a clip in the timeline on the Edit page and then click on the Fusion page where they can use Fusion’s dedicated node-based interface, which is optimized for visual effects and motion graphics. Compositions created in the standalone version of Fusion can also be copied and pasted into Resolve 15 projects.

Resolve 15 also features a huge update to the Fairlight audio page. The Fairlight page now has a complete ADR toolset, static and variable audio retiming with pitch correction, audio normalization, 3D panners, audio and video scrollers, a fixed playhead with scrolling timeline, shared sound libraries, support for legacy Fairlight projects and built-in cross platform plugins such as reverb, hum removal, vocal channel and de-esser. With Resolve 15, FairlightFX plugins run natively on Mac, Windows and Linux, so users no longer have to worry about audio plugins when moving between the platforms.

Professional editors will find new features in Resolve 15 specifically designed to make cutting, trimming, organizing and working with large projects even better. Load times have been improved so that large projects with hundreds of timelines and thousands of clips now open instantly. New stacked timelines and timeline tabs let editors see multiple timelines at once, so they can quickly cut, paste, copy and compare scenes between timelines. There are also new markers with on-screen annotations, subtitle and closed captioning tools, auto save with versioning, improved keyboard customization tools, new 2D and 3D Fusion title templates, image stabilization on the Edit page, a floating timecode window, improved organization and metadata tools, Netflix render presets with IMF support and much more.

Colorists get an entirely new LUT browser for quickly previewing and applying LUTs, along with new shared nodes that are linked so when one is changed they all change. Multiple playheads allow users to quickly reference different shots in a program. Expanded HDR support includes GPU accelerated Dolby Vision metadata analysis and native HDR 10+ grading controls. The new ResolveFX lets users quickly patch blemishes or remove unwanted elements in a shot using smart fill technology, and allows for dust and scratch removal, lens and aperture diffraction effects and more.

For the ultimate high-speed workflow, users can add a Resolve Micro Panel, Resolve Mini Panel or a Resolve Advanced Panel. All controls are placed near natural hand positions. Smooth, high-resolution weighted trackballs and precision engineered knobs and dials provide the right amount of resistance to accurately adjust settings. The Resolve control panels give colorists and editors fluid, hands-on control over multiple parameters at the same time, allowing them to create looks that are simply impossible with a standard mouse.

In addition, Blackmagic also introduced new Fairlight audio consoles for audio post production that will be available later this year. The new Fairlight consoles will be available in two-, three- and five-bay configurations.

Availability and Price

The public beta of Resolve 15 is available today as a free download from the Blackmagic website for all current Resolve and Resolve Studio customers. Resolve Studio is available for $299 from Blackmagic resellers.

The Fairlight consoles will be available later this year from Blackmagic resellers, with prices starting at $21,995 for the Fairlight 2 Bay console.

NAB Day 2 thoughts: AJA, Sharp, QNAP

By Mike McCarthy

During my second day walking the show floor at NAB, I was able to follow up a bit more on a few technologies that I found intriguing the day before.

AJA released a few new products and updates at the show. Their Kumo SDI switchers now have options supporting 12G SDI, but their Kona cards still do not. The new Kona 1 is a single channel of 3G SDI in and out, presumably to replace the aging Kona LHe since analog is being phased out in many places.

There is also a new Kona HDMI, which just has four dedicated HDMI inputs for streaming and switching. This will probably be a hit with people capturing and streaming competitive video gaming. Besides a bunch of firmware updates to existing products, they are showing off the next step in their partnership with ColorFront in the form of a 1RU HDR image analyzer. This is not a product I need personally, but I know it will have an important role to fill as larger broadcast organizations move into HDR production and workflows.

Sharp had an entire booth dedicated to 8K video technologies and products. They were showing off 8Kp120 playback on what I assume is a prototype system and display. They also had 8K broadcast-style cameras on display in operation, outputting Quad 12G SDI that eventually fed an 8K TV with Quad HDMI. They also had a large curved video wall, composed of eight individual 2Kx5K panels. It obviously had large seams, but it had a more immersive feel than the LED-based block walls I see elsewhere.

I was pleasantly surprised to discover that NAS vendor QNAP has released a pair of 10GbE switches, with both SFP+ and RJ45 ports. I was quoted a price under $600, but I am not sure if that was for the eight- or 12-port version. Either way, that is a good deal for users looking to move into 10GbE, with three to 10 clients — two clients can just direct connect. It also supports the new NBASE-T standard that connects at 2.5Gb or 5Gb instead of 10Gb, depending on the cables and NICs involved in the link. It is of course compatible with 1Gb and 100Mb connections as well.

On a related note, the release of 25GbE PCIe NICs allows direct connections between two systems to be much faster, for not much more cost than previous 10GbE options. This is significant for media production workflows, as uncompressed 4K requires slightly more bandwidth than 10GbE provides. I also learned all sorts of things about the relationship between 10GbE and its quad-channel variant, 40GbE; the newest implementations use 25GbE lanes instead of 10GbE, allowing 100GbE when four channels are combined.
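The "slightly more than 10GbE" point is quick to verify. Taking uncompressed DCI 4K at 10-bit 4:2:2 (20 bits per pixel) and 60fps as a representative worst case:

```python
# Back-of-the-envelope video bandwidth: uncompressed DCI 4K,
# 10-bit 4:2:2 (20 bits/pixel), 60 frames per second.
width, height, bits_per_pixel, fps = 4096, 2160, 20, 60
gbps = width * height * bits_per_pixel * fps / 1e9
print(round(gbps, 2))  # 10.62 - just over what a 10GbE link carries
```

Lower frame rates or UHD (3840-wide) frames squeeze under the 10Gb line, which is why the headroom of 25GbE matters for uncompressed workflows.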

I didn’t previously know that 40GbE and 100GbE ports on switches could be broken into four independent connections with just a splitter cable, which offers some very interesting infrastructure design options — especially as facilities move towards IP video workflows, and SDI over IP implementations and products.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

NAB First Thoughts: Fusion in Resolve, ProRes RAW, more

By Mike McCarthy

These are my notes from the first day I spent browsing the NAB Show floor this year in Las Vegas. When I walked into the South Lower Hall, Blackmagic was the first thing I saw. And, as usual, they had a number of new products this year. The headline item is the next version of DaVinci Resolve, which now integrates the functionality of their Fusion visual effects editor within the program. While I have never felt Resolve to be a very intuitive program for my own work, it is a solution I recommend to others who are on a tight budget, as it offers the most functionality for the price, especially in the free version.

Blackmagic Pocket Cinema Camera

The Blackmagic Pocket Cinema Camera 4K looks more like a “normal” MFT DSLR camera, although it is clearly designed for video instead of stills. Recording full 4K resolution in RAW or ProRes to SD or CFast cards, it has a mini-XLR input with phantom power and uses the same LP-E6 battery as my Canon DSLR. It uses the same camera software as the Ursa line of devices and includes a copy of Resolve Studio… for $1,300. If I was going to be shooting more live-action video anytime soon, this might make a decent replacement for my 70D, moving up to 4K and HDR workflows. I am not as familiar with the Panasonic cameras that it competes with most closely in the Micro Four Thirds space.

AMD Radeon

Among other smaller items, Blackmagic’s new UpDownCross HD MiniConverter will be useful outside of broadcast for manipulating HDMI signals from computers or devices that have less control over their outputs. (I am looking at you, Mac users.) For $155, it will help interface with projectors and other video equipment. At $65, the bi-directional MicroConverter will be a cheaper and simpler option for basic SDI support.

AMD was showing off 8K editing in Premiere Pro, the result of an optimization by Adobe that uses the 2TB SSD storage in AMD’s Radeon Pro SSG graphics card to cache rendered frames at full resolution for smooth playback. This change is currently only applicable to one graphics card, so it will be interesting to see if Adobe did this because it expects to see more GPUs with integrated SSDs hit the market in the future.

Sony is showing crystal light emitting diode technology in the form of a massive ZRD video wall of incredible imagery. The clarity and brightness were truly breathtaking, but obviously my photo, rendered for the web, hardly captures the essence of what they were demonstrating.

Like nearly everyone else at the show, Sony is also pushing HDR in the form of Hybrid Log Gamma, which they are developing into many of their products. They also had an array of their tiny RX0 cameras on display with this backpack rig from Radiant Images.

ProRes RAW
At a higher level, one of the most interesting things I have seen at the show is the release of ProRes RAW. While currently limited to external recorders connected to cameras from Sony, Panasonic and Canon, and only supported in FCP-X, it has the potential to dramatically change future workflows if it becomes more widely supported. Many people confuse RAW image recording with the log gamma look, or other low-contrast visual interpretations, but at its core RAW imaging is a single-channel image format paired with a particular Bayer color pattern specific to the sensor it was recorded with.

This gives access to the “source” before it has been processed for visual interpretation, in the form of debayering and adding a gamma curve that reverse-engineers the response pattern of the human eye as compared with mechanical light sensors. That provides more flexibility and processing options during post, and it decreases the amount of data to store, even before the RAW data is compressed, if at all. There are lots of other compressed RAW formats available; the only thing ProRes actually brings to the picture is widespread acceptance and trust in the compression quality. Existing compressed RAW formats include R3D, CinemaDNG, CineformRAW and Canon CRM files.
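To make the debayering step concrete, here is a toy sketch that collapses each 2x2 RGGB cell of a single-channel Bayer mosaic into one RGB pixel. This is a deliberately simplified half-resolution demosaic; real debayer implementations interpolate missing color samples to full resolution.

```python
import numpy as np

def demosaic_half_res(mosaic):
    """Collapse each 2x2 cell of an RGGB Bayer mosaic into one RGB pixel.

    mosaic: 2-D array laid out as repeating  R G
                                             G B  cells.
    Returns an (H/2, W/2, 3) float array."""
    r = mosaic[0::2, 0::2].astype(float)                    # red sites
    g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2.0     # average both greens
    b = mosaic[1::2, 1::2].astype(float)                    # blue sites
    return np.stack([r, g, b], axis=-1)

# One RGGB cell: R=10, G=20 and 30, B=40 -> one RGB pixel (10, 25, 40)
print(demosaic_half_res(np.array([[10, 20], [30, 40]])))
```

The point of the single-channel mosaic is visible in the shapes: the stored data is one value per photosite, and the three-channel image only exists after this reconstruction step.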

None of those caught on as a widespread multi-vendor format, but this ProRes RAW is already supported by systems from three competing camera vendors. And the applications of RAW imaging in producing HDR content make the timing of this release optimal to encourage vendors to support it, as they know their customers are struggling to figure out simpler solutions to HDR production issues.

There is no technical reason that ProRes RAW couldn’t be implemented on future Arri, Red or BMD cameras, which are all currently capable of recording ProRes and RAW data (but not the combination, yet). And since RAW is inherently a playback-only format (you can’t alter a RAW image without debayering it), I anticipate we will see support in other applications, unless Apple wants to sacrifice the format in an attempt to increase NLE market share.

So it will be interesting to see what other companies and products support the format in the future, and hopefully it will make life easier for people shooting and producing HDR content.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

NAB: AJA intros HDR Image Analyzer, Kona 1, Kona HDMI

AJA Video Systems is exhibiting a tech preview of its new waveform, histogram, vectorscope and nit-level HDR monitoring solution at NAB. The HDR Image Analyzer simplifies monitoring and analysis of 4K/UltraHD/2K/HD, HDR and WCG content in production, post, quality control and mastering. AJA has also announced two new Kona cards, as well as Desktop Software v14.2. Kona HDMI is a PCIe card for multi-channel HD and single-channel 4K HDMI capture for live production, streaming, gaming, VR and post production. Kona 1 is a PCIe card for single-channel HD/SD 3G-SDI capture/playback. Desktop Software v14.2 adds support for Kona 1 and Kona HDMI, plus new improvements for AJA Kona, Io and T-TAP products.

HDR Image Analyzer
A waveform, histogram, vectorscope and Nit level HDR monitoring solution, the HDR Image Analyzer combines AJA’s video and audio I/O with HDR analysis tools from Colorfront in a compact 1RU chassis. The HDR Image Analyzer is a flexible solution for monitoring and analyzing HDR formats including Perceptual Quantizer, Hybrid Log Gamma and Rec.2020 for 4K/UltraHD workflows.

The HDR Image Analyzer is the second technology collaboration between AJA and Colorfront, following the integration of Colorfront Engine into AJA’s FS-HDR. Colorfront has exclusively licensed its Colorfront HDR Image Analyzer software to AJA for the HDR Image Analyzer.

Key features include:

— Precise, high-quality UltraHD UI for native-resolution picture display
— Advanced out-of-gamut and out-of-brightness detection with error tolerance
— Support for SDR (Rec.709), ST2084/PQ and HLG analysis
— CIE graph, Vectorscope, Waveform, Histogram
— Out-of-gamut false color mode to easily spot out-of-gamut/out-of-brightness pixels
— Data analyzer with pixel picker
— Up to 4K/UltraHD 60p over 4x 3G-SDI inputs
— SDI auto-signal detection
— File-based error logging with timecode
— Display and color processing look up table (LUT) support
— Line mode to focus a region of interest onto a single horizontal or vertical line
— Loop-through output to broadcast monitors
— Still store
— Nit levels and phase metering
— Built-in support for color spaces from ARRI, Canon, Panasonic, RED and Sony
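The Nit-level metering and out-of-brightness detection in the list above boil down to decoding each pixel’s PQ code value with the ST 2084 EOTF and comparing the result to a mastering limit. A rough sketch of that math — the standard formula, not AJA’s or Colorfront’s implementation:

```python
def pq_to_nits(e):
    """ST 2084 (PQ) EOTF: map a normalized code value e in [0, 1]
    to absolute luminance in cd/m^2 (nits)."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    p = e ** (1 / m2)
    return 10000 * (max(p - c1, 0) / (c2 - c3 * p)) ** (1 / m1)

def flag_out_of_brightness(pixels, max_nits=1000):
    """Return indices of pixels whose decoded luminance exceeds the
    mastering limit -- the kind of test behind a false-color overlay."""
    return [i for i, e in enumerate(pixels) if pq_to_nits(e) > max_nits]
```

A PQ code value of 1.0 decodes to the full 10,000-nit ceiling, while 0.5 lands around 92 nits — which is why HDR scopes meter in nits rather than percent.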

“As 4K/UltraHD, HDR/WCG productions become more common, quality control is key to ensuring a pristine picture for audiences, and our new HDR Image Analyzer gives professionals an affordable and versatile set of tools to monitor and analyze HDR productions from start to finish, allowing them to deliver more engaging visuals for viewers,” says AJA president Nick Rashby.

Adds Aron Jazberenyi, managing director of Colorfront, “Colorfront’s comprehensive UHD HDR software toolset optimizes the superlative performance of AJA video and audio I/O hardware, to deliver a powerful new solution for the critical task of HDR quality control.”

HDR Image Analyzer is being demonstrated as a technology preview only at NAB 2018.

Kona HDMI
An HDMI video capture solution, Kona HDMI supports a range of workflows, including live streaming, events, production, broadcast, editorial, VFX, vlogging, video game capture/streaming and more. Kona HDMI is highly flexible, designed for four simultaneous channels of HD capture with popular streaming and switching applications including Telestream Wirecast and vMix.

Additionally, Kona HDMI offers capture of one channel of UltraHD up to 60p over HDMI 2.0, using AJA Control Room software, for file compatibility with most NLE and effects packages. It is also compatible with other popular third-party solutions for live streaming, projection mapping and VR workflows. Developers use the platform to build multi-channel HDMI ingest systems and leverage V4L2 compatibility on Linux. Features include: four full-size HDMI ports; the ability to easily switch between one channel of UltraHD or four channels of 2K/HD; and embedded HDMI audio in, up to eight embedded channels per input.

Kona 1
Designed for broadcast, post production and ProAV, as well as OEM developers, Kona 1 is a cost-efficient single-channel 3G-SDI 2K/HD 60p I/O PCIe card. Kona 1 offers serial control and reference/LTC, and features standard application plug-ins, as well as AJA SDK support. Kona 1 supports 3G-SDI capture, monitoring and/or playback with software applications from AJA, Adobe, Avid, Apple, Telestream and more. Kona 1 enables simultaneous monitoring during capture (pass-through) and includes: full-size SDI ports supporting 3G-SDI formats, embedded 16-channel SDI audio in/out, Genlock with reference/LTC input and RS-422.

Desktop Software v14.2
Desktop Software v14.2 introduces support for Kona HDMI and Kona 1, as well as a new SMPTE ST 2110 IP video mode for Kona IP, with support for AJA Control Room, Adobe Premiere Pro CC, part of the Adobe Creative Cloud, and Avid Media Composer. The free software update also brings 10GigE support for 2K/HD video and audio over IP (uncompressed SMPTE 2022-6/7) to the new Thunderbolt 3-equipped Io IP and Avid DNxIP, as well as additional enhancements to other Kona, Io and T-TAP products, including HDR capture with Io 4K Plus. Io 4K Plus and DNxIV users also benefit from a new feature allowing all eight analog audio channels to be configured for either output, input or a 4-In/4-Out mode for full 7.1 ingest/monitoring, or I/O for stereo plus VO and discrete tracks.

“Speed, compatibility and reliability are key to delivering high-quality video I/O for our customers. Kona HDMI and Kona 1 give video professionals and enthusiasts new options to work more efficiently using their favorite tools, and with the reliability and support AJA products offer,” says Nick Rashby, president of AJA.

Kona HDMI will be available this June for $895, and Kona 1 will be available in May for $595. Both are available for pre-order now. Desktop Software v14.2 will also be available in May, as a free download from AJA’s support page.

CatDV MAM expands support for enterprise workflows

Square Box Systems has introduced several enhancements geared to larger-scale enterprise use of its flagship CatDV media asset management (MAM) solution. These include expanded customization capabilities for tailored MAM workflows, new enhancements for cloud and hybrid installations, and expanded support for micro-services and distributed deployments.

CatDV can now operate seamlessly in hybrid IT environments consisting of both on-premises and cloud-based resources, enabling transparent management and movement of content across NAS, SAN, cloud or object storage tiers.

New customization features include enhanced JavaScript support and an all-new custom user interface toolkit. Both the desktop and web versions of CatDV and the system’s Worker automation engine now support JavaScript, and the user interface toolkit enables customers to build completely new user experiences for every CatDV component. Recent CatDV customizations, built on these APIs, include a document analyzer that can extract text from PDFs, photos, and MS Office documents for indexing by CatDV; and a tool for uploading assets to YouTube.

CatDV’s new cloud/hybrid enhancements include integration with file acceleration tools from Aspera, as well as extended support for AWS S3 archive, such as KMS encryption and Glacier support with configurable expedited restores. CatDV has also built an all-new AWS deployment template with proxy playback from S3. CatDV also now includes support for Backblaze B2 archive and Contigo object storage.
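For a sense of what “configurable expedited restores” means at the API level: a Glacier restore maps to the S3 RestoreObject call, whose retrieval tier and duration are parameters. A hedged sketch using boto3 — the bucket and key names are hypothetical, and this is not CatDV’s code:

```python
def build_restore_request(days=1, tier="Expedited"):
    """RestoreRequest payload for the S3 RestoreObject API.

    tier is "Expedited", "Standard" or "Bulk"; an expedited restore
    typically completes in minutes rather than hours.
    """
    return {"Days": days, "GlacierJobParameters": {"Tier": tier}}

def restore_from_glacier(bucket, key, days=1, tier="Expedited"):
    """Kick off a temporary restore of a Glacier-archived object."""
    import boto3  # AWS SDK for Python
    s3 = boto3.client("s3")
    return s3.restore_object(Bucket=bucket, Key=key,
                             RestoreRequest=build_restore_request(days, tier))

# e.g. restore_from_glacier("catdv-archive", "proxies/clip001.mov")
```

The restored copy is temporary (it expires after `Days`), which is why a MAM tracks restore state rather than treating the object as permanently back online.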

In addition, the latest version of CatDV now supports deployment of server plug-in components on separate servers. Examples include data movers for archive plug-ins such as Black Pearl, S3, Azure, and B2.

EditShare intros software-only Flow MAM, more at NAB

During NAB 2018, EditShare launched a new standalone version of its Flow MAM software, designed for non-EditShare storage environments such as Avid Nexis, Storage DNA and Amazon S3. Flow adds an intelligent media management layer to an existing storage infrastructure that can manage millions of assets across multiple storage tiers in different locations.

EditShare will spotlight the new Flow version as well as a new family of solutions in its QScan Automated Quality Control (AQC) software line, offering cost-effective compliance and delivery check capabilities and integration across production, post and delivery. In addition, EditShare will unveil its new XStream EFS auditing dashboard, aligned with Motion Picture Association of America (MPAA) best practices to promote security in media-engineered EFS storage platforms.

The Flow suite of apps helps users manage content and associated metadata from ingest through to archive. At the core of Flow are workflow engines that enable collaboration through ingest, search, review, logging, editing and delivery, and a workflow automation engine for automating tasks such as transcoding and delivery. Flow users are able to review content remotely and also edit content on a timeline with voiceover and effects from anywhere in the world.

Along with over 500 software updates, the latest version of Flow features a redesigned and unified UI across web-based and desktop apps. Flow also has new capabilities for remotely viewing Avid Media Composer or Adobe Premiere edits in a web browser; range markers for enhanced logging and review capabilities; and new software licensing with a customer portal and license management tools. A new integration with EditShare’s QScan AQC software makes AQC available at any stage of the post workflow.

Flow caters to the increased demand for remote post workflows by enabling full remote access to content, as well as integration with leading NLEs such as Avid Media Composer and Adobe Premiere. Comments James Richings, EditShare managing director, “We are seeing a huge demand from users to interact and collaborate with each other from different locations. The ability to work from anywhere without incurring the time and cost of physically moving content around is becoming much more desirable. With a simple setup, Flow helps these users track their assets, automate workflows and collaborate from anywhere in the world. We are also introducing a new pay-as-you-go model, making asset management affordable for even the smallest of teams.”

Flow will be available through worldwide authorized sales partners and distributors by the end of May, with monthly pricing starting at $19 per user.

Atomos at NAB offering ProRes RAW recorders

Atomos is at this year’s NAB showing support for ProRes RAW, a new format from Apple that combines the performance of ProRes with the flexibility of RAW video. The ProRes RAW update will be available free for the Atomos Shogun Inferno and Sumo 19 devices.

Atomos devices are currently the only monitor recorders to offer ProRes RAW, with realtime recording from the sensor output of Panasonic, Sony and Canon cameras.

The new upgrade brings ProRes RAW and ProRes RAW HQ recording, monitoring, playback and tag editing to all owners of an Atomos Shogun Inferno or Sumo 19 device. Once installed, it will allow the capture of RAW images in up to 12-bit RGB — direct from many of our industry’s most advanced cameras onto affordable SSD media. ProRes RAW files can be imported directly into Final Cut Pro 10.4.1 for high-performance editing, color grading, and finishing on Mac laptop and desktop systems.
Eight popular cine cameras with a RAW output — including the Panasonic AU-EVA1, Varicam LT, Sony FS5/FS7 and Canon C300mkII/C500 — will be supported with more to follow.

With this ProRes RAW support, filmmakers can work easily with RAW – whether they are shooting episodic TV, commercials, documentaries, indie films or social events.

Shooting ProRes RAW preserves maximum dynamic range, with a 12-bit depth and wide color gamut — essential for HDR finishing. The new format, which is available in two compression levels — ProRes RAW and ProRes RAW HQ — preserves image quality with low data rates and file sizes much smaller than uncompressed RAW.
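Some back-of-envelope arithmetic shows why those smaller file sizes matter, assuming one 12-bit value per photosite and no compression (an illustrative calculation, not Apple’s published figures):

```python
def raw_data_rate_mb_s(width, height, bit_depth, fps):
    """Uncompressed single-channel Bayer RAW data rate in MB/s."""
    return width * height * bit_depth * fps / 8 / 1e6

# DCI 4K, 12-bit, 60p: roughly 796 MB/s before any compression --
# which is why a compressed RAW codec is what makes recording to
# affordable SSD media practical.
rate = raw_data_rate_mb_s(4096, 2160, 12, 60)
```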

Through ProRes RAW, Atomos recorders gain increased flexibility in captured frame rates and resolutions: they can record ProRes RAW at up to 2K at 240 frames per second, or 4K at up to 120 frames per second. Higher resolutions, such as 5.7K from the Panasonic AU-EVA1, are also supported.

Atomos’ OS, AtomOS 9, gives users filming tools that let them work efficiently and creatively with ProRes RAW on portable devices. Fast connections in and out and advanced HDR screen processing mean every pixel is accurately and instantly available for on-set creative playback and review. Pull the SSD out and dock it to your Mac over Thunderbolt 3 or USB-C 3.1 for immediate, superfast post production.

Download the AtomOS 9 update for Shogun Inferno and Sumo 19 at www.atomos.com/firmware.

AlterMedia rolling out rebuild of its Studio Suite 12 at NAB

At this year’s NAB, AlterMedia is showing Studio Suite 12, a ground-up rebuild of its studio, production and post management application. The rebuilt codebase and streamlined interface have made the application lighter, faster and more intuitive; it functions as a web application and yet still has the ability to be customized easily to adapt to varying workflows.

“We literally started over with a blank slate with this version,” says AlterMedia founder Joel Stoner. “The goal was really to reconsider everything. We took the opportunity to shed tons of old code and tired interface paradigms. That said, we maintained the basic structure and flow so existing users would feel comfortable jumping right in. Although there are countless new features, the biggest is that every user can now access Studio Suite 12 through a browser from anywhere.”

Studio Suite 12 now provides better integration within the Internet ecosystem by connecting with Slack and Twilio (for messaging), as well as Google Calendar, Exchange Calendar, Apple Calendar, IMDb, Google Maps, eBay, QuickBooks and Xero accounting software, and more.

Editor Dylan Tichenor to headline SuperMeet at NAB 2018

For those of you heading out to Las Vegas for NAB 2018, the 17th annual SuperMeet will take place on Tuesday, April 10 at the Rio Hotel. Speaking this year will be Oscar-nominated film editor Dylan Tichenor (There Will Be Blood, Zero Dark Thirty). Additionally, there will be presentations from Blackmagic, Adobe, Frame.io, HP/Nvidia, Atomos and filmmaker Bradley Olsen, who will walk the audience through his workflow on Off the Tracks, a documentary about Final Cut Pro X.

Blackmagic Resolve designers Paul Saccone, Mary Plummer, Peter Chamberlain and Rohit Gupta will answer questions on all things DaVinci Resolve, Fusion and Fairlight audio.

Adobe Premiere Pro product manager Patrick Palmer will reveal new features in Adobe’s video solutions for editing, color, graphics and audio workflows.

Frame.io CEO Emery Wells will preview the next generation of its collaboration and workflow tool, which will be released this summer.

Atomos’ Jeromy Young will talk about some of their new partners. He says, “It involves software and camera makers alike.”

As always, the evening will round out with the SuperMeet’s “World Famous Raffle,” where the total value of prizes has now reached over $101,000. Part of that total includes a Blackmagic Advanced Control Panel, worth $29,995.

Doors will open at 4:30pm with the SuperMeet Vendor Showcase, which features 23 software and hardware developers. Those attending can enjoy a few cocktails and mingle with industry peers.

To purchase tickets, and for complete daily updates on the SuperMeet, including agenda updates, directions, transportation options and a current list of raffle prizes, visit the SuperMeet website.

NAB: Adobe’s spring updates for Creative Cloud

By Brady Betzel

Adobe has had a tradition of releasing Creative Cloud updates prior to NAB, and this year is no different. The company has been focused on improving existing workflows and adding new features, some based on Adobe’s Sensei technology, as well as improved VR enhancements.

In this release, Adobe has announced a handful of Premiere Pro CC updates. While I personally don’t think that they are game changing, many users will appreciate the direction Adobe is going. If you are color correcting, Adobe has added the Shot Match function that allows you to match color between two shots. Powered by Adobe’s Sensei technology, Shot Match analyzes one image and tries to apply the same look to another image. Included in this update is the long-requested split screen to compare before and after color corrections.

Motion graphic templates have been improved with new adjustments like 2D position, rotation and scale. Automatic audio ducking has been included in this release as well. You can find this feature in the Essential Sound panel, and once applied it will essentially dip the music in your scene based on dialogue waveforms that you identify.
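Conceptually, the auto-ducking described above just attenuates the music wherever the dialogue envelope is hot. This toy sketch shows the idea — Adobe’s actual implementation surely smooths the gain change with attack and release times rather than switching it instantly:

```python
def duck_music(music, dialogue, threshold=0.1, duck_gain=0.25):
    """Attenuate music samples wherever the dialogue signal is hot.

    music, dialogue: equal-length lists of samples in [-1, 1].
    Wherever |dialogue| exceeds the threshold, the music is dipped
    to duck_gain of its level; elsewhere it passes through.
    """
    out = []
    for m, d in zip(music, dialogue):
        out.append(m * duck_gain if abs(d) > threshold else m)
    return out
```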

Still inside of Adobe Premiere Pro CC, but also applicable in After Effects, is Adobe’s enhanced Immersive Environment. This update is for people who use VR headsets to edit and/or process VFX. Team Project workflows have been updated with better version tracking and indicators of who is using bins and sequences in realtime.

New Timecode Panel
Overall, while these updates are helpful, none are barn burners. The thing that does have me excited is the new Timecode Panel — the biggest new update to the Premiere Pro CC app. For years now, editors have been clamoring for more than just one timecode view. You can view sequence timecodes, source media timecodes from the clips on the different video layers in your timeline, and even the same sequence timecode in a different frame rate (great for editing those 23.98 shows to a 29.97/59.94 clock!). And one of my unexpected favorites is the clip name in the timecode window.
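That multi-rate display boils down to holding real elapsed time constant while re-counting frames at the new rate — and since 29.97/23.976 is exactly 1.25, every 24 source frames map to 30 display frames. A simplified sketch of the conversion (non-drop-frame only, and not Adobe’s code):

```python
def convert_frames(frames, src_rate, dst_rate):
    """Map a frame count at one rate to the nearest frame at another,
    holding real elapsed time constant (e.g. 23.976 -> 29.97)."""
    return round(frames * dst_rate / src_rate)

def to_timecode(frames, rate):
    """Non-drop-frame timecode string for a given frame rate.

    The rate is rounded to its integer base (23.976 -> 24,
    29.97 -> 30), as NDF timecode counts do.
    """
    fps = round(rate)
    s, f = divmod(frames, fps)
    m, s = divmod(s, 60)
    h, m = divmod(m, 60)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"
```

A real 29.97 clock would also apply drop-frame accounting so the displayed time tracks the wall clock; that bookkeeping is omitted here.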

I was testing this feature in a pre-release version of Premiere Pro, and it was a little wonky. First, I couldn’t dock the timecode window. While I could add lines and access the different menus, my changes wouldn’t apply to the row I had selected. In addition, I could only right-click and try to change the first row of contents, but it would choose a random row to change. I am assuming the final release has this all fixed. If the wonkiness gets worked out, this will be a phenomenal (and necessary) addition to Premiere Pro.

Codecs, Master Property, Puppet Tool, more
There have been some compatible codec updates, specifically Raw Sony X-OCN (Venice), Canon Cinema Raw Light (C200) and Red IPP2.

After Effects CC has also been updated with Master Property controls. Adobe said it best during their announcement: “Add layer properties, such as position, color or text, in the Essential Graphics panel and control them in the parent composition’s timeline. Use Master Property to push individual values to all versions of the composition or pull selected changes back to the master.”

The Puppet Tool has been given some love with a new Advanced Puppet Engine, giving access to improved mesh and starch workflows for animating static objects. Beyond making the Add Grain, Remove Grain and Match Grain effects multi-threaded, enhanced disk caching and project management improvements have been added.

My favorite update for After Effects CC is the addition of data-driven graphics. You can drop a CSV or JSON data file and pick-whip data to layer properties to control them. In addition, you can drag and drop data right onto your comp to use the actual numerical value. Data-driven graphics is a definite game changer for After Effects.
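The pick-whip mechanic amounts to binding one field of the data file to a layer property, row by row. A stand-alone sketch of that idea — the JSON fields here are made up, and this is a simulation of the concept, not After Effects’ expression engine:

```python
import json

def bind_property(data_json, field, scale=1.0):
    """Turn one field of a JSON data file into per-item property
    values -- the same idea as pick-whipping a data column onto a
    layer property in After Effects."""
    items = json.loads(data_json)
    return [row[field] * scale for row in items]

# e.g. bar heights for a data-driven chart (hypothetical data)
data = '[{"label": "Q1", "sales": 10}, {"label": "Q2", "sales": 14}]'
heights = bind_property(data, "sales", scale=5.0)
```

Change the data file and the graphic updates to match, with no re-animation — which is exactly what makes this a game changer for templated graphics.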

Audition
While Adobe Audition is an audio mixing application, it has some updates that will directly help anyone looking to mix their edit in Audition. In the past, to get audio to a mixing program like Audition, Pro Tools or Fairlight you would have to export an AAF (or if you are old like me possibly an OMF). In the latest Audition update you can simply open your Premiere Pro projects directly into Audition, re-link video and audio and begin mixing.

I asked Adobe whether you could go back and forth between Audition and Premiere, but it seems like it is a one-way trip. They must be expecting you to export individual audio stems once done in Audition for final output. In the future, I would love to see back-and-forth capabilities between apps like Premiere Pro and Audition, much like the Fairlight tab in Blackmagic’s Resolve. There are some other updates, like larger tracks and under-the-hood improvements, which you can find more info about at https://theblog.adobe.com/creative-cloud/.

Adobe Character Animator has some cool updates, like improvements to character building, but I am not too involved with Character Animator, so you should definitely read about things like the trigger improvements on Adobe’s blog.

Summing Up
In the end, it is great to see Adobe moving forward on updates to its Creative Cloud video offerings. Data-driven animation inside of After Effects is a game-changer. Shot color matching in Premiere Pro is a nice step toward a professional color correction application. Importing Premiere Pro projects directly into Audition is definitely a workflow improvement.

I do have a wishlist though: I would love for Premiere Pro to concentrate on tried-and-true solutions before adding fancy updates like audio ducking. For example, I often hear people complain about how hard it is to export a QuickTime out of Premiere with either stereo or mono/discrete tracks. You need to set up the sequence correctly from the jump, adjust the pan on the tracks, as well as adjust the audio settings and export settings. Doesn’t sound streamlined to me.

In addition, while shot color matching is great, let’s get an Adobe SpeedGrade-style view tab into Premiere Pro so it works like a professional color correction app… maybe Lumetri Pro? I know if the color correction setup was improved I would be way more apt to stay inside of Premiere Pro to finish something instead of going to an app like Resolve.

Finally, consolidating and transcoding used clips with handles is hit or miss inside of Premiere Pro. Can we get a rock-solid consolidate-and-transcode feature? Regardless of these few negatives, Premiere Pro is an industry staple and it works very well.
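For what it’s worth, the core bookkeeping of a consolidate-with-handles pass is straightforward — expand each used range by the handle length and clamp it to the source media bounds — which is part of why the hit-or-miss behavior is so frustrating. A minimal sketch of that range math (frame numbers are illustrative):

```python
def consolidate_range(used_in, used_out, handles, src_in, src_out):
    """Expand a used clip range by handle frames, clamped to the
    source media bounds, giving the span to transcode."""
    return (max(src_in, used_in - handles),
            min(src_out, used_out + handles))

# A clip used from frame 100-200, with one-second (24-frame) handles,
# in source media that runs frames 0-210:
span = consolidate_range(100, 200, 24, 0, 210)
```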

Check out Adobe’s NAB 2018 update video playlist for details on each and every update.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

NextComputing, Z Cam, Assimilate team on turnkey VR studio

NextComputing, Z Cam and Assimilate have teamed up to create a complete turnkey VR studio. Foundation VR Studio is designed to provide all aspects of the immersive production process and help the creatives be more creative.

According to Assimilate CEO Jeff Edson, “Partnering with Z Cam last year was an obvious opportunity to bring together the best of integrated 360 cameras with a seamless workflow for both live and post productions. The key is to continue to move the market from a technology focus to a creative focus. Integrated cameras took the discussions up a level of integration away from the pieces. There have been endless discussions regarding capable platforms for 360; the advantage we have is we work with just about every computer maker as well as the component companies, like CPU and GPU manufacturers. These are companies that are willing to create solutions. Again, this is all about trying to help the market focus on the creative as opposed to debates about the technology, and letting creative people create great experiences and content. Getting the technology out of their way and providing solutions that just work helps with this.”

These companies are offering a few options for the VR Studio.

The Foundation VR Studio, which costs $8,999 and is available now, includes:
• NextComputing Edge T100 workstation
o CPU: 6-core Intel Core i7-8700K 3.7GHz processor
o Memory: 16GB DDR4 2666MHz RAM
• Z Cam S1 6K professional VR camera
• Z Cam WonderStitch software for offline stitching and profile creation
• Assimilate Scratch VR Z post software and live streaming for Z Cam

Then there is the Power VR Studio, for $10,999, which is also available now. It includes:
• NextComputing Edge T100 workstation
o CPU: 10-core Intel Core i9-7900X 3.3GHz processor
o Memory: 32GB DDR4 2666MHz RAM
• Z Cam S1 6K professional VR camera
• Z Cam WonderStitch software for offline stitching and profile creation
• Assimilate Scratch VR Z post software and live streaming for Z Cam

These companies will be at NAB demoing the systems.