
Cobalt Digital’s card-based solution for 4K/HDR conversions

Cobalt Digital was at NAB showing card-based solutions for openGear frames for 4K and HDR workflows. Cobalt’s 9904-UDX-4K up/down/cross converter and image processor offers economical SDR-to-HDR and HDR-to-SDR conversion for 4K.

John Stevens, director of engineering at Burbank post house The Foundation, calls it “a Swiss Army knife” for a post facility.

The 9904-UDX-4K upconverts 12G/6G/3G/HD/SD to either UHD1 3840×2160 square division multiplex (SDM) or two-sample interleave (2SI) quad 3G-SDI-based formats, or it can output SMPTE ST 2082 12G-SDI for single-wire 4K transport. With both 12G-SDI and quad 3G-SDI inputs, the 9904-UDX-4K can downconvert 12G and quad UHD. The 9904-UDX-4K provides an HDMI 2.0 output for economical 4K video monitoring and offers numerous options, including SDR-to-HDR conversion and color correction.

The 9904-UDX-4K-IP model offers the same functionality as the 9904-UDX-4K SDI-based model, plus dual 10GigE ports to support the emerging uncompressed video/audio/data-over-IP standards.

The 9904-UDX-4K-DSP model provides the same functionality as the 9904-UDX-4K model and also offers a DSP-based platform that supports multiple audio DSP options, including Dolby realtime loudness leveling (automatic loudness processing), Dolby E/D/D+ encode/decode and Linear Acoustic Upmax automatic upmixing. Embedded audio and metadata are properly delayed and re-embedded to match any video processing delay, with full adjustment available for audio/video offset.

The product’s high-density openGear design allows for up to five 9904-UDX-4K cards to be installed in one 2RU openGear frame. Card control/monitoring is available via the DashBoard user interface, integrated HTML5 web interface, SNMP or Cobalt’s RESTful-based Reflex protocol.

“I have been looking for a de-embedder that will work with SMPTE ST-2048 raster sizes — specifically 2048×1080 and 4096×2160,” explains Stevens. “The reason this is important is Netflix deliverables require these rasters. We use all embedded audio and I need to de-embed for monitoring. The same Cobalt Digital card will take almost every SDI input from quad link to 12G and output HDMI. There are other converters that will do some of the same things, but I haven’t seen anything that does what this product does.”

NAB 2019: An engineer’s perspective

By John Ferder

Last week I attended my 22nd NAB, and I’ve got the Ross lapel pin to prove it! This was a unique NAB for me. I attended my first 20 NABs with my former employer, and most of those had me setting up the booth visits for the entire contingent of my co-workers and making sure that the vendors knew we were at each booth and were ready to go. Thursday was my “free day” to go wandering and looking at the equipment, cables, connectors, test gear, etc., that I was looking for.

This year, I’m part of a new project, so I went with a shopping list and a rough schedule with the vendors we needed to see. While I didn’t get everywhere I wanted to go, the three days were very full and very rewarding.

Beck Video IP panel

Sessions and Panels
I also got the opportunity to attend the technical sessions on Saturday and Sunday. I spent my time at the BEITC in the North Hall and the SMPTE Future of Cinema Conference in the South Hall. Beck TV gave an interesting presentation on constructing IP-based facilities of the future. While SMPTE ST 2110 has been completed and issued, there are still implementation issues, as NMOS is still being developed. Today’s systems are, and for the time being will remain, hybrid facilities. The decision to be made is whether a facility will be built on an IP routing switcher core with gateways to SDI, or on an SDI routing switcher core with gateways to IP.

Although more expensive, building around an IP core would be more efficient and future-proof. Fiber infrastructure design, test equipment and finding engineers who are proficient in both IP and broadcast (the “Purple Squirrels”) are large challenges as well.

A lot of attention was also paid to cloud production and distribution, both in the BEITC and the FoCC. One such presentation, at the FoCC, was on VFX in the cloud with an eye toward the development of 5G. Nathaniel Bonini of BeBop Technology reported that BeBop has a new virtual studio partnership with Avid, and that the cloud allows tasks to be performed in a “massively parallel” way. He expects that 5G mobile technology will facilitate virtualization of the network.

VFX in the Cloud panel

Ralf Schaefer, of the Fraunhofer Heinrich-Hertz Institute, expressed his belief that all devices will be attached to the cloud via 5G, resulting in no cables and no mobile storage media. 5G for AR/VR distribution will render the scene in the network and transmit it directly to the viewer. Denise Muyco of StratusCore provided a link to a virtual workplace: https://bit.ly/2RW2Vxz. She felt that 5G would assist in the speed of the collaboration process between artist and client, making it nearly “friction-free.” While there are always security concerns, 5G would also help the prosumer creators to provide more content.

Chris Healer of The Molecule stated that 5G should help to compress VFX and production workflows, enable cloud computing to work better and perhaps provide realtime feedback for better takes, showing live composites of VFX renders to production crews in remote locations.

The Floor
I was very impressed with a number of manufacturers this year. Ross Video demonstrated new capabilities of Inception and OverDrive. Ross also showed its new Furio SkyDolly three-wheel rail camera system. In addition, 12G single-link capability was announced for Acuity, Ultrix and other products.

ARRI AMIRA (Photo by Cotch Diaz)

ARRI showed a cinematic multicam system built using the AMIRA camera with a DTS FCA fiber camera adapter back and a base station controllable by a Sony RCP1500 or Skaarhoj RCP. The Sony panel will make broadcast-centric people comfortable, but I was very impressed with the versatility of the Skaarhoj RCP. The system is available with EF, PL or B4 mount lenses.

During the show, I learned from one of the manufacturers that one of my favorite OLED evaluation monitors is going to be discontinued. This was bad news for the new project I’ve embarked on. Then we came across the Plura booth in the North Hall. Plura was showing a new OLED monitor, the PRM-224-3G. It is a 24.5-inch-diagonal OLED featuring two 3G/HD/SD-SDI and three analog inputs, built-in waveform monitors and vectorscopes, LKFS audio measurement, PQ and HLG support, 10-bit color depth, 608/708 closed caption monitoring and more, all for a very attractive price.

Sony showed the new HDC-3100/3500 3xCMOS HD cameras with global shutter. These have an upgrade program to UHD/HDR with an optional processor board and signal format software, and a 12G-SDI extension kit as well. There is an optional single-mode fiber connector kit to extend the maximum distance between camera and CCU to 10 kilometers. The CCUs work with the established 1000/1500 series of remote control panels and master setup units.

Sony’s HDC-3100/3500 3xCMOS HD camera

Canon showed its new line of 4K UHD lenses. One of my favorite lenses has been the HJ14ex4.3B HD wide-angle portable lens, which I have installed in many of the studios I’ve worked in. They showed the CJ14ex4.3B at NAB, and I was even more impressed with it. The 96.3-degree horizontal angle of view is stunning, and the minimization of chromatic aberration is carried over and perhaps improved from the HJ version. It features correction data that support the BT.2020 wide color gamut. It works with the existing zoom and focus demand controllers for earlier lenses, so it’s easily integrated into existing facilities.

Foot Traffic
The official total of registered attendees was 91,460, down from 92,912 in 2018. The Evertz booth was actually easy to walk through at 10 a.m. on Monday, which I found surprising given the breadth of interesting new products and technologies Evertz had to show this year. The South Hall had the big crowds, but Wednesday seemed emptier than usual, almost like a Thursday.

The NAB announced that next year’s exhibition will begin on Sunday and end on Wednesday. That change might boost overall attendance, but I wonder how adversely it will affect the attendance at the conference sessions themselves.

I still enjoy attending NAB every year, seeing the new technologies and meeting with colleagues and former co-workers and clients. I hope that next year’s NAB will be even better than this year’s.

Main Image: Barbie Leung.


John Ferder is the principal engineer at John Ferder Engineer, currently Secretary/Treasurer of SMPTE, an SMPTE Fellow, and a member of IEEE. Contact him at john@johnferderengineer.com.

NAB 2019: A cinematographer’s perspective

By Barbie Leung

As an emerging cinematographer, I always wanted to attend an NAB show, and this year I had my chance. I found that no amount of research can prepare you for the sheer size of the show floor, not to mention the backrooms, panels and after-hours parties. As a camera operator as well as a cinematographer who is invested in the post production and exhibition end of the spectrum, I found it absolutely impossible to see everything I wanted to or catch up with all the colleagues and vendors I wanted to. This show is a massive and draining ride.

Panasonic EV1

There was a lot of buzz in the ether about 5G technology. The consensus seems to be that fast, accurate 5G will be the tipping point in implementing a lot of the tech that’s been talked about for years but hasn’t quite taken off yet, including the feasibility of autonomous vehicles and 8K streaming stateside.

It’s hard to deny the arrival of 8K technology while staring at the detail and textures on an 80-inch Sharp 8K professional display. Every roof tile, every wave in the ocean is rendered in rich, stunning detail.

In response to the resolution race, on the image-capture end of things, ARRI had already announced and started taking orders for the Alexa Mini LF — its long-awaited entry into the large-format game — in the week before NAB.

Predictably, at NAB we saw many lens manufacturers highlighting full-frame coverage. Canon introduced its Sumire Prime lenses, while Fujinon announced the Premista 28-100mm T2.9 full-format zoom.

Sumire Prime lenses

Camera folks, including many ASC members, are embracing large format capture for sure, but some insist the appeal lies not so much in the increased resolution, but rather in the depth and overall image quality.

Meanwhile, back in 35mm-sensor land, Panasonic continues its energetic push of the EVA1 camera. Aside from presentations at its booth emphasizing “cinematic” images from this compact 5.7K camera, Panasonic has done a subtle but effective job of disseminating the EVA1 throughout the trade show floor. At the Atomos booth, you’ll find director/cinematographers like Elle Schneider presenting work shot on the EVA1, recording to Atomos, with the camera balanced on a Ronin-S; stop by Tiffen and you’ll find an EVA1 being flown next to the Alexa Mini.

There was a ton of motion control at the show, from Shotover’s new compact B1 gyro-stabilized camera system to the affable folks at Arizona-based Defy, who showed off their Dactylcam Pro, an addictively smooth-to-operate cable-suspension rig. The Bolt Cinebot demonstrated high-speed robotic arms, complete with a spinning hologram.

Garrett Brown at the Tiffen booth.

All this new gimbal technology is an ever-evolving game changer. Steadicam inventor Garrett Brown was on hand at the Tiffen booth to show the new M2 sled, which has motors elegantly built into the base. He enthusiastically heralded that camera operators can go faster and more “dangerously” than ever. There was so much motion control that it vied for attention alongside all the talk of 5G, 8K and LED lighting.

Some veterans of the show have expressed that this year’s show felt “less exciting” than shows of the past eight to 10 years. There were fewer big product launch announcements, perhaps due to past years where companies have been unable to fulfill the rush of post-NAB orders for new products for 12 or even 18 months. Vendors have been more conservative with what to hype, more careful with what to promise.

For a new attendee like me, there was more than enough new tech to explore. Above all else, NAB is really about the people you meet. The tech will be new next year, but the relationships you start and build at NAB are meant to last a career.

Main Image: ARRI’s Alexa Mini LF.


Barbie Leung is a New York-based cinematographer and camera operator working in independent film and branded content. Her work has played Sundance, the Tribeca Film Festival and Outfest. You can follow her on Instagram at @barbieleungdp.

Colorfront at NAB with 8K HDR, product updates

Colorfront, which makes on-set dailies and transcoding systems, has rolled out new 8K HDR capabilities and updates across its product lines. The company has also deepened its technology partnership with AJA and entered into a new collaboration with Pomfort to bring more efficient color and HDR management on-set.

Colorfront Transkoder is a post workflow tool for handling UHD, HDR camera, color and editorial/deliverables formats, with recent customers such as Sky, Pixelogic, The Picture Shop and Hulu. With a new HDR GUI, Colorfront’s Transkoder 2019 performs the realtime decompression/de-Bayer/playback of Red and Panavision DXL2 8K R3D material displayed on a Samsung 82-inch Q900R QLED 8K Smart TV in HDR and in full 8K resolution (7680×4320). The de-Bayering process is optimized through Nvidia GeForce RTX graphics cards with Turing GPU architecture (also available on Colorfront On-Set Dailies 2019), with 8K video output (up to 60p) using AJA Kona 5 video cards.

“8K TV sets are becoming bigger, as well as more affordable, and people are genuinely awestruck when they see 8K camera footage presented on an 8K HDR display,” said Aron Jaszberenyi, managing director, Colorfront. “We are actively working with several companies around the world originating 8K HDR content. Transkoder’s new 8K capabilities — across on-set, post and mastering — demonstrate that 8K HDR is perfectly accessible to an even wider range of content creators.”

Powered by a re-engineered version of Colorfront Engine and featuring the HDR GUI and 8K HDR workflow, Transkoder 2019 supports camera/editorial formats including Apple ProRes RAW, Blackmagic RAW, ARRI Alexa LF/Alexa Mini LF and Codex HDE (High Density Encoding).

Transkoder 2019’s mastering toolset has been further expanded to support Dolby Vision 4.0 as well as Dolby Atmos for the home with IMF and Immersive Audio Bitstream capabilities. The new Subtitle Engine 2.0 supports CineCanvas and IMSC 1.1 rendering for preservation of content, timing, layout and styling. Transkoder can now also package multiple subtitle language tracks into the timeline of an IMP. Further features support fast and efficient audio QC, including solo/mute of individual tracks on the timeline, and a new render strategy for IMF packages enabling independent audio and video rendering.

Colorfront also showed the latest versions of its On-Set Dailies and Express Dailies products for motion pictures and episodic TV production. On-Set Dailies and Express Dailies both now support ProRes RAW, Blackmagic RAW, ARRI Alexa LF/Alexa Mini LF and Codex HDE. As with Transkoder 2019, the new version of On-Set Dailies supports real-time 8K HDR workflows to support a set-to-post pipeline from HDR playback through QC and rendering of HDR deliverables.

In addition, AJA Video Systems has released v3.0 firmware for its FS-HDR realtime HDR/WCG converter and frame synchronizer. The update introduces enhanced coloring tools together with several other improvements for broadcast, on-set, post and pro AV HDR production developed by Colorfront.

A new, integrated Colorfront Engine Film Mode offers an ACES-based grading and look creation toolset with ASC Color Decision List (CDL) controls, built-in LOOK selection including film emulation looks, and variable Output Mastering Nit Levels for PQ, HLG Extended and P3 colorspace clamp.

Since launching in 2018, FS-HDR has been used on a wide range of TV and live outside broadcast productions, as well as motion pictures including Paramount Pictures’ Top Gun: Maverick, shot by Claudio Miranda, ASC.

Colorfront licensed its HDR Image Analyzer software to AJA for AJA’s HDR Image Analyzer in 2018. A new version of AJA HDR Image Analyzer is set for release during Q3 2019.

Finally, Colorfront and Pomfort have teamed up to integrate their respective HDR-capable on-set systems. This collaboration, harnessing Colorfront Engine, will include live CDL reading in ACES pipelines between Colorfront On-Set/Express Dailies and Pomfort LiveGrade Pro, giving motion picture productions better control of HDR images while simplifying their on-set color workflows and dailies processes.

AWS at NAB with a variety of partners, cloud workflows

During NAB 2019, Amazon Web Services (AWS) showcased advances for content creation, media supply chains and content distribution that improve agility and enhance quality across video workflows. Demonstrations included enhanced live and on-demand video workflows, such as next-gen transcoding, studio in the cloud, content protection, low latency and personalization. The company also highlighted cloud-based machine learning capabilities for content redaction, highlight creation, video clipping, live subtitling and metadata extraction.

AWS was joined by 12 technology partners in showing solutions that help users create, protect, distribute and monetize streaming video content. More than 60 Amazon Partner members across the show floor demonstrated media solutions built on AWS and interoperable with AWS services to deliver scalable video workflows.

Here are some workflows highlighted:
• Studio in the cloud – Users can deploy a creative studio in the cloud for visual effects, animation and editing workloads. They can scale rendering, virtual workstations and data storage globally with AWS Thinkbox Deadline, Amazon Elastic Compute Cloud (EC2) instances and AWS Cloud storage options such as Amazon Simple Storage Service (Amazon S3), Amazon FSx and more.
• Next-generation transcoding – AWS Elemental MediaConvert spotlighted advanced features for file-based video processing. Support for IMF inputs and CMAF output simplifies video delivery, and integrated Quality-Defined Variable Bitrate (QVBR) rate control enables high-quality video while lowering bitrates, storage and bandwidth requirements.
• Cloud DVR services – AWS Elemental MediaPackage enables an end-to-end cloud DVR workflow that lets content providers deliver DVR-like experiences, such as catch-up and start-over functionality for viewing on mobile and other over-the-top (OTT) devices.
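The QVBR rate control mentioned above is expressed in MediaConvert's job-settings payload. As a rough sketch of what that JSON looks like when built in Python (the bucket paths and quality level are placeholders, not from the source):

```python
def qvbr_job_settings(input_uri: str, dest_uri: str, quality: int = 7,
                      max_bitrate: int = 5_000_000) -> dict:
    """Build a MediaConvert job-settings payload using QVBR rate control."""
    return {
        "Inputs": [{"FileInput": input_uri}],
        "OutputGroups": [{
            "OutputGroupSettings": {
                "Type": "FILE_GROUP_SETTINGS",
                "FileGroupSettings": {"Destination": dest_uri},
            },
            "Outputs": [{
                "ContainerSettings": {"Container": "MP4"},
                "VideoDescription": {
                    "CodecSettings": {
                        "Codec": "H_264",
                        "H264Settings": {
                            # QVBR targets a perceptual quality level and
                            # lets bitrate float beneath a hard ceiling.
                            "RateControlMode": "QVBR",
                            "QvbrSettings": {"QvbrQualityLevel": quality},
                            "MaxBitrate": max_bitrate,
                        },
                    },
                },
            }],
        }],
    }

settings = qvbr_job_settings("s3://my-bucket/in/master.mov", "s3://my-bucket/out/")
# Submitting it would use boto3 (endpoint discovered via describe_endpoints):
# mc = boto3.client("mediaconvert", endpoint_url=endpoint)
# mc.create_job(Role=role_arn, Settings=settings)
```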

AWS also highlighted intelligent workflows and automated capabilities:
• Media-to-cloud migration – Media asset management tools integrate with AWS Elemental MediaConvert, Amazon S3 and Amazon CloudFront to accelerate migration of large-scale video archives into the cloud. Built-in metadata tools improve search and management for massive media archives.
• Smart language workflows – AWS Elemental Media Services and Amazon Machine Learning work together to automate realtime transcription, caption creation and multi-language subtitling and dubbing, as well as creation of video clips based on caption text.
• Deep media archive – The new Amazon S3 Glacier Deep Archive storage class is a low-cost cloud storage offering that enables customers to eliminate digital tape from their media infrastructures. It is ideally suited to cold media archives and to second copy and disaster recovery needs.
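Landing an object directly in Deep Archive is just a storage-class flag on the S3 put. A minimal sketch, with hypothetical bucket and key names:

```python
def deep_archive_put_args(bucket: str, key: str, body: bytes) -> dict:
    """Arguments for an S3 put that writes the object straight to Deep Archive."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "StorageClass": "DEEP_ARCHIVE",  # S3 Glacier Deep Archive storage class
    }

args = deep_archive_put_args("my-archive-bucket", "masters/show101.mxf", b"...")
# boto3.client("s3").put_object(**args)
# Retrieval later requires restore_object() with a restore window, since
# Deep Archive objects are not directly readable.
```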

Quantum offers new F-Series NVMe storage arrays

During the NAB show, Quantum introduced its new F-Series NVMe storage arrays designed for performance, availability and reliability. Using non-volatile memory express (NVMe) Flash drives for ultra-fast reads and writes, the series supports massive parallel processing and is intended for studio editing, rendering and other performance-intensive workloads using large unstructured datasets.

Incorporating the latest Remote Direct Memory Access (RDMA) networking technology, the F-Series provides direct access between workstations and the NVMe storage devices, resulting in predictable and fast network performance. By combining these hardware features with the new Quantum Cloud Storage Platform and the StorNext file system, the F-Series offers end-to-end storage capabilities for post houses, broadcasters and others working in rich media environments, such as visual effects rendering.

The first product in the F-Series is the Quantum F2000, a 2U dual-node server with two hot-swappable compute canisters and up to 24 dual-ported NVMe drives. Each compute canister can access all 24 NVMe drives and includes processing power, memory and connectivity specifically designed for high performance and availability.

The F-Series is based on the Quantum Cloud Storage Platform, a software-defined block storage stack tuned specifically for video and video-like data. The platform eliminates data services unrelated to video while enhancing data protection, offering networking flexibility and providing block interfaces.

According to Quantum, the F-Series is as much as five times faster than traditional Flash storage/networking, delivering extremely low latency and hundreds of thousands of IOPS per chassis. The series allows users to reduce infrastructure costs by moving from Fibre Channel to Ethernet IP-based infrastructures. Additionally, users leveraging a large number of HDDs or SSDs to meet their performance requirements can gain back racks of data center space.

The F-Series is the first product line based on the Quantum Cloud Storage Platform.

HP shows off new HP Z6 and Z8 G4 workstations at NAB

HP was at NAB demoing its new HP Z6 and Z8 G4 workstations, which feature Intel Xeon Scalable processors and Intel Optane DC persistent memory technology to eliminate the barrier between memory and storage for compute-intensive workflows, including machine learning, multimedia and VFX. The new workstations offer accelerated performance with a processor architecture that allows users to work faster and more efficiently.

Intel Optane DC allows users to improve system performance by moving large datasets closer to the CPU so they can be accessed, processed and analyzed in realtime and in a more affordable way. Because the memory is persistent, no data is lost after a power cycle or application closure. Once applications are written to take advantage of this new technology, users will benefit from accelerated workflows and little or no downtime.
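The load/store, survives-a-restart behavior described here can be illustrated with an ordinary memory-mapped file; on real persistent-memory hardware the file would live on a DAX-mounted pmem device instead of a temp directory. This is a generic sketch of the programming model, not HP's or Intel's API:

```python
import mmap
import os
import struct
import tempfile

# On a real pmem system this path would be on a DAX-mounted filesystem
# (e.g. /mnt/pmem0), so mmap stores would hit persistent media directly.
path = os.path.join(tempfile.mkdtemp(), "frame_index.bin")
with open(path, "wb") as f:
    f.truncate(4096)                      # reserve one page

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 4096)
    struct.pack_into("<Q", mm, 0, 12345)  # store a frame count at offset 0
    mm.flush()                            # analogous to a pmem persist/flush
    mm.close()

# After a "power cycle" (here: simply reopening), the value is still there.
with open(path, "rb") as f:
    count, = struct.unpack("<Q", f.read(8))
print(count)  # 12345
```

The point of Optane DC is that the flush-and-survive step happens at memory speed, without a block-storage driver in the path.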

Targeting 8K video editing in realtime and for rendering workflows, the HP Z6 G4 workstation is equipped with two next-generation Intel Xeon processors providing up to 48 total processor cores in one system, Nvidia and AMD graphics and 384GB of memory. Users can install professional-grade storage hardware without using standard PCIe slots, offering the ability to upgrade over time.

Powered by up to 56 processing cores and up to 3TB of high-speed memory, the HP Z8 G4 workstation can run complex 3D simulations, supporting VFX workflows and handling advanced machine learning algorithms. It is certified for some of the most-used software apps, including Autodesk Flame and DaVinci Resolve.

HP’s Remote Graphics Software (RGS), included with all HP Z workstations, enables remote workstation access from any Windows, Linux or Mac device.

Avid is collaborating with HP to test RGS with Media Composer|Cloud VM.

The HP Z6 G4 workstation with new Intel Xeon processors is available now for the base price of $2,372. The HP Z8 G4 workstation starts at $2,981.

AI and deep learning at NAB 2019

By Tim Nagle

If you’ve been there, you know. Attending NAB can be both exciting and a chore. The vast show floor spreads across three massive halls and several hotels, and it will challenge even the most comfortable shoes. With an engineering background and my daily position as a Flame artist, I am definitely a gear-head, but I feel I can hardly claim that title at these events.

Here are some of my takeaways from the show this year…

Tim Nagle

8K
Having listened to the rumor mill, this year’s event promised to be exciting. And for me, it did not disappoint. First impressions: 8K infrastructure is clearly the goal of the manufacturers. Massive data rates and more Ks are becoming the norm. Everybody seemed to have an 8K workflow announcement. As a Flame artist, I’m not exactly looking forward to working on 8K plates. Sure, it is a glorious number of pixels, but the challenges are very real. While this may be the hot topic of the show, the fact that it is on the horizon further solidifies the need for the industry at large to have a solid 4K infrastructure. Hey, maybe we can even stop delivering SD content soon? All kidding aside, the systems and infrastructure elements being designed are quite impressive. Seeing storage solutions that can read and write at these astronomical speeds is just jaw dropping.
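To put those "massive data rates" in numbers, a back-of-the-envelope calculation (assuming 10-bit 4:2:2, roughly 20 bits per pixel, which is my assumption rather than anything quoted at the show) shows why 8K storage throughput is jaw-dropping:

```python
def uncompressed_rate_gbps(width: int, height: int, fps: int,
                           bits_per_pixel: int) -> float:
    """Raw video bandwidth in gigabits per second."""
    return width * height * bits_per_pixel * fps / 1e9

# 8K UHD (7680x4320) at 60p, 10-bit 4:2:2 = ~20 bits per pixel.
rate = uncompressed_rate_gbps(7680, 4320, 60, 20)
print(f"{rate:.1f} Gb/s")  # ~39.8 Gb/s of uncompressed video, per stream
```

That is several 12G-SDI links' worth before any audio, ancillary data or headroom, which is why every 8K demo came attached to a serious storage story.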

Young Attendees
Attendance remained relatively stable this year, but what I did notice was a lot of young faces making their way around the halls. It seemed like high school and university students were able to take advantage of interfacing with manufacturers, as well as some great educational sessions. This is exciting, as I really enjoy watching young creatives get the opportunity to express themselves in their work and make the rest of us think a little differently.

Blackmagic Resolve 16

AI/Deep Learning
Speaking of the future, AI and deep learning algorithms are being implemented into many parts of our industry, and this is definitely something to watch for. The possibilities to increase productivity are real, but these technologies are still relatively new and need time to mature. Some of the post apps taking advantage of these algorithms come from Blackmagic, Autodesk and Adobe.

At the show, Blackmagic announced their Neural Engine AI processing, which is integrated into DaVinci Resolve 16 for facial recognition, speed warp estimation and object removal, to name just a few. These features will add to the productivity of this software, further claiming its place among the usual suspects for more than just color correction.

Flame 2020

The Autodesk Flame team has implemented deep learning into its app as well. This promises really impressive uses for retouching and relighting, as well as creating depth maps of scenes. Autodesk demoed a shot of a woman on the beach, with no real key-light possibility and very flat, diffused lighting in general. With a few nodes, the team was able to relight her face to create a sense of depth and lighting direction. The same technique can be used for skin retouching as well, which is very useful in my everyday work.

Adobe has also been working on their implementation of AI with the integration of Sensei. In After Effects, the content-aware algorithms will help to re-texture surfaces, remove objects and edge blend when there isn’t a lot of texture to pull from. Watching a demo artist move through a few shots, removing cars and people from plates with relative ease and decent results, was impressive.

These demos have all made their way online, and I encourage everyone to watch. Seeing where we are headed is quite exciting. We are on our way to these tools being very accurate and useful in everyday situations, but they are all very much a work in progress. Good news, we still have jobs. The robots haven’t replaced us yet.


Tim Nagle is a Flame artist at Dallas-based Lucky Post.

NAB 2019: First impressions

By Mike McCarthy

There are always a slew of new product announcements during the week of NAB, and this year was no different. As a Premiere editor, the developments from Adobe are usually the ones most relevant to my work and life. Similar to last year, Adobe released its software updates a week before NAB instead of announcing them for eventual release months later.

The biggest new feature in the Adobe Creative Cloud apps is After Effects’ new “Content Aware Fill” for video. This will use AI to generate image data to automatically replace a masked area of video, based on surrounding pixels and surrounding frames. This functionality has been available in Photoshop for a while, but the challenge of bringing that to video is not just processing lots of frames but keeping the replaced area looking consistent across the changing frames so it doesn’t stand out over time.
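Adobe hasn't published how Content Aware Fill for video works internally, but the temporal idea described here (borrowing a pixel from frames where it isn't masked, rather than inventing it from scratch) can be sketched in toy form:

```python
import numpy as np

def temporal_fill(frames: np.ndarray, masks: np.ndarray) -> np.ndarray:
    """Toy temporal fill: replace each masked pixel with the average of
    that same pixel taken from frames where it is NOT masked.
    frames: (T, H, W) float array; masks: (T, H, W) bool, True = replace."""
    valid = ~masks
    counts = valid.sum(axis=0)                  # how many clean samples per pixel
    sums = (frames * valid).sum(axis=0)         # sum over clean frames only
    mean = np.divide(sums, np.maximum(counts, 1))
    out = frames.copy()
    out[masks] = np.broadcast_to(mean, frames.shape)[masks]
    return out

# A 3-frame, 2x2 clip where frame 1 has one "damaged" pixel (value 99).
frames = np.array([[[10, 10], [10, 10]],
                   [[99, 10], [10, 10]],
                   [[20, 10], [10, 10]]], dtype=float)
masks = np.zeros_like(frames, dtype=bool)
masks[1, 0, 0] = True
filled = temporal_fill(frames, masks)
print(filled[1, 0, 0])  # 15.0 -- the average of 10 and 20 from frames 0 and 2
```

The hard part Adobe faces, keeping the filled region consistent as content moves between frames, is exactly what this naive per-pixel average ignores.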

The other key part to this process is mask tracking, since masking the desired area is the first step in that process. Certain advances have been made here, but based on tech demos I saw at Adobe Max, more is still to come, and that is what will truly unlock the power of AI that they are trying to tap here. To be honest, I have been a bit skeptical of how much AI will impact film production workflows, since AI-powered editing has been terrible, but AI-powered VFX work seems much more promising.

Adobe’s other apps got new features as well, with Premiere Pro adding Free-Form bins for visually sorting through assets in the project panel. This affects me less, as I do more polishing than initial assembly when I’m using Premiere. They also improved playback performance for Red files, acceleration with multiple GPUs and certain 10-bit codecs. Character Animator got a better puppet rigging system, and Audition got AI-powered auto-ducking tools for automated track mixing.

Blackmagic
Elsewhere, Blackmagic announced a new version of Resolve, as expected. Blackmagic RAW is supported on a number of new products, but I am not holding my breath to use it in Adobe apps anytime soon, similar to ProRes RAW. (I am just happy to have regular ProRes output available on my PC now.) They also announced a new 8K Hyperdeck product that records quad 12G SDI to HEVC files. While I don’t think that 8K will replace 4K television or cinema delivery anytime soon, there are legitimate markets that need 8K resolution assets. Surround video and VR would be one, as would live background screening instead of greenscreening for composite shots. No image replacement in post, as it is capturing in-camera, and your foreground objects are accurately “lit” by the screens. I expect my next major feature will be produced with that method, but the resolution wasn’t there for the director to use that technology for the one I am working on now (enter 8K…).

AJA
AJA was showing off the new Ki Pro Go, which records up to four separate HD inputs to H.264 on USB drives. I assume this is intended for dedicated ISO recording of every channel of a live-switched event or any other multicam shoot. Each channel can record up to 1080p60 at 10-bit color to H.264 files in MP4 or MOV at bitrates up to 25Mb/s.

HP
HP had one of their existing Z8 workstations on display, demonstrating the possibilities that will be available once Intel releases their upcoming DIMM-based Optane persistent memory technology to the market. I have loosely followed the Optane story for quite a while, but had not envisioned this impacting my workflow at all in the near future due to software limitations. But HP claims that there will be options to treat Optane just like system memory (increasing capacity at the expense of speed) or as SSD drive space (with DIMM slots having much lower latency to the CPU than any other option). So I will be looking forward to testing it out once it becomes available.

Dell
Dell was showing off their relatively new 49-inch double-wide curved display. The 4919DW has a resolution of 5120×1440, making it equivalent to two 27-inch QHD displays side by side. I find that 32:9 aspect ratio to be a bit much for my tastes, with 21:9 being my preference, but I am sure there are many users who will want the extra width.

Digital Anarchy
I also had a chat with the people at Digital Anarchy about their Premiere Pro-integrated Transcriptive audio transcription engine. Having spent the last three months editing a movie that is split between English and Mandarin dialogue, needing to be fully subtitled in both directions, I can see the value in their toolset. It harnesses the power of AI-powered transcription engines online and integrates the results back into your Premiere sequence, creating an accurate script as you edit the processed clips. In my case, I would still have to handle the translations separately once I had the Mandarin text, but this would allow our non-Mandarin-speaking team members to edit the Mandarin assets in the movie. And it will be even more useful when it comes to creating closed captioning and subtitles, which we have been doing manually on our current project. I may post further info on that product once I have had a chance to test it out myself.
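The captioning step described above ultimately amounts to turning timestamped transcript segments into a subtitle file. A minimal sketch of that conversion to the common SubRip (.srt) format — the segment data here is invented for illustration and is not Transcriptive's actual output format:

```python
def to_timestamp(seconds):
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3600000)
    m, rem = divmod(rem, 60000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def to_srt(segments):
    """segments: list of (start_sec, end_sec, text) tuples."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, 1):
        blocks.append(
            f"{i}\n{to_timestamp(start)} --> {to_timestamp(end)}\n{text}\n"
        )
    return "\n".join(blocks)

print(to_srt([(0.0, 2.5, "Hello."), (2.5, 5.0, "Welcome back.")]))
```

Once the transcription engine supplies accurate timings, generating (and regenerating) caption files like this is essentially free, which is where the manual workflow loses out.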

Summing Up
There were three halls of other products to look through and check out, but overall, I was a bit underwhelmed at the lack of true innovation I found at the show this year.

Full disclosure: I was only able to attend the first two days of the exhibition, so I may have overlooked something significant. But based on what I did see, there isn’t much else that I am excited to try out or that I expect to have a serious impact on how I do my various jobs.

It feels like most of the new things we are seeing are commoditized versions of products that were truly innovative when they were initially released, but have since been only incrementally fleshed out.

There seems to be much less pioneering of truly new technology and more repackaging of existing technologies into other products. I used to come to NAB to see all the flashy new technologies and products, but now it feels like the main thing I am doing there is a series of annual face-to-face meetings, and that’s not necessarily a bad thing.

Until next year…


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Sony’s NAB updates — a cinematographer’s perspective

By Daniel Rodriguez

With its NAB offerings, Sony once again showed that they have a firm presence in nearly every stage of production, be it motion picture, broadcast media or short form. The company continues to keep up to date with the current demands while simultaneously preparing for the inevitable wave of change that seems to come faster and faster each year. While the introduction of new hardware was kept to a short list this year, many improvements to existing hardware and software were released to ensure Sony products — both new and existing — still have a firm presence in the future.

The ability to easily access, manipulate, share and stream media has always been a priority for Sony. This year at NAB, Sony continued to demonstrate its IP Live, SR Live, XDCAM Air and Media Backbone Hive platforms, which give users the opportunity to manage media all over the globe. IP Live enables remote production: the core processing hardware stays at a central facility while operators access it from anywhere. This extends to 4K and HDR/SDR streaming as well, which is where SR Live comes into play. SR Live allows a native 4K HDR signal to be processed into full-HD and standard SDR signals, and a core improvement is the ability to adjust the conversion curves during a live broadcast to correct any issues that arise in converting HDR signals to SDR.
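Sony doesn't publish SR Live's curve math, but the underlying operation is a tone-mapping curve from HDR to SDR with adjustable roll-off in the highlights. A toy illustration of the idea — this is a generic knee curve, not SR Live's actual algorithm, and the parameter values are invented:

```python
def hdr_to_sdr(x, knee_start=0.75, knee_slope=0.2):
    """Map a normalized HDR luminance value into SDR range.

    Values below knee_start pass through linearly; highlights above
    it are compressed by knee_slope so they roll off instead of
    clipping. These are the kinds of parameters an operator would
    nudge on air when the converted SDR picture looks wrong."""
    if x <= knee_start:
        return x
    return knee_start + (x - knee_start) * knee_slope

print(hdr_to_sdr(0.5))  # midtones pass through unchanged → 0.5
print(hdr_to_sdr(1.5))  # a bright highlight is compressed → 0.9
```

Being able to trim a curve like this live, rather than committing to one fixed conversion, is the improvement Sony was highlighting.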

For other media, including XDCAM-based cameras, XDCAM Air allows for the wireless transfer and streaming of most media through QoS services, turning almost any wireless-capable camera into a readily accessible streaming tool.

Media Backbone Hive allows users to access their media anywhere they want. Rather than just being an elaborate cloud service, Media Backbone Hive allows internal Adobe Cloud-based editing, accepts nearly every file type, allows a user to embed metadata and makes searching simple with keywords and phrases that are spoken in the media itself.

For the broadcast market, Sony introduced the HDC-5500, a 4K HDR three-CMOS-sensor camera that the company is calling its “flagship” in this segment. Along with 4K HDR and high frame rates, the camera offers a global shutter, which eliminates both the banding caused by strobing lights and the skew that rolling-shutter sensors put on fast action. The camera outputs 4K over 12G-SDI, allowing for 4K and HDR monitoring, and as these outputs continue to become the norm, the HDC-5500 will surely be a hit with users, especially with the addition of the global shutter.
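The skew a global shutter eliminates is easy to quantify: with a rolling shutter, each sensor row is read slightly later than the one above it, so a vertical edge moving horizontally lands in a different column on every row. A rough sketch — the speed and readout figures are made up for illustration, not HDC-5500 specs:

```python
def rolling_shutter_skew(speed_px_per_s, readout_s):
    """Horizontal skew, in pixels, of a vertical edge between the
    first and last sensor rows, given the object's on-screen speed
    and the sensor's top-to-bottom readout time."""
    return speed_px_per_s * readout_s

# A fast pan moving 2,000 px/s across a sensor with a 15ms readout:
print(rolling_shutter_skew(2000, 0.015))  # → 30.0 pixels of lean
```

A global shutter reads every row at the same instant, so the readout time — and with it the skew — effectively drops to zero.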

Sony is very much a company that likes to focus on the longevity of their previous releases… cameras especially. Sony’s FS7 is a camera that has excelled in its field since its introduction in 2014, and to this day is an extremely popular choice for short form, narrative and broadcast media. Like other Sony camera bodies, the FS7 allows for modular builds and add-ons, and this is where the new CBK-FS7BK ENG Build-Up Kit comes in. Sporting a shoulder mount and ENG viewfinder, the kit includes an extension in the back that allows for two wireless audio inputs, RAW output, streaming and file transfer via Wireless LAN or 4G/LTE connection, as well as QoS streaming (only through XDCAM Air) and timecode input. This CBK-FS7BK ENG Build-Up Kit turns the FS7 into an even more well-rounded workhorse.

The Sony Venice is Sony’s flagship cinema camera, replacing the F65 — still a brilliant and popular camera, having popped up as recently as last year’s Annihilation. The Venice takes a leap further by entering the full-frame, VistaVision market. Boasting top-of-the-line specs and a smaller, more modular build than the F65, the camera isn’t exactly a new release — it came out in November 2017 — but Sony has secured longevity for its flagship at a time when other camera manufacturers are just releasing their own VistaVision-sensored cameras and smaller alternatives.

Sony recently released a firmware update for the Venice that adds X-OCN XT — their highest-quality flavor of compressed 16-bit RAW — plus two new imager modes that let the camera sample 5.7K 16:9 in full frame and 6K 2.39:1 at full width, as well as 4K output over 6G/12G-SDI and wireless remote control with the CBK-WA02. Since the Venice is small enough to go on harder-to-reach mounts, wireless control is quickly becoming a feature many camera assistants need. New anamorphic desqueeze modes for 1.25x, 1.3x, 1.5x and 1.8x have also been added, which is huge: older lens designs are constantly being revisited and new ones created, such as the Technovision 1.5x — made famous by Vittorio Storaro on Apocalypse Now (1979) — and the Cooke Full Frame Anamorphics 1.8x. With VistaVision full frame now an accessible way of filming, new forms of lensing are becoming common, and anamorphic is no longer limited to 1.3x and 2x squeezes. It’s reassuring to see Sony look out for storytellers who may want to employ less common anamorphic desqueeze factors.
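The desqueeze factor relates the captured raster to the final aspect ratio by simple multiplication: the lens squeezes the image horizontally, so playback stretches the width back out by the same factor. A quick sketch — the 4:3 raster here is the classic anamorphic case, used purely for illustration:

```python
def desqueezed_aspect(raster_w, raster_h, squeeze):
    """Final aspect ratio after anamorphic desqueeze: width is
    stretched by the lens's squeeze factor on playback."""
    return raster_w * squeeze / raster_h

# Classic 4:3 anamorphic capture with a 2x squeeze:
print(round(desqueezed_aspect(4, 3, 2.0), 2))  # → 2.67
# The same 4:3 raster through a 1.5x squeeze (Technovision-style):
print(round(desqueezed_aspect(4, 3, 1.5), 2))  # → 2.0
```

This is why in-camera support for the odd factors matters: each squeeze ratio implies a different capture raster to land on a given delivery aspect.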

As larger resolutions and higher frame rates become the norm, Sony has introduced the new SxS Pro X cards. A follow-up to the hugely successful SxS Pro+ cards, the new cards boast an incredible transfer speed of 10Gbps (1,250MB/s) in 120GB and 240GB capacities. This is a huge step up from the previous SxS Pro+ cards, which offered a read speed of 3.5Gbps and a write speed of 2.8Gbps. Probably the most exciting part of the launch is the corresponding SBAC-T40 card reader, which can offload a full 240GB card in about 3.5 minutes.
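Those numbers are internally consistent: a full-card offload is just capacity divided by throughput. A quick check — treating the rated 1,250MB/s as a sustained rate, which real transfers won't quite hold, hence Sony's slightly more conservative 3.5-minute figure:

```python
def offload_minutes(capacity_gb, read_mb_per_s):
    """Best-case offload time in minutes for a card read at a
    sustained rate (decimal GB and MB, as marketed)."""
    return capacity_gb * 1000 / read_mb_per_s / 60

print(round(offload_minutes(240, 1250), 1))  # → 3.2 minutes, best case
```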

Sony’s newest addition to the Venice camera is the Rialto extension system. Taking advantage of the Venice’s modular build, the Rialto is a hardware extension that lets you remove the main body’s sensor block and install it in a much smaller unit, tethered back to the main body by a nine- or 18-foot cable. Very reminiscent of the design of ARRI’s Alexa M, the Rialto goes further by being an extension of its main system rather than a standalone system, which may bring its own issues. The Rialto lets users reach spots that would prove difficult with the actual Venice body, and its lightweight design allows it to be mounted nearly anywhere. Where other camera bodies designed to be small end up heavy once outfitted with accessories such as batteries and wireless transmitters, the Rialto can easily be rigged for aerials, handheld work and Steadicam. Though some may question why you wouldn’t just get a smaller body from another camera company, the big thing to consider is that the Rialto isn’t a solution to the size of the Venice body — which is already very small, especially compared to the F65 — but simply another tool to get the most out of the Venice system, especially since you sacrifice nothing in features or frame rates. The Rialto is currently being used on James Cameron’s Avatar sequels, where its smaller body allows him to employ two units simultaneously for true 3D recording while keeping all the options of the Venice system.

With innovations in broadcast and motion picture production, there is a constant drive to push boundaries and make capture/distribution instant. Creating a huge network for distribution, streaming, capture, and storage has secured Sony not only as the powerhouse that it already is, but also ensures its presence in the ever-changing future.


Daniel Rodriguez is a New York based director and cinematographer. Having spent years working for such companies as Light Iron, Panavision and ARRI Rental, he currently works as a freelance cinematographer, filming narrative and commercial work throughout the five boroughs.