HP shows off new HP Z6 and Z8 G4 workstations at NAB

HP was at NAB demoing its new HP Z6 and Z8 G4 workstations, which feature Intel Xeon Scalable processors and Intel Optane DC persistent memory technology to eliminate the barrier between memory and storage for compute-intensive workflows, including machine learning, multimedia and VFX. The new workstations offer accelerated performance with a processor architecture that allows users to work faster and more efficiently.

Intel Optane DC allows users to improve system performance by moving large datasets closer to the CPU so they can be accessed, processed and analyzed in realtime and in a more affordable way. Because the memory is persistent, no data is lost after a power cycle or application closure. Once applications are written to take advantage of this new technology, users will benefit from accelerated workflows and little or no downtime.
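
To see what "persistent" buys an application, here is a minimal conceptual sketch, with an ordinary file standing in for an Optane DC DIMM (an assumption purely for illustration). Data stored through a memory mapping survives the process because the backing medium is durable; Optane exposes the same idea over the memory bus at near-DRAM latency.

```python
import mmap, os, tempfile

# An ordinary file stands in for persistent memory in this sketch.
path = os.path.join(tempfile.mkdtemp(), "pmem.bin")
with open(path, "wb") as f:
    f.write(b"\0" * 4096)          # carve out a 4KB "persistent" region

with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 4096)
    m[0:5] = b"hello"              # store through the mapping, like a memcpy
    m.flush()                      # make it durable, analogous to a cache flush
    m.close()

# A new process after a "power cycle" still sees the data.
with open(path, "rb") as f:
    recovered = f.read(5)
```

The point of the technology is that the flush-and-reload round trip above happens at memory speeds rather than storage speeds.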

Targeting 8K video editing in realtime and for rendering workflows, the HP Z6 G4 workstation is equipped with two next-generation Intel Xeon processors providing up to 48 total processor cores in one system, Nvidia and AMD graphics and 384GB of memory. Users can install professional-grade storage hardware without using standard PCIe slots, offering the ability to upgrade over time.

Powered by up to 56 processing cores and up to 3TB of high-speed memory, the HP Z8 G4 workstation can run complex 3D simulations, supporting VFX workflows and handling advanced machine learning algorithms. It is certified for some of the most-used software apps, including Autodesk Flame and DaVinci Resolve.

HP’s Remote Graphics Software (RGS), included with all HP Z workstations, enables remote workstation access from any Windows, Linux or Mac device.

Avid is collaborating with HP to test RGS with Media Composer|Cloud VM.

The HP Z6 G4 workstation with new Intel Xeon processors is available now for the base price of $2,372. The HP Z8 G4 workstation starts at $2,981.

AI and deep learning at NAB 2019

By Tim Nagle

If you’ve been there, you know. Attending NAB can be both exciting and a chore. The vast show floor spreads across three massive halls and several hotels, and it will challenge even the most comfortable shoes. With an engineering background and my daily position as a Flame artist, I am definitely a gear-head, but I feel I can hardly claim that title at these events.

Here are some of my takeaways from the show this year…

Tim Nagle

8K
Having listened to the rumor mill, this year’s event promised to be exciting. And for me, it did not disappoint. First impressions: 8K infrastructure is clearly the goal of the manufacturers. Massive data rates and more Ks are becoming the norm. Everybody seemed to have an 8K workflow announcement. As a Flame artist, I’m not exactly looking forward to working on 8K plates. Sure, it is a glorious number of pixels, but the challenges are very real. While this may be the hot topic of the show, the fact that it is on the horizon further solidifies the need for the industry at large to have a solid 4K infrastructure. Hey, maybe we can even stop delivering SD content soon? All kidding aside, the systems and infrastructure elements being designed are quite impressive. Seeing storage solutions that can read and write at these astronomical speeds is just jaw dropping.
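
For a sense of why those speeds are jaw-dropping, here is a quick back-of-envelope calculation. The figures (10-bit RGB, 60fps, uncompressed) are illustrative assumptions, not any vendor's spec:

```python
# Uncompressed 8K data rate, using assumed illustrative figures
# (10 bits per channel, RGB, 60fps) rather than a published spec.
width, height, fps = 7680, 4320, 60
bits_per_pixel = 10 * 3                       # 10-bit RGB
bits_per_second = width * height * bits_per_pixel * fps
gigabytes_per_second = bits_per_second / 8 / 1e9
# roughly 7.5 GB/s for a single uncompressed stream
```

A storage system that sustains several of those streams at once is doing serious work, which is exactly what the show-floor demos were selling.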

Young Attendees
Attendance remained relatively stable this year, but what I did notice was a lot of young faces making their way around the halls. It seemed like high school and university students were able to take advantage of interfacing with manufacturers, as well as some great educational sessions. This is exciting, as I really enjoy watching young creatives get the opportunity to express themselves in their work and make the rest of us think a little differently.

Blackmagic Resolve 16

AI/Deep Learning
Speaking of the future, AI and deep learning algorithms are being implemented into many parts of our industry, and this is definitely something to watch for. The possibilities to increase productivity are real, but these technologies are still relatively new and need time to mature. Some of the post apps taking advantage of these algorithms come from Blackmagic, Autodesk and Adobe.

At the show, Blackmagic announced their Neural Engine AI processing, which is integrated into DaVinci Resolve 16 for facial recognition, speed warp estimation and object removal, to name just a few. These features will add to the productivity of this software, further claiming its place among the usual suspects for more than just color correction.

Flame 2020

The Autodesk Flame team has implemented deep learning into its app as well. This promises really impressive uses for retouching and relighting, as well as creating depth maps of scenes. Autodesk demoed a shot of a woman on the beach, with no real key light possibility and very flat, diffused lighting in general. With a few nodes, they were able to relight her face to create a sense of depth and lighting direction. This same technique can be used for skin retouch as well, which is very useful in my everyday work.

Adobe has also been working on their implementation of AI with the integration of Sensei. In After Effects, the content-aware algorithms will help to re-texture surfaces, remove objects and edge blend when there isn’t a lot of texture to pull from. Watching a demo artist move through a few shots, removing cars and people from plates with relative ease and decent results, was impressive.

These demos have all made their way online, and I encourage everyone to watch. Seeing where we are headed is quite exciting. We are on our way to these tools being very accurate and useful in everyday situations, but they are all very much a work in progress. Good news, we still have jobs. The robots haven’t replaced us yet.


Tim Nagle is a Flame artist at Dallas-based Lucky Post.

NAB 2019: First impressions

By Mike McCarthy

There are always a slew of new product announcements during the week of NAB, and this year was no different. As a Premiere editor, the developments from Adobe are usually the ones most relevant to my work and life. Similar to last year, Adobe got its software updates released a week before NAB, instead of announcing them at the show for eventual release months later.

The biggest new feature in the Adobe Creative Cloud apps is After Effects’ new “Content Aware Fill” for video. This will use AI to generate image data to automatically replace a masked area of video, based on surrounding pixels and surrounding frames. This functionality has been available in Photoshop for a while, but the challenge of bringing that to video is not just processing lots of frames but keeping the replaced area looking consistent across the changing frames so it doesn’t stand out over time.

The other key part to this process is mask tracking, since masking the desired area is the first step in that process. Certain advances have been made here, but based on tech demos I saw at Adobe Max, more is still to come, and that is what will truly unlock the power of AI that they are trying to tap here. To be honest, I have been a bit skeptical of how much AI will impact film production workflows, since AI-powered editing has been terrible, but AI-powered VFX work seems much more promising.
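
As a toy illustration of that temporal-consistency problem (a sketch only, nothing to do with Adobe's actual algorithm): the crudest possible fill borrows each masked pixel from the nearest frame where that pixel is unmasked, which already shows why an accurate mask track across frames is the prerequisite for everything else.

```python
# Toy temporal fill: replace each masked pixel with the same pixel from the
# nearest frame in time where it is unmasked. Real content-aware fill
# synthesizes texture; this only demonstrates the temporal-borrowing idea.
def temporal_fill(frames, masks, t):
    """frames: list of 2D pixel grids; masks: parallel grids, True = remove."""
    h, w = len(frames[t]), len(frames[t][0])
    out = [row[:] for row in frames[t]]
    for y in range(h):
        for x in range(w):
            if masks[t][y][x]:
                # search outward in time for a frame where this pixel is clean
                for dt in range(1, len(frames)):
                    for u in (t - dt, t + dt):
                        if 0 <= u < len(frames) and not masks[u][y][x]:
                            out[y][x] = frames[u][y][x]
                            break
                    else:
                        continue  # no clean neighbor at this distance yet
                    break
                # if no clean frame exists, the pixel is left untouched
    return out
```

Even this naive version makes the failure mode obvious: if the mask drifts off the object, the "clean" pixels it borrows are wrong, and the patch flickers over time.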

Adobe’s other apps got new features as well, with Premiere Pro adding Free-Form bins for visually sorting through assets in the project panel. This affects me less, as I do more polishing than initial assembly when I’m using Premiere. They also improved playback performance for Red files, acceleration with multiple GPUs and certain 10-bit codecs. Character Animator got a better puppet rigging system, and Audition got AI-powered auto-ducking tools for automated track mixing.

Blackmagic
Elsewhere, Blackmagic announced a new version of Resolve, as expected. Blackmagic RAW is supported on a number of new products, but I am not holding my breath to use it in Adobe apps anytime soon, similar to ProRes RAW. (I am just happy to have regular ProRes output available on my PC now.) They also announced a new 8K Hyperdeck product that records quad 12G SDI to HEVC files. While I don’t think that 8K will replace 4K television or cinema delivery anytime soon, there are legitimate markets that need 8K resolution assets. Surround video and VR would be one, as would live background screening instead of greenscreening for composite shots. No image replacement in post, as it is capturing in-camera, and your foreground objects are accurately “lit” by the screens. I expect my next major feature will be produced with that method, but the resolution wasn’t there for the director to use that technology for the one I am working on now (enter 8K…).

AJA
AJA was showing off the new Ki Pro Go, which records up to four separate HD inputs to H.264 on USB drives. I assume this is intended for dedicated ISO recording of every channel of a live-switched event or any other multicam shoot. Each channel can record up to 1080p60 at 10-bit color to H.264 files in MP4 or MOV at bitrates up to 25Mb/s.
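
For sizing drives, that quoted bitrate converts to storage per hour with simple arithmetic (assuming "25Mb" means 25 megabits per second, decimal units):

```python
# Storage per hour of one channel at the quoted 25 Mb/s bitrate.
mbps = 25
bytes_per_hour = mbps * 1e6 / 8 * 3600   # bits/s -> bytes/s -> bytes/hour
gb_per_hour = bytes_per_hour / 1e9       # about 11.25 GB per channel-hour
```

So a four-channel event fills roughly 45 GB per hour, which is comfortably within commodity USB drive territory.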

HP
HP had one of their existing Z8 workstations on display, demonstrating the possibilities that will be available once Intel releases their upcoming DIMM-based Optane persistent memory technology to the market. I have loosely followed the Optane story for quite a while, but had not envisioned this impacting my workflow at all in the near future due to software limitations. But HP claims that there will be options to treat Optane just like system memory (increasing capacity at the expense of speed) or as SSD drive space (with DIMM slots having much lower latency to the CPU than any other option). So I will be looking forward to testing it out once it becomes available.

Dell
Dell was showing off their relatively new 49-inch double-wide curved display. The 4919DW has a resolution of 5120×1440, making it equivalent to two 27-inch QHD displays side by side. I find that 32:9 aspect ratio to be a bit much for my tastes, with 21:9 being my preference, but I am sure there are many users who will want the extra width.

Digital Anarchy
I also had a chat with the people at Digital Anarchy about their Premiere Pro-integrated Transcriptive audio transcription engine. Having spent the last three months editing a movie that is split between English and Mandarin dialogue, needing to be fully subtitled in both directions, I can see the value in their tool-set. It harnesses the power of AI-powered transcription engines online and integrates the results back into your Premiere sequence, creating an accurate script as you edit the processed clips. In my case, I would still have to handle the translations separately once I had the Mandarin text, but this would allow our non-Mandarin speaking team members to edit the Mandarin assets in the movie. And it will be even more useful when it comes to creating explicit closed captioning and subtitles, which we have been doing manually on our current project. I may post further info on that product once I have had a chance to test it out myself.

Summing Up
There were three halls of other products to look through and check out, but overall, I was a bit underwhelmed at the lack of true innovation I found at the show this year.

Full disclosure, I was only able to attend for the first two days of the exhibition, so I may have overlooked something significant. But based on what I did see, there isn’t much else that I am excited to try out or that I expect to have much of a serious impact on how I do my various jobs.

It feels like most of the new things we are seeing are merely commoditized versions of products that may originally have been truly innovative when they were initially released, but now are just slightly more fleshed out versions over time.

There seems to be much less pioneering of truly new technology and more repackaging of existing technologies into other products. I used to come to NAB to see all the flashy new technologies and products, but now it feels like the main thing I am doing there is a series of annual face-to-face meetings, and that’s not necessarily a bad thing.

Until next year…


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Blackmagic’s Resolve 16: speedy cut page, Resolve Editor Keyboard, more

Blackmagic was at NAB with Resolve 16, which in addition to dozens of new features includes a new editing tab focused on speed. While Resolve still has its usual robust editing offerings, this particular cut page is designed for those working on short-form projects and on tight deadlines. Think of having a client behind you watching you cut something together, or maybe showing your director a rough cut. You get in, you edit and you go — it’s speedy, like editing triage.

For those who don’t want to edit this way, no worries, you don’t have to use this new tab. Just ignore it and move on. It’s an option, and only an option. That’s another theme with Resolve 16 — if you don’t want to see the Fairlight tab, turn it off. You want to see something in a different way, turn it on.

Blackmagic also introduced the DaVinci Resolve Editor Keyboard, a new premium keyboard for Resolve that helps improve the speed of editing. It allows the use of two hands while editing, so transport control and selecting clips can be done while performing edits. The Resolve Editor Keyboard will be available in August for $995.

The keyboard combined with the new cut page is designed to further speed up editing. This alternate edit page lets users import, edit, trim, add transitions, titles, automatically match color, mix audio and more. Whether you’re delivering for broadcast or for YouTube, the cut page allows editors to do all things in one place. Plus, the regular edit page is still available, so customers can switch between edit and cut pages to change editing styles right in the middle of a job.

“The new cut page in DaVinci Resolve 16 helps television commercial and other high-end editors meet super tight deadlines on fast turn-around projects,” says Grant Petty, Blackmagic CEO. “We’ve designed a whole new high-performance, nonlinear workflow. The cut page is all about power and speed. Plus, editors that need to work on more complex projects can still use the regular edit page. DaVinci Resolve 16 gives different editors the choice to work the way they want.”

The cut page is reminiscent of how editors used to work in the days of tape, where finding a clip was easy because customers could just spool up and down the tape to see their media and select shots. Today, finding the right clip in a bin with hundreds of files can be slow. With source tape, users no longer have to hunt through bins to find the clip they need. They can click on the source tape button and all of the clips in their bin appear in the viewer as a single long “tape.” This makes it easy to scrub through all of the shots, find the parts they want and quickly edit them to the timeline. Blackmagic calls it an “old-fashioned” concept that’s been modernized to help editors find the shots they need fast.
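
The bookkeeping behind that source-tape view is straightforward to sketch (hypothetical function names, not Blackmagic's code): lay every clip in the bin end to end on one virtual tape, then translate any tape position back to a clip index and a frame offset within that clip.

```python
import bisect
from itertools import accumulate

# Clip durations in frames, laid end to end on a virtual "source tape".
def tape_starts(durations):
    # start position of each clip on the tape, e.g. [120, 80, 200] -> [0, 120, 200]
    return [0] + list(accumulate(durations))[:-1]

def locate(durations, pos):
    """Map a tape position to (clip index, frame offset within that clip)."""
    starts = tape_starts(durations)
    i = bisect.bisect_right(starts, pos) - 1
    offset = pos - starts[i]
    if not 0 <= offset < durations[i]:
        raise ValueError("position is past the end of the tape")
    return i, offset
```

Scrubbing the viewer is then just sweeping `pos` across the tape; the binary search keeps the lookup fast even with hundreds of clips in the bin.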

The new cut page features a dual timeline so editors don’t have to zoom in or out. The upper timeline shows users the entire program, while the lower timeline shows the current work area. Both timelines are fully functional, allowing editors to move and trim clips in whichever timeline is most convenient.

Also new is the DaVinci Neural Engine, which uses deep neural networks and learning, along with AI, to power new features such as speed warp motion estimation for retiming, super scale for up-scaling footage, auto color and color matching, facial recognition and more. The DaVinci Neural Engine is entirely cross-platform and uses the latest GPU innovations for AI and deep learning. The Neural Engine provides simple tools to solve complex, repetitive and time-consuming problems. For example, it enables facial recognition to automatically sort and organize clips into bins based on people in the shot.

DaVinci Resolve 16 also features new adjustment clips that let users apply effects and grades to clips on the timeline below; quick export that can be used to upload projects to YouTube, Vimeo and Frame.io from anywhere in the application; and new GPU-accelerated scopes providing more technical monitoring options than before. So now sharing your work on social channels, or collaborating via Frame.io, is simple because it’s integrated into Resolve 16 Studio.

DaVinci Resolve 16 Studio features improvements to existing ResolveFX, along with several new plugins that editors and colorists will like. There are new ResolveFX plugins for adding vignettes, drop shadows, removing objects, adding analog noise and damage, chromatic aberration, stylizing video and more. There are also improvements to the scanline, beauty, face refinement, blanking fill, warper, dead pixel fixer and colorspace transformation plugins. Plus, users can now view and edit ResolveFX keyframes from the timeline curve editor on the edit page or from the keyframe panel on the color page.

Here are all the updates within Resolve 16:

• DaVinci Neural Engine for AI and deep learning features
• Dual timeline to edit and trim without zooming and scrolling
• Source tape to review all clips as if they were a single tape
• Trim interface to view both sides of an edit and trim
• Intelligent edit modes to auto-sync clips and edit
• Timeline review playback speed based on clip length
• Built-in tools for retime, stabilization and transform
• Render and upload directly to YouTube and Vimeo
• Direct media import via buttons
• Scalable interface for working on laptop screens
• Create projects with different frame rates and resolutions
• Apply effects to multiple clips at the same time
• DaVinci Neural Engine detects faces and auto-creates bins
• Frame rate conversions and motion estimation
• Cut and edit page image stabilization
• Curve editor ease in and out controls
• Tape-style audio scrubbing with pitch correction
• Re-encode only changed files for faster rendering
• Collaborate remotely with Frame.io integration
• Improved GPU performance for Fusion 3D operations
• Cross platform GPU accelerated tools
• Accelerated mask operations including B-Spline and bitmap
• Improved planar and tracker performance
• Faster user and smart cache
• GPU-accelerated scopes with advanced technical monitoring
• Custom and HSL curves now feature histogram overlay
• DaVinci Neural Engine auto color and shot match
• Synchronize SDI output to viewer zoom
• Mix and master immersive 3D audio
• Elastic wave audio alignment and retiming
• Bus tracks with automation on timeline
• Foley sampler, frequency analyzer, dialog processor, FairlightFX
• 500 royalty-free Foley sound effects
• Share markers and notes in collaboration workflows
• Individual user cache for collaborative projects
• Resolve FX plugins with timeline and keyframes

Avid offers rebuilt engine and embraces cloud, ACES, AI, more

By Daniel Restuccio

During its Avid Connect conference just prior to NAB, Avid announced a Media Composer upgrade, support for ACES color standard and additional upgrades to a number of its toolsets, apps and services, including Avid Nexis.

The chief news from Avid is that Media Composer, its flagship video editing system, has been significantly retooled: sporting a new user interface, rebuilt engine, and additional built-in audio, visual effects, color grading and delivery features.

In a pre-interview with postPerspective, Avid president/CEO Jeff Rosica said, “We’re really trying to leapfrog and jump ahead to where the creative tools need to go.”

Avid asked itself what it needed to do “to help production and post production really innovate.” Rosica pointed to TV shows and films, and how complex they’re getting. “That means they’re dealing with more media, more elements, and with so many more decisions just in the program itself. Let alone the fact that the (TV or film) project may have to have 20 different variants just to go out the door.”

Jeff Rosica

The new paneled user interface simplifies the workspace, with redesigned bins for finding media faster and task-based workspaces that show only what the user wants and needs to see.

Dave Colantuoni, VP of product management at Avid, said they spent the most time studying the way editors manage and organize bins and content within Media Composer. “Some of our editors use 20, 30, 40 bins at a time. We’ve really spent a lot of time so that we can provide an advantage to you in how you approach organizing your media.”

Avid is also offering more efficient workflow solutions. Users, without leaving Media Composer, can work in 8K, 16K or HDR thanks to the newly built-in 32-bit full float color pipeline. Additionally, Avid continues to work with OTT content providers to help establish future industry standards.

“We’re trying to give as much creative power to the creative people as we can, and bring them new ways to deal with things,” said Rosica. “We’re also trying to help the workflow side. We’re trying to help make sure production doesn’t have to do more with less, or sometimes more with the same budget. Cloud (computing) allows us to bring a lot of new capabilities to the products, and we’re going to be cloud powering a lot of our products… more than you’ve seen before.”

The new Media Composer engine is now native OP1A, can handle more video and audio streams, offers Live Timeline and background rendering, and a distributed processing add-on option to shorten turnaround times and speed up post production.

“This is something our competitors do pretty well,” explained Colantuoni. “And we have different instances of OP1A working among the different Avid workflows. Until now, we’ve never had it working natively inside of Media Composer. That’s super-important because a lot of capabilities started in OP1A, and we can now keep it pristine through the pipeline.”

Said Rosica, “We are also bringing the ability to do distributed rendering. An editor no longer has to render or transcode on their machine. They can perform those tasks in a distributed or centralized render farm environment. That allows this work to get done behind the scenes. This is actually an Avid-supplied solution, so it will be very powerful and reliable. Users will be able to do background rendering, as well as distributed rendering, and move things off the machine to other centralized machines. That’s going to be very helpful for a lot of post workflows.”

Avid had previously offered three main flavors of Media Composer: Media Composer First, the free version; Media Composer; and Media Composer Ultimate. Now they are also offering a new Enterprise version.

For the first time, large production teams can customize the interface for any role in the organization, whether the user is a craft editor, assistant, logger or journalist. It also offers unparalleled security to lock down content, reducing the chances of unauthorized leaks of sensitive media. Enterprise also integrates with Editorial Management 2019.

“The new fourth tier at the top is what we are calling the Enterprise Edition or Enterprise. That word doesn’t necessarily mean broadcast,” says Rosica. “It means for business deployment. This is for post houses and production companies, broadcast, and even studios. This lets the business, or the enterprise, or production, or post house literally customize interfaces and customize workspaces to the job role or to the user.”

Nexis Cloudspaces
Avid also announced Avid Nexis|Cloudspaces. Instead of resorting to NAS or external drives for media storage, Avid Nexis|Cloudspaces allows editorial to offload projects and assets not currently in production. Cloudspaces extends Avid Nexis storage directly to Microsoft Azure.

“Avid Nexis|Cloudspaces brings the power of the cloud to Avid Nexis, giving organizations a cost-effective and more efficient way to extend Avid Nexis storage to the cloud for reliable backup and media parking,” said Dana Ruzicka, chief product officer/senior VP at Avid. “Working with Microsoft, we are offering all Avid Nexis users a limited-time free offer of 2TB of Microsoft Azure storage that is auto-provisioned for easy setup and as much capacity as you need, when you need it.”

ACES
The Academy Color Encoding System (ACES) team also announced that Avid is now part of the ACES Logo Program, as the first Product Partner in the new Editorial Finishing product category. ACES is a free, open, device-independent color management and image interchange system, and the global standard for color management, digital image interchange and archiving. Avid will implement ACES in conformance with the logo program’s specifications, ensuring a consistent, high-quality ACES color-managed video creation workflow.

“We’re pleased to welcome Avid to the ACES logo program,” said Andy Maltz, managing director of the ACES Council. “Avid’s participation not only benefits editors that need their editing systems to accurately manage color, but also the broader ACES end-user community through expanded adoption of ACES standards and best practices.”

What’s Next?
“We’ve already talked about how you can deploy Media Composer or other tools in a virtualized environment, or how you can use these kind of cloud environments to extend or advance production,” said Rosica. “We also see that these things are going to allow us to impact workloads. You’ll see us continue to power our MediaCentral platform, editorial management of MediaCentral, and even things like Media Composer with AI to help them get to the job faster. We can help automate functions, automate environments and use cloud technologies to allow people to collaborate better, to share better, to just power their workloads. You’re going to see a lot from us over time.”

Dell updates Precision 7000 Series workstation line

Dell has updated its Precision 7920 and 7820 towers and Precision 7920 rack workstations to target the media and entertainment industry. Enhancements include processing of large data workloads, AI capabilities, hot-swappable drives, a tool-less external power supply and a flexible 2U rack form factor that boosts cooling, noise reduction and space savings.

Both the Dell Precision 7920 and 7820 towers will be available with the new 2nd Gen Intel Xeon Scalable processors and Nvidia Quadro RTX graphics options to deliver enhanced performance for applications with large datasets, including enhancements for artificial intelligence and machine learning workloads. All Precision workstations come equipped with the Dell Precision Optimizer. The Dell Precision Optimizer Premium is available at an additional cost. This feature uses AI-based technology to tune the workstation based on how it is being used.

In addition, the Precision workstations now feature a multichannel thermal design for advanced cooling and acoustics. An externally accessible tool-less power supply and FlexBays for lockable, hot-swappable drives are also included.

For users needing high-security, remotely accessible 1:1 workstation performance, the updated Dell Precision 7920 rack workstation delivers the same performance and scalability of the Dell Precision 7920 tower in a 2U rack form factor. This rack workstation is targeted to OEMs and users who need to locate their compute resources and valuable data in central environments. This option can save space and help reduce noise and heat, while providing secure remote access to external employees and contractors.

Configuration options will include the recently announced 2nd Gen Intel Xeon Scalable processors, built for advanced workstation professionals, with up to 28 cores, 56 threads and 3TB DDR4 RDIMM per socket. The workstations will also support Intel Deep Learning Boost, a new set of Intel AVX-512 instructions.
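
Deep Learning Boost (the VNNI extension to AVX-512) accelerates one specific pattern: multiply-accumulating 8-bit integers into 32-bit sums, the inner loop of quantized neural-network inference. Written out in plain Python, the operation the hardware fuses looks like this:

```python
# The int8 multiply-accumulate that AVX-512 VNNI fuses into a single
# instruction, shown as a plain loop for illustration.
def int8_dot(a, b):
    assert all(-128 <= x <= 127 for x in a + b), "values must fit in int8"
    acc = 0  # 32-bit accumulator in the hardware
    for x, y in zip(a, b):
        acc += x * y  # VNNI performs several of these per lane at once
    return acc
```

Running millions of these dot products per inference is why a dedicated instruction matters for machine-learning workloads on these workstations.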

The Precision 7000 Series workstations will be available in May with high-performance storage capacity options, including up to 120TB/96TB of Enterprise SATA HDD and up to 16TB of PCIe NVMe SSDs.

Company 3 NY adds senior colorist Joseph Bicknell

Company 3 has added colorist Joseph Bicknell to its New York office. He has relocated following his time as co-director/founder of London-based finishing house Cheat, where he worked on commercial campaigns and music videos, including campaigns for Nike, Mercedes and Audi and videos for A$AP Rocky and Skepta.

Bicknell started his career at age 15, working as a runner on London-based productions. After serving in nearly every aspect of production and post, he discovered his true passion lay in color grading, where artists can make creative choices quickly and see results instantly. He honed his skills first freelancing and then at Cheat.

He will be working on Blackmagic Resolve. And as with all Company 3 colorists, Bicknell is available at locations globally via remote color session.

Video: Machine learning with Digital Domain’s Doug Roble

Just prior to NAB, postPerspective’s Randi Altman caught up with Digital Domain’s senior director of software R&D, Doug Roble, to talk machine learning.

Roble is on a panel on the Monday of NAB 2019 called “Influencers in AI: Companies Accelerating the Future.” It’s being moderated by Google’s technical director for media, Jeff Kember, and features Roble along with Autodesk’s Evan Atherton, Nvidia’s Rick Champagne, Warner Bros’ Greg Gewickey and Story Tech/Television Academy’s Lori Schwartz.

In our conversation with Roble, he talks about how Digital Domain has been using machine learning in visual effects for a couple of years. He points to the movie Avengers and the character Thanos, which they worked on.

A lot of that character’s facial motion was done with a variety of machine learning techniques. Since then, Digital Domain has pushed that technology further, taking the machine learning aspect and putting it on realtime digital humans — including Doug Roble.

Watch our conversation and find out more…

Autodesk’s Flame 2020 features machine learning tools

Autodesk’s Flame 2020 offers a new machine-learning-powered feature set with a host of new capabilities for Flame artists working in VFX, color grading, look development or finishing. This latest update will be showcased at the upcoming NAB Show.

Advancements in computer vision, photogrammetry and machine learning have made it possible to extract motion vectors, Z depth and 3D normals based on software analysis of digital stills or image sequences. The Flame 2020 release adds built-in machine learning analysis algorithms to isolate and modify common objects in moving footage, dramatically accelerating VFX and compositing workflows.

New creative tools include:
· Z-Depth Map Generator— Enables Z-depth map extraction analysis using machine learning for live-action scene depth reclamation. This allows artists doing color grading or look development to quickly analyze a shot and apply effects accurately based on distance from camera.
· Human Face Normal Map Generator— Since all human faces have common recognizable features (relative distance between eyes, nose, location of mouth) machine learning algorithms can be trained to find these patterns. This tool can be used to simplify accurate color adjustment, relighting and digital cosmetic/beauty retouching.
· Refraction— With this feature, a 3D object can now refract, distorting background objects based on its surface material characteristics. To achieve convincing transparency through glass, ice, windshields and more, the index of refraction can be set to an accurate approximation of real-world material light refraction.
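
The physics behind that Refraction tool is Snell's law: the bend angle of a ray crossing into a material follows from the two indices of refraction (glass is roughly 1.5, water roughly 1.33). A minimal sketch of the relationship, with a hypothetical function name:

```python
import math

# Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
def refract_angle(theta_in_deg, n1, n2):
    """Angle of the refracted ray, or None on total internal reflection."""
    s = n1 * math.sin(math.radians(theta_in_deg)) / n2
    if abs(s) > 1:
        return None  # ray cannot exit; it reflects internally instead
    return math.degrees(math.asin(s))
```

Setting the tool's index of refraction to a real-world value means the background distortion it produces follows this same relationship, which is what makes glass, ice and windshields read as convincing.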

Productivity updates include:
· Automatic Background Reactor— Immediately after modifying a shot, this mode is triggered, sending jobs to process. Accelerated, automated background rendering allows Flame artists to keep projects moving using GPU and system capacity to its fullest. This feature is available on Linux only, and can function on a single GPU.
· Simpler UX in Core Areas— A new expanded full-width layout for MasterGrade, Image Surface and several Map user interfaces is now available, making key tools easier to discover and access.
· Manager for Action, Image, Gmask— Manager, a simplified list schematic view, makes it easier to add, organize and adjust video layers and objects in the 3D environment.
· Open FX Support— Flame, Flare and Flame Assist version 2020 now include comprehensive support for industry-standard OpenFX creative plugins, which can be used as Batch/BFX nodes or directly on the Flame timeline.
· Cryptomatte Support— Available in Flame and Flare, support for Cryptomatte, an open-source advanced rendering technique, offers a new way to pack alpha channels for every object in a 3D-rendered scene.

Linux customers can now opt for monthly, yearly and three-year single-user licensing options. Customers with an existing Mac-only single-user license can transfer their license to run Flame on Linux.
Flame, Flare, Flame Assist and Lustre 2020 will be available on April 16, 2019 at no additional cost to customers with a current Flame Family 2019 subscription. Pricing details can be found at the Autodesk website.

Atomos’ new Shogun 7: HDR monitor, recorder, switcher

The new Atomos Shogun 7 is a seven-inch HDR monitor, recorder and switcher that offers an all-new 1500-nit, daylight-viewable, 1920×1200 panel with a 1,000,000:1 contrast ratio and 15+ stops of displayed dynamic range. It also offers ProRes RAW recording and realtime Dolby Vision output. Shogun 7 will be available in June 2019, priced at $1,499.

The Atomos screen uses a combination of advanced LED and LCD technologies that together offer deeper, better blacks, which the company says rival OLED screens, “but with the much higher brightness and vivid color performance of top-end LCDs.”

A new 360-zone backlight is combined with this new screen technology and controlled by the Dynamic AtomHDR engine to show millions of shades of brightness and color. It allows Shogun 7 to display 15+ stops of real dynamic range on-screen. The panel, says Atomos, is also incredibly accurate, with ultra-wide color and 105% of DCI-P3 covered, allowing for the same on-screen dynamic range, palette of colors and shades that your camera sensor sees.
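As a rough back-of-envelope check (my own arithmetic, not an Atomos figure): stops of contrast are the base-2 logarithm of the contrast ratio, since each stop doubles luminance. A 1,000,000:1 panel ratio therefore corresponds to roughly 20 stops of theoretical on/off contrast, comfortably above the 15+ stops of scene dynamic range the screen is claimed to display:

```python
import math

# Stops = log2(contrast ratio); each stop is a doubling of luminance.
contrast_ratio = 1_000_000
stops = math.log2(contrast_ratio)
print(round(stops, 1))  # ~19.9
```

Displayed dynamic range is always lower than the raw on/off contrast figure, since backlight zones, flare and ambient light compress what the eye actually sees.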

Atomos and Dolby have teamed up to create Dolby Vision HDR “live” — a tool that allows you to see HDR live on set and carry your creative intent from the camera through into HDR post. Dolby has optimized its target-display HDR processing algorithm, which Atomos now runs inside the Shogun 7. It performs realtime automatic frame-by-frame analysis of the Log or RAW video and processes it for optimal HDR viewing on a Dolby Vision-capable TV or monitor over HDMI. Connect Shogun 7 to the Dolby Vision TV and AtomOS 10 automatically analyzes the image, queries the TV and applies the right color and brightness profiles for the maximum HDR experience on the display.

Shogun 7 records images up to 5.7Kp30, 4Kp120 or 2Kp240 slow motion from compatible cameras, in RAW/Log or HLG/PQ over SDI/HDMI. Footage is stored directly to AtomX SSDmini or approved off-the-shelf SATA SSD drives. There are recording options for Apple ProRes RAW and ProRes, Avid DNx and Adobe CinemaDNG RAW codecs. Shogun 7 has four SDI inputs plus an HDMI 2.0 input, with both 12G-SDI and HDMI 2.0 outputs. It can record ProRes RAW in up to 5.7Kp30, 4Kp120 DCI/UHD and 2Kp240 DCI/HD, depending on the camera’s capabilities. In addition, 10-bit 4:2:2 ProRes or DNxHR recording is available up to 4Kp60 or 2Kp240. The four SDI inputs enable the connection of most quad-link, dual-link or single-link SDI cinema cameras. Pixels are preserved with data rates of up to 1.8Gb/s.

In terms of audio, Shogun 7 eliminates the need for a separate audio recorder. Users can add 48V stereo mics via an optional balanced XLR breakout cable, or select mic or line input levels, plus record up to 12 channels of 24/96 digital audio from HDMI or SDI. Monitoring selected stereo tracks is via the 3.5mm headphone jack. There are dedicated audio meters, gain controls and adjustments for frame delay.

Shogun 7 features the latest version of the AtomOS 10 touchscreen interface, first seen on the Ninja V. The new body has a Ninja V-like exterior with ARRI anti-rotation mounting points on the top and bottom of the unit to ensure secure mounting.

AtomOS 10 on Shogun 7 has the full range of monitoring tools, including waveform, vectorscope, false color, zebras, RGB parade, focus peaking, pixel-to-pixel magnification, audio level meters and blue-only for noise analysis.

Shogun 7 can also be used as a portable touchscreen-controlled multi-camera switcher with asynchronous quad-ISO recording. Users can switch up to four 1080p60 SDI streams, record each plus the program output as a separate ISO, then deliver ready-for-edit recordings with marked cut-points in XML metadata straight to your NLE. The current Sumo19 HDR production monitor-recorder will also gain the same functionality in a free firmware update.

Switching is asynchronous, and genlock in and out connect the unit to existing AV infrastructure. Once the recording is over, users can import the XML file into an NLE and the timeline populates with all the edits in place. XLR audio from a separate mixer or audio board is recorded within each ISO, alongside two embedded channels of digital audio from the original source. The program stream always records the analog audio feed, as well as a second track that switches between the digital audio inputs to match the switched feed.
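Atomos doesn't document the exact XML schema here, but conceptually the workflow amounts to reading a list of timestamped cut points and mapping each to the ISO that was live at that moment. A hypothetical sketch of that idea (the element and attribute names below are invented for illustration and are not the actual Atomos or NLE format):

```python
import xml.etree.ElementTree as ET

# Hypothetical cut-point metadata; the real switcher/NLE schema will differ.
SAMPLE = """
<switcher-log>
  <cut frame="0" source="iso1"/>
  <cut frame="120" source="iso3"/>
  <cut frame="300" source="iso2"/>
</switcher-log>
"""

def load_cuts(xml_text):
    """Parse cut points into (start_frame, source) pairs for timeline assembly."""
    root = ET.fromstring(xml_text)
    return [(int(c.get("frame")), c.get("source")) for c in root.findall("cut")]

print(load_cuts(SAMPLE))  # [(0, 'iso1'), (120, 'iso3'), (300, 'iso2')]
```

An NLE importer would walk this list and lay each ISO clip onto the timeline from its start frame to the next cut, reproducing the live switch as an editable sequence.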