
Dell intros two budget-friendly Precision mobile workstations

Dell is offering two new mobile workstations for designers and graphic artists who are looking for entry-level, workstation-class devices — Dell Precision 3540 and 3541. These budget-friendly machines offer a smaller footprint with high performance. Dell’s Precision line has traditionally been used for intensive workloads, such as machine learning and artificial intelligence, and these entry-level versions are designed to allow artists with smaller budgets access to the Precision line’s capabilities.

The Precision 3540 comes with the latest 8th-generation quad-core Intel Core processors, up to 32GB of DDR4 memory, AMD Radeon Pro graphics with 2GB of dedicated memory and 2TB of storage. The Precision 3541 will offer additional power, with 9th-generation eight-core Intel Core and six-core Intel Xeon processor options. It will be available with Nvidia Quadro professional graphics with 4GB of dedicated memory, and it also promises long battery life for on-the-go productivity.

Both models come with Thunderbolt 3 connectivity and optional features to enhance security, such as fingerprint and smartcard readers, an IR camera and a camera shutter. Both models also have a narrow-edge 15.6-inch display. The 3540 model weighs in at 4.04 pounds, and the 3541 model starts at 4.34 pounds.

The Dell Precision 3540 is available now on Dell.com starting at $799, while the Precision 3541 will be available in late May.

NAB 2019: postPerspective Impact Award winners

postPerspective has announced the winners of our Impact Awards from NAB 2019. Seeking to recognize debut products with real-world applications, the postPerspective Impact Awards are voted on by an anonymous judging body made up of respected industry artists and pros (to whom we are very grateful). It’s working pros who are going to be using these new tools — so we let them make the call.

It was fun watching the user ballots come in and discovering which products most impressed our panel of post and production pros. There are no entrance fees for our awards. All that is needed is the ability to impress our voters with products that have the potential to make their workdays easier and their turnarounds faster.

We are grateful for our panel of judges, which grew even larger this year. NAB is exhausting for all, so their willingness to share their product picks and takeaways from the show isn’t taken for granted. These men and women truly care about our industry and sharing information that helps their fellow pros succeed.

To be successful, you can’t operate in a vacuum. We have found that companies who listen to their users, and make changes/additions accordingly, are the ones who get the respect and business of working pros. They aren’t providing tools they think are needed; they are actively asking for feedback. So, congratulations to our winners and keep listening to what your users are telling you — good or bad — because it makes a difference.

The Impact Award winners from NAB 2019 are:

• Adobe for Creative Cloud and After Effects
• Arraiy for DeepTrack with The Future Group’s Pixotope
• ARRI for the Alexa Mini LF
• Avid for Media Composer
• Blackmagic Design for DaVinci Resolve 16
• Frame.io
• HP for the Z6/Z8 workstations
• OpenDrives for Apex, Summit, Ridgeview and Atlas

(All winning products reflect the latest version of the product, as shown at NAB.)

Our judges also provided quotes on specific projects and trends that they expect will have an impact on their workflows.

Said one, “I was struck by the predicted impact of 5G. Verizon is planning to have 5G in 30 cities by end of year. The improved performance could reach 20x speeds. This will enable more leverage using cloud technology.

“Also, AI/ML is said to be the single most transformative technology in our lifetime. Impact will be felt across the board, from personal assistants, medical technology, eliminating repetitive tasks, etc. We already employ AI technology in our post production workflow, which has saved tens of thousands of dollars in the last six months alone.”

Another echoed those thoughts on AI and the cloud as well: “AI is growing up faster than anyone can reasonably productize. It will likely be able to do more than first thought. Post in the cloud may actually start to take hold this year.”

We hope that postPerspective’s Impact Awards give those who weren’t at the show, or who were unable to see it all, a starting point for their research into new gear that might be right for their workflows. Another way to catch up? Watch our extensive video coverage of NAB.


AI and deep learning at NAB 2019

By Tim Nagle

If you’ve been there, you know. Attending NAB can be both exciting and a chore. The vast show floor spreads across three massive halls and several hotels, and it will challenge even the most comfortable shoes. With an engineering background and my daily position as a Flame artist, I am definitely a gear-head, but I feel I can hardly claim that title at these events.

Here are some of my takeaways from the show this year…

Tim Nagle

8K
Having listened to the rumor mill, this year’s event promised to be exciting. And for me, it did not disappoint. First impressions: 8K infrastructure is clearly the goal of the manufacturers. Massive data rates and more Ks are becoming the norm. Everybody seemed to have an 8K workflow announcement. As a Flame artist, I’m not exactly looking forward to working on 8K plates. Sure, it is a glorious number of pixels, but the challenges are very real. While this may be the hot topic of the show, the fact that it is on the horizon further solidifies the need for the industry at large to have a solid 4K infrastructure. Hey, maybe we can even stop delivering SD content soon? All kidding aside, the systems and infrastructure elements being designed are quite impressive. Seeing storage solutions that can read and write at these astronomical speeds is just jaw dropping.

Young Attendees
Attendance remained relatively stable this year, but what I did notice was a lot of young faces making their way around the halls. It seemed like high school and university students were able to take advantage of interfacing with manufacturers, as well as some great educational sessions. This is exciting, as I really enjoy watching young creatives get the opportunity to express themselves in their work and make the rest of us think a little differently.

Blackmagic Resolve 16

AI/Deep Learning
Speaking of the future, AI and deep learning algorithms are being implemented into many parts of our industry, and this is definitely something to watch for. The possibilities to increase productivity are real, but these technologies are still relatively new and need time to mature. Some of the post apps taking advantage of these algorithms come from Blackmagic, Autodesk and Adobe.

At the show, Blackmagic announced their Neural Engine AI processing, which is integrated into DaVinci Resolve 16 for facial recognition, speed warp estimation and object removal, to name just a few. These features will add to the productivity of this software, further claiming its place among the usual suspects for more than just color correction.

Flame 2020

The Autodesk Flame team has implemented deep learning into their app as well. This opens up really impressive uses for retouching and relighting, as well as for creating depth maps of scenes. Autodesk demoed a shot of a woman on a beach with no real key light and very flat, diffused lighting overall. With a few nodes, they were able to relight her face to create a sense of depth and lighting direction. The same technique can be used for skin retouching, which is very useful in my everyday work.

Adobe has also been working on their implementation of AI with the integration of Sensei. In After Effects, the content-aware algorithms will help to re-texture surfaces, remove objects and edge blend when there isn’t a lot of texture to pull from. Watching a demo artist move through a few shots, removing cars and people from plates with relative ease and decent results, was impressive.

These demos have all made their way online, and I encourage everyone to watch. Seeing where we are headed is quite exciting. We are on our way to these tools being very accurate and useful in everyday situations, but they are all very much a work in progress. Good news, we still have jobs. The robots haven’t replaced us yet.


Tim Nagle is a Flame artist at Dallas-based Lucky Post.


NAB 2019: First impressions

By Mike McCarthy

There are always a slew of new product announcements during the week of NAB, and this year was no different. As a Premiere editor, the developments from Adobe are usually the ones most relevant to my work and life. As it did last year, Adobe released its software updates a week before NAB, instead of announcing features for eventual release months later.

The biggest new feature in the Adobe Creative Cloud apps is After Effects' new Content-Aware Fill for video. This uses AI to generate image data to automatically replace a masked area of video, based on surrounding pixels and surrounding frames. The functionality has been available in Photoshop for a while, but the challenge of bringing it to video is not just processing lots of frames; it's keeping the replaced area looking consistent across changing frames so it doesn't stand out over time.
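To make the temporal-consistency point concrete, here is a deliberately tiny sketch of one ingredient of a video fill: borrowing each masked pixel from the nearest frame in which it is visible. This illustrates the general idea only, not Adobe's algorithm; the function and data layout are invented for the example.

```python
def temporal_fill(frames, masks):
    """Fill holes by sampling the nearest frame where the pixel is visible.

    frames: list of 2D grids of pixel values (lists of rows)
    masks:  same shape; True marks a pixel to be replaced
    """
    # Work on a deep copy so the input frames are left untouched.
    filled = [[row[:] for row in f] for f in frames]
    n = len(frames)
    for t in range(n):
        for y in range(len(frames[t])):
            for x in range(len(frames[t][y])):
                if not masks[t][y][x]:
                    continue
                # Search outward in time for the closest unmasked sample.
                for dt in range(1, n):
                    for s in (t - dt, t + dt):
                        if 0 <= s < n and not masks[s][y][x]:
                            filled[t][y][x] = frames[s][y][x]
                            break
                    else:
                        continue  # no candidate at this distance; widen search
                    break
    return filled
```

A real implementation would blend spatial inpainting with motion-compensated temporal sampling; this sketch only shows why neighboring frames are the natural source of fill data.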

The other key part to this process is mask tracking, since masking the desired area is the first step in that process. Certain advances have been made here, but based on tech demos I saw at Adobe Max, more is still to come, and that is what will truly unlock the power of AI that they are trying to tap here. To be honest, I have been a bit skeptical of how much AI will impact film production workflows, since AI-powered editing has been terrible, but AI-powered VFX work seems much more promising.

Adobe’s other apps got new features as well, with Premiere Pro adding Free-Form bins for visually sorting through assets in the project panel. This affects me less, as I do more polishing than initial assembly when I’m using Premiere. They also improved playback performance for Red files, acceleration with multiple GPUs and certain 10-bit codecs. Character Animator got a better puppet rigging system, and Audition got AI-powered auto-ducking tools for automated track mixing.

Blackmagic
Elsewhere, Blackmagic announced a new version of Resolve, as expected. Blackmagic RAW is supported on a number of new products, but I am not holding my breath to use it in Adobe apps anytime soon, similar to ProRes RAW. (I am just happy to have regular ProRes output available on my PC now.) They also announced a new 8K Hyperdeck product that records quad 12G-SDI to HEVC files. While I don't think that 8K will replace 4K television or cinema delivery anytime soon, there are legitimate markets that need 8K-resolution assets. Surround video and VR would be one, as would live background screening instead of greenscreening for composite shots: there is no image replacement in post, since everything is captured in-camera, and your foreground objects are accurately "lit" by the screens. I expect my next major feature will be produced with that method, but the resolution wasn't there for the director to use that technology for the one I am working on now (enter 8K…).

AJA
AJA was showing off the new Ki Pro Go, which records up to four separate HD inputs to H.264 on USB drives. I assume this is intended for dedicated ISO recording of every channel of a live-switched event or any other multicam shoot. Each channel can record up to 1080p60 at 10-bit color to H.264 files in MP4 or MOV at up to 25Mbps.

HP
HP had one of their existing Z8 workstations on display, demonstrating the possibilities that will be available once Intel releases their upcoming DIMM-based Optane persistent memory technology to the market. I have loosely followed the Optane story for quite a while, but had not envisioned this impacting my workflow at all in the near future due to software limitations. But HP claims that there will be options to treat Optane just like system memory (increasing capacity at the expense of speed) or as SSD drive space (with DIMM slots having much lower latency to the CPU than any other option). So I will be looking forward to testing it out once it becomes available.

Dell
Dell was showing off their relatively new 49-inch double-wide curved display. The 4919DW has a resolution of 5120×1440, making it equivalent to two 27-inch QHD displays side by side. I find that 32:9 aspect ratio to be a bit much for my tastes, with 21:9 being my preference, but I am sure there are many users who will want the extra width.

Digital Anarchy
I also had a chat with the people at Digital Anarchy about their Premiere Pro-integrated Transcriptive audio transcription engine. Having spent the last three months editing a movie that is split between English and Mandarin dialogue, needing to be fully subtitled in both directions, I can see the value in their tool-set. It harnesses the power of AI-powered transcription engines online and integrates the results back into your Premiere sequence, creating an accurate script as you edit the processed clips. In my case, I would still have to handle the translations separately once I had the Mandarin text, but this would allow our non-Mandarin speaking team members to edit the Mandarin assets in the movie. And it will be even more useful when it comes to creating explicit closed captioning and subtitles, which we have been doing manually on our current project. I may post further info on that product once I have had a chance to test it out myself.
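The captioning end of that pipeline is mechanical enough to sketch: once an AI engine returns timed transcript segments, writing them out as SRT subtitles is straightforward. The segment format below is an assumption made for illustration; Transcriptive's actual data model may differ.

```python
def to_srt_time(seconds):
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments):
    """segments: list of (start_sec, end_sec, text) tuples, in order."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, 1):
        blocks.append(f"{i}\n{to_srt_time(start)} --> {to_srt_time(end)}\n{text}")
    return "\n\n".join(blocks) + "\n"
```

The hard parts, accurate speech recognition and (in the bilingual case) translation, stay with the engine and the humans; the file format itself is trivial to automate.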

Summing Up
There were three halls of other products to look through and check out, but overall, I was a bit underwhelmed at the lack of true innovation I found at the show this year.

Full disclosure, I was only able to attend for the first two days of the exhibition, so I may have overlooked something significant. But based on what I did see, there isn’t much else that I am excited to try out or that I expect to have much of a serious impact on how I do my various jobs.

It feels like most of the new things we are seeing are merely commoditized versions of products that may have been truly innovative when first released, but are now just slightly more fleshed-out iterations.

There seems to be much less pioneering of truly new technology and more repackaging of existing technologies into other products. I used to come to NAB to see all the flashy new technologies and products, but now it feels like the main thing I am doing there is a series of annual face-to-face meetings, and that’s not necessarily a bad thing.

Until next year…


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.


Dell updates Precision 7000 Series workstation line

Dell has updated its Precision 7920 and 7820 towers and Precision 7920 rack workstations to target the media and entertainment industry. Enhancements include processing of large data workloads, AI capabilities, hot-swappable drives, a tool-less external power supply and a flexible 2U rack form factor that boosts cooling, noise reduction and space savings.

Both the Dell Precision 7920 and 7820 towers will be available with the new 2nd Gen Intel Xeon Scalable processors and Nvidia Quadro RTX graphics options to deliver enhanced performance for applications with large datasets, including enhancements for artificial intelligence and machine learning workloads. All Precision workstations come equipped with the Dell Precision Optimizer. The Dell Precision Optimizer Premium is available at an additional cost. This feature uses AI-based technology to tune the workstation based on how it is being used.

In addition, the Precision workstations now feature a multichannel thermal design for advanced cooling and acoustics. An externally accessible tool-less power supply and FlexBays for lockable, hot-swappable drives are also included.

For users needing high-security, remotely accessible 1:1 workstation performance, the updated Dell Precision 7920 rack workstation delivers the same performance and scalability of the Dell Precision 7920 tower in a 2U rack form factor. This rack workstation is targeted to OEMs and users who need to locate their compute resources and valuable data in central environments. This option can save space and help reduce noise and heat, while providing secure remote access to external employees and contractors.

Configuration options will include the recently announced 2nd Gen Intel Xeon Scalable processors, built for advanced workstation professionals, with up to 28 cores, 56 threads and 3TB of DDR4 RDIMM memory per socket. The workstations will also support Intel Deep Learning Boost, a new set of Intel AVX-512 instructions.

The Precision 7000 Series workstations will be available in May with high-performance storage capacity options, including up to 120TB/96TB of Enterprise SATA HDD and up to 16TB of PCIe NVMe SSDs.


Video: Machine learning with Digital Domain’s Doug Roble

Just prior to NAB, postPerspective’s Randi Altman caught up with Digital Domain’s senior director of software R&D, Doug Roble, to talk machine learning.

Roble is on a panel on the Monday of NAB 2019 called "Influencers in AI: Companies Accelerating the Future." It's being moderated by Google's technical director for media, Jeff Kember, and features Roble along with Autodesk's Evan Atherton, Nvidia's Rick Champagne, Warner Bros.' Greg Gewickey and Story Tech/Television Academy's Lori Schwartz.

In our conversation with Roble, he talks about how Digital Domain has been using machine learning in visual effects for a couple of years. He points to the movie Avengers and the character Thanos, which they worked on.

A lot of that character’s facial motion was done with a variety of machine learning techniques. Since then, Digital Domain has pushed that technology further, taking the machine learning aspect and putting it on realtime digital humans — including Doug Roble.

Watch our conversation and find out more…


Nvidia intros Turing-powered Titan RTX

Nvidia has introduced its new Nvidia Titan RTX, a desktop GPU that provides the kind of massive performance needed for creative applications, AI research and data science. Driven by the new Nvidia Turing architecture, Titan RTX — dubbed T-Rex — delivers 130 teraflops of deep learning performance and 11 GigaRays per second of raytracing performance.

Turing features new RT Cores to accelerate raytracing, plus new multi-precision Tensor Cores for AI training and inferencing. These two engines — along with more powerful compute and enhanced rasterization — will help speed the work of developers, designers and artists across multiple industries.

Designed for computationally demanding applications, Titan RTX combines AI, realtime raytraced graphics, next-gen virtual reality and high-performance computing. It offers the following features and capabilities:
• 576 multi-precision Turing Tensor Cores, providing up to 130 Teraflops of deep learning performance
• 72 Turing RT Cores, delivering up to 11 GigaRays per second of realtime raytracing performance
• 24GB of high-speed GDDR6 memory with 672GB/s of bandwidth — two times the memory of previous-generation Titan GPUs — to fit larger models and datasets
• 100GB/s Nvidia NVLink, which can pair two Titan RTX GPUs to scale memory and compute
• Performance and memory bandwidth sufficient for realtime 8K video editing
• VirtualLink port, which provides the performance and connectivity required by next-gen VR headsets

Titan RTX provides multi-precision Turing Tensor Cores for breakthrough performance from FP32, FP16, INT8 and INT4, allowing faster training and inference of neural networks. It offers twice the memory capacity of previous-generation Titan GPUs, along with NVLink to allow researchers to experiment with larger neural networks and datasets.

Titan RTX accelerates data analytics with RAPIDS. RAPIDS open-source libraries integrate seamlessly with the world’s most popular data science workflows to speed up machine learning.

Titan RTX will be available later in December in the US and Europe for $2,499.


Panasas’ new ActiveStor Ultra targets emerging apps: AI, VR

Panasas has introduced ActiveStor Ultra, the next generation of its high-performance computing storage solution, featuring PanFS 8, a plug-and-play, portable, parallel file system. ActiveStor Ultra offers up to 75GB/s per rack on industry-standard commodity hardware.

ActiveStor Ultra comes as a fully integrated plug-and-play appliance running PanFS 8 on industry-standard hardware. PanFS 8 is the completely re-engineered Panasas parallel file system, which now runs on Linux and features intelligent data placement across three tiers of media — metadata on non-volatile memory express (NVMe), small files on SSDs and large files on HDDs — resulting in optimized performance for all data types.
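The placement rule described above is simple to express as code. This is a hypothetical sketch of the three-tier scheme as described, not Panasas' implementation; the small-file cutoff is an invented value for illustration.

```python
# Assumed cutoff for "small" files (64 KB); Panasas does not publish this value.
SMALL_FILE_LIMIT = 64 * 1024

def place(obj_kind, size_bytes):
    """Route an object to a media tier per the three-tier scheme:
    metadata on NVMe, small files on SSD, large files on HDD."""
    if obj_kind == "metadata":
        return "nvme"
    if size_bytes <= SMALL_FILE_LIMIT:
        return "ssd"
    return "hdd"
```

The point of such a policy is that each media type serves the access pattern it is best at: NVMe for latency-sensitive metadata lookups, SSD for small random reads, HDD for cheap streaming of large files.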

ActiveStor Ultra is designed to support the complex and varied data sets associated with traditional HPC workloads and emerging applications, such as artificial intelligence (AI), autonomous driving and virtual reality (VR). ActiveStor Ultra’s modular architecture and building-block design enables enterprises to start small and scale linearly. With dock-to-data in one hour, ActiveStor Ultra offers fast data access and virtually eliminates manual intervention to deliver the lowest total cost of ownership (TCO).

ActiveStor Ultra will be available early in the second half of 2019.


Video Coverage: postPerspective Live from SMPTE 2018

The yearly SMPTE Technical Conference and Exhibition was held late last month in Downtown Los Angeles at the Westin Bonaventure Hotel, a new venue for the event.

The conference included presentations that touched on all three of the organization’s “pillars,” which are Standards, Education and Membership.

One of the highlights was a session on autonomous vehicles and how AI and machine learning are making that happen. You might wonder, “What will everyone do with that extra non-driving time?” Well, companies are already thinking of ways to entertain you while you’re on your way to where you need to go. The schedule of sessions and presentations can be found here.

Another highlight at this year’s SMPTE Conference was the Women in Technology lunch, which featured a conversation between Disney’s Kari Grubin and Fem Inc.’s Rachel Payne. Payne is a tech entrepreneur and technology executive who has worked at companies like Google. She was also a Democratic candidate for the 48th Congressional District of California. It was truly inspiring to hear about her path.

Feeling like you might have missed some cool stuff? Well don’t worry, postPerspective’s production crews were capturing interviews with manufacturers in the exhibit hall and with speakers, SMPTE members and so many others throughout the Conference.

A big thank you to AlphaDogs, who shot and posted our videos this year, as well as to our other sponsors: Blackmagic Design, The Studio – B&H, LitePanels and Lenovo.

Watch Here!

Quick Chat: AI-based audio mastering

Antoine Rotondo is an audio engineer by trade who has been in the business for the past 17 years. Throughout his career he’s worked in audio across music, film and broadcast, focusing on sound reproduction. After completing college studies in sound design, undergraduate studies in music and music technology, as well as graduate studies in sound recording at McGill University in Montreal, Rotondo went on to work in recording, mixing, producing and mastering.

He is currently an audio engineer at Landr.com, which has released Landr Audio Mastering for Video, which provides professional video editors with AI-based audio mastering capabilities in Adobe Premiere Pro CC.

As an audio engineer, how do you feel about AI tools that shortcut the mastering process?
Well, first, there's a myth that AI and machines can't possibly make valid decisions in the creative process in a consistent way. There's actually a huge intersection between artistic intentions and technical solutions where we find many patterns, where people tend to agree and go about things very similarly, often unknowingly. We've been building technology around that.

Truth be told, there are many tasks in audio mastering that are repetitive and that people don't necessarily like spending a lot of time on, tasks such as leveling dialogue, music and background elements across multiple segments, or dealing with noise. Everyone's job gets easier when those tasks become automated.
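As a concrete example of the kind of leveling task that lends itself to automation, here is a minimal sketch of RMS-based gain matching. Real mastering chains work with perceptual loudness measures such as LUFS rather than raw RMS, so treat this purely as an illustration of the automatable pattern.

```python
import math

def rms_db(samples):
    """RMS level of a segment in dB relative to full scale (1.0)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def level_to(samples, target_db):
    """Scale a segment so its RMS lands on target_db."""
    gain = 10 ** ((target_db - rms_db(samples)) / 20)
    return [s * gain for s in samples]
```

Running every dialogue, music and background segment through the same target is exactly the sort of mechanical pass an engineer is glad to hand off, while the judgment calls (what the target should be, what to do with intentional dynamics) remain creative decisions.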

I see innovation in AI-driven audio mastering as a way to make creators more productive and efficient — not to replace them. It’s now more accessible than ever for amateur and aspiring producers and musicians to learn about mastering and have the resources to professionally polish their work. I think the same will apply to videographers.

What’s the key to making video content sound great?
Great sound quality is effortless and sounds as natural as possible. It’s about creating an experience that keeps the viewer engaged and entertained. It’s also about great communication — delivering a message to your audience and even conveying your artistic vision — all this to impact your audience in the way you intended.

More specifically, audio shouldn’t unintentionally sound muffled, distorted, noisy or erratic. Dialogue and music should shine through. Viewers should never need to change the volume or rewind the content to play something back during the program.

When are the times you’d want to hire an audio mastering engineer and when are the times that projects could solely use an AI-engine for audio mastering?
Mastering engineers are especially important for extremely intricate artistic projects that require direct communication with a producer or artist, including long-form narrative, feature films, television series and also TV commercials. Any project with conceptual sound design will almost always require an engineer to perfect the final master.

Users can truly benefit from AI-driven mastering in short-form, nonfiction projects that require clean dialogue, reduced background noise and overall leveling. Quick-turnaround projects can also use AI mastering to elevate the audio to a more professional level, even when deadlines are tight. AI mastering can now insert itself into the offline creation process, where multiple revisions of a project are sent back and forth, making great sound accessible throughout the entire production cycle.

The other thing to consider is that AI mastering is a great option for video editors who don’t have technical audio expertise themselves, and where lower budgets translate into them having to work on their own. These editors could purchase purpose-built mastering plugins, but they don’t necessarily have the time to learn how to really take advantage of these tools. And even if they did have the time, some would prefer to focus more on all the other aspects of the work that they have to juggle.