Tag Archives: Nvidia

Dell adds to Precision workstation line, targets M&E

During the Computex show, Dell showed new Precision mobile workstations featuring the latest processors, next-gen graphics, new display options and longer battery life. These systems are designed for demanding data- and graphics-intensive workloads.

Dell Precision workstations are ISV-certified and come with Dell Precision Optimizer software that automatically tailors the system’s settings to get the best software performance from the workstation. The compact design of the new 5000 and 7000 series models offers a combination of extreme battery life, powerful processor configurations and large storage options. Starting at 3.9 pounds, the Dell Precision 5540 comes with Intel Xeon E or 9th Gen Intel Core eight-core processors.

With a 15.6-inch InfinityEdge display inside a 14-inch chassis, the Precision 5540 houses up to 4TB of storage and up to 64GB of memory, helping pros quickly access, transfer and store large 3D, video and multimedia files. Editors and designers will also benefit from improved contrast ratios, touch capability and picture quality, with a choice of a UHD panel covering 100% of the Adobe RGB color gamut or the new OLED display covering 100% of the DCI-P3 color gamut.

The Dell Precision 7540 15-inch mobile workstation comes with a range of 15.6-inch display options, including a UHD HDR 400 display. It supports up to 8K resolution and playback of HDR content via a single DisplayPort 1.4 connection. The Precision 7540 can accelerate heavy workflows with up to 3200MHz SuperSpeed memory or up to 128GB of 2666MHz ECC memory.

For creatives whose process requires an even more immersive experience, the new Dell Precision 7740 has a 17.3-inch screen and is Dell’s most powerful and scalable mobile workstation. VR- and AI-ready, it is designed to help users bring their most data-heavy, graphic-intensive ideas to life while keeping applications running smoothly.

The Precision 7740 has been updated to feature up to the latest Intel Xeon E or 9th Gen Intel Core eight-core processors and comes with up to 128GB of ECC memory and a large PCIe SSD storage capacity (up to 8TB). Nvidia Quadro RTX graphics offer realtime raytracing with AI-based graphics acceleration. Additional options include next-generation AMD Radeon Pro GPUs. It is available with a range of display options, including a new 17.3-inch UltraSharp UHD IGZO display featuring 100% Adobe color gamut.

Along with the new Precision mobile workstation models, Dell has also updated its Precision 3000 series towers and the Precision 1U rack workstation. The 3930 1U rack workstation has been updated with Intel Xeon E or 9th Gen Intel Core processor options. The solution now offers up to 128GB of memory and supports one double-width, 295W Nvidia Quadro or AMD Radeon Pro professional graphics card.

The next-gen Dell Precision 3630 and 3431 towers improve response time with up to 128GB or 64GB of 2666MHz ECC or non-ECC memory, respectively, and both offer scalable storage options. All workstations have a range of operating system options, including Windows 10 Pro, Red Hat and Ubuntu Linux.

The Dell Precision 5540, 7540 and 7740 mobile workstations will be available on Dell.com in early July. Starting prices are $1,339, $1,149 and $1,409, respectively. The Dell Precision 3630 tower workstation will be available on Dell.com in mid-July starting at $609.

The Dell Precision 3431 tower workstation will be available on Dell.com in June starting at $609. The Dell Precision 3930 rack workstation will be available on Dell.com in mid-July starting at $879.

Nvidia, AMD and Intel news from Computex

By Mike McCarthy

A number of new technologies and products were just announced at this year’s Computex event in Taipei, Taiwan. Let’s take a look at ones that seem relevant to media creation pros.

Nvidia released a line of mobile workstation GPUs based on its newest Turing architecture. Like the GeForce lineup, the Turing line has versions without the RTX designation. The Quadro RTX 5000, 4000 and 3000 have raytracing and Tensor cores, while the Quadro T2000 and T1000 do not, similar to the GeForce 16 products. The RTX 5000 matches the desktop version, with slightly more CUDA cores than the GeForce RTX 2080, although at lower clock speeds for reduced power consumption.

Nvidia’s new RTX 5000

The new Quadro RTX 3000 has a similar core configuration to the desktop Quadro RTX 4000 and GeForce RTX 2070. This leaves the new mobile RTX 4000 somewhere in between, with more cores than the desktop variant, aiming to provide similar overall performance at lower clock speeds and power consumption. While I can respect the attempt to offer similar performance at given tiers, doing so makes the naming more complicated than keeping consistent names for particular core configurations.

Nvidia also announced a new “RTX Studio” certification program for laptops targeted at content creators. These laptops are designed to support content creation applications with “desktop-like” performance. RTX Studio laptops will include an RTX GPU (either GeForce or Quadro), an H-series or better Intel CPU, at least 16GB of RAM, a 512GB or larger SSD and at least a 1080p screen. Nvidia also announced a new line of Studio drivers that are supposed to work with both Quadro and GeForce hardware. They are optimized for content creators and tested for stability with applications from Adobe, Autodesk, Avid and others. Hopefully these drivers will simplify certain external GPU configurations that mix Quadro and GeForce hardware. It is unclear whether these new “Studio” drivers will replace the previously announced “Creator Ready” series of drivers.

Intel announced a new variant of its top-end 9900K CPU. The i9-9900KS has a similar configuration but runs at higher clock speeds on more cores, with a 4GHz base frequency and 5GHz boost speeds on all eight cores. Intel also offered more details on its upcoming 10nm Ice Lake products with Gen 11 integrated graphics, which offer numerous performance improvements and VNNI support to accelerate AI processing. Intel is also integrating support for Thunderbolt 3 and Wi-Fi 6 into the new chipsets, which should lead to wider support for those interfaces. The first 10nm products to be released will be lower-power chips for tablets and ultraportable laptops, with higher-power variants coming further in the future.

AMD took the opportunity to release new generations of both CPUs and GPUs. On the CPU front, AMD has a number of new third-generation 7nm Ryzen processors, with six to 12 cores in the 4GHz range and supporting 20 lanes of fourth-gen PCIe. Priced between $200 and $500, they are targeted at consumers and gamers and are slated to be available July 7th. These CPUs compete with Intel’s 9900K and similar CPUs, which have been offering top performance for Premiere and After Effects users due to their high clock speed. It will be interesting to see if AMD’s new products offer competitive performance at that price point.

AMD also finally publicly released its Navi-generation GPU architecture, in the form of the new Radeon 5700. The 5000 series has an entirely new core design, called Radeon DNA (RDNA), which replaces the GCN architecture first released seven years ago. RDNA is supposed to offer 25% more performance per clock cycle and 50% more performance per watt. This is important because power consumption was AMD’s weak point compared to competing products from Nvidia.

AMD president and CEO Dr. Lisa Su giving her keynote.

While GPU power consumption isn’t as big a deal for gamers using a card a couple of hours a day, commercial compute tasks that run 24/7 see significant increases in operating costs for electricity and cooling when power consumption is higher. AMD’s newest Radeon 5700 is advertised to compete performance-wise with the GeForce RTX 2070, meaning that Nvidia still holds the overall performance crown for the foreseeable future. But the new competition should drive down prices in the midrange performance segment, which covers the cards most video editors need.


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Autodesk Arnold 5.3 with Arnold GPU in public beta

Autodesk has made its Arnold 5.3 with Arnold GPU available as a public beta. The release provides artists with GPU rendering for a fixed set of features, and the flexibility to choose between rendering on the CPU or GPU without changing renderers.

From look development to lighting, support for GPU acceleration brings greater interactivity and speed to artist workflows, helping reduce iteration and review cycles. Arnold 5.3 also adds new functionality to help maximize performance and give artists more control over their rendering processes, including updates to adaptive sampling, a new version of the Randomwalk SSS mode and improved Operator UX.

Arnold GPU rendering makes it easier for artists and small studios to iterate quickly in a fast working environment and scale rendering capacity to accommodate project demands. From within the standard Arnold interface, users can switch between rendering on the CPU and GPU with a single click. Arnold GPU currently supports features such as arbitrary shading networks, SSS, hair, atmospherics, instancing, and procedurals. Arnold GPU is based on the Nvidia OptiX framework and is optimized to leverage Nvidia RTX technology.
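For studios driving Arnold through its standalone Python API rather than a plug-in, that one-click switch corresponds to a single option on the render session. Below is a minimal sketch, assuming the render_device option described for the GPU beta (exact parameter names, and the placeholder scene file, will vary by setup and version):

```python
# Minimal sketch: toggling Arnold 5.3's beta GPU rendering from the
# standalone Python API. "render_device" is assumed from the beta docs;
# "shot.ass" is a placeholder scene file.
from arnold import *

AiBegin()
AiASSLoad("shot.ass", AI_NODE_ALL)

options = AiUniverseGetOptions()
AiNodeSetStr(options, "render_device", "GPU")  # "CPU" uses the same scene and renderer

AiRender(AI_RENDER_MODE_CAMERA)
AiEnd()
```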

New feature summary:
— Major improvements to quality and performance for adaptive sampling, helping to reduce render times without jeopardizing final image quality
— Improved version of Randomwalk SSS mode for more realistic shading
— Enhanced usability for Standard Surface, giving users more control
— Improvements to the Operator framework
— Better sampling of Skydome lights, reducing direct illumination noise
— Updates to support for MaterialX, allowing users to save a shading network as a MaterialX look

Arnold 5.3 with Arnold GPU in public beta will be available March 20 as a standalone subscription or with a collection of end-to-end creative tools within the Autodesk Media & Entertainment Collection. You can also try Arnold GPU with a free 30-day trial of Arnold. Arnold GPU is available in all supported plug-ins for Autodesk Maya, Autodesk 3ds Max, Houdini, Cinema 4D and Katana.

New codec, workflow options via Red, Nvidia and Adobe

By Mike McCarthy

There were two announcements last week that will impact post production workflows. The first was the launch of Red’s new SDK, which leverages Nvidia’s GPU-accelerated CUDA framework to deliver realtime playback of 8K Red footage. I’ll get to the other news shortly. Nvidia was demonstrating an early version of this technology at Adobe Max in October, and I have been looking forward to this development since I am about to start post on a feature film shot on the Red Monstro camera. This should effectively render the RedRocket accelerator cards obsolete, replacing them with cheaper, multipurpose hardware that can also accelerate other computational tasks.

While accelerating playback of 8K content at full resolution requires a top-end RTX series card from Nvidia (Quadro RTX 6000, Titan RTX or GeForce RTX 2080Ti), the technology is not dependent on RTX’s new architecture (RT and Tensor cores), allowing earlier generation hardware to accelerate smooth playback at smaller frame sizes. Lots of existing Red footage is shot at 4K and 6K, and playback of these files will be accelerated on widely deployed legacy products from previous generations of Nvidia GPU architecture. It will still be a while before this functionality is in the hands of end users, because now Adobe, Apple, Blackmagic and other software vendors have to integrate the new SDK functionality into their individual applications. But hopefully we will see those updates hitting the market soon (targeting late Q1 of 2019).

Encoding ProRes on Windows via Adobe apps
The other significant update, which is already available to users as of this week, is Adobe’s addition of ProRes encoding support on its video apps in Windows. Developed by Apple, ProRes encoding has been available on Mac for a long time, and ProRes decoding and playback has been available on Windows for over 10 years. But creating ProRes files on Windows has always been a challenge. Fixing this was less a technical challenge than a political one, as Apple owns the codec and it is not technically a standard. So while there were some hacks available at various points during that time, Apple has severely restricted the official encoding options available on Windows… until now.

With the 13.0.2 release of Premiere Pro and Media Encoder, as well as the newest update to After Effects, Adobe users on Windows systems can now create ProRes files in whatever flavor they happen to need. This is especially useful since many places require delivery of final products in the ProRes format. In this case, the new export support is obviously a win all the way around.
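For pipelines outside the Adobe apps, FFmpeg’s reverse-engineered prores_ks encoder has long been one of the unofficial routes alluded to above; its output is not Apple-certified. A minimal sketch, with placeholder file names:

```python
# Sketch: creating ProRes on Windows with FFmpeg's reverse-engineered
# prores_ks encoder (unofficial, not Apple-certified). Assumes ffmpeg
# is on the PATH; file names are placeholders.
import subprocess

PROFILES = {"proxy": 0, "lt": 1, "standard": 2, "hq": 3, "4444": 4}

def encode_prores(src, dst, flavor="hq"):
    # 4444 carries an alpha channel, so it needs a 4:4:4:4 pixel format
    pix_fmt = "yuva444p10le" if flavor == "4444" else "yuv422p10le"
    subprocess.run([
        "ffmpeg", "-i", src,
        "-c:v", "prores_ks", "-profile:v", str(PROFILES[flavor]),
        "-pix_fmt", pix_fmt,
        "-c:a", "copy",
        dst,
    ], check=True)

encode_prores("master.mov", "delivery_prores_hq.mov")
```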

Adobe Premiere

Now users have yet another codec option for all of their intermediate files, prompting another look at the question: Which codec is best for your workflow? With this release, Adobe users have at least three major options for high-quality intermediate codecs: Cineform, DNxHR and now ProRes. I am limiting the scope to integrated cross-platform codecs supporting 10-bit color depth, variable levels of image compression and customizable frame sizes. Here is a quick overview of the strengths and weaknesses of each option:

ProRes
ProRes was created by Apple over 10 years ago and has become the de facto standard throughout the industry, despite the fact that it is entirely owned by Apple. ProRes is now fully cross-platform compatible, has options for both YUV and RGB color and has six variations, all of which support at least 10-bit color depth. The variable-bit-rate compression scheme scales well with content complexity, so encoding black or static images doesn’t require as much space as full-motion video. It also supports compressed alpha channels, but only in the 444 variants of the codec.

Recent tests on my Windows 10 workstation resulted in ProRes taking 3x to 5x as much CPU power to play back as similar DNxHR or Cineform files, especially as frame sizes get larger. The codec supports 8K frame sizes, but playback will require much more processing power. I can’t even play back UHD files in ProRes 444 at full resolution, while the Cineform and DNxHR files have no problem, even at 444. This is less of a concern if you are only working at 1080p.

As a rule of thumb, multiply 1080p file sizes by four for UHD content (and by 16 for 8K content), since data rates scale with pixel count.

Cineform
Cineform, which has been available since 2004, was acquired by GoPro in 2011. GoPro has licensed the codec to Adobe (among other vendors), and it is available as “GoPro Cineform” in the AVI or QuickTime sections of the Adobe export window. Cineform is a wavelet compression codec, with 10-bit YUV and 12-bit RGB variants, and like ProRes it supports compressed alpha channels in the RGB variant. The five levels of encoding quality are selected separately from the format, so higher levels of compression are available for 4444 content compared to the limited options available in the other codecs.

It usually plays back extremely efficiently on Windows, but my recent tests show that encoding to the format is much slower than it used to be. And while it has some level of support outside of Adobe applications, it is not as universally recognized as ProRes or DNxHD.

DNxHD
DNxHD was created by Avid for compressed HD playback and has since been extended to DNxHR (high resolution). It is a fixed-bit-rate codec, with each variant having a locked multiplier based on resolution and frame rate. This makes it easy to calculate storage needs but wastes space for files that are black or contain a lot of static content. It is available in MXF and MOV wrappers and has five levels of quality. The top option is 444 RGB, and all variants support alpha channels in MOV, but only uncompressed, which takes a lot of space. Adobe has greatly optimized DNxHR playback in Premiere Pro across all variants, in both MXF and MOV wrappers. On my project 6Below, I was able to get 6K 444 files to play back, with lots of effects, without dropping frames. Encodes to and from DNxHR are faster in Adobe apps as well.
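Because each DNx variant has a locked data rate, storage planning reduces to simple arithmetic. A quick sketch; the bitrate below is an illustrative placeholder rather than an official Avid figure, so substitute the published rate for your variant, resolution and frame rate:

```python
# Quick sketch: storage math for a fixed-bit-rate codec such as DNxHR.
# The Mbps figure is an illustrative placeholder, not an official spec.
def storage_gb(minutes, mbps):
    """Convert a fixed video bitrate (megabits/sec) to gigabytes."""
    return minutes * 60 * mbps / 8 / 1000

offline_mbps = 120        # placeholder rate for an offline-quality UHD variant
dailies_hours = 40
print(f"{storage_gb(dailies_hours * 60, offline_mbps):,.0f} GB")  # -> 2,160 GB
# Data rates scale with pixel count: UHD needs ~4x an HD rate, 8K ~16x.
```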

So for most PC-based Adobe users, DNxHR-LB (low bandwidth) is probably the best codec to use for intermediate work. We are using it to offline my current project, with 2.2K DNxHR-LB MOV files. People with heavy Mac interchange may lean toward ProRes, but they should up their CPU specs to get the same level of application performance.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Nvidia intros Turing-powered Titan RTX

Nvidia has introduced its new Nvidia Titan RTX, a desktop GPU that provides the kind of massive performance needed for creative applications, AI research and data science. Driven by the new Nvidia Turing architecture, Titan RTX — dubbed T-Rex — delivers 130 teraflops of deep learning performance and 11 GigaRays per second of raytracing performance.

Turing features new RT Cores to accelerate raytracing, plus new multi-precision Tensor Cores for AI training and inferencing. These two engines — along with more powerful compute and enhanced rasterization — will help speed the work of developers, designers and artists across multiple industries.

Designed for computationally demanding applications, Titan RTX combines AI, realtime raytraced graphics, next-gen virtual reality and high-performance computing. It offers the following features and capabilities:
• 576 multi-precision Turing Tensor Cores, providing up to 130 Teraflops of deep learning performance
• 72 Turing RT Cores, delivering up to 11 GigaRays per second of realtime raytracing performance
• 24GB of high-speed GDDR6 memory with 672GB/s of bandwidth — two times the memory of previous-generation Titan GPUs — to fit larger models and datasets
• 100GB/s Nvidia NVLink, which can pair two Titan RTX GPUs to scale memory and compute
• Performance and memory bandwidth sufficient for realtime 8K video editing
• VirtualLink port, which provides the performance and connectivity required by next-gen VR headsets

Titan RTX provides multi-precision Turing Tensor Cores for breakthrough performance across FP32, FP16, INT8 and INT4 precisions, allowing faster training and inference of neural networks. It offers twice the memory capacity of previous-generation Titan GPUs, along with NVLink, allowing researchers to experiment with larger neural networks and datasets.

Titan RTX accelerates data analytics with RAPIDS. RAPIDS open-source libraries integrate seamlessly with the world’s most popular data science workflows to speed up machine learning.

Titan RTX will be available later in December in the US and Europe for $2,499.

postPerspective Impact Award winners from SIGGRAPH 2018

postPerspective has announced the winners of our Impact Awards from SIGGRAPH 2018 in Vancouver. Seeking to recognize debut products with real-world applications, the postPerspective Impact Awards are voted on by an anonymous judging body made up of respected industry artists and professionals. It’s working pros who are going to be using new tools — so we let them make the call.

The awards honor innovative products and technologies for the visual effects, post production and production industries that will influence the way people work. They celebrate companies that push the boundaries of technology to produce tools that accelerate artistry and actually make users’ working lives easier.

While SIGGRAPH’s focus is on VFX, animation, VR/AR, AI and the like, the types of gear they have on display vary. Some are suited for graphics and animation, while others have uses that slide into post production, which makes these SIGGRAPH Impact Awards doubly interesting.

The winners are as follows:

postPerspective Impact Award — SIGGRAPH 2018 MVP Winner:

They generated a lot of buzz at the show, as well as a lot of votes from our team of judges, so our MVP Impact Award goes to Nvidia for its Quadro RTX raytracing GPU.

postPerspective Impact Awards — SIGGRAPH 2018 Winners:

  • Maxon for its Cinema 4D R20 3D design and animation software.
  • StarVR for its StarVR One headset with integrated eye tracking.

postPerspective Impact Awards — SIGGRAPH 2018 Horizon Winners:

This year we have started a new Impact Award category. Our Horizon Award celebrates the next wave of impactful products being previewed at a particular show. At SIGGRAPH, the winners were:

  • Allegorithmic for its Substance Alchemist tool powered by AI.
  • OTOY and Epic Games for their OctaneRender 2019 integration with Unreal Engine 4.

And while these products and companies didn’t win enough votes for an award, our voters believe they do deserve a mention and your attention: Wrnch, Google Lightfields, Microsoft Mixed Reality Capture and Microsoft Cognitive Services integration with PixStor.


GTC embraces machine learning and AI

By Mike McCarthy

I had the opportunity to attend GTC 2018, Nvidia‘s 9th annual technology conference in San Jose this week. GTC stands for GPU Technology Conference, and GPU stands for graphics processing unit, but graphics makes up a relatively small portion of the show at this point. The majority of the sessions and exhibitors are focused on machine learning and artificial intelligence.

And the majority of the graphics developments are centered around analyzing imagery, not generating it. Whether that is classifying photos on Pinterest or giving autonomous vehicles machine vision, it is based on the capability of computers to understand the content of an image. DriveSim, Nvidia’s new simulator for virtually testing autonomous drive software, does dynamically create imagery for the other system in the Constellation pair of servers to analyze and respond to, but that is entirely machine-to-machine imagery communication.

The main exception to this non-visual usage trend is Nvidia RTX, which enables realtime raytracing on GPUs. RTX can be used through Nvidia’s OptiX API, as well as Microsoft’s DirectX Raytracing API, and eventually through the open-source, cross-platform Vulkan graphics API. It integrates with Nvidia’s AI Denoiser to use predictive rendering to further accelerate performance, and it can be used in VR applications as well.

Nvidia RTX was first announced at the Game Developers Conference last week, but the first hardware to run it was just announced here at GTC, in the form of the new Quadro GV100. This $9,000 card replaces the existing Pascal-based GP100 with a Volta-based solution. It retains the same PCIe form factor, the quad DisplayPort 1.4 outputs and the NVLink bridge to pair two cards at 200GB/s, but it jumps the GPU RAM per card from 16GB to 32GB of HBM2 memory. The GP100 was the first Quadro offering since the K6000 to support double-precision compute processing at full speed, and the increase from 3,584 to 5,120 CUDA cores should provide about a 40% increase in performance, before you even look at the benefits of the 640 Tensor Cores.

Hopefully, we will see simpler versions of the Volta chip making their way into a broader array of more budget-conscious GPU options in the near future. The fact that the new Nvidia RTX technology is stated to require Volta-architecture GPUs leads me to believe that they must be right on the horizon.

Nvidia also announced a new all-in-one GPU supercomputer — the DGX-2 supports twice as many Tesla V100 GPUs (16) with twice as much RAM each (32GB) compared to the existing DGX-1. This provides 81,920 CUDA cores addressing 512GB of HBM2 memory over a fabric of new NVLink switches, as well as dual Xeon CPUs, InfiniBand or 100GbE connectivity, and 32TB of SSD storage. This $400K supercomputer is marketed as the world’s largest GPU.

Nvidia and their partners had a number of cars and trucks on display throughout the show, showcasing various pieces of technology that are being developed to aid in the pursuit of autonomous vehicles.

Also on display in the category of “actually graphics related” was the new Max-Q version of the mobile Quadro P4000, which is integrated into PNY’s first mobile workstation, the Prevail Pro. Besides supporting professional VR applications, the HDMI and dual DisplayPort outputs allow a total of three external displays up to 4K each. It isn’t the smallest or lightest 15-inch laptop, but it is the only system under 17 inches I am aware of that supports the P4000, which is considered the minimum spec for professional VR implementation.

There are, of course, lots of other vendors exhibiting their products at GTC. I had the opportunity to watch 8K stereo 360 video playing off of a laptop with an external GPU. I also tried out the VRHero 5K Plus enterprise-level HMD, which brings the VR experience to a whole other level. Much more affordable is TP-Cast’s $300 wireless upgrade for Vive and Rift HMDs, the first of many untethered VR solutions. HTC has also recently announced the Vive Pro, which will be available in April for $800. It increases the resolution by a third in both dimensions, to 2880×1600 total, and moves from HDMI to DisplayPort 1.2 and USB-C. Besides VR products, exhibitors also had all sorts of robots in various forms on display.

Clearly the world of GPUs has extended far beyond the scope of accelerating computer graphics generation, and Nvidia is leading the way in bringing massive information processing to a variety of new and innovative applications. And if that leads us to hardware that can someday raytrace in realtime at 8K in VR, then I suppose everyone wins.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

V-Ray GPU is Chaos Group’s new GPU rendering architecture

Chaos Group has redesigned its V-Ray RT product. The new V-Ray GPU rendering architecture, according to the company, effectively doubles the speed of production rendering for film, broadcast and design artists. This represents a redesign of V-Ray’s kernel structure, aiming to deliver both high performance and accuracy.

Chaos Group has renamed V-Ray RT to V-Ray GPU, wanting to establish the latter as a professional production renderer capable of supporting volumetrics, advanced shading and other smart tech coming down the road.

Current internal tests have V-Ray GPU running 80 percent faster on Nvidia’s Titan V, a big gain from previous benchmarks on the Titan Xp, and up to 10-15x faster than an Intel Core i7-7700K, with the same high level of accuracy across interactive and production renders. (For its testing, Chaos Group uses a battery of production scenes to benchmark each release.)

“V-Ray GPU might be the biggest speed leap we’ve ever made,” says Blagovest Taskov, V-Ray GPU lead developer at Chaos Group. “Redesigning V-Ray GPU to be modular makes it much easier for us to exploit the latest GPU architectures and to add functionality without impacting performance. With our expanded feature set, V-Ray GPU can be used in many more production scenarios, from big-budget films to data-heavy architecture projects, while providing more speed than ever before.”

Representing over two years of dedicated R&D, V-Ray GPU builds on nine years of GPU-driven development in V-Ray. New gains for production artists include:

• Volume Rendering – Fog, smoke and fire can be rendered with the speed of V-Ray GPU. It’s compatible with V-Ray Volume Grid, which supports OpenVDB, Field3D and Phoenix FD volume caches.
• Adaptive Dome Light – Cleaner image-based lighting is now faster and even more accurate.
• V-Ray Denoising – Offering GPU-accelerated denoising across render elements and animations.
• Nvidia AI Denoiser – Fast, real-time denoising based on Nvidia OptiX AI-accelerated denoising technology.
• Interface Support – Instant filtering of GPU-supported features lets artists know what’s available in V-Ray GPU (starting within 3ds Max).

V-Ray GPU will be made available as part of the next update of V-Ray Next for 3ds Max beta.

Epic Games, Nvidia team on enterprise solutions for VR app developers

Epic Games and Nvidia have teamed up to offer enterprise-grade solutions to help app developers create more immersive VR experiences.

To help ease enterprise VR adoption, Epic has integrated Nvidia Quadro professional GPUs into the test suite for Unreal Engine 4, the company’s realtime toolset for creating applications across PC, console, mobile, VR and AR platforms. This ensures Nvidia technologies integrate seamlessly into developers’ workflows, delivering results for everything from CAVEs and multi-projection systems through to enterprise VR and AR solutions.

“With our expanding focus on industries outside of games, we’ve aligned ourselves ever more closely with Nvidia to offer an enterprise-grade experience,” explains Marc Petit, GM of the Unreal Engine Enterprise business. “Nvidia Quadro professional GPUs empower artists, designers and content creators who need to work unencumbered with the largest 3D models and datasets, tackle complex visualization challenges and deliver highly immersive VR experiences.”

The Human Race

One project that has driven this effort is Epic’s collaboration with GM and The Mill on The Human Race, a realtime short film and mixed reality experience featuring a configurable Chevrolet Camaro ZL1, which was built using Nvidia Quadro pro graphics.

Says Bob Pette, VP of professional visualization at Nvidia, “Unreal, from version 4.16, is the first realtime toolset to meet Nvidia Quadro partner standards. Our combined solution provides leaders in these markets the reliability and performance they require for the optimum VR experience.”

PNY’s PrevailPro mobile workstations feature 4K displays, are VR-capable

PNY has launched the PNY PrevailPro P4000 and P3000, thin and light mobile workstations. With their Nvidia Max-Q design, these systems are built from the Quadro GPU out.

“Our PrevailPro [has] the ability to drive up to four 4K UHD displays at once, or render vividly interactive VR experiences, without breaking backs or budgets,” says Steven Kaner, VP of commercial and OEM sales at PNY Technologies. “The increasing power efficiency of Nvidia Quadro graphics and our P4000-based P955 Nvidia Max-Q technology platform allow PNY to deliver professional performance and features in thin, light, cool and quiet form factors.”

P3000

PrevailPro features the Pascal architecture within the P4000 and P3000 mobile GPUs, with Intel Core i7-7700HQ CPUs and the HM175 Express chipset.

“Despite ever increasing mobility, creative professionals require workstation class performance and features from their mobile laptops to accomplish their best work, from any location,” says Bob Pette, VP, Nvidia Professional Visualization. “With our new Max-Q design and powered by Quadro P4000 and P3000 mobile GPUs, PNY’s new PrevailPro lineup offers incredibly light and thin, no-compromise, powerful and versatile mobile workstations.”

The PrevailPro systems feature either a 15.6-inch 4K UHD or FHD display – and the ability to drive three external displays (2x mDP 1.4 and HDMI 2.0 with HDCP), for a total of four simultaneously active displays. The P4000 version supports fully immersive VR, the Nvidia VRWorks software development kit and innovative immersive VR environments based on the Unreal or Unity engines.

With 8GB (P4000) or 6GB (P3000) of GDDR5 GPU memory, up to 32GB of DDR4 2400MHz DRAM, 512GB SSD availability, HDD options up to 2TB, a comprehensive array of I/O ports, and the latest Wi-Fi and Bluetooth implementations, PrevailPro is compatible with all commonly used peripherals and network environments — and provides pros with the interfaces and storage capacity needed to complete business-critical tasks. Depending on the use case, MobileMark 2014 testing projects that the embedded Li-polymer battery can reach five hours per charge, over a lifetime of 1,000 charge/discharge cycles.

PrevailPro’s thin and light form factor measures 14.96×9.8×0.73 inches (379mm x 248mm x 18mm) and weighs 4.8 lbs.


Choosing the right workstation set-up for the job

By Lance Holte

Like virtually everything in the world of filmmaking, the options for a perfect editorial workstation are almost infinite. The vast majority of systems can be greatly customized and expanded, whether by custom order, upgraded internal hardware or with expansion chassis and I/O boxes. In a time when many workstations are purchased, leased or upgraded for a specific project, the workstation buying process is largely determined by the project’s workflow and budget.

One of Harbor Picture Company’s online rooms.

In my experience, no two projects have identical workflows. Even if two projects are very similar, there are usually some slight differences — a different editor, a new camera, a shorter schedule, bigger storage requirements… the list goes on and on. The first step for choosing the optimal workstation(s) for a project is to ask a handful of broad questions that are good starters for workflow design. I generally start by requesting the delivery requirements, since they are a good indicator of the size and scope of the project.

Then I move on to questions like:

What are the camera/footage formats?
How long is the post production schedule?
Who is the editorial staff?

Often there aren’t concrete answers to these questions at the beginning of a project, but even rough answers point the way to follow-up questions. For instance, Q: What are the video delivery requirements? A: It’s a commercial campaign — HD and SD ProRes 4444 QTs.

Simple enough. Next question.

Christopher Lam from SF’s Double Fine Productions/ Courtesy of Wacom.

Q: What is the camera format? A: Red Weapon 6K, because the director wants to be able to do optical effects and stabilize most of the shots. This answer makes it very clear that we’re going to be editing offline, since the commercial budget doesn’t allow for the purchase of a blazing system with a huge, fast storage array.

Q: What is the post schedule? A: Eight weeks. Great. This should allow enough time to transcode ProRes proxies for all the media, followed by offline and online editorial.

At this point, it’s looking like there’s no need for an insanely powerful workstation, and the schedule looks like we’ll only need one editor and an assistant. Q: Who is the editorial staff? A: The editor is an Adobe Premiere guy, and the ad agency wants to spend a ton of time in the bay with him. Now, we know that agency folks really hate technical slowdowns that can sometimes occur with equipment that is pushing the envelope, so this workstation just needs to be something that’s simple and reliable. Macs make agency guys comfortable, so let’s go with a Mac Pro for the editor. If possible, I prefer to connect the client monitor directly via HDMI, since there are no delay issues that can sometimes be caused by HDMI to SDI converters. Of course, since that will use up the Mac Pro’s single HDMI port, the desktop monitors and the audio I/O box will use up two or three Thunderbolt ports. If the assistant editor doesn’t need such a powerful system, a high-end iMac could suffice.

(And for those who don’t mind waiting until the new iMac Pro ships in December, Apple’s latest release of the all-in-one workstation seems to signal a committed return for the company to the professional creative world – and is an encouraging sign for the Mac Pro overhaul in 2018. The iMac Pro addresses its non-upgradability by futureproofing itself as the most powerful all-in-one machine ever released. The base model starts at a hefty $4,999, but boasts options for up to a 5K display, 18-core Xeon processor, 128GB of RAM, and AMD Radeon Vega GPU. As more and more applications add OpenCL acceleration (AMD GPUs), the iMac Pro should stay relevant for a number of years.)

Now, our workflow would be very different if the answer to the first question had instead been A: It’s a feature film. Technicolor will handle the final delivery, but we still want to be able to make in-house 4K DCPs for screenings, EXR and DPX sequences for the VFX vendors, Blu-ray screeners, as well as review files and create all the high-res deliverables for mastering.

Since this project is a feature film, likely with a much larger editorial staff, the workflow might be better suited to editorial in Avid (to use project sharing/bin locking/collaborative editing). And since it turns out that Technicolor is grading the film in Blackmagic Resolve, it makes sense to online the film in Resolve and then pass the project over to Technicolor. Resolve will also cover any in-house temp grading and DCP creation and can handle virtually any video file.

PCs
For the sake of comparison, let’s build out some workstations on the PC side that will cover our editors, assistants, online editors, VFX editors and artists, and temp colorist. PC vs. Mac will likely be a hotly debated topic in this industry for some time, but there is no denying that a PC will return more cost-effective power than a Mac with similar specs, at the expense of increased complexity (and the potential for more technical issues). I also appreciate the longer lifespan of machines that are easy to upgrade and expand without requiring expansion chassis or external GPU enclosures.

I’ve had excellent success with the HP Z line — using Z840s for serious finishing machines and Z440s and Z640s for offline editorial workstations. There are almost unlimited options for desktop PCs, but only certain workstations and components are certified for various post applications, so it pays to do certification research when building a workstation from the ground up.

The Molecule‘s artist row in NYC.

It’s also important to keep the workstation components balanced. A system is only as strong as its weakest link, so a workstation with an insanely powerful GPU but only a handful of CPU cores will be outperformed by a workstation with 16-20 cores and a moderately high-end GPU. Make sure the CPU, GPU and RAM are similarly matched to get the best bang for your buck and a more stable workstation.

Relationships!
Finally, in terms of getting the best bang for your buck, there’s one trick that reigns supreme: build great relationships with hardware companies and vendors. Hardware companies are always looking for quality input, advice and real-world testing. They are often willing to lend (or give) new equipment in exchange for case studies, reviews, workflow demonstrations and press. Creating relationships is not only a great way to stay up to date with cutting-edge equipment; it also expands your support options and technical network, and it is the best opportunity to be directly involved with development. So go to trade shows, be active on forums, teach, write and generally be as involved as possible, and your equipment will thank you.

Our Main Image Courtesy of editor/compositor Fred Ruckel.



Lance Holte is an LA-based post production supervisor and producer. He has spoken and taught at such events as NAB, SMPTE, SIGGRAPH and Createasphere. You can email him at lance@lanceholte.com.

What was new at GTC 2017

By Mike McCarthy

I, once again, had the opportunity to attend Nvidia’s GPU Technology Conference (GTC) in San Jose last week. The event has become much more focused on AI supercomputing and deep learning as those industries mature, but there was also a concentration on VR for those of us from the visual world.

The big news was that Nvidia released the details of its next-generation GPU architecture, code-named Volta. The flagship chip will be the Tesla V100, with 5,120 CUDA cores and 15 teraflops of computing power. It is a huge 815mm² chip, created with a 12nm manufacturing process for better energy efficiency. Most of its unique architectural improvements are focused on AI and deep learning, with specialized execution units for Tensor calculations, which are foundational to those processes.

Tesla V100

Similar to last year’s GP100, the new Volta chip will initially be available in Nvidia’s SXM2 form factor for dedicated GPU servers like the DGX-1, which uses the NVLink bus, now running at 300GB/s. The new GPUs will be a direct swap-in replacement for the current Pascal-based GP100 chips. There will also be a 150W version of the chip on a PCIe card similar to the existing Tesla lineup, but requiring only a single half-length slot.

Assuming that Nvidia puts similar processing cores into its next generation of graphics cards, we should be looking at a 33% increase in maximum performance at the top end (5,120 Volta cores versus the 3,840 in the current top Pascal cards). The intermediate stages are more difficult to predict, since that depends on how Nvidia chooses to tier its cards. But the increased efficiency should allow more significant performance gains in laptops, within existing thermal limitations.

Nvidia is continuing its pursuit of GPU-enabled autonomous cars with its Drive PX2 and Xavier systems for vehicles. The newest version will have a 512-core Volta GPU and a dedicated deep learning accelerator chip that Nvidia plans to open source for other devices. The company is targeting larger vehicles now, specifically the trucking industry this year, with an AI-enabled semi-truck in its booth.

They also had a tractor showing off Blue River’s AI-enabled spraying rig, targeting individual plants for fertilizer or herbicide. It seems like farm equipment would be an optimal place to implement autonomous driving, allowing perfectly straight rows and smooth grades, all in a flat controlled environment with few pedestrians or other dynamic obstructions to be concerned about (think Interstellar). But I didn’t see any reference to them looking in that direction, even with a giant tractor in their AI booth.

On the software and application front, software company SAP showed an interesting implementation of deep learning that analyzes broadcast footage and other content looking to identify logos and branding, in order to provide quantifiable measurements of the effectiveness of various forms of brand advertising. I expect we will continue to see more machine learning implementations of video analysis, for things like automated captioning and descriptive video tracks, as AI becomes more mature.

Nvidia also released an “AI-enabled” version of Iray that uses image prediction to increase the speed of interactive raytracing renders. I am hopeful that similar technology could be used to effectively increase the resolution of video footage as well. Basically, a computer sees a low-res image of a car and says, “I know what that car should look like,” and fills in the rest of the visual data. The possibilities are pretty incredible, especially in regard to VFX.

Iray AI

On the VR front, Nvidia announced a new SDK that allows live GPU-accelerated image stitching for stereoscopic VR processing and streaming. It scales from HD to 5K output, splitting the workload across one to four GPUs. The stereoscopic version is doing much more than basic stitching, processing for depth information and using that to filter the output to remove visual anomalies and improve the perception of depth. The output was much cleaner than any other live solution I have seen.

I also got to try my first VR experience recorded with a light field camera. This not only gives the user 360-degree stereo look-around capability, but also the ability to move their head to shift perspective within a limited range (based on the size of the recording array). The project used to demo the technology didn’t highlight the amazing results until the very end of the piece, but when it did, it was the most impressive VR implementation I have had the opportunity to experience yet.
———-
Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been working on new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Review: Nvidia’s new Pascal-based Quadro cards

By Mike McCarthy

Nvidia has announced a number of new professional graphics cards, filling out its entire Quadro lineup with models based on its newest Pascal architecture. At the absolute top end is the new Quadro GP100, a PCIe card implementation of Nvidia’s supercomputer chip. It has similar 32-bit (graphics) processing power to the existing Quadro P6000, but adds fast 16-bit (AI) and 64-bit (simulation) processing. It is intended to combine compute and visualization capabilities into a single solution. It has 16GB of new HBM2 (High Bandwidth Memory), and two cards can be paired together with NVLink at 80GB/sec to share a total of 32GB between them.

This powerhouse is followed by the existing P6000 and P5000, announced last July. The next addition to the lineup is the single-slot, VR-ready Quadro P4000. With 1,792 CUDA cores running at 1200MHz, it should outperform a previous-generation M5000 for less than half the price. It is similar to its predecessor, the M4000, in having 8GB of RAM, four DisplayPort connectors and a single six-pin power connector. The new P2000 follows with 1,024 cores at 1076MHz and 5GB of RAM, giving it performance similar to the K5000, which is nothing to scoff at. The P1000, P600 and P400 are all low-profile cards with Mini DisplayPort connectors.

All of these cards run on PCIe Gen3 x16 and use DisplayPort 1.4, which adds support for HDR and DSC. They all support 4Kp60 output, with the higher-end cards allowing 5K and 4Kp120 displays. Nvidia also continues to push forward on high-resolution display walls, allowing up to 32 synchronized displays to be connected to a single system, provided you have enough slots for eight Quadro P4000 cards (four outputs each) and two Quadro Sync II boards.

Nvidia also announced a number of Pascal-based mobile Quadro GPUs last month, with the mobile P4000 having roughly comparable specifications to the desktop version. But you can read the paper specs for the new cards elsewhere on the Internet. More importantly, I have had the opportunity to test out some of these new cards over the last few weeks, to get a feel for how they operate in the real world.

DisplayPorts

Testing
I was able to run tests and benchmarks with the P6000, P4000 and P2000 against my current M6000 for comparison. All of these tests were done on a top-end Dell 7910 workstation, with a variety of display outputs, primarily using Adobe Premiere Pro, since I am a video editor after all.

I ran a full battery of benchmark tests on each of the cards using Premiere Pro 2017. I measured both playback performance and encoding speed, monitoring CPU and GPU use, as well as power usage throughout the tests. I had HD, 4K, and 6K source assets to pull from, and tested monitoring with an HD projector, a 4K LCD and a 6K array of TVs. I had assets that were RAW R3D files, compressed MOVs and DPX sequences. I wanted to see how each of the cards would perform at various levels of production quality and measure the differences between them to help editors and visual artists determine which option would best meet the needs of their individual workflow.
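For anyone repeating this kind of testing, GPU load and power draw can be logged during playback or an export with a simple poll of nvidia-smi’s query interface. A rough sketch, assuming a single-GPU system:

```python
# Sketch: log GPU utilization and power draw once per second during a
# playback or export test, using nvidia-smi's standard query fields.
# Assumes a single-GPU system; multi-GPU rigs return one line per card.
import subprocess, sys, time

print("time_s,gpu_util_pct,power_w")
start = time.time()
for _ in range(600):  # ten minutes of samples
    out = subprocess.check_output([
        "nvidia-smi",
        "--query-gpu=utilization.gpu,power.draw",
        "--format=csv,noheader,nounits",
    ], text=True)
    util, power = (v.strip() for v in out.split(","))
    print(f"{time.time() - start:.0f},{util},{power}")
    sys.stdout.flush()
    time.sleep(1)
```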

I started with the intuitive expectation that the P2000 would be sufficient for most HD work, but that a P4000 would be required to effectively handle 4K. I also assumed that a top-end card would be required to play back 6K files and split the image between my three Barco Escape formatted displays. And I was totally wrong.

Except when using the higher-end options within Premiere’s Lumetri-based color corrector, all of the cards were fully capable of every editing task I threw at them. To be fair, the P6000 usually renders out files about 30 percent faster than the P2000, but that is a minimal difference compared to the difference in cost. Even the P2000 was able to play back my uncompressed 6K assets onto my array of Barco Escape displays without issue. It was only when I started making heavy color changes in Lumetri that I began to observe any performance differences at all.

Lumetri

Color correction is an inherently parallel, graphics-related computing task, so this is where GPU processing really shines. Premiere’s Lumetri color tools are based on SpeedGrade’s original CUDA processing engine, and they can really harness the power of the higher-end cards. The P2000 can make basic corrections to 6K footage, but it is possible to max out the P6000 with HD footage if I adjust enough different parameters. Fortunately, most people aren’t looking for footage more stylized than the film 300, so my original assumptions seem to be accurate: the P2000 can handle reasonable corrections to HD footage, the P4000 is probably a good choice for VR and 4K footage, and the P6000 is the right tool for the job if you plan to do a lot of heavy color tweaking or are working with massive frame sizes.

The other way I expected to be able to measure a difference between the cards was in playback while rendering in Adobe Media Encoder. By default, Media Encoder pauses exports during timeline playback, but this behavior can be disabled by reopening Premiere after queuing your encode. Even with careful planning to avoid reading from the same disks the encoder was accessing, I was unable to get significantly better playback performance from the P6000 than from the P2000. This says more about the software than about the cards.

P6000

The largest difference I was able to consistently measure across the board was power usage, with each card averaging about 30 watts more as I stepped up from the P2000 to the P4000 to the P6000. But they are all far more efficient than the previous M6000, which frequently sucked up an extra 100 watts in the same tests. While watts may not be a benchmark most editors worry about, it does equate to money for electricity. Lower wattage also means less cooling is needed, which results in quieter systems that can be kept closer to the editor without distracting from the creative process or interfering with audio editing. It also allows these new cards to be installed in smaller systems with smaller power supplies, using fewer power connectors. My HP Z420 workstation has only one six-pin PCIe power plug, so the P4000 is the ideal GPU solution for that system.
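As a back-of-envelope illustration of what those watts cost (the electricity rate and usage pattern are assumptions, so substitute your own):

```python
# Back-of-envelope: yearly electricity cost of a card that draws extra
# watts under load. Rate and duty cycle are assumptions, not measurements.
def annual_cost(extra_watts, hours_per_day=8, rate_per_kwh=0.15):
    return extra_watts / 1000 * hours_per_day * 365 * rate_per_kwh

for watts in (30, 100):
    print(f"+{watts}W is about ${annual_cost(watts):.0f}/year at 8 hours/day")
# +30W is about $13/year; +100W is about $44/year. Modest per seat, but it
# compounds across a facility, on top of the cooling and noise benefits.
```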

Summing Up
It appears that we have once again reached a point where hardware processing capabilities have surpassed the software’s capacity to use them, at least within Premiere Pro. This leads to the cards performing relatively similarly to one another in most of my tests, but true 3D applications might reveal much greater differences in their performance. Further optimization of the CUDA implementation in Premiere Pro might also lead to better use of these higher-end GPUs in the future.


Mike McCarthy is an online editor and workflow consultant with 10 years of experience on feature films and commercials. He has been at the forefront of pioneering new solutions for tapeless workflows, DSLR filmmaking and now multiscreen and surround video experiences. If you want to see more specific details about performance numbers and benchmark tests for these Nvidia cards, check out techwithmikefirst.com.

Netflix’s ‘Unbreakable Kimmy Schmidt’ gets crisper look via UHD

NYC’s Technicolor PostWorks created a dedicated post workflow for the upgrade.

Having compiled seven Emmy Award nominations in its debut season, Netflix’s Unbreakable Kimmy Schmidt returned in mid-April with 13 new episodes in a form that is, quite literally, bigger and better.

The sitcom, from co-creators Tina Fey and Robert Carlock, features the ever-cheerful and ever-hopeful Kimmy Schmidt, whose spirit refuses to be broken, even after being held captive during her formative years. This season the series has boosted its delivery format from standard HD to the crisper, clearer, more detailed look of Ultra High Definition (UHD).

L-R: Pat Kelleher and Roger Doran

As with the show’s first season, post finishing was done at Technicolor PostWorks New York. Online editor Pat Kelleher and colorist Roger Doran once again served as the finishing team, working under the direction of series producer Dara Schnapper, post supervisor Valerie Landesberg and director of photography John Inwood. Almost everything else, however, was different.

The first season had been shot by Inwood with the Arri Alexa, capturing in 1080p, and finished in ProRes 4444. The new episodes were shot with the Red Dragon, capturing in 5K, and needed to be finished in UHD. That meant that the hardware and workflow used by Kelleher and Doran had to be retooled to efficiently manage UHD files four times larger than the previous season’s HD ProRes files.

“It was an eye opener,” recalls Kelleher of the change. “Obviously, the amount of drive space needed for storage is huge. Everyone from our data manager through to the people who did the digital deliveries had to contend with the higher volume of data. The actual hands-on work is not that different from an HD show, but you need the horses to do it.”

Before post work began, engineers from Technicolor PostWorks’ in-house research unit, The Test Lab, analyzed the workflow requirements of UHD and began making changes. They built an entirely new hardware system for Kelleher to use, running Autodesk’s Flame Premium. It consisted of an HP Z820 workstation with Nvidia Quadro K6000 graphics, 64GB of RAM and dual Intel Xeon E5-2687W processors (20M cache, 3.10GHz, 8.00 GT/s Intel QPI). Kelleher described its performance in handling UHD media as “flawless.”

Doran’s color grading suite got a similar overhaul. For him, engineers built a Linux-based workstation running Blackmagic’s DaVinci Resolve 11 and set up a dual-monitoring system. That included a Panasonic 300 series display to view media in 1080p and a Samsung 9500 series curved LED to view UHD. Doran could then review color decisions in both formats (while maintaining a UHD signal throughout) and spot details or noise issues in UHD that might not be apparent at lower resolution.

While the extra firepower enabled Kelleher and Doran to work with UHD as efficiently as HD, they faced new challenges. “We do a lot of visual effects for this show,” notes Kelleher. “And now that we’re working in UHD, everything has to be much more precise. My mattes have to be tight because you can see so much more.”

Doran’s work in the color suite similarly required greater finesse. “You have to be very, very aware,” he says. “Cosmetically, it’s different. The lighting is different. You have to pay close attention to how the stars look.”

Doran is quick to add that, while grading UHD might require closer scrutiny, it’s justified by the results. “I like the increased range and greater detail,” he says. “I enjoy the extra control. Once you move up, you never want to go back.”

Both Doran and Kelleher credited the Technicolor PostWorks engineering team of Eric Horwitz, Corey Stewart and Randy Main for their ability to “move up” with a minimum of strain. “The engineers were amazing,” Kelleher insists. “They got the workflow to where all I had to think about was editing and compositing. The transition was so smooth, you almost forgot you were working in UHD, except for the image quality. That was amazing.”

Pixspan at NAB with 4K storage workflow solutions powered by Nvidia

During the NAB Show, Pixspan was demonstrating new storage workflows for full-quality 4K images powered by the Nvidia Quadro M6000. Addressing the challenges that higher resolutions and increasing amounts of data present for storage and network infrastructures, Pixspan is offering a solution that reduces storage requirements by 50-80 percent, in turn supporting 4K workflows on equipment designed for 2K while enabling data access times that are two to four times faster.

Pixspan software and the Nvidia Quadro M6000 GPU together deliver bit-accurate video decoding at up to 1.3GB per second — enough to handle 4K digital intermediates or 4K/6K camera RAW files in realtime. Pixspan’s solution is based on its bit-exact compression technology, in which each image is compressed into a smaller data file while retaining all the information from the original image, demonstrating how the processing power of the Quadro M6000 can be put to new uses in imaging storage and networking to save time and help users meet tight deadlines.
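To put that throughput in context, here is a rough sketch of the bandwidth arithmetic (the 10-bit DPX packing is a general property of the format, and the reduction figures are Pixspan’s claims from above, not independent measurements):

```python
# Rough sketch: why a 50-80% bit-exact reduction lets 4K move on
# 2K-class infrastructure. 10-bit RGB DPX packs each pixel into a
# 32-bit word, i.e., 4 bytes per pixel.
width, height, fps = 4096, 2160, 24
raw_mb_s = width * height * 4 * fps / 1e6   # ~849 MB/s uncompressed
for reduction in (0.5, 0.8):
    print(f"{raw_mb_s:.0f} MB/s raw -> "
          f"{raw_mb_s * (1 - reduction):.0f} MB/s after {reduction:.0%} reduction")
```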

Nvidia’s GTC 2016: VR, A.I. and self-driving cars, oh my!

By Mike McCarthy

Last week, I had the opportunity to attend Nvidia’s GPU Technology Conference, GTC 2016. Five thousand people filled the San Jose Convention Center for nearly a week to learn about GPU technology and how to use it to change our world. GPUs were originally designed to process graphics (hence the name), but are now used to accelerate all sorts of other computational tasks.

The current focus of GPU computing is in three areas:

Virtual reality is a logical extension of the original graphics processing design. VR requires high frame rates and low latency to keep up with the user’s head movements; otherwise the lag causes motion sickness. This requires lots of processing power, and the imminent releases of the Oculus Rift and HTC Vive head-mounted displays are sure to sell many high-end graphics cards. The new Quadro M6000 24GB PCIe card and M5500 mobile GPU have been released to meet this need.

Autonomous vehicles are being developed that will slowly replace many or all of the driver’s current roles in operating a vehicle. This requires processing lots of sensor input data and making decisions in realtime based on inferences made from that information. Nvidia has developed a number of hardware solutions to meet these needs, with the Drive PX and Drive PX2 expected to be the hardware platform that many car manufacturers rely on to meet those processing needs.

This author calls the Tesla P100 “a monster of a chip.”

Artificial intelligence has made significant leaps recently, and the need to process large data sets has grown exponentially. To that end, Nvidia has focused its newest chip development — not on graphics, at least initially — on a deep-learning supercomputer chip. The first Pascal-generation GPU, the Tesla P100, is a monster of a chip, with 15 billion 16nm transistors on a 600mm² die. It should be twice as fast as current options for most tasks, and even faster for double-precision work and/or large data sets. The chip is initially available in the new DGX-1 supercomputer for $129K, which includes eight of the new GPUs connected via NVLink. I am looking forward to seeing the same graphics processing technology on a PCIe-based Quadro card at some point in the future.

While those three applications for GPU computing all had dedicated hardware released for them, Nvidia has also been working to make sure software will be developed that uses the level of processing power it can now offer users. To that end, the company has been releasing all sorts of SDKs and libraries to help developers harness the power of the hardware that is now available. For VR, there is Iray VR, a raytracing toolset for creating photorealistic VR experiences, and Iray VR Lite, which allows users to create still renderings to be previewed with HMD displays. There is also a broader VRWorks collection of tools for helping software developers adapt their work for VR experiences. For autonomous vehicles, Nvidia has developed libraries of tools for mapping and sensor image analysis, and a deep-learning decision-making neural net for driving called DaveNet. For A.I. computing, cuDNN accelerates emerging deep-learning neural networks running on GPU clusters and supercomputing systems like the new DGX-1.

What Does This Mean for Post Production?
So from a post perspective (ha!), what does this all mean for the future of post production? First, newer and faster GPUs are coming, even if they are not here yet. Much farther off, deep-learning networks may someday log and index all of your footage for you. But the biggest change coming down the pipeline is virtual reality, led by the upcoming commercially available head-mounted displays (HMDs). Gaming will drive HMDs into the hands of consumers, and HMDs in the hands of consumers will drive demand for a new type of experience for storytelling, advertising and expression.

As I see it, VR can be created in a series of increasingly immersive steps. The starting point is the HMD, placing the viewer into an isolated and large-feeling environment. Existing flat video or stereoscopic content can be viewed without large screens, requiring only minimal processing to format the image for the HMD. The next step is a big jump — when we begin to support head tracking — allowing the viewer to control the direction they are viewing. This is where we begin to see changes required at all stages of the content production and post pipeline. Scenes need to be created and filmed at 360 degrees.

Shown at the conference: a high-fidelity VR simulation using scientifically accurate satellite imagery and data from NASA.

The cameras required to capture 360 degrees of imagery produce a series of video streams that need to be stitched together into a single image, and that image needs to be edited and processed. Then the entire image is made available to the viewer, who chooses which angle they want to view as it is played. This can be done as a flattened image sphere or, with more source data and processing, as a stereoscopic experience. The user can control the angle they view the scene from, but not the location they are viewing from, which is dictated by the physical placement of the 360-camera system. Video-Stitch just released a new all-in-one package for capturing, recording and streaming 360 video called the Orah 4i, which may make the format more accessible to consumers.
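As a rough sketch of what “choosing an angle” means computationally, the Python below maps a view direction to a pixel in a stitched equirectangular (lat-long) frame. A real player samples a full field of view per eye, with filtering and lens correction, but the projection math is the same idea.

```python
import math

def equirect_pixel(yaw, pitch, width, height):
    """Map a view direction (radians) to a pixel in a stitched
    equirectangular (lat-long) 360 image. Simplified illustration."""
    u = (yaw / (2 * math.pi) + 0.5) % 1.0    # longitude -> [0, 1)
    v = 0.5 - pitch / math.pi                # latitude  -> [0, 1]
    return int(u * (width - 1)), int(v * (height - 1))

# Looking 90 degrees to the right and slightly upward, in a 4K sphere:
print(equirect_pixel(math.pi / 2, math.radians(10), 3840, 1920))
```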

Allowing the user to fully control their perspective and move around within a scene is what makes true VR so unique, but is also much more challenging to create content for. All viewed images must be rendered on the fly, based on input from the user’s motion and position. These renders require all content to exist in 3D space, for the perspective to be generated correctly. While this is nearly impossible for traditional camera footage, it is purely a render challenge for animated content — rendering that used to take weeks must be done in realtime, and at much higher frame rates to keep up with user movement.

For any camera image, depth information is required. It is possible to estimate depth with calculations based on motion, but not with the level of accuracy required. Instead, if many angles are recorded simultaneously, a 3D analysis of the combination can generate a 3D version of the scene. This is already being done in limited cases for advanced VFX work, but full VR would take it to a whole new level. For static content, a 3D model can be created by processing lots of still images, but storytelling will require 3D motion within this environment. This all seems pretty far out there for a traditional post workflow, but there is one case that lends itself to this format.

Motion capture-based productions already have the 3D data required to render VR perspectives, because VR is the same basic concept as motion tracking cinematography, except that the viewer controls the “camera” instead of the director. We are already seeing photorealistic motion capture movies showing up in theaters, so these are probably the first types of productions that will make the shift to producing full VR content.

The Maxwell family of cards.

Viewing this content is still a challenge, and here again Nvidia GPUs are used on the consumer end. Any VR viewing requires sensor input to track the viewer, which must be processed, and the resulting image must be rendered, usually twice for stereo viewing. This requires a significant level of processing power, so Nvidia has created two tiers of hardware recommendations to ensure that users can get a quality VR experience. For consumers, the VR-Ready program includes complete systems based on the GeForce GTX 970 or higher GPUs, which meet the requirements for comfortable VR viewing. VR-Ready for Professionals is a similar program for the Quadro line, including the M5000 and higher GPUs, included in complete systems from partner ISVs. Currently, MSI’s new WT72 laptop with the new M5500 GPU is the only mobile platform certified VR-Ready for Pros. The new mobile Quadro M5500 has the same system architecture as the desktop workstation Quadro M5000, with all 2048 CUDA cores and 8GB RAM.
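Some rough Python arithmetic shows why the bar is set where it is. The per-eye resolution and refresh rate below match the first-generation Rift and Vive; the note about distortion guard bands is a general assumption about VR engines, so treat the ratio as a floor.

```python
# Pixel-throughput comparison: a 1080p60 desktop display vs. a
# first-generation HMD (1080x1200 per eye, rendered twice for stereo, 90Hz).
# Real engines also render a guard band for lens distortion correction,
# so the HMD figure understates the true load.
desktop = 1920 * 1080 * 60
hmd = 1080 * 1200 * 2 * 90

print(f"desktop: {desktop / 1e6:.0f} Mpix/s, HMD: {hmd / 1e6:.0f} Mpix/s, "
      f"ratio: {hmd / desktop:.1f}x")
```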

While the new top-end Maxwell-based Quadro GPUs are exciting, I am really looking forward to seeing Nvidia’s Pascal technology used for graphics processing in the near future. In the meantime, we have enough performance with existing systems to start processing 360-degree videos and VR experiences.

Mike McCarthy is a freelance post engineer and media workflow consultant based in Northern California. He shares his 10 years of technology experience on www.hd4pc.com, and he can be reached at mike@hd4pc.com.

Dell embraces VR via Precision Towers

It’s going to be hard to walk the floor at NAB this year without being invited to demo some sort of virtual reality experience. More and more companies are diving in and offering technology that optimizes the creation and viewing of VR content. Dell is one of the latest to jump in.

Dell has been working closely on this topic with its hardware and software partners, and is formalizing its commitment to the future of VR by offering solutions optimized for VR consumption and creation alongside the mainstream professional ISV apps used by industry pros.

Dell has introduced new recommended minimum system hardware configurations to support an optimal VR experience for pro users with HTC Vive or Oculus Rift VR solutions. Whether users are consuming or creating VR content, the VR-ready solutions must meet three criteria: minimum CPU, memory and graphics requirements to support VR viewing experiences; graphics drivers that are qualified to work with these solutions; and passing performance tests conducted by the company using test criteria based on HMD (head-mounted display) suppliers, ISVs or third-party benchmarks.

Dell has also made upgrades to its Dell Precision Tower line, including increased performance, graphics and memory for VR content creation. The refreshed Dell Precision Tower 5810, 7810 and 7910 workstations and rack 7910 have been upgraded with new Intel Broadwell EP processors that offer more cores and performance for the multi-threaded applications that support professional modeling, analysis and calculations.

Additional upgrades include the latest pro graphics technology from AMD and Nvidia, Dell Precision Ultra-Speed PCIe drives with up to 4x faster performance than traditional SATA SSD storage, and up to 1TB of DDR4 memory running at 2400MHz.

Raytracing today and in the future

By Jon Peddie

More papers, patents and PhDs have been written and awarded on ray tracing than on any other computer graphics technique.

Ray tracing is a subset of the rendering market. The rendering market is, in turn, a subset of software for larger markets, including media and entertainment (M&E), architecture, engineering and construction (AEC), computer-aided design (CAD), scientific, entertainment content creation and simulation-visualization. Not all users who have rendering capabilities in their products use them. At the same time, some products have been developed solely as rendering tools, while others combine 3D modeling, animation and rendering capabilities and may be used primarily for rendering, primarily for modeling or primarily for animation.

Because ray tracing is so important, and at the same time computationally burdensome, individuals and organizations have spent years and millions of dollars trying to speed things up. A typical ray traced scene on an old-fashioned HD screen can tax a CPU so heavily that the image can only be updated every second or two — certainly not the 33ms needed for realtime rendering.

GPUs can’t help much because ray tracing has no memory: every frame is a new frame, so the computational load cannot be amortized across frames. Also, the branching that occurs in ray tracing defeats the power of a GPU’s SIMD architecture.
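A minimal Python sketch of a primary ray tested against a sphere list makes the branching point visible: each ray independently hits or misses each object and tracks its own nearest hit, which is exactly the divergent control flow that lockstep SIMD lanes execute inefficiently.

```python
import math

def trace(origin, direction, spheres):
    """One primary ray against a list of spheres. Every ray takes its own
    branch (hit or miss, nearest-object selection), which is why SIMD lanes
    that must execute in lockstep lose efficiency on this workload."""
    nearest, hit = float("inf"), None
    for center, radius, color in spheres:
        oc = [o - c for o, c in zip(origin, center)]
        b = 2 * sum(d * k for d, k in zip(direction, oc))
        c = sum(k * k for k in oc) - radius * radius
        disc = b * b - 4 * c          # direction assumed normalized (a = 1)
        if disc < 0:
            continue                  # miss: this ray diverges from the hits
        t = (-b - math.sqrt(disc)) / 2
        if 0 < t < nearest:
            nearest, hit = t, color
    return hit

print(trace((0, 0, 0), (0, 0, 1), [((0, 0, 5), 1.0, "red")]))
```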

Material Libraries Critical
Prior to 2015, all ray tracing engines came with their own materials libraries. Cataloging the characteristics of all the types of materials in the world is beyond the resources of any single company to develop and support, and the lack of standards has held back cooperative development in the industry. However, a few companies have agreed to work together and share their libraries.

I believe we will see an opening up of libraries, and the ability of various ray tracing engines to avail themselves of a much larger pool of materials. Nvidia is developing a standard-like capability it calls the Material Definition Language (MDL) and using it to allow various libraries to work with a wide range of ray tracing engines.

Rendering Becomes a Function of Price
In the near future, I expect to see 3D rendering become a capability offered as an online service. While it’s not altogether clear how this will affect the market, I think it will boost the use of ray tracing and shift the cost to an as-needed basis. It also offers the promise of being able to apply huge quantities of processing power, limited only by the amount of money the user is willing to pay. Ray tracing will resolve to time (to render a scene) divided by cost.
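A toy Python model of that time-divided-by-cost idea, with purely illustrative numbers: for a fixed per-node-hour price, adding cloud nodes shrinks the wall-clock time while total spend stays flat, so the user is really choosing a deadline.

```python
# Toy model of "rendering as a function of price". For a fixed per-node
# hourly rate, wall-clock time scales down with node count while total cost
# stays flat. All numbers are illustrative assumptions.
frames = 1000
minutes_per_frame = 30          # single-node render time per frame
rate_per_node_hour = 0.50       # dollars

for nodes in (10, 100, 1000):
    hours = frames * minutes_per_frame / 60 / nodes
    cost = hours * nodes * rate_per_node_hour
    print(f"{nodes:5d} nodes: {hours:7.1f} hours, ${cost:.2f}")
```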

Scale of that kind will continue to bring down the time to generate a ray traced frame for an animation, for example, but it probably won’t get us to realtime ray tracing at 4K or beyond.

Shortcuts and Semiconductors
Work continues on finding clever ways to short-circuit the computational load by using intelligent algorithms that look at the scene and deterministically allocate which objects will be seen and which surfaces need to be considered.

Hybrid techniques, in which only certain portions of a scene are ray traced, are also being improved and evolved. Objects in the distance, for example, don’t need to be ray traced, nor do flat, dull-colored objects.

Chaos Group says the use of variance-based adaptive sampling on this model of Christmas cookies from Autodesk 3ds Max provided a better final image in record time. (Source: Chaos Group)
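For readers unfamiliar with the technique the caption names, here is a minimal Python sketch of variance-based adaptive sampling in general terms, not Chaos Group’s implementation: each pixel keeps accumulating samples only until its estimate is statistically stable, so smooth areas finish early and the budget flows to noisy ones.

```python
import random, statistics

def adaptive_pixel(shade, min_samples=8, max_samples=256, noise_target=0.01):
    """Keep sampling one pixel until the standard error of the mean falls
    below the target, so smooth regions stop early and noisy regions
    (glossy highlights, soft shadows) receive the extra samples."""
    samples = [shade() for _ in range(min_samples)]
    while len(samples) < max_samples:
        err = statistics.stdev(samples) / len(samples) ** 0.5
        if err < noise_target:
            break
        samples.append(shade())
    return sum(samples) / len(samples), len(samples)

# A flat region converges almost immediately; a noisy one runs much longer.
flat = adaptive_pixel(lambda: 0.5 + random.uniform(-0.01, 0.01))
noisy = adaptive_pixel(lambda: 0.5 + random.uniform(-0.4, 0.4))
print(flat, noisy)
```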

Semiconductors are being developed specifically to accelerate ray tracing. Imagination Technologies, the company that designs the GPU in Apple’s iPhone and iPad, has a ray tracing engine that, when combined with the advanced techniques just described, can render an HD scene with partially ray traced elements several times a second. Siliconarts, a startup in Korea, has developed a ray tracing accelerator, and I have seen demonstrations of it generating images at 30fps. And Nvidia is working on ways to make a standard GPU more ray-tracing friendly.

All these ideas and developments will come together in the very near future and we will begin to realize realtime ray tracing.

Market Size
It is impossible to know how many users of ray tracing programs there are, because the major 3D modeling and CAD programs, both commercial and free (e.g., Autodesk’s products, Blender), have built-in ray tracing engines as well as the ability to use plug-in ray tracing software.

The potentially available market vs. the totally available market (TAM).

Also, not all users make use of ray tracing on a regular basis; some use it every day, others occasionally or once a project. Furthermore, some users will use multiple ray tracing programs in a single project, depending upon their materials library, user interface, specific functional requirements or pipeline functionality.

Free vs. Commercial
A great deal of the ray tracing software available on the market is the result of university projects. Some of the developers of such programs have formed companies; others have chosen to stay in academia or work as independent programmers.

The number of new suppliers has not slowed down, indicating continued demand for ray tracing.

The non-commercial developers continue to offer their ray tracing rendering software as open source and for free — and continue to support it, either individually or as part of a group.

Raytracing Engine Suppliers
The market for ray tracing is entering into a new phase. This is partially due to improved and readily available low-cost processors (thank you, Moore’s law), but more importantly it is because of the demand and need for accurate virtual prototyping and improved workflows.

Rendering in the cloud using GPUs. (Source: OneRender)

As with any market, there is a 20/80 rule, where 20 percent of the suppliers represent 80 percent of the market. The ray tracing market may be even more unbalanced: there would appear to be too many suppliers despite failures and merger-and-acquisition activity. At the same time, many competing suppliers have been able to coexist successfully by offering features customized for their most important customers.

Conclusion
Ray tracing is to manufacturing what a storyboard is to film — the ability to visualize the product before it’s built. Movies couldn’t be made today with the quality they have without ray tracing. Think of how good the characters in Cars looked — that imagery made it possible for you to suspend disbelief and get into the story. It used to be: “Ray tracing — Who needs it?” Today it’s: “Ray tracing? Who doesn’t use it?”

Our Main Image: An example of different materials being applied to the same object. (Source: Nvidia)

Dr. Jon Peddie is president of Jon Peddie Research, which just completed an in-depth market study of the ray tracing market. He is the former president of the Siggraph Pioneers and serves on the advisory boards of several companies. In 2015, he received the Lifetime Achievement award from the CAAD society. His most recent book is “The History of Visual Magic in Computers.”

Pixar to make Universal Scene Description open source

Pixar Animation Studios, whose latest feature film is Inside Out, will release its Universal Scene Description (USD) software as an open-source project by summer 2016. USD addresses the growing need in the CG film and game industries for an effective way to describe, assemble, interchange and modify high-complexity virtual scenes between the digital content creation tools employed by studios.

At the core of USD are Pixar’s techniques for composing and non-destructively editing graphics “scene graphs,” techniques that Pixar has been cultivating for close to 20 years, dating back to A Bug’s Life. These techniques, such as file-referencing, layered overrides, variation and inheritance, were completely overhauled into a robust and uniform design for Pixar’s next-generation animation system, Presto.

Although it is still under active development and optimization, USD has been in use for nearly a year in the making of Pixar’s production Finding Dory.

The open-source Alembic project brought standardization of cached geometry interchange to the VFX industry. USD hopes to build on Alembic’s success, taking the next step of standardizing the “algebra” by which assets are aggregated and refined in-context.

The USD distribution will include embeddable direct 3D visualization provided by Pixar’s modern GPU renderer, Hydra, as well as plug-ins for several key VFX DCCs, comprehensive documentation, tutorials and complete Python bindings.
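As a rough illustration of the non-destructive layered overrides described above, here is a sketch using the pxr Python bindings as they appear in USD’s later public releases; exact module and call names in the initial distribution may differ.

```python
# A minimal sketch of USD's layered-override idea: a weaker "asset" layer
# authors a prim, and a stronger "shot" layer overrides one attribute
# without ever modifying the asset file.
from pxr import Usd, Sdf

asset = Usd.Stage.CreateNew("ball_asset.usda")
prim = asset.DefinePrim("/Ball", "Sphere")
prim.CreateAttribute("radius", Sdf.ValueTypeNames.Double).Set(1.0)
asset.GetRootLayer().Save()

shot = Usd.Stage.CreateNew("shot.usda")
shot.GetRootLayer().subLayerPaths.append("ball_asset.usda")
over = shot.OverridePrim("/Ball")        # an opinion, not a copy
over.GetAttribute("radius").Set(2.0)     # the stronger layer wins

print(shot.GetPrimAtPath("/Ball").GetAttribute("radius").Get())  # 2.0
```

The asset file is untouched; the shot layer simply holds a stronger opinion about one attribute, which is the “algebra” of composition the article refers to.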

Pixar has already been sharing early USD snapshots with a number of industry vendors and studios for evaluation, feedback and advance incorporation. Among the vendors helping to evaluate USD are The Foundry and Fabric Software.

——

In related news, to accelerate production of its computer-animated feature films and short film content, Pixar Animation Studios is licensing a suite of Nvidia technologies related to image rendering.

The multiyear strategic licensing agreement gives Pixar access to Nvidia’s quasi-Monte Carlo (QMC) rendering methods. These methods can make rendering more efficient, especially when powered by GPUs and other massively parallel computing architectures.
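To see why quasi-Monte Carlo matters to a renderer, here is a small generic Python sketch, not the licensed technology itself: a low-discrepancy Halton sequence covers the integration domain more evenly than pseudo-random samples, so the same sample count typically yields a less noisy estimate.

```python
import random

def halton(index, base):
    """Radical-inverse (Halton) value: a 1D low-discrepancy sample point."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def estimate_quarter_circle(points):
    """Integrate sqrt(1 - x^2) on [0, 1]; exact value is pi/4 ~ 0.785398."""
    return sum((1 - x * x) ** 0.5 for x in points) / len(points)

n = 4096
qmc = [halton(i + 1, 2) for i in range(n)]
mc = [random.random() for _ in range(n)]
print("QMC:", estimate_quarter_circle(qmc), " MC:", estimate_quarter_circle(mc))
```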

As part of the agreement, Nvidia will also contribute raytracing technology to Pixar’s OpenSubdiv Project, an open-source initiative to promote high-performance subdivision surface evaluation on massively parallel CPU and GPU architectures. The OpenSubdiv technology will enable rendering of complex Catmull-Clark subdivision surfaces in animation with incredible precision.
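For reference, Catmull-Clark evaluation ultimately reduces to three averaging rules, sketched below in Python for a single quad; OpenSubdiv’s contribution is evaluating these rules quickly over full mesh connectivity on parallel CPU and GPU hardware.

```python
# The classic Catmull-Clark averaging rules, shown in isolation. A real
# implementation applies them across full mesh connectivity; this sketch
# only illustrates the rules themselves.

def face_point(face_verts):
    """Average of a face's corner vertices."""
    n = len(face_verts)
    return tuple(sum(v[i] for v in face_verts) / n for i in range(3))

def edge_point(v0, v1, fp0, fp1):
    """Average of the edge endpoints and the two adjacent face points."""
    return tuple((v0[i] + v1[i] + fp0[i] + fp1[i]) / 4 for i in range(3))

def vertex_point(v, avg_face_pts, avg_edge_mids, n):
    """(F + 2R + (n - 3)P) / n for an interior vertex of valence n."""
    return tuple((avg_face_pts[i] + 2 * avg_edge_mids[i] + (n - 3) * v[i]) / n
                 for i in range(3))

quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
fp = face_point(quad)
print(fp)                                                   # (0.5, 0.5, 0.0)
print(edge_point(quad[0], quad[1], fp, (0.5, -0.5, 0.0)))   # assumed neighbor face point
```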

Nvidia takes on VR with DesignWorks VR at SIGGRAPH

At SIGGRAPH in LA, Nvidia introduced DesignWorks VR, a set of APIs, libraries and features that enable both VR headset and application developers to deliver immersive VR experiences. DesignWorks VR includes components that enable VR environments like head-mounted displays (HMDs), immersive VR spaces such as CAVEs and other immersive displays, and cluster solutions. DesignWorks VR builds on Nvidia’s existing GameWorks VR SDK for game developers, with improved support for OpenGL and features for professional VR applications.

Ford VR

At its SIGGRAPH 2015 booth, Nvidia featured a VR demonstration by the Ford Motor Company in which automotive designers and engineers were able to simulate the interiors and exteriors of vehicles in development within an ultra high-definition virtual reality space. By using new tools within DesignWorks VR, Ford and Autodesk realized substantial performance improvements to make the demo smooth and interactive.

In addition, Nvidia highlighted an immersive Lord of the Rings VR experience created by Weta Digital and Epic Games and powered by the Nvidia Quadro M6000. At Nvidia’s “Best of GTC Theater,” companies such as Audi and VideoStitch spoke about their work with VR in design.

Nvidia’s GPU Technology Conference: Part III

Entrepreneurs, self-driving cars and more

By Fred Ruckel

Welcome to the final installment of my Nvidia GPU Technology Conference experience. If you have read Part I and Part II, I’m confident you will enjoy this wrap-up — from a one-on-one meeting with one of Nvidia’s top dogs to a “shark tank” full of entrepreneurs to my take on the status of self-driving cars. Thanks for following along and feel free to email if you have any questions about my story.

Going One on One
I had the pleasure of sitting down with Nvidia marketing manager Greg Estes, along with Gail Laguna, their PR expert in media and entertainment. They allowed me to pick their brains about …

Nvidia GPU Technology Conference 2015: Part I

By Fred Ruckel

Recently, I had the pleasure of attending the Nvidia GPU Technology Conference 2015 in San Jose, California, a.k.a. Silicon Valley. This was not a conference for the faint of heart; it was an in-depth look at where the development of GPU technology is heading and what strides it has made over the last year. In short, it was the biggest geek fest I have ever known, and I mean that as a compliment. The cast of The Big Bang Theory would have fit right in.

While some look at “geek” as having a negative connotation, in the world of technology geeks …

Dell Precision M3800 gets 4K UltraHD screen, other updates

By Dariush Derakhshani

Dell announced today that its super-slim mobile workstation will get a number of impressive updates. The Intel i7 CPU-based M3800 has been billed as the thinnest and lightest 15-inch mobile workstation; it originally debuted in October 2013 to much excitement. I spoke with Dell about the updates to the laptop recently, and I think the best one is to the high-end screen option: a jaw-dropping 4K UltraHD resolution of 3840×2160.

Furthermore, the screen is made of Corning Gorilla Glass and has 10-finger touch capability. Unless you have three hands, I doubt you’d need more touch capability than that. On top of that, you can drive up to two external displays with the Nvidia Quadro K1100M discrete graphics card (or three displays via a USB dock!). Discrete graphics are important to workstation performance, and the Kepler-based K1100M should do a very good job even …

HP intros new versions of its mobile and tower workstations

By Mike McCarthy

Last week I got the opportunity to attend HP’s big workstation launch event in Fort Collins, Colorado. HP is releasing new versions of its ZBook mobile workstations and desktop Z workstation towers. I also got to tour their labs and see behind the curtain at the development and testing process.

New “G2” versions of last year’s HP ZBook 15 and ZBook 17 will be available later this month. Both models sport the newest “Haswell” architecture-based Intel CPUs, new AMD and Nvidia GPU cards and M.2 storage options. HP has branded their PCIe-based flash storage solution as the “Z-Turbo Drive,” and it is available in their new ZBooks and workstations. Removing the SATA interface bottleneck greatly improves maximum read and write speeds on the new flash …

Nvidia next-gen GPUs focus on speed, the cloud and mobile

At the SIGGRAPH show in Vancouver, Nvidia showed its next generation of Quadro GPUs, which it describes as “an enterprise-grade visual computing platform” offering 40 percent faster performance. With these cards, Nvidia is focusing on 4K, the cloud, mobile and collaboration, including remote rendering. Pricing is expected to remain the same.

“The next generation of Quadro GPUs not only dramatically increases graphics and compute performance to handle huge data sets, it extends the concept of visual computing from a graphics card in a workstation to a connected environment,” said Jeff Brown, VP of Professional Visualization at Nvidia. “The new Quadro line-up lets users interact with their designs or data locally on a workstation, remotely on a mobile device or in tandem with cloud-based services.”

A shot from Nvidia’s New York press event on the set of ABC’s The Chew.

Framestore’s CTO, Steve MacPherson, who spoke at an Nvidia press event recently in New York City, says, “From increased efficiency to new workflow models, the results we’ve achieved with the latest Quadro GPUs are fundamental to our future.”

Framestore, which provides VFX and graphics for major feature films such as Guardians of the Galaxy and Edge of Tomorrow, has been doing a lot of work recently in integrated advertising, including virtual reality, something that MacPherson calls “a blending between the physical and CG worlds.”

He points to Nvidia’s Quadro cards as a key part of the studio’s workflow. “Nvidia Quadro is an essential component to helping us keep our edge, and gives us the reassurance of knowing the graphics technology our artists rely on has been developed specifically for professional users with the highest standards of reliability and compatibility.”

Adobe’s Dennis Radke was at the New York event, talking about how the new cards accelerate Creative Cloud and allow Premiere Pro to take in 4K Red raw files without a Red Rocket card.

Also in New York, Nvidia showed remote collaboration workflows using a Google tablet running Autodesk Maya. Check out the company’s blog on the topic.

So to sum up, the new generation of Quadro GPUs — the K5200, K4200, K2200, K620 and K420 — allows users to interact with data sets or designs up to twice the size handled by previous generations; remotely interact with graphics applications from a Quadro-based workstation from essentially any device, including PCs, Macs and tablets; run major applications, such as Adobe Creative Cloud and the Autodesk Design Suite — on average 40 percent faster than with previous Quadro cards; and switch easily from local GPU rendering to cloud-based offerings using Nvidia Iray rendering.

NAB: Nvidia’s Andrew Page

Las Vegas — Nvidia’s Andrew Page came by the postPerspective booth during the NAB Show to discuss GPU acceleration. He even brought a prop: an Nvidia Quadro K6000 flagship board, which offers enhanced performance for graphics and compute acceleration.

Page talks about partnerships with companies like Adobe, Blackmagic, Quantel and others. Nvidia’s cards can be used in desk-side workstations, mobile workstations and even in the cloud with companies like VMware.

Review: Nvidia’s Quadro K4000 running on an HP Z420

By Brady Betzel

As far as graphics cards go, Nvidia and AMD are at the head of the pack. Given Apple’s decision to include only AMD graphics in its latest Mac Pro, the competition is heating up.

For the longest time I built my own computers, not really focusing on my graphics card …

Quick Chat: Nvidia’s Greg Estes

By Randi Altman

Greg Estes, Nvidia’s VP of marketing, recently took a few minutes out of his schedule to discuss the industry, trends and how the company goes about creating new products that target the needs of users.

The short answer is listening to what studios and broadcasters need. The long answer is… well, give it a read and see for yourself.

Tweak RV-SDI delivers 4K output with AJA, Nvidia GPUDirect for Video support

San Francisco — Tweak Software announced support for AJA Video’s Kona 3G, T-TAP and Io XT, and Nvidia’s GPUDirect for Video, in RV-SDI, its advanced playback, review and image manipulation software for screening rooms and theaters.

RV-SDI’s capabilities for playback, review and collaboration include support for hundreds of media formats, all standard SDI output resolutions, completely GPU-based image processing, advanced color management and support for Windows, Mac and Linux. When combined with an AJA Kona 3G and an Nvidia Quadro GPU, RV-SDI enables 4K video output along with a range of digital cinema and video formats.

RV-SDI’s universal media engine can mix multiple media types on a single timeline, including different file formats, resolutions, bit depths and color spaces. RV-SDI delivers the benefits of a floating-point, linear color workflow with GPU-accelerated support for CDLs, LUTs and shaders, and the ability to build custom color pipelines. The new release supports DCI 4K, UHD 4K, DCI 2K, HD and more formats. It also supports the stereo dual-stream playback and interactivity demanded by top studios, VFX facilities, networks and post houses for color-critical review and approval.
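As a rough illustration of the per-pixel color math such a pipeline runs, here is an ASC CDL (slope/offset/power) plus saturation step in plain Python; the parameter values are arbitrary, and RV-SDI performs the equivalent work on the GPU in floating point.

```python
# ASC CDL on one linear RGB value: out = (in * slope + offset) ** power,
# followed by a saturation adjustment around luma. Parameter values here
# are arbitrary examples.

def apply_cdl(rgb, slope=(1, 1, 1), offset=(0, 0, 0), power=(1, 1, 1), sat=1.0):
    out = [max(c * s + o, 0.0) ** p
           for c, s, o, p in zip(rgb, slope, offset, power)]
    # Rec. 709 luma weights for the saturation step
    luma = 0.2126 * out[0] + 0.7152 * out[1] + 0.0722 * out[2]
    return tuple(luma + sat * (c - luma) for c in out)

# Warm middle gray slightly and desaturate 10 percent:
print(apply_cdl((0.18, 0.18, 0.18), slope=(1.05, 1.0, 0.95), sat=0.9))
```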

“Kona 3G and Tweak’s RV-SDI together deliver a robust and cost-effective 4K and stereo output solution for screening rooms,” said Nick Rashby, president, AJA Video Systems. “RV-SDI is unique in that along with a full range of support across multiple DCI and video formats, you can do real-time image processing on the GPU and hand it off for SDI output seamlessly.”

“RV-SDI takes advantage of Nvidia’s GPUDirect for Video exactly the way we intended, enabling low-latency video output from our GPUs for immediate viewing at a range of resolutions on any SDI display,” said Andrew Page, product manager, professional video, Nvidia.

“RV-SDI has become a trusted solution for review and approval in the pressure-cooker environment of high-end VFX and animation production. We are pushing the limits of AJA SDI hardware and Nvidia’s Quadro cards to combine SDI output of 4K, 10-bit stereo and more with RV-SDI’s real-time color, collaboration and review features,” said Seth Rosenthal, president, Tweak Software.

Tweak Software is currently holding a 2013 Year-End sale and RV-SDI is available as part of a special software bundle discounted at over 40 percent: http://www.tweaksoftware.com/buy/Tweak-2013-Year-End-Sale.

Nvidia Grid vGPU technology now available

Santa Clara — Businesses can now offer their designers and engineers — including those working remotely — cost-effective, secure, graphics-intensive apps using Nvidia Grid vGPU (virtual GPU) technology. It launches today with the general availability of Citrix XenDesktop 7.1 and Citrix XenServer 6.2.

Nvidia Grid vGPU technology (http://www.nvidia.com/object/virtual-gpus.html) lets pros use essentially any computing device, including their own notebooks and portable devices, to access all their office productivity and design applications virtually — just as they would at their desks — from anywhere at any time.

Prior to Grid vGPU on Citrix XenDesktop, customers could deploy Grid to virtualize GPU access to end users on a one-to-one basis. Now, they can quickly share access on one GPU to many end users, and easily reallocate access depending on changing project needs.

“Since the launch of our technology preview of XenDesktop and XenServer, we’ve demonstrated to customers worldwide that graphically demanding desktops and applications can be virtualized cost effectively and with high scalability using Nvidia Grid vGPU,” said Calvin Hsu, VP product marketing, Desktops and Apps at Citrix. “With the general availability of XenDesktop 7.1 and XenServer 6.2, businesses everywhere can download a 60-day trial and experience the performance advantages for themselves.”

Certified by leading ISVs, Nvidia Grid vGPU technology allows multiple virtual machines to share a GPU and run the full Nvidia driver, which means 100 percent application compatibility.

“Designers, engineers and creative professionals are thought leaders who drive innovation in business, but historically they’ve been limited in where they can work due to the relatively large computers they needed to do their jobs,” said Jeff Brown, VP of the Professional Visualization and Design business at Nvidia. “With Nvidia Grid vGPU, these innovators can now work wherever they find insight — onsite with clients, at home or around the office.”

What are companies serving M&E saying?

“Adobe Creative Cloud delivers tools and services for creatives to explore new mediums and go wherever their ideas take them. With the Nvidia Grid line of products, creative imaging professionals now have even more ways to access ultra-efficient, high-performance versions of Photoshop,” said Pam Clark, director of product management, Photoshop, at Adobe.

“Autodesk is continuously working to provide customers with access to the tools they need, when and where they need them. Utilizing the Nvidia Grid vGPU is one more way we can provide access to our design applications anytime, anywhere and on any device, without compromise,” said Jay Tedeschi, senior industry technology evangelist at Autodesk.