By Randi Altman
Greg Estes, Nvidia’s VP of marketing, recently took a few minutes out of his schedule to discuss the industry, trends and how the company goes about creating new products that target the needs of users.
The short answer is listening to what studios and broadcasters need. The long answer is… well, give it a read and see for yourself.
I’ve been interviewing you about Nvidia technology for years, and you always seem to be on top of the latest industry trends. How do you map those back to product development to make sure you deliver tech and products that your customers need?
We have an ongoing dialog with all of the major studios and several major broadcasters. We get a lot out of our collaboration with customers who push the envelope, like Pixar, Weta, ESPN, the BBC and so forth, and also from our software partners like Adobe, Autodesk and Vizrt, to name just a few. And we have a technology council we hold each year at GTC, the GPU Technology Conference, which is being held later this month.
That conference brings together the heads of technology from two dozen studios, post houses and VFX shops, and a like number of application partners, with not only our product teams from across the company but also engineering and Nvidia Research. We take that input very seriously. Their input is in large part why we have 12GB of graphics memory on the Quadro K6000, for instance. It increases the complexity of the scenes artists can work with interactively in apps like Maya or Houdini.
How did Nvidia work with this year’s Academy VFX nominees and SciTech winners?
Pretty much all of the major studios use commercial software but also develop their own in-house applications using CUDA, which is our platform for parallel computing using our GPUs. We have a team we call Developer Technology, which includes what I think are some of the brightest computer graphics and parallel computing minds in the world. That’s a free resource to our developers, and a lot of great work has come from those collaborations, including Adobe’s Mercury Playback Engine and Panta Ray from Weta. ILM has always been on the leading edge of using our GPUs, and we were so happy for them to have been recognized with the Academy Award for Plume, their GPU-accelerated particle system used for fire and explosions.
You know, being in the industry you get to see how all the visual effects are done, but sometimes you just get excited as a consumer. When I went to see Gravity with my wife, the whole time I was thinking that I couldn’t wait to see the breakdown reel from Framestore to see how they did the effects, which I thought were amazing (and Oscar-winning). Stellar work.
How is Nvidia working with Pixar? They had a cool demo in your booth at SIGGRAPH, and it looks like they’re keynoting the M&E track at the Nvidia GPU Technology Conference this year too.
I don’t think it’s any secret that the team at Pixar includes not only amazing artists and storytellers, but world-class computer scientists as well. They represent maybe the best example of the collaboration I mentioned earlier. They stretch us in everything we make – the capabilities of the GPUs themselves, ray tracing software like Nvidia OptiX, what goes into our Linux drivers, how our Grid technology could be deployed for remote workstations, how GPU rendering can assist in artist previews and so forth. They will be articulating how they use our technology across their workflow, and you can imagine how proud we are to be associated with them and their willingness to share insights with the community.
With all those broadcast and film customers presenting at GTC, what do you expect to be some of the highlights?
By calling out a few, I feel like I am doing a disservice to the more than 40 other media/entertainment talks I don’t list… but there are a couple of things I’m particularly excited about. Adobe will be talking about accelerating 2D vector graphics using Nvidia path rendering technology within Adobe Illustrator CC. Bringing GPU acceleration to Illustrator would be phenomenal, and their early work has shown significant speed-ups, so that’s very promising.
Zoic Studios, Blur, Chaos, OTOY and others will be talking about advances in GPU final-frame rendering, which is of great interest to a lot of customers, and there are a number of talks on video processing for broadcast and delivery to mobile devices from the likes of ESPN, Elemental Technologies and of course our own engineers. And there are talks from ILM, Google, and dozens of others. It’s an amazing event.
There was a lot of chatter when the new Mac Pros came out, and some concern from customers who relied on CUDA-accelerated GPU performance for some of their content creation apps. Where are you pointing those customers?
Yeah, unfortunately at this point there is no good pathway to get CUDA acceleration or other benefits of Nvidia GPUs in the new Mac Pro, but there are good options for customers who have invested in a Mac workflow and want to stay on Mac OS.
The first is for customers who want to continue to leverage their investment in the older PCI-based Mac Pro. For them, our testing has shown that for Adobe Premiere Pro they can get better performance than the new Mac Pro simply by adding one or two Quadro K5000s to their existing system. That will of course accelerate DaVinci Resolve, Maya, Nuke and other apps as well.
Final Cut users would get better performance on the new Mac Pro, though — Apple has done a fantastic job of tuning FCP X for the new machine. But our tests show that with Adobe the K5000s will be faster. CUDA is typically 20-30% faster than OpenCL in direct comparisons, so that’s part of the benefit. And then a number of reviews I’ve seen have shown that an iMac (which is Nvidia-based) will perform roughly as well for some apps, so for users who want to stay on Mac OS and get the benefit of Nvidia technology, the iMac or MacBook Pro remain great alternatives.
There’s been a lot of talk about cloud-based workflows in production and post. How can users get Nvidia GPU power through the cloud?
Quite a number of ways, and the number of options is growing seemingly every week. Dell, IBM, Cisco, HP and a number of other system providers all have Grid-based offerings. Grid is our GPU line built for the data center, with special technology that allows for GPU virtualization and remoting.
In other words, delivering a workstation experience to users who are not physically located next to their system. That could be because the GPUs are in a server in the company data center in the same building, or it could be an artist accessing their workstation located in NYC from their tablet while on a customer visit in LA. Typically these installations are best for larger businesses that will put in a virtualization infrastructure based on Citrix or VMware technology along with Grid GPUs.
For smaller shops that may not have that level of IT infrastructure, there are a couple of alternatives that might make sense. One is Nvidia’s Grid Visual Computing Appliance, or VCA. The VCA is 8 GPUs in a 4U rack system with Nvidia’s own software. It does not require a big IT investment and is more suited to smaller agencies. Then late last year Amazon announced they were putting Grid technology in the Amazon Web Services cloud. That means that companies can now create offerings to stream GPU-accelerated applications to essentially any computing device. So no matter what size of company you are or what your workflow is, there are options available that simply were not possible a year ago.
Any cool surprises for NAB? Any fun events our readers should know about?
We’ll be showing all of these cloud-based options, some new advances in 4K workflows and some new things from partners we can’t quite talk about yet.
We also have a private VIP customer event that we don’t publicize, but for readers of postPerspective who will be attending NAB: if they go to the Nvidia booth in the South Hall on Monday, April 7 (only) and tell the folks at our registration desk that they are postPerspective readers, we’ll get them a ticket. I promise they will not be disappointed.
Published March 2, 2014