Nvidia’s GPU Technology Conference: Part II

By Fred Ruckel

A couple of weeks ago I had the pleasure of attending Nvidia’s GPU Technology Conference in San Jose. I spent five days sitting in on sessions and demos, as well as a handful of one-on-one meetings. If Part I of my story had you interested in the new world of GPU technology, take a dive into this installment and learn what other cool things Nvidia has created to enhance your workflow.

Advanced Rendering Solutions
We consider rendering to be the final output of an animation. While that’s true, there’s a lot more to rendering than just the final animated result. We could jump straight to the previz stage of a production, which we know can help plan for a perfect shoot, but in this instance that’s not where I’m heading. I am referring to rendering out “simulations.” Let’s use an example to illustrate how this technology can make a difference. You may have heard about the drama surrounding a partially melted car, seemingly caused by a “death ray” that bounced off a building in London back in 2013. Sadly, in that instance, advanced rendering wasn’t used. Previsualization would have demonstrated the way in which the sunlight was intensified by the curvature of the building.

Today, computers have given rise to amazing growth in architectural design. Sometimes, however, certain aspects get overlooked. In this case, a beautiful new building, adorned in curved glass, stood high above the streets of London. The curved glass, combined with the height of the sun, created a super-magnifying glass. How much of a magnifier? The spot projected onto the street reached roughly 200 degrees Fahrenheit, almost hot enough to boil water. Needless to say, the prolonged exposure deformed the plastic parts of someone’s luxury car.

This could have been avoided by using a GPU-accelerated simulation render. Using Iray, Nvidia’s physically based renderer, an artist can assess and test scenarios, like the effect of the sun on the curved glass of the building. The system can calculate the angle of light hitting the glass, the magnification from the glass to the street, and even the heat rise generated by the sun’s intensity on any given day of the year. Fortunately, the so-called “death ray” building only melted parts of the car. Had the angle been just a little different, the magnification could have been extreme enough to liquefy the asphalt or perhaps spark an intensely destructive fire.
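To get a feel for the numbers at play, here is a rough back-of-the-envelope sketch of the concentration effect. To be clear, the facade and spot dimensions below are illustrative assumptions, not measurements of the actual building; a physically based renderer like Iray traces the full light transport rather than this crude area ratio.

```python
# Back-of-the-envelope estimate of a "death ray" from a curved facade.
# All dimensions are illustrative assumptions, not measured values.

SOLAR_FLUX = 1000.0  # W/m^2, typical clear-sky irradiance at ground level

def concentration_factor(aperture_w, aperture_h, spot_w, spot_h):
    """Ratio of collecting area to focused spot area.

    A concave glass facade behaves like a crude cylindrical mirror:
    sunlight reflected off a large curved surface converges into a much
    smaller stripe at street level, multiplying the flux there.
    """
    return (aperture_w * aperture_h) / (spot_w * spot_h)

# Hypothetical reflective facade patch and focused street stripe
# (width, height in meters).
facade_patch = (20.0, 30.0)
street_stripe = (2.0, 30.0)

c = concentration_factor(*facade_patch, *street_stripe)
print(f"concentration: ~{c:.0f}x the normal solar flux")
print(f"flux on the street: ~{SOLAR_FLUX * c / 1000:.0f} kW/m^2")
```

Even this toy estimate shows how a modest patch of curved glass can put an oven’s worth of energy onto a car-sized spot; the real question, which only a full simulation can answer, is where that spot lands on any given day and hour.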

That’s pretty deep stuff to comprehend. However, by using Nvidia’s GPU technology, it’s possible to simulate real-world scenarios like this in realtime. Through natural technological development, Nvidia is making the streets safer to walk on.

I took a picture of a simulated rendering of a room; the room in the photo had been recreated via advanced rendering, and compared side by side, the two were virtually identical. As an artist, you could pan around the room interactively and change the way the sun lit the space, how the shadows fell and how the interior would change over time, all with full materials on the models. This was quite impressive.

A very realistic-looking render.

Let’s take a minute to consider perceived trends in the rendering world:
Trend 1: We have reached a point where many of us question what we see: “If it looks right, it must be right?” Exhaustive mathematical simulations allow for extreme accuracy within real-world environments. This opens many doors for designers; worlds with “true realism” can be created like never before.

Trend 2: Physically based GPU raytracing for practical production is a reality. If you’ve been in this business a while, you probably know how long an acceptable raytraced render can take. It’s usually the one you set off before leaving for the night, crossing your fingers that it doesn’t crash mid-render.

Using GPU acceleration allows artists to work with raytracing at an interactive level, making things “just that much better.” After Effects artists can rejoice: OptiX makes it possible for fully raytraced elements to co-exist within a compositing pipeline, all inside of After Effects.
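To show why raytracing is such a heavy lift, and why it maps so naturally onto a GPU, here is a minimal sketch of the core calculation. This is the textbook ray-sphere intersection test written in plain Python, not Nvidia’s OptiX API; the point is that the same small, independent computation repeats for every pixel, every object and every bounce, which is exactly what thousands of GPU cores chew through in parallel.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest hit, or None on a miss.

    Solves |origin + t*direction - center|^2 = radius^2 for t, assuming
    direction is a unit vector.
    """
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# One test like this runs per pixel, per object, per bounce. A 4K frame
# with a few bounces triggers billions of them, which is why a CPU render
# is an overnight job while a GPU render can be interactive.
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # -> 4.0
```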

Construct – A Film Rendered Using GPU
While at the conference, I attended a session on Construct (our main photo for this story), presented by Kevin Margo, a 12-year veteran of Blur Studio. Kevin is known for his development work on Batman and Wolverine. Construct is a short film made using GPU final-frame rendering, and Kevin played its trailer for us. It was BEYOND impressive! The rendering was so clean, and the motion capture worked so well, that the rigged characters felt real and “alive.”

The rendering performance of Nvidia’s GPUs delivered final movie quality at five times the usual speed. This allowed his team to work much more efficiently and, in turn, to be more creative. They didn’t have to wait around doing the “render wander”; they completed a month’s worth of work in just four days. Kevin was able to put together a smaller team using less equipment, thus keeping his footprint small and his output large. That equals impressive savings and gains for a smaller business.

Construct

You no longer need to build a big renderfarm, nor do you need to send work out to an external renderfarm. With just a few GPUs inside a tower, you can achieve the same level of quality. For a small company, this is a perfect solution to a long-standing issue; a little shop can now crank out high-end renders at a fraction of the cost and time. In the case of Construct, Kevin actually rendered it out in his apartment on three GPU-equipped computers. That’s beyond amazing. (My two cents: While that is great, try to leave work at work, unless you work at home! It’s only really cool if you use that extra time to have a life.)

Faster rendering equals faster development. Using V-Ray RT allows you to interactively pan around the scene and frame your cameras. You can use multiple cameras and then edit your sequence to include angles that would be impossible to film, making for a dynamic edit.

Kevin placed emphasis on digital filmmaking, using realtime technologies and taking advantage of their creative benefits. Some key new features can be found in the camera functionality, which can now solve complicated things in realtime. Virtual camera parameters, such as depth of field, exposure, white balance, shutter angle and field of view, can all be simulated in realtime thanks to advancements in GPU technology.
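For a sense of what “simulated in realtime” actually involves, below is a small sketch of two of those virtual camera parameters, using standard photographic formulas; the lens and scene values are illustrative assumptions, not settings from Kevin’s production.

```python
def shutter_speed(shutter_angle_deg, fps):
    """Convert a film-style shutter angle to an exposure time in seconds.

    A 180-degree shutter at 24fps gives the classic 1/48s motion blur.
    """
    return (shutter_angle_deg / 360.0) / fps

def circle_of_confusion(focal_mm, f_stop, focus_m, subject_m):
    """Blur-circle diameter (mm) for a thin-lens camera model.

    This is the quantity a depth-of-field preview has to evaluate for
    every pixel, which is why realtime DoF was out of reach before GPUs.
    """
    f = focal_mm / 1000.0          # focal length in meters
    aperture = f / f_stop          # entrance pupil diameter in meters
    coc = aperture * f * abs(subject_m - focus_m) / (subject_m * (focus_m - f))
    return coc * 1000.0            # back to millimeters

# Illustrative settings: 50mm lens at f/2.8, focused at 2m, subject at 4m.
print(f"exposure time: 1/{1 / shutter_speed(180, 24):.0f}s")
print(f"blur circle:   {circle_of_confusion(50, 2.8, 2.0, 4.0):.3f} mm")
```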

The result means you can now:
– manage a workflow with a consistent look from start to finish
– spend more time being creative
– save a great deal of time, which in turn equates to a large saving in money.

GPU rendering for final-frame quality has come a long way. It’s worth rethinking your current model, especially if you’re rendering only on CPUs. CPU rendering has always been the norm, but a change to GPU may just give you your life back.

Fred Ruckel is a technology consultant and entrepreneur. He is also CEO of RuckSack NY, a Manhattan-based soup-to-nuts creative agency.

