By Andrew C. Jones
There is no shortage of articles online offering tips about 3D rendering. I have to admit that attempting to write one myself gave me a certain amount of trepidation, considering how quickly most rendering advice becomes obsolete, or even flat-out wrong.
The trouble is that production rendering is a product of the computing environment, available software and the prevailing knowledge of artists at a given time. Thus, the shelf life for articles about rendering tends to be five years or so. Inevitably, computing hardware gets faster, new algorithms get introduced and people shift their focus to new sets of problems.
I bring this up not only to save myself some embarrassment five years from now, but also as a reminder that computer graphics, and rendering in particular, is still an exciting topic that is ripe for innovation and improvement. As artists who spend a lot of time working within rigid production pipelines, it can be easy to forget this.
Below are some thoughts distilled from my own experience working in graphics, which I feel are about as relevant today as they would have been when I started working back in 2003. Along with each item, I have also included some commentary on how I feel the advice is applicable to rendering in 2016, and to Psyop’s primary renderer, Solid Angle’s Arnold, in particular.
Follow Academic Research
This can be intimidating: academic papers take considerably more effort than most other kinds of reading. Rest assured, it is completely normal to read a paper several times, and to do background research along the way, before it sinks in. Sometimes the background research is as helpful as the paper itself. Even if you do not completely understand everything, just knowing which problems a paper solves is useful knowledge in its own right.
Papers have to be novel to be published, so finding new rendering research relevant to 2016 is pretty easy. In fact, many useful papers have been overlooked by the production community and can be worth revisiting. A recent example of this is Charles Schmidt and Brian Budge’s paper, “Simple Nested Dielectrics in Ray Traced Images” from 2002, which inspired Jonah Friedman to write his open source JF Nested Dielectric shader for Arnold in 2013. ACM’s digital library is a fantastic resource for finding graphics-related papers.
Study the Photographic Imaging Pipeline
Film, digital cinema and video are engineering marvels, and their complexity is easily taken for granted. They are the template for how people expect light to be transformed into an image, so it is important to learn how they work.
Despite increasing emphasis on physical accuracy over the past few years, a lot of computer graphics workflows are still not consistent with real-world photography. Ten years ago, the no-nonsense, three-word version of this tip would have been “use linear workflow.” Today, the three-word version of the tip should probably be “use a LUT.” In five more years, perhaps people will finally start worrying about handling white balance properly. OpenColorIO and ACES are two recent technologies that fit under this heading.
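The linear-workflow half of this advice is easy to demonstrate. Below is a minimal sketch in plain Python, using the standard sRGB transfer functions, of why pixel math must happen in linear light: averaging two encoded values gives a noticeably different result from averaging the underlying light and re-encoding.

```python
def srgb_to_linear(c):
    """Decode an sRGB-encoded channel value in [0, 1] to linear light."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode a linear-light channel value in [0, 1] back to sRGB."""
    if c <= 0.0031308:
        return c * 12.92
    return 1.055 * c ** (1 / 2.4) - 0.055

# Averaging a black and a white pixel in encoded space underestimates
# the brightness; doing the math in linear light gives the photographic
# answer.
a, b = 0.0, 1.0
naive = (a + b) / 2  # blend computed directly on encoded values
correct = linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)
print(naive, round(correct, 4))  # 0.5 vs ~0.7354
```

In production, this decode/encode step is exactly what a color management layer like OpenColorIO handles for you; the point of the sketch is only that the two answers differ.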
Examples of recent renders done by Psyop on jobs for online retailer Otto and British Gas.
Study Real-World Lighting
The methodology and equipment of on-set lighting in live-action production can teach us a great deal, both artistically and technically. From an aesthetic standpoint, live-action lighting allows us to focus on learning how to control light to create pleasing images, without having to worry about whether or not physics is being simulated correctly.
Meanwhile, simulating real-world light setups accurately and efficiently in CG can be technically challenging. Many setups rely heavily on indirect effects like diffusion, but these effects can be computationally expensive compared to direct lighting. In Arnold, light filter shaders can help transform simplistic area lights into more advanced light rigs with view-dependent effects.
Fight for Simplicity
As important as it is to push the limits of your workflow and get the technical details right, all of that effort is for naught if the workflow is too difficult to use and artists start making mistakes.
In recent years, simplicity has been a big selling point for path-tracing renderers, since brute-force path tracing tends to require fewer parameters than spatially dependent approximations. Developers are constantly working to make their renderers more intuitive, so that artists can achieve realistic results without visual cheats. For example, Solid Angle recently added per-microfacet Fresnel calculations, which help achieve more realistic specular reflections along the edges of surfaces.
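Schlick's approximation is the standard way production shaders evaluate Fresnel reflectance per microfacet; the sketch below illustrates the edge-brightening effect it produces, though the exact formulation Arnold uses internally may differ.

```python
def fresnel_schlick(cos_theta, f0):
    """Schlick's approximation to Fresnel reflectance.

    cos_theta: cosine of the angle between the view direction and the
               (micro)facet normal.
    f0: reflectance at normal incidence (~0.04 for common dielectrics).
    """
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Reflectance climbs steeply toward grazing angles, which is why the
# edges of glossy surfaces read brighter than their centers.
head_on = fresnel_schlick(1.0, 0.04)   # 0.04
grazing = fresnel_schlick(0.1, 0.04)   # ~0.61
print(head_on, round(grazing, 3))
```

Evaluating this per microfacet, rather than once per shading normal, is what lets rough surfaces pick up that grazing-angle brightening automatically.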
Familiarize Yourself With Your Renderer’s API (If It Has One)
Even if you have little coding background, the API can give you a much deeper understanding of how your renderer really works. Bear in mind that many GPU renderers do not expose one: the fast-paced evolution of GPU programming makes providing a general-purpose API particularly difficult, and that missing layer can be a significant trade-off.
Embrace the Statistical Nature of Raytracing
The “DF” in BRDF actually stands for “distribution function.” Even real light is made of individual photons, which can be thought of as particles bouncing off of surfaces according to probability distributions. (Just don’t think of the photons as waves or they will stop cooperating!)
When noise problems occur in a renderer, it is often because a large amount of light is being represented by a small subset of sampled rays. Intuitively, this is a bit like trying to determine the average height of Germans by measuring people all over the world and asking if they are German. Only 1 percent of the world’s population is German, so you will need to measure 100 times more people than if you collected your data from within Germany’s borders.
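The analogy can be made concrete with a toy Monte Carlo experiment (the population fraction, mean height, and spread below are illustrative numbers, not census data):

```python
import random

random.seed(7)

def estimate_mean_height(n_samples, p_german=0.01,
                         mean_height=175.0, spread=7.0):
    """Estimate the average German height from random worldwide samples.

    Only a fraction p_german of the samples hit a German, so the
    effective sample count is roughly n_samples * p_german; the rest
    of the measurements are discarded.
    """
    heights = [random.gauss(mean_height, spread)
               for _ in range(n_samples) if random.random() < p_german]
    return sum(heights) / len(heights)

# Sampling worldwide wastes ~99% of the effort; sampling inside
# Germany's borders (p_german=1.0) reaches the same noise level
# with 100x fewer samples.
world = estimate_mean_height(100_000)                # ~1,000 useful samples
germany = estimate_mean_height(1_000, p_german=1.0)  # 1,000 useful samples
print(round(world, 1), round(germany, 1))
```

Concentrating samples where they matter is exactly what importance sampling does inside a path tracer, and it is why sampling improvements can cut render times so dramatically.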
One way developers can improve a renderer is by finding ways to gather information about a scene using fewer samples. These improvements can be quite dramatic. For example, the most recent Arnold release can render some scenes up to three times as fast, thanks to improvements in diffuse sampling. As an artist, understanding how randomization, sampling and noise are related is the key to optimizing a modern path tracer, and it will help you anticipate long render times.
Learn What Your Renderer Does Not Do
Although some renderers prioritize physical accuracy at any cost, most production renderers attempt to strike a balance between physical accuracy and practicality.
Light polarization is a great example of something most renderers do not simulate. Polarizing filters are often used in photography to control the balance between specular and diffuse light on surfaces and to adjust the appearance of certain scene elements like the sky. Recreating these effects in CG requires custom solutions or artistic cheats. This can make a big difference when rendering things like cars and water.
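The glare-killing power of a polarizer falls directly out of the Fresnel equations, which most renderers only evaluate in unpolarized, averaged form. A short sketch (textbook dielectric Fresnel, with an assumed glass index of 1.5) shows that p-polarized reflection vanishes at Brewster's angle:

```python
import math

def fresnel_rs_rp(theta_i_deg, n1=1.0, n2=1.5):
    """Exact Fresnel reflectance for s- and p-polarized light at a
    dielectric interface (n1 -> n2), ignoring absorption."""
    ti = math.radians(theta_i_deg)
    tt = math.asin(n1 / n2 * math.sin(ti))  # Snell's law
    rs = ((n1 * math.cos(ti) - n2 * math.cos(tt)) /
          (n1 * math.cos(ti) + n2 * math.cos(tt))) ** 2
    rp = ((n1 * math.cos(tt) - n2 * math.cos(ti)) /
          (n1 * math.cos(tt) + n2 * math.cos(ti))) ** 2
    return rs, rp

# At Brewster's angle (~56.3 degrees for glass) the p-polarized
# reflection vanishes entirely, so a filter that blocks s-polarized
# light can remove glare that an unpolarized renderer would keep.
brewster = math.degrees(math.atan(1.5))
rs, rp = fresnel_rs_rp(brewster)
print(round(rs, 3), round(rp, 9))  # s survives, p is (numerically) zero
```

A renderer that tracks only the average of rs and rp cannot reproduce this effect, which is why polarizer looks on cars and water usually come down to custom shaders or grading cheats.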
Plan for New Technology
Technology can change quickly, but adapting production workflows always takes time. By anticipating trends, such as HDR displays, cloud computing, GPU acceleration, virtual reality, light field imaging, etc., we not only get a head start preparing for the future, but also motivate ourselves to think in different ways. In many cases, solutions that are necessary to support tomorrow’s technology can already change the way we work today.
Andrew C. Jones is head of visual effects at NYC- and LA-based Psyop, which supplies animation, design, illustration, 3D, 2D and live-action production to help brands connect with consumers. You can follow them on Twitter @psyop