Conductor boosts its cloud rendering with Amazon EC2

Conductor Technologies’ cloud rendering platform will now support Amazon Web Services (AWS) and Amazon Elastic Compute Cloud (Amazon EC2), bringing the virtual compute resources of AWS to Conductor customers. This new capability will provide content production studios working in visual effects, animation and immersive media with access to secure, powerful resources that, according to the company, will allow them to quickly and economically scale render capacity. Amazon EC2 instances, including cost-effective Spot Instances, are expected to be available via Conductor this summer.

“Our goal has always been to ensure that Conductor users can easily access reliable, secure instances on a massive scale. AWS has the largest and most geographically diverse compute, and the AWS Thinkbox team, which is highly experienced in all facets of high-volume rendering, is dedicated to M&E content production, so working with them was a natural fit,” says Conductor CEO Mac Moore. “We’ve already been running hundreds of thousands of simultaneous cores through Conductor, and with AWS as our preferred cloud provider, I expect we’ll be over the million simultaneous core mark in no time.”

Simple to deploy and highly scalable, Conductor is equally effective as an off-the-shelf solution or customized to a studio’s needs through its API. Conductor’s intuitive UI and accessible analytics provide a wealth of insightful data for keeping studio budgets on track. Apps supported by Conductor include Autodesk Maya and Arnold; Foundry’s Nuke, Cara VR, Katana, Modo and Ocula; Chaos Group’s V-Ray; Pixar’s RenderMan; Isotropix’s Clarisse; Golaem; Ephere’s Ornatrix; Yeti; and Miarmy. Additional software and plug-in support is in progress and may be available upon request.

Some background on Conductor: it’s a secure cloud-based platform that enables VFX, VR/AR and animation studios to seamlessly offload rendering and simulation workloads to the public cloud. As the only rendering service that is scalable to meet the exact needs of even the largest studios, Conductor easily integrates into existing workflows, features an open architecture for customization, provides data insights and can implement controls over usage to ensure budgets and timelines stay on track.

Review: Maxon Cinema 4D Release 20

By Brady Betzel

Last August, Maxon made available its Cinema 4D Release 20. From the new node-based Material Editor to the all-new console used to debug and develop scripts, Maxon has really upped the ante.

At the recent NAB show, Maxon announced that it had acquired Redshift Rendering Technologies, the makers of the Redshift rendering engine. This acquisition will hopefully integrate an industry-standard GPU-based rendering engine into Cinema 4D R20’s workflow and speed up rendering. As of now, the licensing fees attached to Redshift are the same as they were before the acquisition: $500 for a node-locked license and $600 for a floating license.

Digging In
The first update to Cinema 4D R20 that I wanted to touch on is the new node-based Material Editor. If you are familiar with Blackmagic’s DaVinci Resolve or Foundry’s Nuke, then you have seen how nodes work. I love working with nodes, which let the user layer up everything from diffusion to camera distance in Cinema 4D R20’s case. There are over 150 nodes inside the Material Editor for building textures.

One small change that I noticed inside the updated Material Editor is the new gradient settings. When working with gradient knots, you can now select multiple knots at once, then right-click to double the selected knots, invert them, choose different knot interpolations (including stepped, smooth, cubic, linear and blend) and even distribute the knots to clean up your pattern. It’s a really nice and convenient update to gradient workflows.

In Cinema 4D R20, not only can you add new nodes from the search menu, but you can also click the node dots in the Basic properties window and route nodes through there. When you are happy with your materials made in the node editor, you can save them as assets in the scene file or even compress them in a .zip file to share with others.

In a related update, Cinema 4D Release 20 has introduced the Uber Material. In simple terms (and I mean real simple), the Uber Material is a node-based material that differs from standard or physical materials in that it can be edited inside the Attribute Manager or Material Editor while retaining the properties available in the Node Editor.

Camera Tracking and the 2D Camera View have also been updated. While the Camera Tracking mode has been improved, the new 2D Camera View mode combines the Film Move mode with the Film Zoom mode, adding the ability to use standard shortcuts to move around a scene instead of messing with the Film Offset or Focal Length in the Camera Object properties dialogue. For someone like me who isn’t a certified pro in Cinema 4D, these little shortcuts really make me feel at home, much more like the apps I’m used to, such as Mocha Pro or After Effects. Maxon has also improved the 2D tracking algorithm for much tighter tracks and added virtual keyframes, which are an extreme help when you don’t have time for minute adjustments.

Volume Modeling
What seems to be one of the largest updates in Cinema 4D R20 is the addition of Volume Modeling with the OpenVDB-based Volume Builder. According to www.openvdb.org, “OpenVDB is an Academy Award-winning C++ library comprising a hierarchical data structure and a suite of tools for the efficient manipulation of sparse, time-varying, volumetric data discretized on three-dimensional grids,” developed by Ken Museth at DreamWorks Animation. It uses 3D pixels called voxels instead of polygons. When using the Volume Builder, you can combine multiple polygon and primitive objects using Boolean operations: Union, Subtract or Intersect. Furthermore, you can smooth your volume using multiple techniques, including one that made me do some extra Google work: Laplacian Flow.
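
The Boolean modes are easiest to picture if you treat each object as a grid of voxels that are either inside or outside the shape. Here is a minimal conceptual sketch in Python using a dense numpy occupancy grid; it’s my own illustration of the idea, not the Cinema 4D or OpenVDB API (which use sparse, hierarchical grids and signed distances rather than a dense boolean array):

```python
import numpy as np

# Voxelize two primitives into a 64x64x64 occupancy grid:
# True where a voxel is inside the object, False elsewhere.
n = 64
idx = np.indices((n, n, n))

# A sphere centered in the grid and an axis-aligned box.
sphere = ((idx - n / 2) ** 2).sum(axis=0) < (n / 3) ** 2
box = (idx > n // 4).all(axis=0) & (idx < 3 * n // 4).all(axis=0)

# The three Boolean modes offered by the Volume Builder.
union = sphere | box        # Union: inside either object
subtract = sphere & ~box    # Subtract: sphere with the box carved out
intersect = sphere & box    # Intersect: inside both objects

print(union.sum(), subtract.sum(), intersect.sum())  # voxel counts
```

Smoothing operations such as Laplacian Flow then work on this volumetric representation directly, relaxing the surface without the topology headaches of polygon Booleans.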

Fields
When going down the voxel rabbit hole in Cinema 4D R20, you will run into another new update: Fields. Prior to Cinema 4D R20, we would use Effectors to affect strength values of an object. You would stack and animate multiple effectors to achieve different results. In Cinema 4D R20, under the Falloff tab you will now see a Fields list along with the types of Field Objects to choose from.

Imagine a MoGraph object whose opacity you want controlled by a box object moving through it, while a capsule poking through physically modifies it. You can combine these different field objects by using compositing functions in the Fields list. In addition, you can animate or alter these new fields straight away in the Objects window.
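
Conceptually, each field is just a function that returns a strength value between 0 and 1 for every point, and the compositing functions blend those values much like layer blend modes. A rough Python sketch of the idea (my own illustration, not Maxon’s API):

```python
import numpy as np

# Sample positions along a row of clones (1D for simplicity).
positions = np.linspace(0.0, 10.0, 50)

def falloff(pos, center, radius):
    """Strength 1.0 at the field's center, fading linearly to 0.0 at its radius."""
    return np.clip(1.0 - np.abs(pos - center) / radius, 0.0, 1.0)

# A "box" field and a "capsule" field affecting the same clones.
box_field = falloff(positions, center=3.0, radius=2.0)
capsule_field = falloff(positions, center=6.0, radius=1.5)

# Compositing modes, analogous to the blend options in the Fields list.
added = np.clip(box_field + capsule_field, 0.0, 1.0)  # Add
maxed = np.maximum(box_field, capsule_field)          # Max
multiplied = box_field * capsule_field                # Multiply

# The combined strength would then drive opacity, scale, deformation, etc.
opacity = added
```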

Summing Up
Cinema 4D Release 20 has some amazing updates that will greatly improve the efficiency and quality of your work. From tracking updates to field updates, there are plenty of exciting tools to dive into. And if you are reading this as an After Effects user who isn’t sure about Cinema 4D, now is the time to dive in. Once you learn the basics, whether from YouTube tutorials or classes at www.cineversity.com, you will immediately see an increase in the quality of your work.

Combining Adobe After Effects, Element 3D and Cinema 4D R20 is the ultimate in 3D motion graphics and 2D compositing — accessible to almost everyone. And I didn’t even touch on the dozens of other updates in Cinema 4D R20, like the multitude of ProRender updates, FBX import/export options, new node materials and CAD import support for Catia, IGES, JT, SolidWorks and STEP formats. Check out Cinema 4D Release 20’s newest features on YouTube and on Maxon’s website.

And, finally, I think it’s safe to assume that Maxon’s acquisition of the Redshift renderer means a bright future for Cinema 4D users.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Autodesk Arnold 5.3 with Arnold GPU in public beta

Autodesk has made its Arnold 5.3 with Arnold GPU available as a public beta. The release provides artists with GPU rendering for a fixed set of features, and the flexibility to choose between rendering on the CPU or GPU without changing renderers.

From look development to lighting, support for GPU acceleration brings greater interactivity and speed to artist workflows, helping reduce iteration and review cycles. Arnold 5.3 also adds new functionality to help maximize performance and give artists more control over their rendering processes, including updates to adaptive sampling, a new version of the Randomwalk SSS mode and improved Operator UX.

Arnold GPU rendering makes it easier for artists and small studios to iterate quickly in a fast working environment and scale rendering capacity to accommodate project demands. From within the standard Arnold interface, users can switch between rendering on the CPU and GPU with a single click. Arnold GPU currently supports features such as arbitrary shading networks, SSS, hair, atmospherics, instancing, and procedurals. Arnold GPU is based on the Nvidia OptiX framework and is optimized to leverage Nvidia RTX technology.
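
Because both devices run the same renderer, switching is a scene option rather than a pipeline change. As a rough sketch, here is how the toggle might look through Arnold’s standalone Python bindings; this assumes the Arnold 5.x API and the render_device option described for the GPU beta, so treat the parameter name as an assumption and verify against the SDK docs:

```python
# Hypothetical standalone render script using the Arnold 5.x Python bindings.
from arnold import *

AiBegin()
AiASSLoad("scene.ass", AI_NODE_ALL)  # a scene exported from Maya, Houdini, etc.

options = AiUniverseGetOptions()
# Assumed option name from the 5.3 GPU beta: flip between "CPU" and "GPU".
AiNodeSetStr(options, "render_device", "GPU")

AiRender(AI_RENDER_MODE_CAMERA)      # render through the scene camera
AiEnd()
```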

New feature summary:
— Major improvements to quality and performance for adaptive sampling, helping to reduce render times without jeopardizing final image quality
— Improved version of Randomwalk SSS mode for more realistic shading
— Enhanced usability for Standard Surface, giving users more control
— Improvements to the Operator framework
— Better sampling of Skydome lights, reducing direct illumination noise
— Updates to support for MaterialX, allowing users to save a shading network as a MaterialX look

Arnold 5.3 with Arnold GPU in public beta will be available March 20 as a standalone subscription or with a collection of end-to-end creative tools within the Autodesk Media & Entertainment Collection. You can also try Arnold GPU with a free 30-day trial of Arnold. Arnold GPU is available in all supported plug-ins for Autodesk Maya, Autodesk 3ds Max, Houdini, Cinema 4D and Katana.

Autodesk launches Maya 2019 for animation, rendering, more

Autodesk has released the latest version of Maya, its 3D animation, modeling, simulation and rendering software. Maya 2019 features significant updates for speed and interactivity, and it addresses some of the challenges artists face throughout production: faster animation playback to reduce the need for playblasts, higher-quality 3D previews with Autodesk Arnold updates in Viewport 2.0, improved pipeline integration with more flexible development environment support, and performance improvements that most Maya artists will notice in their daily work.

Key new Maya 2019 features include:
• Faster Animation: New cached playback increases animation playback speeds in Viewport 2.0, giving animators a more interactive and responsive environment in which to produce better-quality animations. It helps reduce the need to produce time-consuming playblasts to evaluate animation work, so animators can work faster.
• Higher-Quality Previews Closer to Final Renders: Arnold upgrades improve realtime previews in Viewport 2.0, allowing artists to preview higher-quality results that are closer to the final Arnold render for better creativity and less wasted time.
• Faster Maya: New performance and stability upgrades help improve daily productivity in a range of areas that most artists will notice in their daily work.
• Refining Animation Data: New filters within the graph editor make it easier to work with motion capture data, including the Butterworth filter and the key reducer to help refine animation curves.
• Rigging Improvements: New updates help make the work of riggers and character TDs easier, including the ability to hide sets from the outliner to streamline scenes, improvements to the bake deformer tool and new methods for saving deformer weights to more easily script rig creation.
• Pipeline Integration Improvements: Development environment updates make it easier for pipeline and tool developers to create, customize and integrate into production pipelines.
• Help for Animators in Training: Sample rigged and animated characters, as well as motion capture samples, make it easier for students to learn and quickly get started animating.

Maya 2019 is available now as a standalone subscription or with a collection of end-to-end creative tools within the Autodesk Media & Entertainment Collection.

Chaos Group to support Cinema 4D with two rendering products

At the Maxon Supermeet 2018 event, Chaos Group announced its plans to support the Maxon Cinema 4D community with two rendering products: V-Ray for Cinema 4D and Corona for Cinema 4D. Based on V-Ray’s Academy Award-winning raytracing technology, the development of V-Ray for Cinema 4D will be focused on production rendering for high-end visual effects and motion graphics. Corona for Cinema 4D will focus on artist-friendly design visualization.

Chaos Group, which acquired the V-Ray for Cinema 4D product from LAUBlab and will lead development on the product for the first time, will offer current customers free migration to a new update, V-Ray 3.7 for Cinema 4D. All users who move to the new version will receive a free V-Ray for Cinema 4D license, including all product updates, through January 15, 2020. Moving forward, Chaos Group will be providing all support, sales and product development in-house.

In addition to ongoing improvements to V-Ray for Cinema 4D, Chaos Group also released the Corona for Cinema 4D beta 2 at Supermeet, with the final product to follow in January 2019.

Main Image: Daniel Sian created Robots using V-Ray for Cinema 4D.

SIGGRAPH: Nvidia intros Quadro RTX raytracing GPU

At SIGGRAPH, Nvidia announced its first Turing architecture-based GPUs, which enable artists to render photorealistic scenes in realtime, add new AI-based capabilities to their workflows and experience fluid interactivity with complex models and scenes.

The Nvidia Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000 enable hardware-accelerated raytracing, AI, advanced shading and simulation. Also announced was the Quadro RTX Server, a reference architecture for highly configurable, on-demand rendering and virtual workstation solutions from the datacenter.

“Quadro RTX marks the launch of a new era for the global computer graphics industry,” says Bob Pette, VP of professional visualization at Nvidia. “Users can now enjoy powerful capabilities that weren’t expected to be available for at least five more years. Designers and artists can interact in realtime with their complex designs and visual effects in raytraced photo-realistic detail. And film studios and production houses can now realize increased throughput with their rendering workloads, leading to significant time and cost savings.”

Quadro RTX GPUs are designed for demanding visual computing workloads, such as those used in film and video content creation, automotive and architectural design and scientific visualization.

Features include:
• New RT cores to enable realtime raytracing of objects and environments with physically accurate shadows, reflections, refractions and global illumination.
• Turing Tensor Cores to accelerate deep neural network training and inference, which are critical to powering AI-enhanced rendering, products and services.
• New Turing Streaming Multiprocessor architecture, featuring up to 4,608 CUDA cores, that delivers up to 16 trillion floating point operations in parallel with 16 trillion integer operations per second to accelerate complex simulation of real-world physics.
• Advanced programmable shading technologies to improve the performance of complex visual effects and graphics-intensive experiences.
• First implementation of ultra-fast Samsung 16Gb GDDR6 memory to support more complex designs, massive architectural datasets, 8K movie content and more.
• Nvidia NVLink to combine two GPUs with a high-speed link to scale memory capacity up to 96GB and drive higher performance with up to 100GB/s of data transfer.
• Hardware support for USB Type-C and VirtualLink, a new open industry standard being developed to meet the power, display and bandwidth demands of next-generation VR headsets through a single USB-C connector.
• New and enhanced technologies to improve performance of VR applications, including Variable-Rate Shading, Multi-View Rendering and VRWorks Audio.

The Quadro RTX Server combines Quadro RTX GPUs with new Quadro Infinity software (available in the 1st quarter of 2019) to deliver a flexible architecture to meet the demands of creative pros. Quadro Infinity will enable multiple users to access a single GPU through virtual workstations, dramatically increasing the density of the datacenter. End-users can also easily provision render nodes and workstations based on their specific needs.

Quadro RTX GPUs will be available starting in the 4th quarter. Pricing is as follows:
Quadro RTX 8000 with 48GB memory: $10,000 estimated street price
Quadro RTX 6000 with 24GB memory: $6,300 ESP
Quadro RTX 5000 with 16GB memory: $2,300 ESP

Siggraph: Chaos Group releases the open beta for V-Ray for Houdini

With V-Ray for Houdini now in open beta, Chaos Group is ensuring that its rendering technology can be used in every part of the VFX pipeline. With V-Ray for Houdini, artists can apply high-performance raytracing to all of their creative projects, connecting with standard applications like Autodesk’s 3ds Max and Maya and Foundry’s Katana and Nuke.

“Adding V-Ray for Houdini streamlines so many aspects of our pipeline,” says Grant Miller, creative director at Ingenuity Studios. “Combined with V-Ray for Maya and Nuke, we have a complete rendering solution that allows look-dev on individual assets to be packaged and easily transferred between applications.” V-Ray for Houdini was used by Ingenuity on the Taylor Swift music video for Look What You Made Me Do. (See our main image.) 

V-Ray for Houdini uses the same smart rendering technology introduced in V-Ray Next, including powerful scene intelligence, fast adaptive lighting and production-ready GPU rendering. V-Ray for Houdini includes two rendering engines – V-Ray and V-Ray GPU – allowing visual effects artists to choose the one that best takes advantage of their hardware.

V-Ray for Houdini, Beta 1 features include:
• GPU & CPU Rendering – High-performance GPU & CPU rendering capabilities for high-speed look development and final frame rendering.
• Volume Rendering – Fast, accurate illumination and rendering of VDB volumes through the V-Ray Volume Grid. Support for Houdini volumes and Mac OS is coming soon.
• V-Ray Scene Support – Easily transfer and manipulate the properties of V-Ray scenes from applications such as Maya and 3ds Max.
• Alembic Support – Full support for Alembic workflows including transformations, instancing and per object material overrides.
• Physical Hair – New Physical Hair shader renders realistic-looking hair with accurate highlights. Only hair as SOP geometry is supported currently.
• Particles – Drive shader parameters such as color, alpha and particle size through custom, per-point attributes.
• Packed Primitives – Fast and efficient handling of Houdini’s native packed primitives at render time.
• Material Stylesheets – Full support for material overrides based on groups, bundles and attributes. VEX and per-primitive string overrides such as texture randomization are planned for launch.
• Instancing – Supports copying any object type (including volumes) using Packed Primitives, Instancer and “instancepath” attribute.
• Light Instances – Instancing of lights is supported, with options for per-instance overrides of the light parameters and constant storage of light link settings.

To join the beta, check out the Chaos Group website.

V-Ray for Houdini is currently available for Houdini and Houdini Indie 16.5.473 and later. V-Ray for Houdini supports Windows, Linux and Mac OS.

Saddington Baynes adds senior lighting artist Luis Cardoso

Creative production house Saddington Baynes has hired Luis Cardoso as a senior lighting artist, adding to the studio’s creative team with specialist CGI skills in luxury goods, beauty and cosmetics. He joins the team following a four-year stint at Burberry, where he worked on high-end CGI.

He specializes in Autodesk 3ds Max, Chaos Group’s V-Ray and Adobe Photoshop. Cardoso’s past work includes imagery for all Burberry fragrances, clothing and accessories, as well as social media assets for the Pinterest Cat Lashes campaign. He also has experience as a senior CG artist at Sectorlight and, later in his career, at Assembly Studios.

At Saddington Baynes, Cardoso will be working on new cinematic motion sequences for online video to expand the beauty, fragrance, fashion and beverage departments and take their expertise further, particularly with regard to video lighting.

According to executive creative director James Digby-Jones, “It no longer matters whether elements are static or moving; whether the brief is for a 20,000-pixel image or 4K animation mixed with live action. We stretch creative and technical boundaries with fully integrated production that encompasses everything from CGI and motion to shoot production and VR capability.”

Foundry’s Nuke and Hiero 11.0 now available

Foundry has made available Nuke and Hiero 11.0, the next major release for the Nuke line of products, including Nuke, NukeX, Nuke Studio, Hiero and HieroPlayer. The Nuke family is being updated to VFX Platform 2017, which includes several major updates to key libraries used within Nuke, including Python, PySide and Qt.

The update also introduces a new type of group node, which offers a powerful new collaborative workflow for sharing work among artists. Live Groups referenced in other scripts automatically update when a script is loaded, without the need to render intermediate stages.

Nuke Studio’s intelligent background rendering is now available in Nuke and NukeX. The Frame Server takes advantage of available resources on your local machine, enabling you to continue working while rendering happens in the background. The LensDistortion node has been completely revamped, with added support for fisheye and wide-angle lenses and the ability to use multiple frames to produce better results. Nuke Studio now has new GPU-accelerated disk caching that allows users to cache part or all of a sequence to disk for smoother playback of more complex sequences.

GPU-accelerated renderer Redshift now in v.2.0, integrates with 3ds Max

Redshift Rendering has updated its GPU-accelerated rendering software to Redshift 2.0. This new version includes new features and pipeline enhancements to the existing Maya and Softimage plug-ins. Redshift 2.0 also introduces integration with Autodesk 3ds Max. Integrations with Side Effects Houdini and Maxon Cinema 4D are currently in development and are expected later in 2016.

New features across all platforms include realistic volumetrics, enhanced subsurface scattering and a new PBR-based Redshift material, all of which deliver improved final render results. Starting July 5, Redshift is offering 20 percent off new Redshift licenses through July 19.

Age of Vultures

A closer look at Redshift 2.0’s new features:

● Volumetrics (OpenVDB) – Render clouds, smoke, fire and other volumetric effects with production-quality results (initial support for OpenVDB volume containers).

● Nested dielectrics – The ability to accurately simulate the intersection of transparent materials with realistic results and no visual artifacts.

● New BRDFs and linear glossiness response – Users can model a wider variety of metallic and reflective surfaces via the latest and greatest in surface shading technologies (GGX and Beckmann/Cook-Torrance BRDFs).

● New SSS models and single scattering – More realistic results with support for improved subsurface scattering models and single-scattering.

● Redshift material – A more intuitive, PBR-based main material, featuring effects such as dispersion/chromatic aberration.

● Multiple dome lights – Users can combine multiple dome lights to create more compelling lighting.

● alSurface support – There is now full support for the Arnold shader without having to port settings.

● Baking – Users can save a lot of rendering time with baking for lighting and AOVs.

Users include Blizzard, Jim Henson’s Creature Shop, Glassworks and Blue Zoo.

Main Image: Rendering example from A Large Evil Corporation.

Jon Neill joins Axis as head of lighting, rendering, compositing

Axis Animation in Glasgow, Scotland, has added Jon Neill as its new head of lighting, rendering and compositing (LRC). He has previously held senior positions at MPC and Cinesite, working on such projects as The Jungle Book, Skyfall and Harry Potter and the Order of the Phoenix.

His role at Axis will be overseeing the LRC team at both the department and project level, providing technical and artistic leadership across multiple projects and managing the day-to-day production needs.

“Jon’s supervisory skills, coupled with knowledge of a diverse range of execution techniques, are another step forward in raising the bar in both our short- and long-form projects,” says Graham McKenna, co-founder and head of 3D at Axis.

Thinkbox addresses usage-based licensing

At the beginning of May, Thinkbox Software launched Deadline 8, which introduced on-demand, per-minute licensing as an option for Thinkbox’s Deadline and Krakatoa, The Foundry’s Nuke and Katana, and Chaos Group’s V-Ray. The company also revealed it is offering free on-demand licensing for Deadline, Krakatoa, Nuke, Katana and V-Ray for the month of May.

Thinkbox founder/CEO Chris Bond explained, “As workflows increasingly incorporate cloud resources, on-demand licensing expands options for studios, making it easy to scale up production, whether temporarily or on a long-term basis. While standard permanent licenses are still the preferred choice for some VFX facilities, the on-demand model is an exciting option for companies that regularly expand and contract based on their project needs.”

Since the announcement, users have been reaching out to Thinkbox with questions about usage-based licensing. We reached out to Bond to help those with questions get a better understanding of what this model means for the creative community.

What is usage-based licensing?
Usage-based licensing is an additional option to permanent and temporary licenses and gives our clients the ability to easily scale up or scale down, without increasing their overhead, on a project-need basis. Instead of one license per render node, you can purchase minutes from the Thinkbox store (as pre-paid bundles of hours) that can be distributed among as many render nodes as you like. And, once you have an account with the Store, purchasing extra time only takes a few minutes and does not require interaction with our sales team.

Can users still purchase perpetual licenses of Deadline?
Yes! We offer both usage-based licensing and perpetual licenses, which can be used separately or together in the cloud or on-premise.

How is Deadline usage tracked?
Usage is tracked per minute. For example, if you have 10,000 hours of usage-based licensing, that can be used on a single node for 10,000 hours, 10,000 nodes for one hour or anything in between. Minutes are only consumed while the Deadline Slave application is rendering, so if it’s sitting idle, minutes won’t be used.
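
In other words, the pool is a simple node-minutes budget. A quick back-of-the-envelope sketch of the math (my own illustration, not Thinkbox’s tooling):

```python
# Model a pre-paid bundle as a pool of node-minutes that drains at the
# rate of (rendering nodes x minutes spent rendering).
pool_minutes = 10_000 * 60  # a 10,000-hour bundle

def remaining(pool, nodes, render_minutes):
    """Pool balance after `nodes` Slaves each render for `render_minutes`."""
    return pool - nodes * render_minutes

print(remaining(pool_minutes, nodes=1, render_minutes=600_000))  # 0: one node, 10,000 hours
print(remaining(pool_minutes, nodes=10_000, render_minutes=60))  # 0: 10,000 nodes, one hour
# Idle Slaves consume nothing, so only actual rendering time counts.
```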

What types of renderfarms are compatible with usage-based licensing?
Usage-based licensing works with both local- and cloud-based renderfarms. It can be used exclusively or alongside existing permanent and temporary licenses. You configure the Deadline Client on each machine for usage-based or standard licensing. Alternatively, Deadline’s Auto-Configuration feature allows you to automatically assign the licensing mode to groups of Slaves in the case of machines that might be dynamically spawned via our Balancer application. It’s easy to do, but if anyone is confused they can send us an email and we’ll schedule a session to step you through the process.

Can people try it out?
Of course! For the month of May, we’re providing free licensing hours of Deadline, Krakatoa, Nuke, Katana and V-Ray. Free hours can be used for on-premise or cloud-based rendering, and users are responsible for compute resources. Hours are offered on a first-come, first-served basis and any unused time will expire at 12am PDT on June 1.

Production Rendering: Tips for 2016 and beyond

By Andrew C. Jones

There is no shortage of articles online offering tips about 3D rendering. I have to admit that attempting to write one myself gave me a certain amount of trepidation considering how quickly most rendering advice can become obsolete, or even flat-out wrong.

The trouble is that production rendering is a product of the computing environment, available software and the prevailing knowledge of artists at a given time. Thus, the shelf life for articles about rendering tends to be five years or so. Inevitably, computing hardware gets faster, new algorithms get introduced and people shift their focus to new sets of problems.

I bring this up not only to save myself some embarrassment five years from now, but also as a reminder that computer graphics, and rendering in particular, is still an exciting topic that is ripe for innovation and improvement. As artists who spend a lot of time working within rigid production pipelines, it can be easy to forget this.

Below are some thoughts distilled from my own experience working in graphics, which I feel are about as relevant today as they would have been when I started working back in 2003. Along with each item, I have also included some commentary on how I feel the advice is applicable to rendering in 2016, and to Psyop’s primary renderer, Solid Angle’s Arnold, in particular.

Follow Academic Research
This can be intimidating, as reading academic papers takes considerably more effort than more familiar kinds of reading. Rest assured, it is completely normal to need to read a paper several times and to require background research to digest an academic paper. Sometimes the background research is as helpful as the paper itself. Even if you do not completely understand everything, just knowing what problems the paper solves can be useful knowledge.

Papers have to be novel to be published, so finding new rendering research relevant to 2016 is pretty easy. In fact, many useful papers have been overlooked by the production community and can be worth revisiting. A recent example of this is Charles Schmidt and Brian Budge’s paper, “Simple Nested Dielectrics in Ray Traced Images” from 2002, which inspired Jonah Friedman to write his open source JF Nested Dielectric shader for Arnold in 2013. ACM’s digital library is a fantastic resource for finding graphics-related papers.

Study the Photographic Imaging Pipeline
Film, digital cinema and video are engineering marvels, and their complexity is easily taken for granted. They are the template for how people expect light to be transformed into an image, so it is important to learn how they work.

Despite increasing emphasis on physical accuracy over the past few years, a lot of computer graphics workflows are still not consistent with real-world photography. Ten years ago, the no-nonsense, three-word version of this tip would have been “use linear workflow.” Today, the three-word version of the tip should probably be “use a LUT.” In five more years, perhaps people will finally start worrying about handling white balance properly. OpenColorIO and ACES are two recent technologies that fit under this heading.
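
To make the “use a LUT” advice concrete, the sRGB transfer function is a good example of the kind of encoding that must be undone before doing math on pixel values. A minimal Python sketch of the standard sRGB conversion (per IEC 61966-2-1):

```python
import numpy as np

def srgb_to_linear(c):
    """Decode sRGB-encoded values (0..1) to linear light."""
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    """Encode linear light back to sRGB for display."""
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

# Averaging two pixels in linear light vs. directly on encoded values:
a, b = 0.2, 0.9
correct = linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)
naive = (a + b) / 2
print(float(correct), naive)  # ~0.67 vs. 0.55 -- the naive blend is too dark
```

A production LUT or OpenColorIO config generalizes this idea to arbitrary camera and display transforms.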

Examples of recent renders done by Psyop on jobs for online retailer Otto and British Gas.

Study Real-World Lighting
The methodology and equipment of on-set lighting in live-action production can teach us a great deal, both artistically and technically. From an aesthetic standpoint, live-action lighting allows us to focus on learning how to control light to create pleasing images, without having to worry about whether or not physics is being simulated correctly.

Meanwhile, simulating real-world light setups accurately and efficiently in CG can be technically challenging. Many setups rely heavily on indirect effects like diffusion, but these effects can be computationally expensive compared to direct lighting. In Arnold, light filter shaders can help transform simplistic area lights into more advanced light rigs with view-dependent effects.

Fight for Simplicity
As important as it is to push the limits of your workflow and get the technical details right, all of that effort is for naught if the workflow is too difficult to use and artists start making mistakes.

In recent years, simplicity has been a big selling point for path-tracing renderers, as brute-force path-tracing algorithms tend to require fewer parameters than spatially dependent approximations. Developers are constantly working to make their renderers more intuitive so that artists can achieve realistic results without visual cheats. For example, Solid Angle recently added per-microfacet Fresnel calculations, which help achieve more realistic specular reflections along the edges of surfaces.
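
The edge behavior comes from the Fresnel effect: reflectance rises toward grazing angles. The widely used Schlick approximation captures it in a few lines; this sketch shows the textbook formula, not Solid Angle’s per-microfacet implementation:

```python
import numpy as np

def schlick_fresnel(cos_theta, f0):
    """Schlick's approximation: reflectance at angle theta from the normal.

    f0 is the reflectance at normal incidence (e.g. ~0.04 for dielectrics).
    """
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Reflectance climbs from 4% head-on toward 100% at grazing angles.
for angle_deg in (0, 45, 75, 89):
    cos_t = np.cos(np.radians(angle_deg))
    print(angle_deg, round(float(schlick_fresnel(cos_t, 0.04)), 3))
```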

Familiarize Yourself With Your Renderer’s API (If it Has One)
Even if you have little coding background, the API can give you a much deeper understanding of how your renderer really works. This can be a significant trade-off for GPU renderers, as the fast-paced evolution of GPU programming makes providing a general purpose API particularly difficult.

Embrace the Statistical Nature of Raytracing
The “DF” in BRDF actually stands for “distribution function.” Even real light is made of individual photons, which can be thought of as particles bouncing off of surfaces according to probability distributions. (Just don’t think of the photons as waves or they will stop cooperating!)

When noise problems occur in a renderer, it is often because a large amount of light is being represented by a small subset of sampled rays. Intuitively, this is a bit like trying to determine the average height of Germans by measuring people all over the world and asking if they are German. Only 1 percent of the world’s population is German, so you will need to measure 100 times more people than if you collected your data from within Germany’s borders.

One way developers can improve a renderer is by finding ways to gather information about a scene using fewer samples. These improvements can be quite dramatic. For example, the most recent Arnold release can render some scenes up to three times as fast, thanks to improvements in diffuse sampling. As an artist, understanding how randomization, sampling and noise are related is the key to optimizing a modern path tracer, and it will help you anticipate long render times.
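
The German-height analogy maps directly onto Monte Carlo importance sampling. In this toy Python example (my own illustration), a “light” covers 1 percent of the domain; sampling uniformly gives a noisy estimate, while drawing samples where the energy actually is and dividing by the sampling density gives the same answer with far less variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy integrand: a bright emitter covering 1% of the [0, 1) domain.
def radiance(x):
    return np.where(x < 0.01, 100.0, 0.0)  # true integral over [0, 1) is 1.0

n = 1_000

# Uniform sampling: most samples miss the emitter, so the estimate is noisy.
x_uniform = rng.random(n)
uniform_estimate = radiance(x_uniform).mean()

# Importance sampling: draw every sample inside the emitter's 1% of the
# domain and divide by the sampling density there (pdf = 100).
x_important = rng.random(n) * 0.01
importance_estimate = (radiance(x_important) / 100.0).mean()

print(uniform_estimate, importance_estimate)  # both near 1.0; the second is noise-free
```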

Learn What Your Renderer Does Not Do
Although some renderers prioritize physical accuracy at any cost, most production renderers attempt to strike a balance between physical accuracy and practicality.

Light polarization is a great example of something most renderers do not simulate. Polarizing filters are often used in photography to control the balance between specular and diffuse light on surfaces and to adjust the appearance of certain scene elements like the sky. Recreating these effects in CG requires custom solutions or artistic cheats. This can make a big difference when rendering things like cars and water.

Plan for New Technology
Technology can change quickly, but adapting production workflows always takes time. By anticipating trends, such as HDR displays, cloud computing, GPU acceleration, virtual reality, light field imaging, etc., we not only get a head start preparing for the future, but also motivate ourselves to think in different ways. In many cases, solutions that are necessary to support tomorrow’s technology can already change the way we work today.

Andrew C. Jones is head of visual effects at NYC- and LA-based Psyop, which supplies animation, design, illustration, 3D, 2D and live-action production to help brands connect with consumers. You can follow them on Twitter @psyop 

Company behind Arnold renderer brings on studio tech vet Darin Grant

VFX and animation technology veteran Darin Grant has joined Solid Angle, the developer of the Arnold image renderer, as VP of engineering. Grant has held tech roles at a variety of VFX houses over the years, including time as CTO of Digital Domain and Method Studios. He also held senior technology titles at DreamWorks Animation and Google.

“Darin has a long history leading massive technology efforts in VFX and computer animation at big global studios,” reports Solid Angle founder/CEO Marcos Fajardo. “And, like we are, he’s passionate about making sure production needs drive development. Having Darin on our team will help us scale our efforts and bring fast, accurate rendering to all kinds of productions.”

Prior to joining Solid Angle, Grant was CTO of Method Studios, where he focused on unifying infrastructure, pipeline and workflows across its many locations. Prior to that he was CTO of Digital Domain and head of production technology at DreamWorks Animation, where he guided teams to maintain and enhance the company’s unified pipeline.

He has also provided strategic consulting to many companies, including The Foundry, Chaos Group and Autodesk, most recently collaborating to create a Pipeline Consulting Services team and advising on cloud enablement and film security for Shotgun Software. He began his career in visual effects, developing shaders as rendering lead at Digital Domain.

“Having followed the development of Arnold over the past 17 years, I have never been more excited about the future of the product than I am today,” says Grant. “Broadening our availability on the cloud and our accessibility from animation tools while allowing Marcos and the team to drive innovation in the renderer itself allows us to move faster than we ever have.”

We’ve known Darin for years and reached out to him for more.

What’s it like going from a creative studio to a software developer?
After spending a career helping teams create amazing content at individual studios, the opportunity to be able to help teams at all the studios was too good to pass up.

How are you taking your background as a studio guy and putting it toward making software?
On the studio side, I managed distributed software teams for the past 15 years, and I can definitely apply that here. The interesting piece is that every client meeting I walk into ends up being with someone who I’ve worked with in the past, so that definitely helps strengthen our already solid relationship with our customers.

The main difference is that our company’s focus is on building robust software to scale, versus trying to do that while the rest of the studio focuses on creating great content. It’s nice to have that singular focus and vision.

——
Grant is a member of the Scientific and Technical Awards Committee for AMPAS and the Visual Effects Society.

Nvidia’s GPU Technology Conference: Part III

Entrepreneurs, self-driving cars and more

By Fred Ruckel

Welcome to the final installment of my Nvidia GPU Technology Conference experience. If you have read Part I and Part II, I’m confident you will enjoy this wrap-up — from a one-on-one meeting with one of Nvidia’s top dogs to a “shark tank” full of entrepreneurs to my take on the status of self-driving cars. Thanks for following along and feel free to email if you have any questions about my story.

Going One on One
I had the pleasure of sitting down with Nvidia marketing manager Greg Estes, along with Gail Laguna, their PR expert in media and entertainment. They allowed me to pick their brains about…

Nvidia’s GPU Technology Conference: Part II

By Fred Ruckel

A couple of weeks ago I had the pleasure of attending Nvidia’s GPU Technology Conference in San Jose. I spent five days sitting in on sessions, demos and a handful of one-on-one meetings. If Part I of my story had you interested in the new world of GPU technology, take a dive into this installment and learn what other cool things Nvidia has created to enhance your workflow.

Advanced Rendering Solutions
We consider rendering to be the final output of an animation. While that’s true, there’s a lot more to rendering than just the final animated result. We could jump straight to the previz…

Maxon intros next-gen Cinema 4D

Maxon has updated its 3D motion graphics, visual effects, visualization, painting and rendering software Cinema 4D to Release 16. Some of the new features in this newest version include a modeling PolyPen “super-tool,” a motion tracker for easily integrating 3D content within live footage and a Reflectance material channel that allows for multi-layered reflections and specularity.

The company will be at Siggraph next week with the new version and it’s scheduled to ship in September.

Key highlights include:
Motion Tracker – This offers fast and seamless integration of 3D elements into real-world footage. Footage can be tracked automatically or manually, and aligned to the 3D environment using position, vector and planar constraints.

Interaction Tag – This gives users control over 3D objects and works with the new Tweak mode to provide information on object movement and highlighting. Suited for technical directors and character riggers, the tag reports all mouse interaction and allows object control via XPresso, COFFEE or Python.

PolyPen – With this tool users can paint polygons and polish points as well as easily move, clone, cut and weld points and edges of 3D models. You can even re-topologize complex meshes. Enable snapping for greater precision or to snap to a surface.

Bevel Deformer – The Bevel toolset in Cinema 4D can now be applied nondestructively to entire objects or specific selection sets. Users can also animate and adjust bevel attributes to create all new effects.

Sculpting – R16 offers many improvements and dynamic features to sculpt with precision and expand the overall modeling toolset. The new Select tool gives users access to powerful symmetry and fill options to define point and polygon selections on any editable object. Additional features give users more control and flexibility for sculpting details on parametric objects, creating curves, defining masks, stamps and stencils, as well as tools for users to create their own sculpt brushes and more.

Other modeling features in R16 include an all-new Cogwheel spline primitive to generate involute and ratchet gears; a new Mesh Check tool to evaluate the integrity of a polygonal mesh; Deformer Falloff options and Cap enhancements to easily add textures to the caps of MoText, Extrude, Loft, Lathe and Sweep objects.

Reflectance Channel (main image) – This provides more control over reflections and specularity within a single new channel. Features include the ability to build up multiple layers for complex surfaces such as metallic car paint and woven cloth, plus options to render separate multi-pass layers for each reflection layer to achieve higher-quality, realistic imagery.

New Render Engine for Hair & Sketch – A completely new unified effects render engine allows artists to seamlessly raytrace Hair and Sketch lines within the same render pass to give users higher quality results in a fraction of the time.

Team Render, introduced by Maxon in 2013, features many new enhancements including a client-server architecture allowing users to control all the render jobs for a studio via a browser.

Other Workflow Features/Updates
Content Library – Completely re-organized and optimized for Release 16, the preset library contains custom made solutions with specific target groups in mind. New house and stair generators, as well as modular doors and windows have been added for architectural visualizers. Product and advertising designers can take advantage of a powerful tool to animate the folding of die-cut packaging, as well as modular bottles, tubes and boxes. Motion designers will enjoy the addition of high-quality models made for MoGraph, preset title animations and interactive chart templates.

Exchange/Pipeline Support – Users can now exchange assets throughout the production pipeline more reliably in R16 with support for the most current versions of FBX and Alembic.

Solo Button – Offers artists a production-friendly solution to isolate individual objects and hierarchies for refinement when modeling. Soloing also speeds up the viewport performance for improved workflow on massive scenes.

Annotations – Tag specific objects, clones or points in any scene with annotations that appear directly in view for a dependable solution to reference online pre-production materials, target areas of a scene for enhancement, and more.

UV Peeler – An effective means to quickly unwrap the UVs of cylindrical objects for optimized texturing.

ProMax’s Platform server adds cross-platform After Effects rendering

Santa Ana, California — ProMax Systems, a manufacturer of shared storage servers and video editing workstations, has added another significant feature set to the Platform shared storage server line. This new functionality allows both Windows and Mac clients running Adobe After Effects to submit render jobs to the Platform.

This capability eliminates running time-consuming renders on workstations, freeing them up for creative tasks. According to ProMax (www.promax.com), the Platform servers are currently the only shared storage systems on the market that offer this cross-platform After Effects rendering functionality.

Platform’s universal After Effects rendering capabilities will resonate with post facilities and creative agencies of all sizes that use the strengths of both Windows and Mac systems. The Platform AE Render tool not only leverages Platform’s high-performance CPU advantages but also enables offloading rendering tasks from individual workstations to the Platform server’s powerful GPUs (via Platform’s expandable capability to add GPU cards). The Platform AE Render features are available now and are included as part of the latest Platform Series models at no additional cost.

Using Platform’s own management software, system administrators designate a Platform Space as an Adobe After Effects render location. Mac or Windows users connected to the Platform, with access to that space, can submit their render jobs to that location. The Platform system watches the designated Platform Space for render submissions and manages the remote submission through the Platform’s After Effects render node software.