Tag Archives: rendering

Saddington Baynes adds senior lighting artist Luis Cardoso

Creative production house Saddington Baynes has hired Luis Cardoso as a senior lighting artist, adding to the studio’s creative team with specialist CGI skills in luxury goods, beauty and cosmetics. He joins the team following a four-year stint at Burberry, where he worked on high-end CGI.

He specializes in Autodesk 3ds Max, Chaos Group’s V-Ray and Adobe Photoshop. Cardoso’s past work includes imagery for all Burberry fragrances, clothing and accessories, as well as social media assets for the Pinterest Cat Lashes campaign. He also has experience as a senior CG artist at Sectorlight and, later, at Assembly Studios.

At Saddington Baynes, Cardoso will work on new cinematic motion sequences for online video, helping to expand the studio’s beauty, fragrance, fashion and beverage work and deepen its expertise, particularly in lighting for video.

According to executive creative director James Digby-Jones, “It no longer matters whether elements are static or moving; whether the brief is for a 20,000-pixel image or 4K animation mixed with live action. We stretch creative and technical boundaries with fully integrated production that encompasses everything from CGI and motion to shoot production and VR capability.”

Foundry’s Nuke and Hiero 11.0 now available

Foundry has made available Nuke and Hiero 11.0, the next major release for the Nuke line of products, including Nuke, NukeX, Nuke Studio, Hiero and HieroPlayer. The Nuke family is being updated to VFX Platform 2017, which includes several major updates to key libraries used within Nuke, including Python, PySide and Qt.

The update also introduces Live Groups, a new type of group node that offers a powerful collaborative workflow for sharing work among artists. Live Groups referenced in other scripts automatically update when a script is loaded, without the need to render intermediate stages.

Nuke Studio’s intelligent background rendering is now available in Nuke and NukeX. The Frame Server takes advantage of available resources on your local machine, enabling you to continue working while rendering happens in the background. The LensDistortion node has been completely revamped, with added support for fisheye and wide-angle lenses and the ability to use multiple frames to produce better results. Nuke Studio now has new GPU-accelerated disk caching that allows users to cache part or all of a sequence to disk for smoother playback of more complex sequences.

GPU-accelerated renderer Redshift now in v.2.0, integrates with 3ds Max

Redshift Rendering has updated its GPU-accelerated rendering software to Redshift 2.0. This new version includes new features and pipeline enhancements to the existing Maya and Softimage plug-ins. Redshift 2.0 also introduces integration with Autodesk 3ds Max. Integrations with Side Effects Houdini and Maxon Cinema 4D are currently in development and are expected later in 2016.

New features across all platforms include realistic volumetrics, enhanced subsurface scattering and a new PBR-based Redshift material, all of which deliver improved final render results. Starting July 5, Redshift is offering 20 percent off new Redshift licenses through July 19.

Age of Vultures

A closer look at Redshift 2.0’s new features:

● Volumetrics (OpenVDB) – Render clouds, smoke, fire and other volumetric effects with production-quality results (initial support for OpenVDB volume containers).

● Nested dielectrics – The ability to accurately simulate the intersection of transparent materials with realistic results and no visual artifacts.

● New BRDFs and linear glossiness response – Users can model a wider variety of metallic and reflective surfaces via the latest and greatest in surface shading technologies (GGX and Beckmann/Cook-Torrance BRDFs).

● New SSS models and single scattering – More realistic results with support for improved subsurface scattering models and single-scattering.

● Redshift material – The ability to use a more intuitive, PBR-based main material, featuring effects such as dispersion/chromatic aberration.

● Multiple dome lights – Users can combine multiple dome lights to create more compelling lighting.

● alSurface support – There is now full support for the Arnold shader without having to port settings.

● Baking – Users can save a lot of rendering time with baking for lighting and AOVs.

Users include Blizzard, Jim Henson’s Creature Shop, Glassworks and Blue Zoo.

Main Image: Rendering example from A Large Evil Corporation.

Jon Neill joins Axis as head of lighting, rendering, compositing

Axis Animation in Glasgow, Scotland, has added Jon Neill as its new head of lighting, rendering and compositing (LRC). He has previously held senior positions at MPC and Cinesite, working on such projects as The Jungle Book, Skyfall and Harry Potter and the Order of the Phoenix.

His role at Axis will be overseeing the LRC team at both the department and project level, providing technical and artistic leadership across multiple projects and managing the day-to-day production needs.

“Jon’s supervisory skills coupled with knowledge in a diverse range of execution techniques is another step forward in raising the bar in both our short- and long-form projects,” says Graham McKenna, co-founder and head of 3D at Axis.

Thinkbox addresses usage-based licensing

At the beginning of May, Thinkbox Software launched Deadline 8, which introduced on-demand, per-minute licensing as an option for Thinkbox’s Deadline and Krakatoa, The Foundry’s Nuke and Katana, and Chaos Group’s V-Ray. The company also revealed it is offering free on-demand licensing for Deadline, Krakatoa, Nuke, Katana and V-Ray for the month of May.

Thinkbox founder/CEO Chris Bond explained, “As workflows increasingly incorporate cloud resources, on-demand licensing expands options for studios, making it easy to scale up production, whether temporarily or on a long-term basis. While standard permanent licenses are still the preferred choice for some VFX facilities, the on-demand model is an exciting option for companies that regularly expand and contract based on their project needs.”

Since the announcement, users have been reaching out to Thinkbox with questions about usage-based licensing. We reached out to Bond to help those with questions get a better understanding of what this model means for the creative community.

What is usage-based licensing?
Usage-based licensing is an additional option to permanent and temporary licenses and gives our clients the ability to easily scale up or scale down, without increasing their overhead, on a project-need basis. Instead of one license per render node, you can purchase minutes from the Thinkbox store (as pre-paid bundles of hours) that can be distributed among as many render nodes as you like. And, once you have an account with the Store, purchasing extra time only takes a few minutes and does not require interaction with our sales team.

Can users still purchase perpetual licenses of Deadline?
Yes! We offer both usage-based licensing and perpetual licenses, which can be used separately or together in the cloud or on-premise.

How is Deadline usage tracked?
Usage is tracked per minute. For example, if you have 10,000 hours of usage-based licensing, that can be used on a single node for 10,000 hours, 10,000 nodes for one hour or anything in between. Minutes are only consumed while the Deadline Slave application is rendering, so if it’s sitting idle, minutes won’t be used.
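
To make that arithmetic concrete, here is a minimal Python sketch of how a pre-paid pool might be drawn down. It is illustrative only (the function and numbers are my own, not Thinkbox’s accounting code), but it captures the model described above: minutes are consumed per node, per minute of active rendering, from one shared bundle.

```python
# Illustrative sketch only, not Thinkbox's actual accounting code.
def minutes_remaining(prepaid_hours, render_minutes_per_node):
    """prepaid_hours: the purchased bundle, in hours.
    render_minutes_per_node: active rendering minutes, one entry per Slave."""
    pool = prepaid_hours * 60                    # the bundle, in minutes
    consumed = sum(render_minutes_per_node)      # idle Slaves consume nothing
    return pool - consumed

# 10,000 hours spread across 10,000 nodes, each rendering for one hour:
print(minutes_remaining(10_000, [60] * 10_000))  # prints 0: the whole pool is used
```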

What types of renderfarms are compatible with usage-based licensing?
Usage-based licensing works with both local and cloud-based renderfarms. It can be used exclusively or alongside existing permanent and temporary licenses. You configure the Deadline Client on each machine for usage-based or standard licensing. Alternatively, Deadline’s Auto-Configuration feature allows you to automatically assign the licensing mode to groups of Slaves in the case of machines that might be dynamically spawned via our Balancer application. It’s easy to do, but if anyone is confused they can send us an email and we’ll schedule a session to step them through the process.

Can people try it out?
Of course! For the month of May, we’re providing free licensing hours of Deadline, Krakatoa, Nuke, Katana and V-Ray. Free hours can be used for on-premise or cloud-based rendering, and users are responsible for compute resources. Hours are offered on a first-come, first-served basis and any unused time will expire at 12am PDT on June 1.

Production Rendering: Tips for 2016 and beyond

By Andrew C. Jones

There is no shortage of articles online offering tips about 3D rendering. I have to admit that attempting to write one myself gave me a certain amount of trepidation considering how quickly most rendering advice can become obsolete, or even flat-out wrong.

The trouble is that production rendering is a product of the computing environment, available software and the prevailing knowledge of artists at a given time. Thus, the shelf life for articles about rendering tends to be five years or so. Inevitably, computing hardware gets faster, new algorithms get introduced and people shift their focus to new sets of problems.

I bring this up not only to save myself some embarrassment five years from now, but also as a reminder that computer graphics, and rendering in particular, is still an exciting topic that is ripe for innovation and improvement. As artists who spend a lot of time working within rigid production pipelines, it can be easy to forget this.

Below are some thoughts distilled from my own experience working in graphics, which I feel are about as relevant today as they would have been when I started working back in 2003. Along with each item, I have also included some commentary on how I feel the advice is applicable to rendering in 2016, and to Psyop’s primary renderer, Solid Angle’s Arnold, in particular.

Follow Academic Research
This can be intimidating, as reading academic papers takes considerably more effort than more familiar kinds of reading. Rest assured, it is completely normal to need to read a paper several times, and to do some background research, before it sinks in. Sometimes the background research is as helpful as the paper itself. Even if you do not completely understand everything, just knowing what problems the paper solves can be useful knowledge.

Papers have to be novel to be published, so finding new rendering research relevant to 2016 is pretty easy. In fact, many useful papers have been overlooked by the production community and can be worth revisiting. A recent example of this is Charles Schmidt and Brian Budge’s paper, “Simple Nested Dielectrics in Ray Traced Images” from 2002, which inspired Jonah Friedman to write his open source JF Nested Dielectric shader for Arnold in 2013. ACM’s digital library is a fantastic resource for finding graphics-related papers.

Study the Photographic Imaging Pipeline
Film, digital cinema and video are engineering marvels, and their complexity is easily taken for granted. They are the template for how people expect light to be transformed into an image, so it is important to learn how they work.

Despite increasing emphasis on physical accuracy over the past few years, a lot of computer graphics workflows are still not consistent with real-world photography. Ten years ago, the no-nonsense, three-word version of this tip would have been “use linear workflow.” Today, the three-word version of the tip should probably be “use a LUT.” In five more years, perhaps people will finally start worrying about handling white balance properly. OpenColorIO and ACES are two recent technologies that fit under this heading.
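
As a rough, hand-rolled illustration of why this matters (the numbers and functions below are my own, not part of any particular pipeline), here is a small Python sketch that decodes sRGB values to linear light before averaging them and then re-encodes the result; this is the same round trip that OpenColorIO, ACES or a LUT handles in production.

```python
# Minimal sketch of the "linear workflow" idea, using the standard sRGB
# transfer functions. In production this round trip is handled by a color
# management system (OpenColorIO, ACES) or a LUT rather than by hand.

def srgb_to_linear(c):
    # IEC 61966-2-1 sRGB decoding for one channel value in [0, 1]
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Averaging two pixels directly in display-referred sRGB gives the wrong
# brightness; averaging in linear light matches how light actually adds up.
a, b = 0.2, 0.9
naive = (a + b) / 2
correct = linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)
print(naive, correct)   # 0.55 vs. roughly 0.67
```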

Examples of recent renders done by Psyop on jobs for online retailer Otto and British Gas.

Study Real-World Lighting
The methodology and equipment of on-set lighting in live-action production can teach us a great deal, both artistically and technically. From an aesthetic standpoint, live-action lighting allows us to focus on learning how to control light to create pleasing images, without having to worry about whether or not physics is being simulated correctly.

Meanwhile, simulating real-world light setups accurately and efficiently in CG can be technically challenging. Many setups rely heavily on indirect effects like diffusion, but these effects can be computationally expensive compared to direct lighting. In Arnold, light filter shaders can help transform simplistic area lights into more advanced light rigs with view-dependent effects.

Fight for Simplicity
As important as it is to push the limits of your workflow and get the technical details right, all of that effort is for naught if the workflow is too difficult to use and artists start making mistakes.

In recent years, simplicity has been a big selling point for path-tracing renderers, as brute-force path-tracing algorithms tend to require fewer parameters than spatially dependent approximations. Developers are constantly working to make their renderers more intuitive, so that artists can achieve realistic results without visual cheats. For example, Solid Angle recently added per-microfacet Fresnel calculations, which help achieve more realistic specular reflections along the edges of surfaces.
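
To make that edge-brightening effect concrete, here is a tiny Python sketch of Schlick’s Fresnel approximation (a standard textbook formula, not Arnold’s internal code): reflectance stays near the base value head-on and climbs toward 1.0 at grazing angles.

```python
import math

# Schlick's Fresnel approximation; f0 is the reflectance at normal
# incidence (roughly 0.04 for common dielectrics). Illustrative only.
def schlick_fresnel(cos_theta, f0):
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

for angle_deg in (0, 45, 80, 89):
    cos_theta = math.cos(math.radians(angle_deg))
    print(angle_deg, round(schlick_fresnel(cos_theta, 0.04), 3))
# 0 deg prints 0.04 (head-on, weak reflection); 89 deg prints about 0.92
# (grazing, nearly mirror-like), which is why object edges read brighter.
```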

Familiarize Yourself With Your Renderer’s API (If it Has One)
Even if you have little coding background, the API can give you a much deeper understanding of how your renderer really works. This can be a significant trade-off for GPU renderers, as the fast-paced evolution of GPU programming makes providing a general purpose API particularly difficult.

Embrace the Statistical Nature of Raytracing
The “DF” in BRDF actually stands for “distribution function.” Even real light is made of individual photons, which can be thought of as particles bouncing off of surfaces according to probability distributions. (Just don’t think of the photons as waves or they will stop cooperating!)

When noise problems occur in a renderer, it is often because a large amount of light is being represented by a small subset of sampled rays. Intuitively, this is a bit like trying to determine the average height of Germans by measuring people all over the world and asking if they are German. Only 1 percent of the world’s population is German, so you will need to measure 100 times more people than if you collected your data from within Germany’s borders.
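
For a toy version of that analogy (purely illustrative, with made-up numbers and no renderer involved), the Python sketch below estimates the average height of the 1 percent “German” sub-population two ways and compares how noisy the estimates are.

```python
import random, statistics

random.seed(1)

def sample_person(worldwide):
    # Made-up numbers: 1% of the "world" is German in this toy model.
    is_german = (random.random() < 0.01) if worldwide else True
    height = random.gauss(175, 7) if is_german else random.gauss(168, 9)
    return is_german, height

def estimate(worldwide, n=1000):
    heights = [h for g, h in (sample_person(worldwide) for _ in range(n)) if g]
    return statistics.mean(heights) if heights else float("nan")

runs_world = [estimate(True) for _ in range(20)]    # ~10 useful samples per run
runs_direct = [estimate(False) for _ in range(20)]  # all 1,000 samples are useful
print(statistics.stdev(runs_world), statistics.stdev(runs_direct))
# The worldwide estimates are far noisier: the same number of samples buys
# much less information, which is exactly what a noisy render is doing.
```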

One way developers can improve a renderer is by finding ways to gather information about a scene using fewer samples. These improvements can be quite dramatic. For example, the most recent Arnold release can render some scenes up to three times as fast, thanks to improvements in diffuse sampling. As an artist, understanding how randomization, sampling and noise are related is the key to optimizing a modern path tracer, and it will help you anticipate long render times.

Learn What Your Renderer Does Not Do
Although some renderers prioritize physical accuracy at any cost, most production renderers attempt to strike a balance between physical accuracy and practicality.

Light polarization is a great example of something most renderers do not simulate. Polarizing filters are often used in photography to control the balance between specular and diffuse light on surfaces and to adjust the appearance of certain scene elements like the sky. Recreating these effects in CG requires custom solutions or artistic cheats. This can make a big difference when rendering things like cars and water.

Plan for New Technology
Technology can change quickly, but adapting production workflows always takes time. By anticipating trends, such as HDR displays, cloud computing, GPU acceleration, virtual reality, light field imaging, etc., we not only get a head start preparing for the future, but also motivate ourselves to think in different ways. In many cases, solutions that are necessary to support tomorrow’s technology can already change the way we work today.

Andrew C. Jones is head of visual effects at NYC- and LA-based Psyop, which supplies animation, design, illustration, 3D, 2D and live-action production to help brands connect with consumers. You can follow them on Twitter @psyop 

Company behind Arnold renderer brings on studio tech vet Darin Grant

VFX and animation technology veteran Darin Grant has joined Solid Angle, the developers of the Arnold image renderer, as VP of engineering. Grant has held technology roles at a variety of VFX houses over the years, including time as CTO of Digital Domain and Method Studios. He also held senior technology titles at DreamWorks Animation and Google.

“Darin has a long history leading massive technology efforts in VFX and computer animation at big global studios,” reports Solid Angle founder/CEO Marcos Fajardo. “And, like we are, he’s passionate about making sure production needs drive development. Having Darin on our team will help us scale our efforts and bring fast, accurate rendering to all kinds of productions.”

Prior to joining Solid Angle, Grant was CTO of Method Studios, where he focused on unifying infrastructure, pipeline and workflows across its many locations. Prior to that he was CTO of Digital Domain and head of production technology at DreamWorks Animation, where he guided teams to maintain and enhance the company’s unified pipeline.

He has also provided strategic consulting to many companies, including The Foundry, Chaos Group and Autodesk, and most recently worked with Shotgun Software to create a Pipeline Consulting Services team and advise on cloud enablement and film security. He began his career in visual effects, developing shaders as rendering lead at Digital Domain.

“Having followed the development of Arnold over the past 17 years, I have never been more excited about the future of the product than I am today,” says Grant. “Broadening our availability on the cloud and our accessibility from animation tools while allowing Marcos and the team to drive innovation in the renderer itself allows us to move faster than we ever have.”

We’ve known Darin for years and reached out to him for more.

What’s it like going from a creative studio to a software developer?
After spending a career helping teams create amazing content at individual studios, the opportunity to be able to help teams at all the studios was too good to pass up.

How are you taking your background as a studio guy and putting it toward making software?
On the studio side, I managed distributed software teams for the past 15 years, and I can definitely apply that here. The interesting piece is that every client meeting I walk into ends up being with someone who I’ve worked with in the past, so that definitely helps strengthen our already solid relationship with our customers.

The main difference is that our company’s focus is on building robust software that scales, versus trying to do that while the rest of the studio focuses on creating great content. It’s nice to have that singular focus and vision.

——
Grant is a member of the Scientific and Technical Awards Committee for AMPAS and the Visual Effects Society.

Nvidia’s GPU Technology Conference: Part III

Entrepreneurs, self-driving cars and more

By Fred Ruckel

Welcome to the final installment of my Nvidia GPU Technology Conference experience. If you have read Part I and Part II, I’m confident you will enjoy this wrap-up — from a one-on-one meeting with one of Nvidia’s top dogs to a “shark tank” full of entrepreneurs to my take on the status of self-driving cars. Thanks for following along and feel free to email if you have any questions about my story.

Going One on One
I had the pleasure of sitting down with Nvidia marketing manager Greg Estes, along with Gail Laguna, their PR expert in media and entertainment. They allowed me to pick their brains about… Continue reading

Nvidia’s GPU Technology Conference: Part II

By Fred Ruckel

A couple of weeks ago I had the pleasure of attending Nvidia’s GPU Technology Conference in San Jose. I spent five days sitting in on sessions, demos and a handful of one-on-one meetings. If Part I of my story had you interested in the new world of GPU technology, take a dive into this installment and learn what other cool things Nvidia has created to enhance your workflow.

Advanced Rendering Solutions
We consider rendering to be the final output of an animation. While that’s true, there’s a lot more to rendering than just the final animated result. We could jump straight to the previz… Continue reading

Maxon intros next-gen Cinema 4D

Maxon has updated its 3D motion graphics, visual effects, visualization, painting and rendering software Cinema 4D to Release 16. Some of the new features in this version include a PolyPen modeling “super-tool,” a motion tracker for easily integrating 3D content within live footage and a Reflectance material channel that allows for multi-layered reflections and specularity.

The company will be at Siggraph next week with the new version, which is scheduled to ship in September.

Key highlights include:
Motion Tracker – This offers fast and seamless integration of 3D elements into real-world footage. Footage can be tracked automatically or manually, and aligned to the 3D environment using position, vector and planar constraints.

Interaction Tag – This gives users control over 3D objects and works with the new Tweak mode to provide information on object movement and highlighting. Suited for technical directors and character riggers, the tag reports all mouse interaction and allows object control via XPresso, COFFEE or Python.

PolyPen – With this tool users can paint polygons and polish points as well as easily move, clone, cut and weld points and edges of 3D models. You can even re-topologize complex meshes. Enable snapping for greater precision or to snap to a surface.

Bevel Deformer – The Bevel toolset in Cinema 4D can now be applied nondestructively to entire objects or specific selection sets. Users can also animate and adjust bevel attributes to create all new effects.

Sculpting – R16 offers many improvements and dynamic features to sculpt with precision and expand the overall modeling toolset. The new Select tool gives users access to powerful symmetry and fill options to define point and polygon selections on any editable object. Additional features give users more control and flexibility for sculpting details on parametric objects, creating curves, defining masks, stamps and stencils, as well as tools for users to create their own sculpt brushes and more.

Other modeling features in R16 include an all-new Cogwheel spline primitive to generate involute and ratchet gears; a new Mesh Check tool to evaluate the integrity of a polygonal mesh; Deformer Falloff options and Cap enhancements to easily add textures to the caps of MoText, Extrude, Loft, Lathe and Sweep objects.

Reflectance Channel (main image) – This provides more control over reflections and specularity within a single new channel. Features include the ability to build up multiple layers for complex surfaces such as metallic car paint or woven cloth, as well as options to render separate multi-pass layers for each reflection layer to achieve higher-quality, realistic imagery.

New Render Engine for Hair & Sketch – A completely new unified effects render engine allows artists to seamlessly raytrace Hair and Sketch lines within the same render pass to give users higher quality results in a fraction of the time.

Rendering

Team Render, introduced by Maxon in 2013, features many new enhancements, including a client-server architecture that allows users to control all the render jobs for a studio via a browser.

Other Workflow Features/Updates
Content Library – Completely reorganized and optimized for Release 16, the preset library contains custom-made solutions with specific target groups in mind. New house and stair generators, as well as modular doors and windows, have been added for architectural visualizers. Product and advertising designers can take advantage of a powerful tool to animate the folding of die-cut packaging, as well as modular bottles, tubes and boxes. Motion designers will enjoy the addition of high-quality models made for MoGraph, preset title animations and interactive chart templates.

Exchange/Pipeline Support – Users can now exchange assets throughout the production pipeline more reliably in R16 with support for the most current versions of FBX and Alembic.

Solo Button – Offers artists a production-friendly solution to isolate individual objects and hierarchies for refinement when modeling. Soloing also speeds up the viewport performance for improved workflow on massive scenes.

Annotations – Tag specific objects, clones or points in any scene with annotations that appear directly in view for a dependable solution to reference online pre-production materials, target areas of a scene for enhancement, and more.

UV Peeler – An effective means to quickly unwrap the UVs of cylindrical objects for optimized texturing.