
Review: Foundry’s Athera cloud platform

By David Cox

I’ve been thinking for a while that there are two types of post houses — those that know what cloud technology can do for them, and those whose days are numbered. That isn’t to say that the use of cloud technology is essential to the survival of a post house, but if they haven’t evaluated the possibilities of it they’re probably living in the past. In such a fast-moving business, that’s not a good place to be.

The term “cloud computing” suffers from having been hijacked by know-nothing marketeers and has become rather vague in meaning. It’s quite simple though: it just means a computer (or storage) owned and maintained by someone else, housed somewhere else and used remotely. The advantage is that a post house can cut its fixed overheads by owning fewer computers, saving money on installation and upkeep. Cloud computers can be used as and when they are needed, which allows capacity to be scaled up and down in proportion to workload.

Over the last few years, several providers have created global datacenters containing upwards of 50,000 servers per site, entirely for the use of anyone who wants to “remote in.” Amazon and Google are the two biggest providers, but as anyone who has tried to harness their power for post production can confirm, they’re not simple to understand or configure. Amazon alone has hundreds of different computer “instance” types, and accessing them requires navigating through a sea of unintelligible jargon. You must know your Elastic Beanstalks from your EC2, EKS and Lambda. And make sure you’ve worked out how to connect your S3, EFS and Glacier. Software licensing can also be tricky.
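
To give a flavor of what “direct” cloud configuration involves, here is a minimal sketch using Amazon’s boto3 Python library. The AMI ID and instance type are placeholders, and a real render node would still need storage, networking, security groups and software licensing wired up by hand; that is exactly the friction a friendlier front end could hide.

```python
# A minimal sketch of launching one GPU instance directly on AWS via boto3.
# The AMI ID is a placeholder; storage, networking, security groups and
# software licensing are all still left to you.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image
    InstanceType="g3.4xlarge",        # one GPU type among hundreds of instance types
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```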

The truth is, these incredible cloud installations are for cleverer people than those of us who just like to make pretty pictures. They are more for the sort who like to build neural networks and don’t go outside very much. What our industry needs is some clever company to build a nice shiny front end that lets us harness that power using the tools we know and love, and just make it all a bit simpler. Enter Athera, from Foundry. That’s exactly what they’ve done.

What is Athera?

Athera is a platform hosted on Google Cloud infrastructure that presents a user with icons for apps such as Nuke and Houdini. Access to each app is via short-term (30-day) rental. When an available app icon is clicked, a cloud computer is commanded into action, pre-installed with the chosen app. From then on, the app is used just as if locally installed. Of course, the app is actually running on a high-performance computer located in a secure and nicely cooled datacenter environment. Provided the user has a vaguely decent Internet connection, they’re good to go, because only the user interface is being transmitted across the network, not the actual raw image data.

Apps available on Athera include Foundry’s products, plus a few others. Nuke is represented in its base form, plus a Nuke X variant, Nuke Studio, and a combination of Nuke X and Cara VR. Also available are the Mari texture-painting suite, the Katana look-development app and the Modo CGI modeling software.

Athera also offers access to non-Foundry products like CGI software Houdini and Blender, as well as the Gaffer management tool.

In my first test, I rustled up an instance of Nuke Studio and one of Blender. The first thing I wanted to test was the GPU speed, as this can be somewhat variable for many cloud computer types (usually between zero and not much). I was pleasantly surprised, as the rendering speed was close to that of a local Nvidia GeForce GTX 1080, which is pretty decent. I was also pleased to see that user preferences were maintained between sessions.

One thing that particularly impressed me was how I could call up multiple apps together and Athera would effectively build a network in the background to link them all up. Frames rendered out of Blender were instantly available in the cloud-hosted Nuke Studio, even though it was running on a different machine. This suggests the Athera infrastructure is well thought out because multi-machine, networked pipelines with attached storage are constructed with just a few clicks and without really thinking about it.

Access to the Athera apps is either by web browser or via local client software called “Orbit.” In web browser mode, each app opens in its own browser tab; with Orbit, each app appears in a dedicated local window. Orbit boasts lower latency and the ability to use local hardware such as multiple monitors. Latency, which would show itself as a frustrating delay between control input and visual feedback, was impressively low, even when using the web browser interface. Generally, it was easy to forget that the app being used was not installed locally.

Getting files in and out was also straightforward. A Dropbox account can be directly linked, although a Google or Amazon S3 storage “bucket” is preferred for speed. There is also a hosted app called “Toolbox,” which is effectively a file browser to allow the management of files and folders.

The Athera platform also contains management and reporting features. A manager can set up projects and users, setting out which apps and projects a user has access to. Quotas can be set, and full reports are given as to who did what, when and with which app.

Athera’s pricing is laid out on its website, and it’s interesting to drill into the costs and make comparisons. A user buys access to apps in 30-day blocks. Personally, I would like to see shorter blocks at some point to increase scaling flexibility. That said, render-only instances for many of the apps can be accessed on per-second billing. The 30-day block comes with a “fair use” policy of 200 hours. This is a hard limit, which equates to around nine and a half hours per day across five-day weeks (which is technically known in post production as part time).

Figuring Out Cost
Blender is a good place to start analyzing cost because it’s open-source (free) software, so the $244 Athera charge for 30 days/200 hours must be for hardware alone. This equates to $1.22 per hour, which, compared to direct cloud computer usage, is pretty good value for the GPU-backed machine on offer.

Another way of looking at $244 a month: a new computer costing $5,800 depreciates at roughly that rate if written off over two years. That is to say, if a computer of that value is kept for two years before being replaced, it effectively loses roughly $241 per month in value. Depreciated over three years, the figure drops to roughly $161 per month. Of course, that’s just the cost of depreciation. Cost of ownership must also include updating, maintaining, powering, cooling, insuring, housing and repairing the machine if (when!) it breaks down. If a cloud computer breaks down, Google has a few thousand more waiting in the wings. In general, the base hardware cost seems quite competitive.

Of course, Blender is not really the juicy stuff. Access to a base Nuke, complete with workstation, is $685 per 30 days/200 hours; Nuke X is $1,025. There are also “power” options for around 20% more, which provide a significantly more powerful machine. Compared to running a local machine with purchased or rented software, these prices are very interesting. And when the ability to scale up and down with workload is factored in, especially being able to scale down to nothing during quiet times, the case for Athera becomes quite compelling.
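
For those who like to check the math, the figures above reduce to a few lines of arithmetic. A minimal sketch using the prices quoted in this review (straight-line depreciation; maintenance, power and the like excluded):

```python
# Sanity-checking the cost figures quoted above. Prices in USD per
# 30-day / 200-hour Athera access block; depreciation is straight-line.
FAIR_USE_HOURS = 200

for app, price in [("Blender", 244), ("Nuke", 685), ("Nuke X", 1025)]:
    print(f"{app}: ${price / FAIR_USE_HOURS:.2f}/hour")  # effective hourly rate

workstation = 5_800  # cost of a comparable local machine
print(f"2-year write-off: ${workstation / 24:.2f}/month")  # ~$241.67, as above
print(f"3-year write-off: ${workstation / 36:.2f}/month")  # ~$161.11, roughly $80 less
```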

Another helpful factor is that a single 30-day access block to a particular app can be shared between multiple users — as long as only one user has control of the app at a time. This is subject to the fair use limitation.

There is an issue if commercial (licensed) plug-ins are needed. For the time being, these can’t be used on Athera due to the obvious licensing issues of installing them on a different cloud machine each time. Hopefully, plug-in developers will wake up to the possibilities of pay-per-use licensing, as a platform like Athera could be the perfect storefront.

Security
One of the biggest concerns about using remote computing is security, but this concern tends to be more perceptual than real. The truth is that a Google datacenter is likely to have significantly more security than the average post company’s machine room, and Google employs the best in the security business. Still, if material being worked on leaks out to the public, telling a client, “But I just sent it to Google and figured it would be fine,” isn’t going to sound great. Realistically, the most likely point of vulnerability is the transfer of data to and from the datacenter; a breach inside the datacenter itself is very unlikely. As ever, a post producer has to remain vigilant.

Summing Up
I think Foundry has been very smart and forward-thinking in creating a platform able to support more than just Foundry products in the cloud. It would have been understandable if they had just made it a storefront for alternative ways of using Nuke (etc.), but they clearly see a bigger picture. Using a platform like Athera, post infrastructure can be assembled and disassembled on demand, allowing post producers to match their overheads to their workload.

Athera enables smart post producers to build a highly scalable post environment with access to a global pool of creative talent who can log in and contribute from anywhere with little more than a modest computer and internet connection.

I hate the term game-changer — it’s another term so abused by know-nothing marketeers who have otherwise run out of ideas — but Athera, or at least what this sort of platform promises to provide, is most certainly a game-changer. Especially if more apps from different manufacturers can be included.


David Cox is a VFX compositor and colorist with 20-plus years of experience. He started his career with MPC and The Mill before forming his own London-based post facility. Cox recently created interactive projects with full body motion sensors and 4D/AR experiences.

postPerspective Impact Award winners from NAB 2017

In early April, postPerspective announced the debut of our Impact Awards, celebrating innovative products and technologies for the post production and production industries that will influence the way people work. Our inaugural awards honor the best new or upgraded gear shown at NAB 2017.

Now that the show is over, and our panel of post pro judges has had time to decompress, dig out and think about what impressed them, we are happy to announce our honorees.

And the winners of the postPerspective Impact Award from NAB 2017 are:

• Adobe — Creative Cloud Suite
• Avid — Media Composer | Cloud Remote
• Blackmagic Design — DaVinci Resolve 14
• Dell — UltraSharp 27 4K HDR Monitor
• HP — DreamColor Z31x Studio Display

“The postPerspective Impact Award celebrates companies that have listened to users’ wants and needs and then produced tools designed to make their working lives easier and projects better,” said Randi Altman, postPerspective’s founder and editor-in-chief. “And all of our winners certainly fall into that category.

“Our awards are special because they are voted on by people who will be potentially using these tools in their day-to-day workflows. It’s real-world users who have determined our winners, and that is the way it should be. We feel awards for products targeting pros should be voted on by pros.”

Obviously, there were many new technologies and products at NAB this year, and while only five won an Impact Award, our judges felt there were other tools people should know about as well.

Displays for high-resolution workflows were of special interest to many of our judges. In addition to our winners, they pointed to Sony’s CLEDIS, Bravia and XBR displays; SmallHD’s Focus monitor; Eizo’s ColorEdge monitors; and Flanders Scientific’s 55-inch OLED HDR display.

Other gear that caught our judges’ attention — AJA’s FS-HDR with ColorFront; Telestream’s Wirecast with Cloud-Assist captioning; Avid Pro Tools with Dolby Atmos integration; IBM Watson for post production; Mettle’s 360 Degree/VR Depth plug-ins and SkyBox Studio V2; G-Tech’s Thunderbolt 3 Shuttle XL; AJA’s Ki Pro Ultra Plus; and The Foundry’s Nuke 11 and Elara.

Stay tuned for future Impact Award winners in the coming months — voted on by users for users — from SIGGRAPH and IBC.

The Foundry gives Jody Madden additional role, ups Phil Parsonage

The Foundry’s chief customer officer, Jody Madden, has been given the additional role of chief product officer (CPO). In another move, Phil Parsonage has been promoted to director of engineering.

As CPO, Madden — who had at one time held the title of COO at The Foundry — returns to her more technical roots, which include stints at VFX studios such as Digital Domain and Industrial Light & Magic. In this new role, Madden is responsible for managing The Foundry’s full product line.

“I’m really excited to be stepping into this new role as I continue my rewarding journey with The Foundry,” says Madden. “I started [in this industry] as a customer, so I’m intimately familiar with the challenges the market faces. Now as CPO, I’m truly excited to help our customers address their technical and business challenges by continuing to push the boundaries of visual effects and design software.”

Parsonage, who has been with The Foundry for over 10 years, will run The Foundry’s engineering efforts. As part of this job, he is responsible for conceiving and implementing the company’s technical strategy.

 

Pixar open sources Universal Scene Description for CG workflows

Pixar Animation Studios has released Universal Scene Description (USD) as an open source technology in order to help drive innovation in the industry. Used for the interchange of 3D graphics data among various digital content creation tools, USD provides a scalable solution for the complex workflows of CG film and game studios.

With this initial release, Pixar is opening up its development process and providing code used internally at the studio.

“USD synthesizes years of engineering aimed at integrating collaborative production workflows that demand a constantly growing number of software packages,” says Guido Quaroni, VP of software research and development at Pixar.

USD provides a toolset for reading, writing, editing and rapidly previewing 3D scene data. With many of its features geared toward performance and large-scale collaboration among many artists, USD is ideal for the complexities of the modern pipeline. One such feature is Hydra, a high-performance preview renderer capable of interactively displaying large data sets.
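
For a flavor of what that toolset looks like in practice, here is a minimal sketch using USD’s Python bindings, along the lines of Pixar’s own “hello world” tutorial (file name arbitrary):

```python
# Build a tiny USD scene with the pxr Python bindings: a transform
# containing a sphere, saved as human-readable .usda text.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("hello.usda")
UsdGeom.Xform.Define(stage, "/hello")
sphere = UsdGeom.Sphere.Define(stage, "/hello/world")
sphere.GetRadiusAttr().Set(2.0)  # author an attribute value on the sphere
stage.GetRootLayer().Save()
print(stage.GetRootLayer().ExportToString())  # inspect the authored scene
```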

“With USD, Hydra, and OpenSubdiv, we’re sharing core technologies that can be used in filmmaking tools across the industry,” says George ElKoura, supervising lead software engineer at Pixar. “Our focus in developing these libraries is to provide high-quality, high-performance software that can be used reliably in demanding production scenarios.”

Along with USD and Hydra, the distribution ships with USD plug-ins for some common DCCs, such as Autodesk’s Maya and The Foundry’s Katana.

To prepare for open-sourcing its code, Pixar gathered feedback from various studios and vendors who conducted early testing. Studios such as MPC, Double Negative, ILM and Animal Logic were among those who provided valuable feedback in preparation for this release.

Foundry intros new procedural modeling system with Modo 10.1

With the launch of Modo 10.1, The Foundry has introduced a new procedural modeling system that works side by side with Modo’s direct modeling toolset, designed to make creating 3D content faster and easier.

The new toolset enables artists to iterate more freely, with the ability to manipulate modeling operations at any time in a flexible layer stack; create infinite variations using procedural operations that can be driven by textures, falloffs or dynamically changing inputs; and easily accommodate change requirements with the ability to edit selections and swap out input meshes after the fact. New curve tools, constraints, deformers and enhancements to the MeshFusion toolset complete the Modo 10.1 modeling updates.

Modo 10.1 is shipping now and is the second of three installments in the Modo 10 Series. Customers purchasing the Modo 10 Series receive all three installments as they become available, including Modo 10.0, which offers enhanced workflows for creating realtime content for games and other immersive interactive experiences like virtual reality.

Here are some details of Modo 10.1’s features:
Nondestructive procedural stack: Modo’s procedural modeling system allows users to model nondestructively. Each successive mesh operation — such as Bevel, Extrude, Merge, Reduce and Thicken — is added as a layer to a procedural stack, which can be modified, reordered, disabled or deleted at any time. Users can view the state of the mesh at any point in the stack, with future operations shown as ghosted. There is also the ability to freeze the stack up to any chosen point for improved performance.
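
The underlying idea is straightforward to illustrate. Here is a purely conceptual sketch in Python (not Modo’s actual API) of a layer stack whose operations can be toggled, reordered and partially evaluated:

```python
# Conceptual sketch of a nondestructive mesh-operation stack (not Modo's API).
from dataclasses import dataclass, field
from typing import Callable, List, Optional

Mesh = list  # stand-in: a mesh as a list of (x, y, z) vertices

@dataclass
class MeshOp:
    name: str                   # e.g. "Bevel", "Extrude", "Thicken"
    fn: Callable[[Mesh], Mesh]  # the modeling operation itself
    enabled: bool = True        # ops can be disabled without being deleted

@dataclass
class ProceduralStack:
    base: Mesh
    ops: List[MeshOp] = field(default_factory=list)

    def evaluate(self, up_to: Optional[int] = None) -> Mesh:
        """Recompute from the base mesh; `up_to` previews a partial stack."""
        mesh = list(self.base)
        for op in self.ops[:up_to]:
            if op.enabled:
                mesh = op.fn(mesh)
        return mesh

# Nothing is baked in, so operations can be edited or reordered at any time
# and the result simply re-evaluates:
stack = ProceduralStack(base=[(0.0, 0.0, 0.0)])
stack.ops.append(MeshOp("MoveUp", lambda m: [(x, y + 1, z) for x, y, z in m]))
stack.ops.append(MeshOp("Scale2x", lambda m: [(2 * x, 2 * y, 2 * z) for x, y, z in m]))
stack.ops.reverse()             # reorder the stack: scale first, then move
print(stack.evaluate())         # [(0.0, 1.0, 0.0)]
print(stack.evaluate(up_to=1))  # [(0.0, 0.0, 0.0)] -- preview after the first op
```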

Selection operations: Selection operations let users control which mesh elements are affected by subsequent operations in the procedural stack. By default, selected elements are automatically added to a Select By Index operation. Elements can be added to or removed from the selection at any time. In addition, users can define other operations to select elements procedurally. Selection operations allow selections to be rigged, animated and dynamically updated as the input mesh changes.

Procedural variations: Procedural modeling in Modo allows for creation of an almost infinite number of variations. For example, you could construct a procedural road, and then easily change the number of lights, the width of the road, or even the path that the road follows. Textures can be used as inputs to modeling operations — by changing the texture, an entirely different effect can be achieved, while all of the subsequent operations continue to be applied.

Animation and falloffs: Modo’s procedural system allows users to easily animate almost any parameter within the stack. Falloffs can be used to modulate the behavior of tools within the stack; the placement of the falloffs can even be animated for interesting effects. It’s also now possible to modulate the effect of a direct or procedural tool or deformer using a textured falloff. Textures can be connected to the falloff directly in the schematic or in the mesh operation stack.

Procedural text: Modo’s procedural system makes it easier to create a style for a piece of 3D text — adding thickness and bevels, for example — and then change the input string or the font to create new variations as required. Text can also be rigged in the schematic view, allowing the source text to be driven and dynamically changed. In addition, the Text tool can now directly output Bézier curves that can be used to drive a new Curve Fill operation, providing an all-quad mesh.

Curve enhancements: Modo now offers B-Splines as an alternative to Bézier curves. There’s also a new procedural Curve Fill operation that lets users fill a closed curve with quads; a Curve Particle Generator that lets you easily create and adjust duplicated geometry along curves; a Curve Rebuild operation that resamples a curve into an evenly spaced set of points; a Lacing Geometry operation that extrudes a profile shape along a guided curve; and an Edges to Curves operation.

MeshFusion enhancements: The advanced MeshFusion Boolean modeling toolset now offers better control of mesh topology and density for individual strips. Users can now intuitively create, edit and analyze simple Fusion models in the schematic with extended support for drag-and-drop editing, while enhanced placement options let you easily place and fuse multiple copies of preset meshes. Procedurally modeled meshes can be used as inputs to MeshFusion.

UV Transform, UV Constraint and Push Influence: Making a 2D shape accurately conform to a 3D mesh — for example, when attaching a decorative stripe to a shoe — is now faster and easier, thanks to a new UV Transform operation. In addition, a new UV Constraint makes it easy to constrain both the position and the rotation of objects to an arbitrary position on a 3D surface. Also, a new Push Influence deformer pushes geometry along its surface normal.

Experiencing autism in VR via Happy Finish

While people with autism might “appear” to be like the rest of us, the way they experience the world is decidedly different. Imagine sensory overload times 10. In an effort to help the public understand autism, the UK’s National Autistic Society and agency Don’t Panic have launched a campaign called “Too Much Information” (#autismTMI) that is set to challenge myths, misconceptions and stereotypes relating to this neurobiological disorder.

In order to help tell that story, the NAS called on London’s Happy Finish to help create a 360-degree VR film that puts viewers into the shoes of a child with autism during a visit to a shopping center. A 2D film had previously been developed based on the experience of a 10-year-old autistic boy named Alexander. Happy Finish provided visual effects for that version, which, since March of last year, has garnered over 54 million views and over 850K shares. The new 360-degree VR experience takes the viewer into Alexander’s world in a more immersive way.

After interviewing several autistic adults as part of the research, Happy Finish developed the idea into an experience that aims to trigger viewers’ empathy and understanding. Working with Don’t Panic and The National Autistic Society, the studio shares Alexander’s experience in an immersive and moving way.

The piece was shot by DP Michael Hornbogen using a six-camera GoPro array in 3D printed housing. For stitching, Happy Finish called on Autopano by Kolor, The Foundry’s Nuke and Adobe After Effects. Editing was in Adobe Premiere. Color grading was via Blackmagic’s Resolve.

“It was a long process of compositing using various tools,” explains Jamie Mossahebi, who directed the VR shoot at Happy Finish. “We created 18 versions, amending and tweaking based on initial feedback from autistic adults.”

He says that most of the studio’s VR experiences aim to create something comfortable and pleasant, but this one needed to be uncomfortable while remaining engaging. “The main challenge was to be as realistic as possible. For that, we focused a lot on the sound design, as well as testing a wide variety of visual effects, selecting the key ones that contributed to making it as immersive and as close to a sensory overload as possible,” explains Mossahebi.

“This is Don’t Panic’s first experience of creating a virtual reality campaign,” says Richard Beer, creative director at Don’t Panic. “The process of creating a virtual reality film has a whole different set of rules: it’s about creating a place for people to visit and a person for them to become, rather than simply telling a story. This interactivity gives virtual reality a unique sense of ‘presence’ — it has the power to take us somewhere else in time and space, to help us feel, just for a while, what it’s like to be someone else — which is why it was the perfect tool to communicate for the NAS exactly what a sensory overload feels like for someone with autism.”

Sponsored by Tangle Teezer and Intu, the film will tour shopping centers around the UK and will also be available through the Autism TMI Virtual Reality Experience app.

Thinkbox addresses usage-based licensing

At the beginning of May, Thinkbox Software launched Deadline 8, which introduced on-demand, per-minute licensing as an option for Thinkbox’s Deadline and Krakatoa, The Foundry’s Nuke and Katana, and Chaos Group’s V-Ray. The company also revealed it is offering free on-demand licensing for Deadline, Krakatoa, Nuke, Katana and V-Ray for the month of May.

Thinkbox founder/CEO Chris Bond explained, “As workflows increasingly incorporate cloud resources, on-demand licensing expands options for studios, making it easy to scale up production, whether temporarily or on a long-term basis. While standard permanent licenses are still the preferred choice for some VFX facilities, the on-demand model is an exciting option for companies that regularly expand and contract based on their project needs.”

Since the announcement, users have been reaching out to Thinkbox with questions about usage-based licensing. We reached out to Bond to help those with questions get a better understanding of what this model means for the creative community.

What is usage-based licensing?
Usage-based licensing is an additional option to permanent and temporary licenses and gives our clients the ability to easily scale up or scale down, without increasing their overhead, on a project-need basis. Instead of one license per render node, you can purchase minutes from the Thinkbox store (as pre-paid bundles of hours) that can be distributed among as many render nodes as you like. And, once you have an account with the Store, purchasing extra time only takes a few minutes and does not require interaction with our sales team.

Can users still purchase perpetual licenses of Deadline?
Yes! We offer both usage-based licensing and perpetual licenses, which can be used separately or together in the cloud or on-premise.

How is Deadline usage tracked?
Usage is tracked per minute. For example, if you have 10,000 hours of usage-based licensing, that can be used on a single node for 10,000 hours, 10,000 nodes for one hour or anything in between. Minutes are only consumed while the Deadline Slave application is rendering, so if it’s sitting idle, minutes won’t be used.
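
In other words, the prepaid hours form a single pool of minutes shared by however many nodes happen to be rendering. A minimal sketch of that accounting (illustrative only, not Thinkbox’s implementation):

```python
# Illustrative accounting for a shared pool of prepaid license minutes;
# minutes are consumed only while nodes are actively rendering.
class LicensePool:
    def __init__(self, prepaid_hours: float):
        self.minutes_left = prepaid_hours * 60

    def render(self, nodes: int, minutes: float) -> None:
        """Charge for `nodes` machines rendering concurrently for `minutes`."""
        cost = nodes * minutes
        if cost > self.minutes_left:
            raise RuntimeError("pool exhausted: buy more hours from the store")
        self.minutes_left -= cost

pool = LicensePool(prepaid_hours=10_000)
pool.render(nodes=10_000, minutes=60)  # 10,000 nodes for one hour...
print(pool.minutes_left)               # 0.0 -- same pool as 1 node for 10,000 hours
```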

What types of renderfarms are compatible with usage-based licensing?
Usage-based licensing works with both local- and cloud-based renderfarms. It can be used exclusively or alongside existing permanent and temporary licenses. You configure the Deadline Client on each machine for usage-based or standard licensing. Alternatively, Deadline’s Auto-Configuration feature allows you to automatically assign the licensing mode to groups of Slaves in the case of machines that might be dynamically spawned via our Balancer application. It’s easy to do, but if anyone is confused they can send us an email and we’ll schedule a session to step you through the process.

Can people try it out?
Of course! For the month of May, we’re providing free licensing hours of Deadline, Krakatoa, Nuke, Katana and V-Ray. Free hours can be used for on-premise or cloud-based rendering, and users are responsible for compute resources. Hours are offered on a first-come, first-served basis and any unused time will expire at 12am PDT on June 1.

Foundry brings on Alex Mahon as chief executive 

The Foundry, makers of Nuke among other software used in media and entertainment, has named former Shine Group CEO Alex Mahon as its chief executive. Mahon joined Shine Group, the production company behind MasterChef, Spooks and Broadchurch, in 2006 and built it into a business operating in 12 countries before its sale in December 2014. She had previously held senior positions at Talkback Thames, FremantleMedia Group and RTL Group.

The Foundry’s Nuke is used by studios such as DreamWorks, Double Negative and Pixar to create visual effects. The company has also moved into commercial design software for clients such as Mercedes-Benz, and is now using its expertise to create innovative IP and technology for the next generation of virtual and augmented reality environments.

Mahon started her career as a PhD physicist and then worked at strategy consultants Mitchell Madison Group as an internet retail expert. She is currently a non-executive director of Ocado Plc and The Edinburgh International Television Festival, chairman of the RTS Awards, serves on the DCMS advisory panel on the BBC and is also Appeal Chair of The Scar Free Foundation, a national charity pioneering and transforming medical research into disfigurement.


“Having spent the last 20 years working in both technology and creative industries, I’m thrilled to be joining The Foundry because of the opportunity for dynamic change and rapid growth,” said Mahon. “The pace of change in technology, combined with The Foundry’s reputation for cutting-edge software and passion for the creative process, allows us almost limitless opportunities for growth.”

The former CEO, Bill Collis, who remains full-time as president and board member, added, “Alex has the skills to continue the extraordinary growth The Foundry has achieved over the last decade. I’m personally looking forward to focusing on identifying new initiatives and technology partnerships to maintain the company’s entrepreneurial spirit.”

Pixar to make Universal Scene Description open source

Pixar Animation Studios, whose latest feature film is Inside Out, will release its Universal Scene Description (USD) software as an open-source project by summer 2016. USD addresses the growing need in the CG film and game industries for an effective way to describe, assemble, interchange and modify high-complexity virtual scenes among the digital content creation tools employed by studios.

At the core of USD are Pixar’s techniques for composing and non-destructively editing graphics “scene graphs,” techniques that Pixar has been cultivating for close to 20 years, dating back to A Bug’s Life. These techniques, such as file-referencing, layered overrides, variation and inheritance, were completely overhauled into a robust and uniform design for Pixar’s next-generation animation system, Presto.
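
As a flavor of that composition “algebra,” here is a minimal sketch using the Python bindings that eventually shipped with USD (file names hypothetical): a shot file references an asset and layers a non-destructive override on top, leaving the asset file untouched.

```python
# File-referencing plus a layered override in USD; the asset layer on disk
# is never modified, the override lives only in the referencing file.
from pxr import Usd, UsdGeom

# Build a simple "asset" layer.
asset = Usd.Stage.CreateNew("chair_asset.usda")
UsdGeom.Sphere.Define(asset, "/Chair")
asset.GetRootLayer().Save()

# A "shot" layer references the asset and overrides it non-destructively.
shot = Usd.Stage.CreateNew("shot.usda")
chair = shot.OverridePrim("/Chair")
chair.GetReferences().AddReference("chair_asset.usda", "/Chair")
UsdGeom.Sphere(chair).GetRadiusAttr().Set(3.0)  # opinion authored in shot.usda only
shot.GetRootLayer().Save()
```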

Although it is still under active development and optimization, USD has been in use for nearly a year in the making of Pixar’s production Finding Dory.

The open-source Alembic project brought standardization of cached geometry interchange to the VFX industry. USD hopes to build on Alembic’s success, taking the next step of standardizing the “algebra” by which assets are aggregated and refined in-context.

The USD distribution will include embeddable direct 3D visualization provided by Pixar’s modern GPU renderer, Hydra, as well as plug-ins for several key VFX DCCs, comprehensive documentation, tutorials and complete Python bindings.

Pixar has already been sharing early USD snapshots with a number of industry vendors and studios for evaluation, feedback and advance incorporation. Among the vendors helping to evaluate USD are The Foundry and Fabric Software.

——

In related news, to accelerate production of its computer-animated feature films and short film content, Pixar Animation Studios is licensing a suite of Nvidia technologies related to image rendering.

The multiyear strategic licensing agreement gives Pixar access to Nvidia’s quasi-Monte Carlo (QMC) rendering methods. These methods can make rendering more efficient, especially when powered by GPUs and other massively parallel computing architectures.
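
Quasi-Monte Carlo replaces the pseudorandom samples of ordinary Monte Carlo with deterministic low-discrepancy sequences, which cover the sampling domain more evenly and therefore converge faster. A toy sketch of the idea (the licensed production methods are, of course, far more sophisticated):

```python
# Toy comparison of quasi-Monte Carlo vs plain Monte Carlo integration.
# QMC draws samples from a low-discrepancy (Halton) sequence.
import random

def halton(i: int, base: int = 2) -> float:
    """The i-th point of the Halton sequence in `base` (radical inverse)."""
    f, x = 1.0, 0.0
    while i > 0:
        f /= base
        x += f * (i % base)
        i //= base
    return x

def integrand(x: float) -> float:
    return x * x  # the integral over [0, 1] is exactly 1/3

n = 4096
qmc = sum(integrand(halton(i + 1)) for i in range(n)) / n
mc = sum(integrand(random.random()) for _ in range(n)) / n
print(f"QMC: {qmc:.6f}  MC: {mc:.6f}  exact: {1/3:.6f}")
# QMC error typically shrinks like O(log n / n), vs O(1/sqrt(n)) for plain MC.
```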

As part of the agreement, Nvidia will also contribute raytracing technology to Pixar’s OpenSubdiv Project, an open-source initiative to promote high-performance subdivision surface evaluation on massively parallel CPU and GPU architectures. The OpenSubdiv technology will enable rendering of complex Catmull-Clark subdivision surfaces in animation with incredible precision.

Chaos Group shows V-Ray for Nuke at SIGGRAPH 2015

Chaos Group’s V-Ray for Nuke is a new tool for lighting and compositing that integrates production-quality ray-traced rendering into The Foundry’s Nuke, NukeX and Nuke Studio products. V-Ray for Nuke enables compositors to take advantage of V-Ray’s lighting, shading and rendering tools inside Nuke’s node-based workflow.

V-Ray for Nuke brings the same technology used on Game of Thrones, Avengers: Age of Ultron and other film, commercial and television projects to professional compositors.

Built on the same adaptive rendering core as V-Ray’s plug-ins for Autodesk 3ds Max and Maya, V-Ray for Nuke is designed for production pipelines. V-Ray for Nuke gives compositors the ability to adjust lighting, materials and render elements up to final shot delivery. Full control of 3D scenes in Nuke lets compositors match 2D footage and 3D renders simultaneously, saving time on environment and set-extension work. V-Ray for Nuke includes a range of rendering and geometry features, with 36 beauty, matte and utility render elements, as well as effects for lights, cameras, materials and textures.