Tag Archives: Foundry Nuke

Foundry Nuke 12.1 offers upgrades across product line

Foundry has released Nuke 12.1, with UI enhancements and tool improvements across the entire Nuke family. The largest update to Blink and BlinkScript in recent years improves Cara VR node performance and introduces new tools for developers, while extended functionality in the timeline-based applications speeds up and enriches artist and team review.

Here are the upgrade highlights:
– New Shuffle node updates the classic checkboxes with an artist-friendly, node-based UI that supports up to eight channels per layer (Nuke’s limit) and consistent channel ordering, offering a more robust tool set at the heart of Nuke’s multi-channel workflow.
– Lens Distortion Workflow improvements: The LensDistortion node in NukeX is updated to have a more intuitive workflow and UI, making it easier and quicker to access the faster and more accurate algorithms and expanded options introduced in Nuke 11.
– Blink and BlinkScript improvements: Nuke’s architecture for GPU-accelerated nodes and the associated API can now store data on the GPU between operations, resulting in what Foundry says are “dramatic performance improvements to chains of nodes with GPU caching enabled.” This new functionality is available to developers using BlinkScript, along with bug fixes and a debug print out on Linux.
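For context, a Blink kernel is a small C-like program that Nuke compiles at runtime for CPU or GPU; the new GPU caching benefits chains of nodes running kernels like this. Below is a minimal sketch of a pixel-wise BlinkScript kernel (a simple invert, written against the documented Blink kernel API), intended only to illustrate the kind of code involved — it runs inside a BlinkScript node, not standalone:

```cpp
// Minimal pixel-wise Blink kernel: inverts each channel of the input.
kernel InvertKernel : ImageComputationKernel<ePixelWise>
{
  Image<eRead, eAccessPoint, eEdgeClamped> src;  // input image
  Image<eWrite> dst;                             // output image

  void process() {
    // With GPU caching enabled, the result of this node can stay on the
    // GPU for the next Blink-based node in the chain, avoiding a
    // round-trip to system memory between operations.
    dst() = 1.0f - src();
  }
};
```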
– Cara VR GPU performance improvements: The Cara VR nodes in NukeX have been updated to take advantage of the new GPU-caching functionality in Blink, offering performance improvements in viewer processing and rendering when using chains of these nodes together. Foundry’s internal tests on production projects show rendering time that’s up to 2.4 times faster.
– Updated Nuke Spherical Transform and Bilateral: The Cara VR versions of the Spherical Transform and Bilateral nodes have been merged with the Nuke versions of these nodes, adding increased functionality and GPU support in Nuke. Both nodes take advantage of the GPU performance improvements added in Nuke 12.1. They are now available in Nuke and no longer require a NukeX license.
– New ParticleBlinkScript node: NukeX now includes a new ParticleBlinkScript node, allowing developers to write BlinkScripts that operate on particles. Nuke 12.1 ships with more than 15 new gizmos, offering a starting point for artists who work with particle effects and developers looking to use BlinkScript.
– QuickTime audio and surround sound support: Nuke Studio, Hiero and HieroPlayer now support multi-channel audio. Artists can now import .mov containers holding audio on Linux and Windows without needing to extract and import the audio as a separate .wav file.
– Faster HieroPlayer launch and Nuke Flipbook integration: Foundry says new instances of HieroPlayer launch 1.2 times faster on Windows and up to 1.5 times faster on Linux in internal tests, improving the experience for artists using HieroPlayer for review. With Nuke 12.1, artists can also use HieroPlayer as the Flipbook tool for Nuke and NukeX, giving them more control when comparing different versions of their work in progress.
– High DPI Windows and Linux: UI scaling when using high-resolution monitors is now available on Windows and Linux, bringing all platforms in line with high-resolution display support added for macOS in Nuke 12.0 v1.
– Extended ARRI camera support: Nuke 12.1 adds support for ARRI formats, including Codex HDE .arx files, ProRes MXFs and the popular Alexa Mini LF. Foundry also says there are performance gains when debayering footage on CUDA GPUs, and there’s an SDK update.

VFX pipeline trends for 2020

By Simon Robinson

A new year, more trends — some burgeoning, and others that have been dominating industry discussions for a while. Underpinning each is the common sentiment that 2020 seems especially geared toward streamlining artist workflows, more so than ever before.

There’s an increasing push for efficiency; not just through hardware but through better business practices and solutions to throughput problems.

Exciting times lie ahead for artists and studios everywhere. I believe the trends below form the pillars of this key industry mission for 2020.

Machine Learning Will Make Better, Faster Artists
Machines are getting smarter. AI software is becoming more universally applied in the VFX industry, and with this comes benefits and implications for artist workflows.

As adoption of machine learning increases, the core challenge for 2020 lies in artist direction and participation, especially since the M.O. of machine learning is its ability to solve entire problems on its own.

The issue is this: if you rely on something 99.9% of the time, what happens if it fails in that extra 0.1%? Can you fix it? While ML means less room for human error, will people have the skills to fix something gone wrong if they don’t need them anymore?

This issue necessitates building a bridge between artist and algorithm: ML can do the hard work, giving artists the time to get creative and perfect their craft in the final stages.

Gemini Man

We’ve seen this play out as accessible, inexpensive deepfake technology gives rise to “quick and easy” deepfakes that rely entirely on ML. In contrast, crossing the uncanny valley remains the realm of highly skilled artists, requiring thought, artistry and care to produce something that tricks the human eye. Weta Digital’s work on Gemini Man is a prime example.

As massive projects like these continue to emerge, studios strive for efficiency and the ability to produce at scale. Since ML and AI are all about data, smart manipulation of that data can unlock endless potential for the speed and scale at which artists operate.

Foundry’s own efforts in this regard revolve around improving the persistence and availability of captured data. We’re figuring out how to deliver data in a more sensible way downstream, from initial capture to timestamping and synchronization, and then final arrangement in an easy, accessible format.

Underpinning our research into this is Universal Scene Description (USD), which you’ve probably heard about…

USD Becomes Uniform
Despite its legacy and prominence from its development at Pixar, the relatively recent open-sourcing and gradual adoption of Universal Scene Description mean that it’s still maturing for wider pipelines and workflows.

New iterations of USD are now being released at a three-month cadence, where before it used to be every two months. Each new release brings improvements as growing pains and teething issues are ironed out, and the slower pace provides some respite for artists who rely on specific versions of USD.

But challenges still exist, namely mismatched USD pipelines and scattered documentation, which mean that solutions can’t easily be found. Currently, no one is officially rubber-stamping USD best practices.

Capturing volumetric datasets for future testing.

To solve this issue, the industry needs a universal application of USD, so it can exist in pipelines as an application-standard plugin. That would prevent an explosion of multiple USD variants, which could cause further confusion.

If this comes off, documentation could be made uniform, and information could be shared across software, teams and studios with even more ease and efficiency.

It’ll make Foundry’s life easier, too. USD is vital to us to power interoperability in our products, allowing clients to extend their software capabilities on top of what we do ourselves.

At Foundry, our lighting tool, Katana, uses USD’s Hydra technology as the basis for much-improved viewer experiences. Most recently, its Advanced Viewport Technology aims to deliver a consistent visual experience across software.

This wouldn’t be possible without USD. Even in its current state, the benefits are tangible, and its core principles — flexibility, modularity, interoperability — underpin 2020’s next big trends.

Artist Pipelines Will Look More Iterative 
The industry is asking, “How can you be more iterative through everything?” Calls for this will only grow louder as we move into next year.

There’s an increasing push for efficiency as the common sentiment prevails: too much work, not enough people to do it. While maximizing hardware usage might seem like a go-to solution to this, the actual answer lies in solving throughput problems by improving workflows and facilitating sharing between studios and artists.

Increasingly, VFX pipelines don’t work well as a waterfall structure anymore, where each stage is done, dusted, and passed onto the next department in a structured, rigid process.

Instead, artists are thinking about how data persists throughout their pipeline and how to make use of it in a smart way. The main aim is to iterate on everything simultaneously for a more fluid, consistent experience across teams and studios.

USD helps tremendously here, since it captures all of the data layers and iterations in one. Artists can go to any one point in their pipeline, change different aspects of it, and it’s all maintained in one neat “chunk.” No waterfalls here.

Compositing in particular benefits from this new style of working. Being able to easily review in context lends an immense amount of efficiency and creativity to artists working in post production.

That’s Just the Beginning
Other drivers for artist efficiency that may gain traction in 2020 include: working across multiple shots (currently featured in Nuke Studio), process automation, and volumetric-style workflows to let artists work with 3D representations featuring depth and volume.

The bottom line is that 2020 looks to be the year of the artist — and we can’t wait.


Simon Robinson is the co-founder and chief scientist at Foundry.

Jody Madden upped to CEO at Foundry

Jody Madden, who joined Foundry in 2013 and has held positions as chief operating officer and, most recently, chief customer officer and chief product officer, has been promoted to chief executive officer. She takes over the role from Craig Rodgerson.

Madden, who has a rich background in VFX, has been with Foundry for six years. Prior to joining the company, she spent more than a decade in technology management and studio leadership roles at Industrial Light & Magic, Lucasfilm and Digital Domain after graduating from Stanford University.

“During a time of rapid change in creative industries, Foundry is committed to delivering innovations in workflow and future looking research,” says Madden.  “As the company continues to grow, delivering further improvements in speed, quality and user-experience remains a core focus to enable our customers to meet the demands of their markets.”

“Jody is well known for her collaborative leadership style and this has been crucial in enabling our engineering, product and research teams to achieve results for our customers and build the foundation for the future,” says Simon Robinson, co-founder/chief scientist. “I have worked closely with Jody and have seen the difference she has made to the business so I am extremely excited to see where she will lead Foundry in her new role and look forward to continuing to work with her.”

VFX and color for new BT spot via The Mill

UK telco BT wanted to create a television spot that showcased the WiFi capabilities of its broadband hub and underlined its promise of “whole home coverage.” Sonny director Fredrik Bond visualized a fun and fast-paced spot for agency AMV BBDO, and The Mill London was brought on board to help with VFX and color. The spot is called Complete WiFi.

In the piece, the hero comes home to find it full of soldiers, angels, dancers, fairies, a giant and a horse — characters from the myriad of games and movies the family are watching simultaneously. Obviously, the look depends upon multiple layers of compositing, which have to be carefully scaled to be convincing.

They also need to be very carefully color matched, with similar lighting applied, so all the layers sit together. In a traditional workflow, this would have meant a lot of loops between VFX and grading to get the best from each layer, and a certain amount of compromise as the colorist imposed changes on virtual elements to make the final grade.

To avoid this, and to speed progress, The Mill recently started using BLG for Flame, a FilmLight plugin that allows Baselight grades to be rendered identically within Flame — with no back and forth to the color suite to render out new versions of shots. It means the VFX supervisor is continually seeing the latest grade, and the colorist can access the latest Flame elements to match them in.

“Of course it was frustrating to grade a sequence and then drop the VFX on top,” explains VFX supervisor Ben Turner. “To get the results our collaborators expect, we were constantly pushing material to and fro. We could end up with more than a hundred publishes on a single job.”

With the BLG for Flame plugin, the VFX artist sees the latest Baselight grade automatically applied, either from FilmLight’s BLG format files or directly from a Baselight scene, even while the scene is still being graded — although Turner says he prefers to be warned when updates are coming.

This works because all systems have access to the raw footage. Baselight grades non-destructively, by building up layers of metadata that are imposed in realtime. The metadata includes all the grading information, multiple windows and layers, effects and relights, textures and more – the whole process. This information can be imposed on the raw footage by any BLG-equipped device (there are Baselight Editions software plugins for Avid and Nuke, too) for realtime rendering and review.

That is important because it also allows remote viewing. For this BT spot, director Bond was back in Los Angeles by the time post began. He sat in a calibrated room at The Mill in LA and could see the graded images at every stage, allowing him to react quickly to the first animation tests.

“I can render a comp and immediately show it to a client with the latest grade from The Mill’s colorist, Dave Ludlam,” says Turner. “When the client really wants to push a certain aspect of the image, we can ensure through both comp and grade that this is done sympathetically, maintaining the integrity of the image.”

(L-R) VFX supervisor Ben Turner and colorist Dave Ludlam.

Turner admits that it means more to-ing and fro-ing, but that is a positive benefit. “If I need to talk to Dave then I can pop in and solve a specific challenge in minutes. By creating the CGI to work with the background, I know that Dave will never have to push anything too hard in the final grade.”

Ludlam agrees that this is a complete change, but extremely beneficial. “With this new process, I am setting looks but I am not committing to them,” he says. “Working together I get a lot more creative input while still achieving a much slicker workflow. I can build the grade and only lock it down when everyone is happy.

“It is a massive speed-up, but more importantly it has made our output far superior. It gives everyone more control and — with every job under huge time pressure — it means we can respond quickly.”

The spot was offlined by Patric Ryan from Marshall Street. Audio post was via 750mph with sound designers Sam Ashwell and Mike Bovill.

Roper Technologies set to acquire Foundry

Roper Technologies, a technology company and a constituent of the S&P 500, Fortune 1000 and Russell 1000 indices, is expected to purchase Foundry — the deal is expected to close in April 2019, subject to regulatory approval and customary closing conditions. Foundry makes software tools used to create visual effects and 3D content for the media and entertainment world, including Nuke, Modo, Mari and Katana.

Craig Rodgerson

It’s a substantial move that enables Foundry to remain an independent company, with Roper assuming ownership from Hg. Roper has a successful history of acquiring well-run technology companies in niche markets that have strong, sustainable growth potential.

“We’re excited about the opportunities this partnership brings. Roper understands our strategy and chose to invest in us to help us realize our ambitious growth plans,” says Foundry CEO Craig Rodgerson. “This move will enable us to continue investing in what really matters to our customers: continued product improvement, R&D and technology innovation and partnerships with global leaders in the industry.”

Behind the Title: Senior compositing artist Marcel Lemme

We recently reached out to Marcel Lemme to find out more about how he works, his background and how he relaxes.

What is your job title and where are you based?
I’m a senior compositing artist based out of Hamburg, Germany.

What does your job entail?
I spend about 90 percent of my time working on commercial jobs for local and international companies like BMW, Audi and Nestle, but also dabble in feature films, corporate videos and music videos. On a regular day, I’m handling everything from job breakdowns to set supervision to conform. I’m also doing shot management for the team, interacting with clients, showing clients work and some compositing. Client review sessions and final approvals are regular occurrences for me too.

What would surprise people the most about the responsibilities that fall under that title?
When it comes to client-attended sessions, you have to be part clown, part mind-reader. Half the job is being a good artist; the other half is keeping clients happy. You have to anticipate what the client will want and balance that with what you know looks best. I not only have to create and keep a good mood in the room, but also problem-solve with a smile.

What’s your favorite part of your job?
I love solving problems when compositing solo. There’s nothing better than tackling a tough project and getting results you’re proud of.

What’s your least favorite?
Sometimes the client isn’t sure what they want, which can make the job harder.

What’s your most productive time of day?
I’m definitely not a morning guy, so the evening — I’m more productive at night.

If you didn’t have this job, what would you be doing instead?
I’ve asked myself this question a lot, but honestly, I’ve never come up with a good answer.

How’d you get your first job, and did you know this was your path early on?
I fell into it. I was young and thought I’d give computer graphics a try, so I reached out to someone who knew someone, and before I knew it I was interning at a company in Hamburg, which is how I came to know online editing. At the time, Quantel mostly dominated the industry with Editbox and Henry, and Autodesk Flame and Flint were just emerging. I dove in and started using all the technology I could get my hands on, and gradually started securing jobs based on recommendations.

Which tools are you using today, and why?
I use whatever the client and/or the project demands, whether it’s Autodesk Flame or Foundry’s Nuke; for tracking, I often use The Pixel Farm’s PFTrack and Boris FX’s Mocha. For commercial spots, I’ll do a lot of the conform and shot management in Flame and then hand off the shots to other team members. Or, if I do it myself, I’ll finish in Flame because I know I can do it fast.

I use Flame because it gives me different ways to achieve a certain look or find a solution to a problem. I can also play a clip at any resolution with just two clicks in Flame, which is important when you’re in a room with clients who want to see different versions on the fly. The recent open clip updates and python integration have also saved me time. I can import and review shots, with automatic versions coming in, and build new tools or automate tedious processes in the post chain that have typically slowed me down.

Tell us about some recent project work.
I recently worked on a project for BMW as a compositing supervisor, collaborating with eight other compositors to finish a number of versions in a short amount of time. We did shot management, compositing, reviewing, versioning and such in Flame, along with individual shot compositing in Nuke and some tracking in Mocha Pro.

What is the project that you are most proud of?
There’s no one project that stands out in particular, but overall, I’m proud of jobs like the BMW spots, where I’ve led a team of artists and everything just works and flows. It’s rewarding when the client doesn’t know what you did or how you did it, but loves the end result.

Where do you find inspiration for your projects?
The obvious answer here is other commercials, but I also watch a lot of movies and, of course, spend time on the Internet.

Name three pieces of technology you can’t live without.
The off button on the telephone (they should really make that bigger), anything related to cinematography or digital cinema, and streaming technology.

What social media channels do you follow?
I’ve managed to avoid Facebook, but I do peek at Twitter and Instagram from time to time. Twitter can be a great quick reference for regional news or finding out about new technology and/or industry trends.

Do you listen to music while you work?
Less now than I did when I was younger. Most of the time, I can’t, as I’m juggling too much and it’s distracting. When I do listen to music, I appreciate techno, classical and singer/songwriter stuff; whatever sets the mood for the shots I’m working on. Right now, I’m into Iron and Wine and Trentemøller, a Danish electronic music producer.

How do you de-stress from the job?
My drive home. It can take anywhere from half an hour to an hour, depending on the traffic, and that’s my alone time. Sometimes I listen to music, other times I sit in silence. I cool down and prepare to switch gears before heading home to be with my family.

Foundry’s Nuke and Hiero 11.0 now available

Foundry has made available Nuke and Hiero 11.0, the next major release for the Nuke line of products, including Nuke, NukeX, Nuke Studio, Hiero and HieroPlayer. The Nuke family is being updated to VFX Platform 2017, which includes several major updates to key libraries used within Nuke, including Python, PySide and Qt.

The update also introduces a new type of group node, which offers a powerful new collaborative workflow for sharing work among artists. Live Groups referenced in other scripts automatically update when a script is loaded, without the need to render intermediate stages.

Nuke Studio’s intelligent background rendering is now available in Nuke and NukeX. The Frame Server takes advantage of available resource on your local machine, enabling you to continue working while rendering is happening in the background. The LensDistortion node has been completely revamped, with added support for fisheye and wide-angle lenses and the ability to use multiple frames to produce better results. Nuke Studio now has new GPU-accelerated disk caching that allows users to cache part or all of a sequence to disk for smoother playback of more complex sequences.