Tag Archives: Foundry Nuke

Quick Chat: Compositor Jen Howard on her move from films to spots

By Randi Altman

Industry veteran Jen Howard started her career as a model maker before transitioning to a career as a compositor. After spending the last 20 years at ILM working on features — including Avatar, Pirates of the Caribbean: At World’s End, Transformers, Hulk and Jurassic World — she recently made the move to Carbon Chicago to work on commercials.

While Howard’s official title is Nuke compositor, she has been credited on films as digital artist, lead digital artist, sequence lead, compositing lead and sequence supervisor. We recently reached out to her to talk about her transition, her past and present. Enjoy!

While you specialize in Nuke, your official title is compositor. What does that title entail?
Regardless of what software package one uses, being a compositor entails marrying together many pieces of separately shot footage so that they appear to be part of a single image sequence captured at one time.

For realistic-style productions, these pieces of photography can include live-action plates, rendered creatures, rendered simulations (like smoke or water), actors shot against greenscreen, miniatures, explosions or other practical elements shot on a stage. For more stylistic productions that list might also include hand-drawn, stop motion or rendered animations.
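The blend described above comes down to one core operation, the "over" composite: a premultiplied foreground element laid on top of a background plate. Here's a minimal sketch in plain Python — illustrative only, since real packages like Nuke operate on full image buffers, not single pixel tuples:

```python
def over(fg, bg):
    """Composite a premultiplied foreground pixel over a background pixel."""
    fr, fg_g, fb, fa = fg
    br, bg_g, bb, ba = bg
    inv = 1.0 - fa  # fraction of the background that shows through
    return (fr + br * inv, fg_g + bg_g * inv, fb + bb * inv, fa + ba * inv)

# A half-transparent red element over an opaque blue plate:
pixel = over((0.5, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0))  # (0.5, 0.0, 0.5, 1.0)
```

The "matte lines" compositors fight are what you get when this math is applied to elements whose edges weren't premultiplied or extracted cleanly.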

Sounds fun as well as challenging.
Yes, compositing presents both technical and aesthetic challenges, and this is what I love about it. Each shot is both a math problem and an art problem.

Technically, you need to be able to process the image data in the gentlest way possible while achieving a seamless blend of all your elements. No matte lines, no layering mistakes, solid tracking, proper defocus and depth hazing. Whether or not you’ve done this correctly is easy to see by looking at the final image — there is largely a right and a wrong result. The tracked-in element is either sliding, or it isn’t. However, whether you’ve made the right aesthetic decisions is a trickier question.

The less quantifiable goal for all the artists on a shot is to manifest the director’s vision … to take the image in their head and put it on the screen. This requires a lot of verbal discussion about visuals, which is tricky. Sometimes there is production art, but often there isn’t. So what does it mean when the director says, “Make it more mysterious”? Or what if they don’t even know what they want? What if they do, but the people between the director and the artists can’t communicate that vision downstream clearly?

When you build an image from scratch, almost everything can be in play — composition, contrast, saturation, depth of field, the direction and falloff of lighting, the placement of elements to frame the action and direct the eye. It is a compositor’s job to interpret the verbal input they’ve received and know what changes to make to each of these parameters to deliver the visual look and feel the director is after and to tell their story.

What would surprise people the most about what falls under that title?
I think people are still surprised at how many aspects of an effects shot are in a compositor’s control, even today when folks are pretty tech-savvy. Between the person doing the lighting and rendering and the compositor, you can create any look. And they’re surprised at the amount of “hand work” it entails, as they imagine the process to be more automated than it is.

How long have you been working in visual effects?
During college, I became a production assistant for master model maker Greg Jein, and he taught me that craft. Interesting fact — the first lesson was how to get your fingers apart after you’ve glued them together. I worked building models until about 1997, then crossed over to the digital side. So that’s about 30 years, and it’s a good thing I’m sitting down as I say that.

Kong

How has the industry changed in the time you’ve been working? What’s been good? What’s been bad?
When I was a model maker, most of that work was happening in the LA area. The VFX houses with their own model shops and stages and the stand-alone model shops were there. There was also ILM in the Bay Area. These places drew on local talent. They had a regular pool of local freelancers who knew each other, and a lot of them fell into the field by accident.

I worked with welders, machinists and sci-fi geeks good at bashing model kits who ended up working at these places because someone there knew them, and the company needed their skill set. Then all of a sudden, they were in show business. There was a family feel to most shops, and it was always fun. Some shops were union, so the schedules for projects at those places mostly fit the scope of work, and late nights were rare. The digital world was the same for a long time.

Model shops mostly went away, and as everyone knows, most digital feature effects are now done overseas, with some tasks like roto and matchmoving entirely farmed out to separate smaller companies. Crews are from all over the globe, and I’d hazard a guess that those folks got into the industry on purpose because now it is a thing.

What we’ve gained with this new paradigm is a more diverse pool of new talent who can find their way into the industry pretty much no matter where they’re from. That makes me happy because I feel strongly that everyone who has a love for this kind of work should get a shot at trying it. They bring fresh vision and new ideas to the industry and an appetite for pushing the technology further.

What’s lost is the shorthand and efficiency you get from a crew that’s worked together for a long time. They’re older and have made a lot of the mistakes already and can cut to the chase quickly. They make great mentors for the younger artists when tapped for that job, but I don’t feel that there’s been the amount of knowledge transfer there could have been — in either direction. Sometimes an “us versus them” dynamic emerges, which is really unfortunate.

Another change is the increasingly compressed schedule of feature production, which creates long hours and weekend work. This is hard on everyone, both physically and emotionally. The stress can be intense and translates into work injuries and relationship tension and is extremely hard on families with children. Studios have been pushing for these shorter schedules and cheaper prices. VFX work has been moved to countries that offer tax breaks or a generally cheaper labor pool. So quality now takes a back seat two ways: There isn’t enough time, and sometimes there isn’t enough experience.

You recently made the move to Chicago and spot work after years at ILM working on features. Can you talk about the differences in workflows?
The powerful role of advertising agencies in commercial work really surprised me. In film, the director is king, and they’re there all the way through the project, making every creative decision. In advertising, it seems the director shoots and moves on, and the agency takes up the direction of the creative vision in post production.

The shorter timeline for spot work translates into less time for 3D artists to iterate on and finesse their renders, which are time-intensive to run, so the flexibility and faster turnaround of comp means more comp work on renders, sooner. In features, 3D artists ideally have the time to get their render to a place they’re mostly happy with before comp steps in, and the comp touch can be pretty light. (Of course, feature timelines are becoming more compressed, so that’s not always true now.)

Did a particular film inspire you along this path?
Two words: Star Wars. (Not unusual, I know.) Also, when I was older, Japanese anime. Star Blazers (Yamato), specifically.

Growing up, I watched my mom struggle to make enough money to support us. She had to look for opportunity everywhere, taking whatever job was available. Mostly she didn’t particularly enjoy her jobs, and I noticed the price she paid for that – spending so many hours with people she didn’t enjoy, doing work that didn’t resonate with her. So it became very important for me to find work that I loved. It was a very conscious goal.

You mentioned school earlier. Was that film school?
Yes, I went to CalArts in Valencia, California, just outside of LA. I studied animation and motion graphics, but I discovered pretty quickly that I had no talent for animation. However, I became fascinated with the school’s optical printer and motion control camera, and I played a lot with those. The optical printer is the photochemical way of compositing that was used before digital compositing was developed. Using those analog machines helped me understand digital compositing down the road.

Porsche’s The Heist

Can you name some recent projects you’ve worked on?
My last ILM project was Star Wars: Rise of the Resistance, the new ride that recently opened at Disneyland. Other recent projects include Solo: A Star Wars Story, Transformers: The Last Knight, Kong: Skull Island and Bumblebee.

While at Carbon, I worked on a spot for Porsche called The Heist and a Corona campaign.

What projects are you most proud of?
For model making, I’m proud of the work I did on Judge Dredd, which came out in 1995. I got to spend several months just detailing out a miniature city with little greebles — making up futuristic-looking antennae and spires to give the city more scale.

Batman

On the digital side I’m really proud of the look we developed for Rango, ILM’s one and only animated feature, directed by Gore Verbinski. We brought a lot of realistic cinematic zing to that world using some practical elements in combination with rendered layers, and we built comp into the process deliberately so we could dial to our hearts’ content.

I’m also extremely proud of the first three Pirates movies, in which we did something of the opposite — brought a fantasy world to reality. The pirate characters are extreme in their design, and it was especially rewarding to see them come to life.

Where do you find inspiration now?
Chicago is amazing. I’m a fan of architecture, and I have to say, this city knocks my socks off in that department. It is such a pleasure to live somewhere where so much thought has gone into the built environment. The Art Institute is constantly inspirational, and so is my backyard, which is full of bunnies and squirrels and my wife and our two kids.

What do you do to destress from it all, especially these days?
Well, we don’t really leave the house, so right now I mostly hide in the bathroom.

Any tips for folks just starting out?
– Do whatever you’re doing now to the best of your ability, even if it isn’t the job you ultimately want or even the field you want to be in. Relationships are key, and it can be surprising how someone you worked with 10 years ago can pop up suddenly in a position to help you out later on.

– Also, don’t be scared of software. Your most important asset is your ability to know what an image needs. You can learn any software.

– Start saving for retirement now.

As for me, I’m glad I didn’t know anything and that there was no internet or social media of significance until after I finished school. It meant I had to look inward to figure out what felt right, and that really worked for me. I wouldn’t want to spoil that.

Foundry Nuke 12.1 offers upgrades across product line

Foundry has released Nuke 12.1, with UI enhancements and tool improvements across the entire Nuke family. The largest update to Blink and BlinkScript in recent years improves Cara VR node performance and introduces new tools for developers, while extended functionality in the timeline-based applications speeds up and enriches artist and team review.

Here are the upgrade highlights:
– New Shuffle node updates the classic checkboxes with an artist-friendly, node-based UI that supports up to eight channels per layer (Nuke’s limit) and consistent channel ordering, offering a more robust tool set at the heart of Nuke’s multi-channel workflow.
– Lens Distortion Workflow improvements: The LensDistortion node in NukeX is updated to have a more intuitive workflow and UI, making it easier and quicker to access the faster and more accurate algorithms and expanded options introduced in Nuke 11.
– Blink and BlinkScript improvements: Nuke’s architecture for GPU-accelerated nodes and the associated API can now store data on the GPU between operations, resulting in what Foundry says are “dramatic performance improvements to chains of nodes with GPU caching enabled.” This new functionality is available to developers using BlinkScript, along with bug fixes and a debug print out on Linux.
– Cara VR GPU performance improvements: The Cara VR nodes in NukeX have been updated to take advantage of the new GPU-caching functionality in Blink, offering performance improvements in viewer processing and rendering when using chains of these nodes together. Foundry’s internal tests on production projects show rendering time that’s up to 2.4 times faster.
– Updated Nuke Spherical Transform and Bilateral: The Cara VR versions of the Spherical Transform and Bilateral nodes have been merged with the Nuke versions of these nodes, adding increased functionality and GPU support in Nuke. Both nodes take advantage of the GPU performance improvements added in Nuke 12.1. They are now available in Nuke and no longer require a NukeX license.
– New ParticleBlinkScript node: NukeX now includes a new ParticleBlinkScript node, allowing developers to write BlinkScripts that operate on particles. Nuke 12.1 ships with more than 15 new gizmos, offering a starting point for artists who work with particle effects and developers looking to use BlinkScript.
– QuickTime audio and surround sound support: Nuke Studio, Hiero and HieroPlayer now support multi-channel audio. Artists can now import MOV containers holding audio on Linux and Windows without needing to extract and import the audio as a separate WAV file.

– Faster HieroPlayer launch and Nuke Flipbook integration: Foundry says new instances of HieroPlayer launch 1.2 times faster on Windows and up to 1.5 times faster on Linux in internal tests, improving the experience for artists using HieroPlayer for review. With Nuke 12.1, artists can also use HieroPlayer as the Flipbook tool for Nuke and NukeX, giving them more control when comparing different versions of their work in progress.
– High DPI Windows and Linux: UI scaling when using high-resolution monitors is now available on Windows and Linux, bringing all platforms in line with high-resolution display support added for macOS in Nuke 12.0 v1.
– Extended ARRI camera support: Nuke 12.1 adds support for ARRI formats, including Codex HDE .arx files, ProRes MXFs and the popular Alexa Mini LF. Foundry also says there are performance gains when debayering footage on CUDA GPUs, and there’s an SDK update.
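The GPU-caching gain Foundry describes for Blink is easy to see in the abstract: without caching, every node in a chain pays a device transfer in and out, while with caching the whole chain pays one upload and one download. Here's a toy illustration in plain Python — this is not the Blink API; `Device`, `upload` and `download` are stand-ins invented for this sketch:

```python
class Device:
    """Stand-in for a GPU: counts host<->device memory transfers."""
    def __init__(self):
        self.transfers = 0
    def upload(self, data):
        self.transfers += 1
        return data
    def download(self, data):
        self.transfers += 1
        return data

def run_chain(ops, image, device, cache_on_gpu):
    if cache_on_gpu:
        data = device.upload(image)       # one upload for the whole chain
        for op in ops:
            data = op(data)               # intermediates stay on the device
        return device.download(data)      # one download at the end
    result = image
    for op in ops:                        # a full round trip per node
        result = device.download(op(device.upload(result)))
    return result

ops = [lambda x: x * 2, lambda x: x + 1, lambda x: x * 3]

dev = Device()
res_uncached = run_chain(ops, 1, dev, cache_on_gpu=False)
uncached_transfers = dev.transfers        # 6: a round trip per node

dev = Device()
res_cached = run_chain(ops, 1, dev, cache_on_gpu=True)
cached_transfers = dev.transfers          # 2: one round trip for the chain
```

The results are identical either way; only the transfer overhead changes, which is where the claimed speedups on chains of Cara VR nodes would come from.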

VFX pipeline trends for 2020

By Simon Robinson

A new year, more trends — some burgeoning, and others that have been dominating industry discussions for a while. Underpinning each is the common sentiment that 2020 seems especially geared toward streamlining artist workflows, more so than ever before.

There’s an increasing push for efficiency; not just through hardware but through better business practices and solutions to throughput problems.

Exciting times lie ahead for artists and studios everywhere. I believe the trends below form the pillars of this key industry mission for 2020.

Machine Learning Will Make Better, Faster Artists
Machines are getting smarter. AI software is becoming more universally applied in the VFX industry, and with this comes benefits and implications for artist workflows.

As adoption of machine learning increases, the core challenge for 2020 lies in artist direction and participation, especially since the M.O. of machine learning is its ability to solve entire problems on its own.

The issue is this: if you rely on something 99.9% of the time, what happens if it fails in that extra 0.1%? Can you fix it? While ML means less room for human error, will people have the skills to fix something gone wrong if they don’t need them anymore?

So this issue necessitates building a bridge between artist and algorithm. ML can do the hard work, giving artists the time to get creative and perfect their craft in the final stages.

Gemini Man

We’ve seen this pay off in the face of accessible and inexpensive deepfake technology giving rise to “quick and easy” deepfakes, which rely entirely on ML. In contrast to these, bridging the uncanny valley remains in the realm of highly-skilled artists, requiring thought, artistry and care to produce something that tricks the human eye. Weta Digital’s work on Gemini Man is a prime example.

As massive projects like these continue to emerge, studios strive for efficiency and the ability to produce at scale. Since ML and AI are all about data, the manipulation of both can unlock endless potential for the speed and scale at which artists can operate.

Foundry’s own efforts in this regard revolve around improving the persistence and availability of captured data. We’re figuring out how to deliver data in a more sensible way downstream, from initial capture to timestamping and synchronization, and then final arrangement in an easy, accessible format.

Underpinning our research into this is Universal Scene Description (USD), which you’ve probably heard about…

USD Becomes Uniform
Despite the legacy and prominence it carries from its development at Pixar, the relatively recent open-sourcing and gradual adoption of Universal Scene Description mean that it’s still maturing for wider pipelines and workflows.

New iterations of USD are now being released at a three-month cadence, where before it used to be every two months. Each new release brings improvements as growing pains and teething issues are ironed out, and the slower pace provides some respite for artists who rely on specific versions of USD.

But challenges still exist, namely mismatched USD pipelines and scattered documentation, which means that solutions can’t be easily found. Currently, no one is officially rubber-stamping USD best practices.

Capturing volumetric datasets for future testing.

To solve this issue, the industry needs a universal application of USD so it can exist in pipelines as an application-standard plugin to prevent an explosion of multiple variants of USD, which may cause further confusion.

If this comes off, documentation could be made uniform, and information could be shared across software, teams and studios with even more ease and efficiency.

It’ll make Foundry’s life easier, too. USD is vital to us to power interoperability in our products, allowing clients to extend their software capabilities on top of what we do ourselves.

At Foundry, our lighting tool, Katana, uses USD Hydra tech as the basis for much improved viewer experiences. Most recently, its Advanced Viewport Technology aims at delivering a consistent visual experience across software.

This wouldn’t be possible without USD. Even in its current state, the benefits are tangible, and its core principles — flexibility, modularity, interoperability — underpin 2020’s next big trends.

Artist Pipelines Will Look More Iterative 
The industry is asking, “How can you be more iterative through everything?” Calls for this will only grow louder as we move into next year.

There’s an increasing push for efficiency as the common sentiment prevails: too much work, not enough people to do it. While maximizing hardware usage might seem like a go-to solution to this, the actual answer lies in solving throughput problems by improving workflows and facilitating sharing between studios and artists.

Increasingly, VFX pipelines don’t work well as a waterfall structure anymore, where each stage is done, dusted and passed on to the next department in a structured, rigid process.

Instead, artists are thinking about how data persists throughout their pipeline and how to make use of it in a smart way. The main aim is to iterate on everything simultaneously for a more fluid, consistent experience across teams and studios.

USD helps tremendously here, since it captures all of the data layers and iterations in one. Artists can go to any one point in their pipeline, change different aspects of it, and it’s all maintained in one neat “chunk.” No waterfalls here.
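The layering idea can be sketched in a few lines of plain Python. This is a toy, not the USD API — the attribute names are invented, and in this sketch later layers simply win, whereas USD defines its own composition strength ordering:

```python
# A scene is composed from ordered layers of "opinions"; each department
# edits only its own layer, and recomposing the scene is cheap.
base_layer = {"sphere.radius": 1.0, "sphere.color": "gray"}
anim_layer = {"sphere.radius": 1.5}             # animation dept's opinion
lighting_layer = {"sphere.color": "warm_gray"}  # lighting dept's opinion

def compose(*layers):
    """Merge layers into one composed scene; later layers override here."""
    composed = {}
    for layer in layers:
        composed.update(layer)
    return composed

scene = compose(base_layer, anim_layer, lighting_layer)
# base_layer is untouched; anim and lighting iterate simultaneously
```

Because no layer destroys another, two departments can iterate on the same asset at once — the "no waterfalls" workflow described above.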

Compositing in particular benefits from this new style of working. Being able to easily review in context lends an immense amount of efficiency and creativity to artists working in post production.

That’s Just the Beginning
Other drivers for artist efficiency that may gain traction in 2020 include: working across multiple shots (currently featured in Nuke Studio), process automation, and volumetric-style workflows to let artists work with 3D representations featuring depth and volume.

The bottom line is that 2020 looks to be the year of the artist — and we can’t wait.


Simon Robinson is the co-founder and chief scientist at Foundry.

Jody Madden upped to CEO at Foundry

Jody Madden, who joined Foundry in 2013 and has held positions as chief operating officer and, most recently, chief customer officer and chief product officer, has been promoted to chief executive officer. She takes over the role from Craig Rodgerson.

Madden, who has a rich background in VFX, has been with Foundry for six years. Prior to joining the company, she spent more than a decade in technology management and studio leadership roles at Industrial Light & Magic, Lucasfilm and Digital Domain after graduating from Stanford University.

“During a time of rapid change in creative industries, Foundry is committed to delivering innovations in workflow and future-looking research,” says Madden. “As the company continues to grow, delivering further improvements in speed, quality and user experience remains a core focus to enable our customers to meet the demands of their markets.”

“Jody is well known for her collaborative leadership style, and this has been crucial in enabling our engineering, product and research teams to achieve results for our customers and build the foundation for the future,” says Simon Robinson, co-founder/chief scientist. “I have worked closely with Jody and have seen the difference she has made to the business, so I am extremely excited to see where she will lead Foundry in her new role and look forward to continuing to work with her.”

VFX and color for new BT spot via The Mill

UK telco BT wanted a television spot that would showcase the WiFi capabilities of its broadband hub and underline its promise of “whole home coverage.” Sonny director Fredrik Bond visualized a fun and fast-paced spot for agency AMV BBDO, and The Mill London was brought on board to help with VFX and color. The spot is called Complete WiFi.

In the piece, the hero comes home to find it full of soldiers, angels, dancers, fairies, a giant and a horse — characters from the myriad of games and movies the family are watching simultaneously. Obviously, the look depends upon multiple layers of compositing, which have to be carefully scaled to be convincing.

They also need to be very carefully color matched, with similar lighting applied, so all the layers sit together. In a traditional workflow, this would have meant a lot of loops between VFX and grading to get the best from each layer, and a certain amount of compromise as the colorist imposed changes on virtual elements to make the final grade.

To avoid this, and to speed progress, The Mill recently started using BLG for Flame, a FilmLight plugin that allows Baselight grades to be rendered identically within Flame — with no back and forth to the color suite to render out new versions of shots. It means the VFX supervisor is continually seeing the latest grade, and the colorist can access the latest Flame elements to match them in.

“Of course it was frustrating to grade a sequence and then drop the VFX on top,” explains VFX supervisor Ben Turner. “To get the results our collaborators expect, we were constantly pushing material to and fro. We could end up with more than a hundred publishes on a single job.”

With the BLG for Flame plugin, the VFX artist sees the latest Baselight grade automatically applied, either from FilmLight’s BLG format files or directly from a Baselight scene, even while the scene is still being graded — although Turner says he prefers to be warned when updates are coming.

This works because all systems have access to the raw footage. Baselight grades non-destructively, by building up layers of metadata that are imposed in realtime. The metadata includes all the grading information, multiple windows and layers, effects and relights, textures and more – the whole process. This information can be imposed on the raw footage by any BLG-equipped device (there are Baselight Editions software plugins for Avid and Nuke, too) for realtime rendering and review.
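The non-destructive model described above is simple to sketch: the raw footage never changes, and a grade is just an ordered stack of operations (metadata) replayed on playback. A conceptual illustration in plain Python — the names here are invented for this sketch, not FilmLight's format:

```python
raw_pixel = 0.18  # a scene-linear gray value from the raw footage

# The "grade" is metadata: an ordered stack of named operations,
# stored separately from the pixels themselves.
grade = [
    ("exposure", lambda v: v * 2.0),   # one stop up
    ("lift",     lambda v: v + 0.01),  # slight black lift
]

def apply_grade(value, grade_stack):
    """Impose the grading metadata on raw data at view time."""
    for _name, op in grade_stack:
        value = op(value)
    return value

viewed = apply_grade(raw_pixel, grade)  # what the review room sees
# raw_pixel is untouched; layers can be reordered, edited or removed
# and the result simply replayed, with nothing baked in.
```

Because any BLG-equipped device can replay the same metadata against the same raw footage, the comp suite, the grading suite and a calibrated room in another city all see an identical image.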

That is important because it also allows remote viewing. For this BT spot, director Bond was back in Los Angeles by the time of the post. He sat in a calibrated room in The Mill in LA and could see the graded images at every stage. He could react quickly to the first animation tests.

“I can render a comp and immediately show it to a client with the latest grade from The Mill’s colorist, Dave Ludlam,” says Turner. “When the client really wants to push a certain aspect of the image, we can ensure through both comp and grade that this is done sympathetically, maintaining the integrity of the image.”

(L-R) VFX supervisor Ben Turner and colorist Dave Ludlam.

Turner admits that it means more to-ing and fro-ing, but that is a positive benefit. “If I need to talk to Dave then I can pop in and solve a specific challenge in minutes. By creating the CGI to work with the background, I know that Dave will never have to push anything too hard in the final grade.”

Ludlam agrees that this is a complete change, but extremely beneficial. “With this new process, I am setting looks but I am not committing to them,” he says. “Working together I get a lot more creative input while still achieving a much slicker workflow. I can build the grade and only lock it down when everyone is happy.

“It is a massive speed-up, but more importantly it has made our output far superior. It gives everyone more control and — with every job under huge time pressure — it means we can respond quickly.”

The spot was offlined by Patric Ryan from Marshall Street. Audio post was via 750mph with sound designers Sam Ashwell and Mike Bovill.

Roper Technologies set to acquire Foundry

Roper Technologies, a technology company and a constituent of the S&P 500, Fortune 1000 and Russell 1000 indices, is set to purchase Foundry. The deal is expected to close in April 2019, subject to regulatory approval and customary closing conditions. Foundry makes software tools used to create visual effects and 3D content for the media and entertainment world, including Nuke, Modo, Mari and Katana.

Craig Rodgerson

It’s a substantial move that enables Foundry to remain an independent company, with Roper assuming ownership from Hg. Roper has a successful history of acquiring well-run technology companies in niche markets that have strong, sustainable growth potential.

“We’re excited about the opportunities this partnership brings. Roper understands our strategy and chose to invest in us to help us realize our ambitious growth plans,” says Foundry CEO Craig Rodgerson. “This move will enable us to continue investing in what really matters to our customers: continued product improvement, R&D and technology innovation and partnerships with global leaders in the industry.”

Behind the Title: Senior compositing artist Marcel Lemme

We recently reached out to Marcel Lemme to find out more about how he works, his background and how he relaxes.

What is your job title and where are you based?
I’m a senior compositing artist based out of Hamburg, Germany.

What does your job entail?
I spend about 90 percent of my time working on commercial jobs for local and international companies like BMW, Audi and Nestle, but also dabble in feature films, corporate videos and music videos. On a regular day, I’m handling everything from job breakdowns to set supervision to conform. I’m also doing shot management for the team, interacting with clients, showing clients work and some compositing. Client review sessions and final approvals are regular occurrences for me too.

What would surprise people the most about the responsibilities that fall under that title?
When it comes to client-attended sessions, you have to be part clown, part mind reader. Half the job is being a good artist; the other half is keeping clients happy. You have to anticipate what the client will want and balance that with what you know looks best. I not only have to create and keep a good mood in the room, but also problem-solve with a smile.

What’s your favorite part of your job?
I love solving problems when compositing solo. There’s nothing better than tackling a tough project and getting results you’re proud of.

What’s your least favorite?
Sometimes the client isn’t sure what they want, which can make the job harder.

What’s your most productive time of day?
I’m definitely not a morning guy, so the evening — I’m more productive at night.

If you didn’t have this job, what would you be doing instead?
I’ve asked myself this question a lot, but honestly, I’ve never come up with a good answer.

How’d you get your first job, and did you know this was your path early on?
I fell into it. I was young and thought I’d give computer graphics a try, so I reached out to someone who knew someone, and before I knew it I was interning at a company in Hamburg, which is how I came to know online editing. At the time, Quantel mostly dominated the industry with Editbox and Henry, and Autodesk Flame and Flint were just emerging. I dove in and started using all the technology I could get my hands on, and gradually started securing jobs based on recommendations.

Which tools are you using today, and why?
I use whatever the client and/or the project demands, whether it’s Flame or Foundry’s Nuke; for tracking, I often use The Pixel Farm’s PFTrack and Boris FX’s Mocha. For commercial spots, I’ll do a lot of the conform and shot management in Flame and then hand off the shots to other team members. Or, if I do it myself, I’ll finish in Flame because I know I can do it fast.

I use Flame because it gives me different ways to achieve a certain look or find a solution to a problem. I can also play a clip at any resolution with just two clicks in Flame, which is important when you’re in a room with clients who want to see different versions on the fly. The recent open clip updates and Python integration have also saved me time. I can import and review shots, with automatic versions coming in, and build new tools or automate tedious processes in the post chain that have typically slowed me down.

Tell us about some recent project work.
I recently worked on a project for BMW as a compositing supervisor and collaborated with eight other compositors to finish a number of versions in a short amount of time. We did shot management, compositing, reviewing, versioning and such in Flame, along with individual shot compositing in Nuke and some tracking in Mocha Pro.

What is the project that you are most proud of?
There’s no one project that stands out in particular, but overall, I’m proud of jobs like the BMW spots, where I’ve led a team of artists and everything just works and flows. It’s rewarding when the client doesn’t know what you did or how you did it, but loves the end result.

Where do you find inspiration for your projects?
The obvious answer here is other commercials, but I also watch a lot of movies and, of course, spend time on the Internet.

Name three pieces of technology you can’t live without.
The off button on the telephone (they should really make that bigger), anything related to cinematography or digital cinema, and streaming technology.

What social media channels do you follow?
I’ve managed to avoid Facebook, but I do peek at Twitter and Instagram from time to time. Twitter can be a great quick reference for regional news or finding out about new technology and/or industry trends.

Do you listen to music while you work?
Less now than I did when I was younger. Most of the time, I can’t, as I’m juggling too much and it’s distracting. When I do listen to music, I appreciate techno, classical and singer/songwriter stuff; whatever sets the mood for the shots I’m working on. Right now, I’m into Iron and Wine and Trentemøller, a Danish electronic music producer.

How do you de-stress from the job?
My drive home. It can take anywhere from half an hour to an hour, depending on traffic, and that’s my alone time. Sometimes I listen to music, other times I sit in silence. I cool down and prepare to switch gears before heading home to be with my family.

Foundry’s Nuke and Hiero 11.0 now available

Foundry has made available Nuke and Hiero 11.0, the next major release for the Nuke line of products, including Nuke, NukeX, Nuke Studio, Hiero and HieroPlayer. The Nuke family is being updated to VFX Platform 2017, which includes several major updates to key libraries used within Nuke, including Python, PySide and Qt.

The update also introduces Live Groups, a new type of group node that offers a powerful collaborative workflow for sharing work among artists. Live Groups referenced in other scripts automatically update when a script is loaded, without the need to render intermediate stages.

Nuke Studio’s intelligent background rendering is now available in Nuke and NukeX. The Frame Server takes advantage of available resource on your local machine, enabling you to continue working while rendering is happening in the background. The LensDistortion node has been completely revamped, with added support for fisheye and wide-angle lenses and the ability to use multiple frames to produce better results. Nuke Studio now has new GPU-accelerated disk caching that allows users to cache part or all of a sequence to disk for smoother playback of more complex sequences.