
Category Archives: VFX

Game of Thrones’ Emmy-nominated visual effects

By Iain Blair

Once upon a time, only glamorous movies could afford the time and money it took to create truly imaginative and spectacular visual effects. Meanwhile, television shows either tried to avoid them altogether or had to rely on hand-me-downs. But the digital revolution changed all that, with technological advances and new tools quickly leveling the playing field. Today, television is giving the movies a run for their money when it comes to sophisticated visual effects, as evidenced by HBO’s blockbuster series Game of Thrones.

Mohsen Mousavi

This fantasy series was recently Emmy-nominated a record-busting 32 times for its eighth and final season — including one for its visually ambitious VFX in the penultimate episode, “The Bells.”

The epic mass destruction presented Scanline’s VFX supervisor, Mohsen Mousavi, and his team with many challenges. But his expertise in high-end visual effects and his reputation for constant innovation in advanced methodology made him a perfect fit to oversee Scanline’s VFX for the crucial last three episodes of the final season of Game of Thrones.

Mousavi started his VFX career in the field of artificial intelligence and advanced physics-based simulations. He spearheaded the design and development of many proprietary toolsets and pipelines for crowd, fluid and rigid-body simulation, including FluidIT, BehaveIT and CardIT, a node-based crowd choreography toolset.

Prior to joining Scanline VFX Vancouver, Mousavi rose through the ranks of top visual effects houses, working in jobs that ranged from lead effects technical director to CG supervisor and, ultimately, VFX supervisor. He’s been involved in such high-profile projects as Hugo, The Amazing Spider-Man and Sucker Punch.

In 2012, he began working with Scanline, acting as digital effects supervisor on 300: Rise of an Empire, for which Scanline handled almost 700 water-based sea battle shots. He then served as VFX supervisor on San Andreas, helping develop the company’s proprietary city-generation software. That software and pipeline were further developed and enhanced for scenes of destruction in director Roland Emmerich’s Independence Day: Resurgence. In 2017, he served as the lead VFX supervisor for Scanline on the Warner Bros. shark thriller, The Meg.

I spoke with Mousavi about creating the VFX and their pipeline.

Congratulations on being Emmy-nominated for “The Bells,” which showcased so many impressive VFX. How did all your work on Season 4 prepare you for the big finale?
We were heavily involved in the finale of Season 4; however, the scope was far smaller. What we learned was the collaboration and the nature of the show, what the expectations were in terms of the quality of the work and what HBO wanted.

You were brought onto the project by lead VFX supervisor Joe Bauer, correct?
Right. Joe was the “client VFX supervisor” on the HBO side and was involved since Season 3. Together with my producer, Marcus Goodwin, we also worked closely with HBO’s lead visual effects producer, Steve Kullback, who I’d worked with before on a different show and in a different capacity. We all had daily sessions and conversations, a lot of back and forth, and Joe would review the entire work, give us feedback and manage everything between us and other vendors, like Weta, Image Engine and Pixomondo. This was done both technically and creatively, so no one stepped on each other’s toes if we were sharing a shot and assets. But it was so well-planned that there wasn’t much overlap.

[Editor’s Note: Here is the full list of those nominated for their VFX work on Game of Thrones — Joe Bauer, lead visual effects supervisor; Steve Kullback, lead visual effects producer; Adam Chazen, visual effects associate producer; Sam Conway, special effects supervisor; Mohsen Mousavi, visual effects supervisor; Martin Hill, visual effects supervisor; Ted Rae, visual effects plate supervisor; Patrick Tiberius Gehlen, previz lead; and Thomas Schelesny, visual effects and animation supervisor.]

What were you tasked with doing on Season 8?
We were involved as one of the lead vendors on the last three episodes and covered a variety of sequences. In episode four, “The Last of the Starks,” we worked on the confrontation between Daenerys and Cersei in front of King’s Landing’s gate, which included a full CG environment of the city gate and the landscape around it, as well as Missandei’s death sequence, which featured a full CG Missandei. We also did the animated Drogon outside the gate while the negotiations took place.

Then for “The Bells” we were responsible for most of the Battle of King’s Landing, which included a full digital city, Daenerys’ army campsite outside the walls of King’s Landing, the gathering of soldiers in front of the King’s Landing walls, Dany’s attack on the scorpions, the city gate, streets and the Red Keep, which had some very close-up set extensions, close-up fire and destruction simulations and full CG crowds of various factions — armies and civilians. We also did the iconic Cleganebowl fight between The Hound and The Mountain and Jaime Lannister’s fight with Euron at the beach underneath the Red Keep. In episode five, we received raw animation caches of the dragon from Image Engine and did the full look-dev, lighting and rendering of the final dragon in our composites.

For the final episode, “The Iron Throne,” we were responsible for the entire Daenerys speech sequence, which included a full 360-degree digital environment of the city aftermath and the Red Keep plaza filled with digital Unsullied, Dothraki and CG horses, leading into the majestic confrontation between Jon and Drogon, where the dragon reveals itself from underneath a huge pile of snow outside the Red Keep. We were also responsible for the iconic throne melt sequence, which included some advanced simulation of highly viscous fluid and destruction of the area around the throne, finishing the dramatic sequence with Drogon carrying Dany out of the throne room and away from King’s Landing into the unknown.

Where was all this work done?
The majority of the work was done here in Vancouver, which is the biggest Scanline office. Additionally, we had teams working in our Munich, Montreal and LA offices. We’re a 100% connected company, all working under the same infrastructure in the same pipeline. So if I work with the team in Munich, it’s like they’re sitting in the next room. That allows us to set up and attack the project with a larger crew and get the benefit of a 24/7 scenario; as we go home, they can continue working, and it makes us far more productive.

How many VFX did you have to create for the final season?
We worked on over 600 shots across the final three episodes, which gave us over an hour of screen time of high-end, consistent visual effects.

Isn’t that hour length unusual for 600 shots?
Yes, but we had a number of shots that were really long, including some ground coverage shots of Arya in the streets of King’s Landing that were over four or five minutes long. So we had the complexity along with the long duration.

How many people were on your team?
At the height, we had about 350 artists on the project, and we began in March 2018 and didn’t wrap till nearly the end of April 2019 — so it took us over a year of very intense work.

Tell us about the pipeline specific to Game of Thrones.
Scanline has an industry-wide reputation for delivering very complex, full CG environments combined with complex simulation scenarios of all sorts of fluid dynamics and destruction, based on our simulation framework, Flowline. We had a high-end digital character and hero creature pipeline that gave the final three episodes a boost up front. What was new were the additions to our procedural city-generation pipeline for the recreation of King’s Landing, making sure it could deliver both in wide-angle shots and in some extreme close-up set extensions.

How did you do that?
We used a framework we developed for Independence Day: Resurgence, a module-based procedural city generator that leverages some incredible scans of the historic city of Dubrovnik as the blueprint and foundation of King’s Landing. Instead of modeling conventionally, you model a lot of small modules, kind of like Lego blocks. You create various windows, stones, doors, shingles and so on, and once they’re encoded in the system, you can semi-automatically generate variations of buildings on the fly. The same goes for texturing. We had procedurally generated layers of façade textures, which gave us a lot of flexibility in texturing the entire city, with full control over the level of aging and damage. We could easily make a block look older without going back to square one. That’s how we could create King’s Landing with its hundreds of thousands of unique buildings.
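To make that Lego-block idea concrete, here is a minimal Python sketch of module-based building generation. Scanline’s actual framework is proprietary, so the module names, parameters and aging control below are illustrative assumptions only.

import random

# Hypothetical module libraries. In production these would be scanned
# and modeled assets (windows, stones, doors, shingles) encoded into
# the city-generation system.
WALL_MODULES = ["stone_wall_a", "stone_wall_b", "plaster_wall_a"]
WINDOW_MODULES = ["arched_window", "shuttered_window", "slit_window"]
ROOF_MODULES = ["shingle_roof_a", "shingle_roof_b", "tile_roof_a"]

def generate_building(seed, floors=3, bays=4, aging=0.2):
    """Assemble one building variation from reusable modules.

    A fixed seed makes every building reproducible, so the same city
    can be regenerated shot after shot. The aging dial mirrors the
    procedural control over weathering and damage described above.
    """
    rng = random.Random(seed)
    building = {"roof": rng.choice(ROOF_MODULES), "floors": []}
    for _ in range(floors):
        floor = []
        for _ in range(bays):
            floor.append({
                "wall": rng.choice(WALL_MODULES),
                "window": rng.choice(WINDOW_MODULES),
                # Per-module weathering: older texture layers blend in
                # as this value rises, with no remodeling required.
                "aging": min(1.0, max(0.0, rng.gauss(aging, 0.1))),
            })
        building["floors"].append(floor)
    return building

# Hundreds of unique-but-consistent buildings from one small module set;
# raising the aging value for a block ages it without starting over.
city_block = [generate_building(seed=i, aging=0.35) for i in range(200)]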

The same technology was applied to the aftermath of the city in Episode 6. We took the intact King’s Landing and ran a number of procedural collapsing simulations on the buildings to get the correct weight based on references from the bombed city of Dresden during WWII, and then we added procedurally created CG snow on the entire city.

It didn’t look like the usual matte paintings were used at all.
You’re right, and there were a lot of shots that normally would be done that way, but to Joe’s credit, he wanted to make sure the environments weren’t cheated in any way. That was a big challenge, to keep everything consistent and accurate. Even if we used traditional painting methods, it was all done on top of an accurate 3D representation with correct lighting and composition.

What other tools did you use?
We use Autodesk Maya for all our front-end departments, including modeling, layout, animation, rigging and creature effects, and we bridge the results to Autodesk 3ds Max, which encapsulates our look-dev/FX and rendering departments, powered by Flowline and Chaos Group’s V-Ray as our primary render engine, with Foundry’s Nuke as our main compositing package.

At the heart of our crowd pipeline we use Massive, and our creature department is driven by Ziva muscle simulation, a collaboration we started with Ziva Dynamics for the creation of the hero megalodon in The Meg.

Fair to say that your work on Game of Thrones was truly cutting-edge?
Game of Thrones has pushed the limit above and beyond and has effectively erased the TV/feature line. In terms of environment and effects and the creature work, this is what you’d do for a high-end blockbuster for the big screen. No difference at all.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

FilmLight sets speakers for free Color On Stage seminar at IBC

At this year’s IBC, FilmLight will host a free two-day seminar series, Color On Stage, on September 14 and 15. The event features live presentations and discussions with colorists and other creative professionals, covering topics that range from the role of the colorist today to understanding color management and next-generation grading tools.

“Color on Stage offers a good platform to hear about real-world interaction between colorists, directors and cinematographers,” explains Alex Gascoigne, colorist at Technicolor and one of this year’s presenters. “Particularly when it comes to large studio productions, a project can take place over several months and involve a large creative team and complex collaborative workflows. This is a chance to find out about the challenges involved with big shows and demystify some of the more mysterious areas in the post process.”

This year’s IBC program includes colorists from broadcast, film and commercials, as well as DITs, editors, VFX artists and post supervisors.

Program highlights include:
•    Creating the unique look for Mindhunter Season 2
Colorist Eric Weidt will talk about his collaboration with director David Fincher — from defining the workflow to creating the look and feel of Mindhunter. He will break down scenes and run through color grading details of the masterful crime thriller.

•    Realtime collaboration on the world’s longest running continuing drama, ITV Studios’ Coronation Street
The session will address improving production processes and enhancing pictures with efficient renderless workflows, with colorist Stephen Edwards, finishing editor Tom Chittenden and head of post David Williams.

•    Looking to the future: Creating color for the TV series Black Mirror
Colorist Alex Gascoigne of Technicolor will explain the process behind grading Black Mirror, including the interactive episode Bandersnatch and the latest Season 5.

•    Bollywood: A World of Color
This session will delve into the Indian film industry with CV Rao, technical general manager at Annapurna Studios in Hyderabad. In this talk, CV will discuss grading and color as exemplified by the hit film Baahubali 2: The Conclusion.

•    Joining forces: Strengthening VFX and finishing with the BLG workflow
Mathieu Leclercq, head of post at Mikros Image in Paris, will be joined by colorist Sebastian Mingam and VFX supervisor Franck Lambertz to showcase their collaboration on recent projects.

•    Maintaining the DP’s creative looks from set to post
Meet with French DIT Karine Feuillard, ADIT — who worked on the latest Luc Besson film Anna as well as the TV series The Marvelous Mrs Maisel — and FilmLight workflow specialist Matthieu Straub.

•    New color management and creative tools to make multi-delivery easier
The latest and upcoming Baselight developments, including a host of features aimed at simplifying delivery for emerging technologies such as HDR. With FilmLight’s Martin Tlaskal, Daniele Siragusano and Andy Minuth.

Color On Stage will take place in Room D201 on the second floor of the Elicium Centre (Entrance D), close to Hall 13. The event is free to attend, but spaces are limited. Registration is available online.


Rob Legato to receive HPA’s Lifetime Achievement Award 

The Hollywood Professional Association (HPA) will honor renowned visual effects supervisor and creative Robert Legato with its Lifetime Achievement Award at the HPA Awards at the Skirball Cultural Center in Los Angeles on November 21. Now in its 14th year, the HPA Awards recognize creative artistry, innovation and engineering excellence in the media content industry. The Lifetime Achievement Award honors the recipients’ dedication to the betterment of the industry.

Legato is an iconic figure in the visual effects industry with multiple Oscar, BAFTA and Visual Effects Society nominations and awards to his credit. He is a multi-hyphenate on many of his projects, serving as visual effects supervisor, VFX director of photography and second unit director. From his work with studios and directors and in his roles at Sony Pictures Imageworks and Digital Domain, he has developed a variety of digital workflows.

He has enjoyed collaborations with leading directors including James Cameron, Jon Favreau, Martin Scorsese and Robert Zemeckis. Legato’s career in VFX began in television at Paramount Pictures, where he supervised visual effects on two Star Trek series, which earned him two Emmy awards. He left Paramount to join the newly formed Digital Domain where he worked with founders James Cameron, Stan Winston and Scott Ross. He remained at Digital Domain until he segued to Sony Imageworks.

Legato began his feature VFX career on Neil Jordan’s Interview with the Vampire. He then served as VFX supervisor and DP for the VFX unit on Ron Howard’s Apollo 13, which earned him his first Academy Award nomination and a win at the BAFTAs. His work with James Cameron on Titanic earned him his first Academy Award. Legato continued to work with Cameron, conceiving and creating the virtual cinematography pipeline for Cameron’s visionary Avatar.

Legato has also enjoyed a long collaboration with Martin Scorsese that began with his consultation on Kundun and continued with the multi-award-winning film The Aviator, on which he served as co-second unit director/cameraman and VFX supervisor. Legato’s work on The Aviator won him three VES Awards. He returned to work with the director on the Oscar Best Picture winner The Departed as second unit director/cameraman and VFX supervisor. Legato and Scorsese collaborated once again on Shutter Island, on which he was both VFX supervisor and second unit director/cameraman. He continued on to Scorsese’s 3D film Hugo, which was nominated for 11 Oscars and 11 BAFTAs, including Best Picture and Best Visual Effects. Legato won his second Oscar for Hugo, as well as three VES Awards. His collaboration with Scorsese continued with The Wolf of Wall Street as well as with non-theatrical and advertising projects such as the Clio Award-winning Freixenet: The Key to Reserva, a 10-minute commercial project, and the Rolling Stones feature documentary Shine a Light.

Legato worked with director Jon Favreau on Disney’s The Jungle Book (second unit director/cinematographer and VFX supervisor) for which he received his third Academy Award, a British Academy Award, five VES Awards, an HPA Award and the Critics’ Choice Award for Best Visual Effects for 2016. His latest film with Favreau is Disney’s The Lion King, which surpassed $1 billion in box office after fewer than three weeks in theaters.

Legato’s extensive credits include serving as VFX supervisor on Chris Columbus’ Harry Potter and the Sorcerer’s Stone, as well as on two Robert Zemeckis films, What Lies Beneath and Cast Away. He was senior VFX supervisor on Michael Bay’s Bad Boys II, which was nominated for a VES Award for Outstanding Supporting Visual Effects, and for Digital Domain he worked on Bay’s Armageddon.

Legato is a member of ASC, BAFTA, DGA, AMPAS, VES, and the Local 600 and Local 700 unions.


Shipping + Handling adds Jerry Spivack, Mike Pethel, Matthew Schwab

VFX creative director Jerry Spivack and colorists Michael Pethel and Matthew Schwab have joined LA’s Shipping + Handling, Spot Welders‘ VFX, color grading, animation, and finishing arm/sister company.

Alongside executive producer Scott Friske and current creative director Casey Price, Spivack will help lead the company’s creative team. As the creative director/co-founder at Ring of Fire, Spivack was responsible for crafting and spearheading VFX on commercials for brands including FedEx, Nike and Jaguar; episodic work for series television including Netflix’s Wormwood and 12 seasons of FX’s It’s Always Sunny in Philadelphia; promos for NBC’s The Voice and The Titan Games; and feature films such as Sony Pictures’ Spider-Man 2, Bold Films’ Drive and Warner Bros.’ The Bucket List.

Colorist Pethel was a founding partner of Company 3 and for the past five years has served clients and directors under his BeachHouse Color brand, which he will continue to maintain. Pethel’s body of work includes campaigns for Carl’s Jr., Chase, Coke, Comcast/Xfinity, Hyundai, Jeep, Netflix and Southwest Airlines.

Commenting on the move, Pethel says, “I’m thrilled to be joining such a fantastic group of highly regarded and skilled professionals at Shipping + Handling. There is so much creativity here; the people are awesome to work with and the technology they are able to offer clientele at the facility is top-notch.”

Schwab formally joins the Shipping + Handling roster after working closely with the company over the past two years on multiple campaigns for Apple, Acura, QuickBooks and many others. Aside from his role at Shipping + Handling, Schwab will also continue his work through Roving Picture Company. Having worked with a number of internationally recognized brands, Schwab has collaborated on projects for Amazon, Honda, Mercedes-Benz, National Geographic, Netflix, Nike, PlayStation and Smirnoff.

“It’s exciting to be part of a team that approaches every project with such energy. This partnership represents a shared commitment to always deliver outstanding color and technical results for our clients,” says Schwab.

“Pethel is easily amongst the best colorists in our industry. As a longtime client of his, I have a real understanding of the professionalism he brings to every session. He is a delight in the room and wickedly talented. Schwab’s talent has just been realized in the last few years, and we are pleased to offer his skill to our clients. If our experience working with him over the last couple of years is any indication, we’re going to make a lot of clients happy he’s on our roster,” adds Friske.

Spivack, Pethel and Schwab will operate out of Shipping + Handling’s West Coast office on the creative campus it shares with its sister company, editorial post house Spot Welders.

Image: (L-R) Mike Pethel, Matthew Schwab, Jerry Spivack



Matthew Bristowe joins Jellyfish as COO

UK-based VFX and animation studio Jellyfish Pictures has hired Matthew Bristowe as director of operations. With a career spanning over 20 years, Bristowe joins Jellyfish Pictures after a stint as head of production at Technicolor.

During his 20 years in the industry, Bristowe has overseen hundreds of productions, including Aladdin (Disney), Star Wars: The Last Jedi (Lucasfilm/Disney), Avengers: Age of Ultron (Marvel) and Guardians of the Galaxy (Marvel). In 2014, he was honored with the Advanced Imaging Society’s Lumiere Award for his work on Alfonso Cuarón’s Academy Award-winning Gravity.

Bristowe led the One Of Us VFX team to success in the category of Special, Visual and Graphic Effects at the BAFTAs and Best Digital Effects at the Royal Television Society Awards for The Crown Season 1. Another RTS award and BAFTA nomination followed in 2018 for The Crown Season 2. Prior to working with Technicolor and One of Us, Bristowe held senior positions at MPC and Prime Focus.

“Matt joining Jellyfish Pictures is a substantial hire for the company,” explains CEO Phil Dobree. “2019 has seen us focus on our growth, following the opening of our newest studio in Sheffield, and Matt’s extensive experience of bringing together creativity and strategy will be instrumental in our further expansion.”


An artist’s view of SIGGRAPH 2019

By Andy Brown

While I’ve been lucky enough to visit NAB and IBC several times over the years, this was my first SIGGRAPH. Of course, there are similarities. There are lots of booths, lots of demos, lots of branded T-shirts, lots of pairs of black jeans and a lot of beards. I fit right in. I know we’re not all the same, but we certainly looked like it. (The stats regarding women and diversity in VFX are pretty poor, but that’s another topic.)

Andy Brown

You spend your whole career in one industry and I guess you all start to look more and more like each other. That’s partly the problem for the people selling stuff at SIGGRAPH.

There were plenty of compositing demos from all sorts of software vendors. (Blackmagic was running a hands-on class for 20 people at a time.) I’m a Flame artist, so I think that Autodesk’s offering is best, obviously. Everyone’s compositing tool can play back large files and color correct, composite, edit, track and deliver, so in the midst of a buzzy trade show, the differences feel far fewer than the similarities.

Mocap
Take the world of tracking and motion capture as another example. There were more booths demonstrating tracking and motion capture than anything else in the main hall, and all that tech came in different shapes and sizes, with an interesting mix of hardware and software.

The motion capture solution required for a Hollywood movie isn’t the same as the one to create a live avatar on your phone, however. That’s where it gets interesting. There are solutions that can capture and translate the movement of everything from your fingers to your entire body using hardware from an iPhone X to a full 360-camera array. Some solutions used tracking ball markers, some used strips in the bodysuit and some used tiny proximity sensors, but the results were all really impressive.

Vicon

Some tracking solution companies had different versions of their software and hardware. If you don’t need all of the cameras and all of the accuracy, then there’s a basic version for you. But if you need everything to be perfectly tracked in real time, then go for the full-on pro version with all the bells and whistles. I had a go at live-animating a monkey using just my hands, and apart from ending with him licking a banana in a highly inappropriate manner, I think it worked pretty well.

AR/VR
AR and VR were everywhere, too. You couldn’t throw a peanut across the room without hitting someone wearing a VR headset. They’d probably be able to bat it away whilst thinking they were Joe Root or Max Muncy (I had to Google him), with the real peanut being replaced with a red or white leather projectile. Haptic feedback made a few appearances, too, so expect to be able to feel those virtual objects very soon. Some of the biggest queues were at the North stand where the company had glasses that looked like the glasses everyone was wearing already (like mine, obviously) except the glasses incorporated a head-up display. I have mixed feelings about this. Google Glass didn’t last very long for a reason, although I don’t think North’s glasses have a camera in them, which makes things feel a bit more comfortable.

Nvidia

Data
One of the central themes for me was data, data and even more data. Whether you are interested in how to capture it, store it, unravel it, play it back or distribute it, there was a stand for you. This mass of data was being managed by really intelligent components and software. I was expecting to be writing all about artificial intelligence and machine learning from the show, and it’s true that there was a lot of software that used machine learning and deep neural networks to create things that looked really cool. Environments created using simple tools looked fabulously realistic because of deep learning. Basic pen strokes could be translated into beautiful pictures because of the power of neural networks. But most of that machine learning is in the background; it’s just doing the work that needs to be done to create the images, lighting and physical reactions that go to make up convincing and realistic images.

The Experience Hall
The Experience Hall was really great because no one was trying to sell me anything. It felt much more like an art gallery than a trade show. There were long waits for some of the exhibits (although not for the golf swing improver that I tried), and it was all really fascinating. I didn’t want to take part in the experiment that recorded your retina scan and made some art out of it, because, well, you know, it’s my retina scan. I also felt a little reluctant to check out the booth that made light-based animated artwork derived from your date of birth, time of birth and location of birth. But maybe all of these worries are because I’ve just finished watching the Netflix documentary The Great Hack. I can’t help but think that a better source of the data might be something a little less sinister.

The walls of posters back in the main hall described research projects that hadn’t yet made it into full production and gave more insight into what the future might bring. It was all about refinement, creating better algorithms, creating more realistic results. These uses of deep learning and virtual reality were applied to subjects as diverse as translating verbal descriptions into character design, virtual reality therapy for post-stroke patients, relighting portraits and haptic feedback anesthesia training for dental students. The range of the projects was wide. Yet everyone started from the same place, analyzing vast datasets to give more useful results. That brings me back to where I started. We’re all the same, but we’re all different.

Main Image Credit: Mike Tosti


Andy Brown is a Flame artist and creative director of Jogger Studios, a visual effects studio with offices in Los Angeles, New York, San Francisco and London.


Autodesk intros Bifrost for Maya at SIGGRAPH

At SIGGRAPH, Autodesk announced a new visual programming environment in Maya called Bifrost, which makes it possible for 3D artists and technical directors to create serious effects quickly and easily.

“Bifrost for Maya represents a major development milestone for Autodesk, giving artists powerful tools for building feature-quality VFX quickly,” says Chris Vienneau, senior director, Maya and Media & Entertainment Collection. “With visual programming at its core, Bifrost makes it possible for TDs to build custom effects that are reusable across shows. We’re also rolling out an array of ready-to-use graphs to make it easy for artists to get 90% of the way to a finished effect fast. Ultimately, we hope Bifrost empowers Maya artists to streamline the creation of anything from smoke, fire and fuzz to high-performance particle systems.”

Bifrost highlights include:

  • Ready-to-Use Graphs: Artists can quickly create state-of-the-art effects that meet today’s quality demands.
  • One Graph: In a single visual programming graph, users can combine nodes ranging from math operations to simulations.
  • Realistic Previews: Artists can see exactly how effects will look after lighting and rendering right in the Arnold Viewport in Maya.
  • Detailed Smoke, Fire and Explosions: New physically-based solvers for aerodynamics and combustion make it easy to create natural-looking fire effects.
  • The Material Point Method: The new MPM solver helps artists tackle realistic granular, cloth and fiber simulations.
  • High-Performance Particle System: A new particle system crafted entirely using visual programming adds power and scalability to particle workflows in Maya.
  • Artistic Effects with Volumes: Bifrost comes loaded with nodes that help artists convert between meshes, points and volumes to create artistic effects.
  • Flexible Instancing: High-performance, rendering-friendly instancing empowers users to create enormous complexity in their scenes.
  • Detailed Hair, Fur and Fuzz: Artists can now model things consisting of multiple fibers (or strands) procedurally.

Bifrost is available for download now and works with any version of Maya 2018 or later. It will also be included in the installer for Maya 2019.2 and later versions. Updates to Bifrost between Maya releases will be available for download from Autodesk AREA.

In addition to the release of Bifrost, Autodesk highlighted the latest versions of Shotgun, Arnold, Flame and 3ds Max. The company gave a tech preview of a new secure enterprise Shotgun that supports network segregation and customer-managed media isolation on AWS, making it possible for the largest studios to collaborate in a closed-network pipeline in the cloud. Shotgun Create, now out of beta, delivers a cloud-connected desktop experience, making it easier for artists and reviewers to see which tasks demand attention while providing a collaborative environment to review media and exchange feedback accurately and efficiently. Arnold 5.4 adds important updates to the GPU renderer, including OSL and OpenVDB support, while Flame 2020.1 introduces more uses of AI with new Sky Extraction tools and specialized image segmentation features. Also on display, the 3ds Max 2020.1 update features modernized procedural tools for 3D modeling.


Maxon intros Cinema 4D R21, consolidates versions into one offering

By Brady Betzel

At SIGGRAPH 2019, Maxon introduced the next release of its graphics software, Cinema 4D R21. Maxon also announced a subscription-based pricing structure as well as a very welcomed consolidation of its Cinema 4D versions into a single version, aptly titled Cinema 4D.

That’s right, no more Studio, Broadcast or BodyPaint. It all comes in one package at one price, and that pricing will now be subscription-based — but don’t worry, the online anxiety over this change seems to have been misplaced.

The cost of Cinema 4D R21 has been substantially reduced, kicking off what Maxon is calling its “3D for the Real World” initiative. Maxon wants it to be the tool you choose for your graphics needs.

If you plan on upgrading every year or two, the new subscription-based model seems to be a great deal:

– Cinema 4D subscription paid annually: $59.99/month
– Cinema 4D subscription paid monthly: $94.99/month
– Cinema 4D subscription with Redshift paid annually: $81.99/month
– Cinema 4D subscription with Redshift paid monthly: $116.99/month
– Cinema 4D perpetual pricing: $3,495 (upgradeable)

Maxon did mention that if you have previously purchased Cinema 4D, there will be subscription-based upgrade/crossgrade deals coming.

The Updates
Cinema 4D R21 includes some great updates that will be welcomed by many users, both new and experienced. The new Field Force dynamics object allows the use of dynamic forces in modeling and animation within the MoGraph toolset. Caps and bevels have an all-new system that not only allows the extrusion of 3D logos and text effects but also means caps and bevels are integrated on all spline-based objects.

Furthering Cinema 4D’s integration with third-party apps, there is an all-new Mixamo Control rig allowing you to easily control any Mixamo characters. (If you haven’t checked out the models from Mixamo, you should. It’s a great way to find character rigs fast.)

An all-new Intel Open Image Denoise integration has been added to R21 in what seems like part of a rendering revolution for Cinema 4D. From the acquisition of Redshift to this integration, Maxon is expanding its third-party reach and doesn’t seem scared.

There is a new Node Space, which shows what materials are compatible with chosen render engines, as well as a new API available to third-party developers that allows them to integrate render engines with the new material node system. R21 has overall speed and efficiency improvements, with Cinema 4D supporting the latest processor optimizations from both Intel and AMD.

All this being said, my favorite update — or map toward the future — was actually announced last week. Unreal Engine added Cinema 4D .c4d file support via the Datasmith plugin, which is featured in the free Unreal Studio beta.

Today, Maxon is also announcing its integration with yet another game engine: Unity. In my opinion, the future lies in this mix of real-time rendering alongside real-world television and film production as well as gaming. With Cinema 4D, Maxon is bringing all sides to the table with a mix of 3D modeling, motion-graphics-building support, motion tracking, integration with third-party apps like Adobe After Effects via Cineware, and now integration with real-time game engines like Unreal Engine. Now I just have to learn it all.

Cinema 4D R21 will be available on both Mac OS and Windows on Tuesday, Sept. 3. In the meantime, watch out for some great SIGGRAPH presentations, including one from my favorite, Mike Winkelmann, better known as Beeple. You can find some past presentations on how he uses Cinema 4D to cover his “Everydays.”


Virtual Production Field Guide: Fox VFX Lab’s Glenn Derry

Just ahead of SIGGRAPH, Epic Games has published a resource guide called “The Virtual Production Field Guide” — a comprehensive look at how virtual production impacts filmmakers, from directors to the art department to stunt coordinators to VFX teams and more. The guide is workflow-agnostic.

The use of realtime game engine technology has the potential to impact every aspect of traditional filmmaking, and it is increasingly being used in productions ranging from films like Avengers: Endgame and the upcoming Artemis Fowl to TV series like Game of Thrones.

The Virtual Production Field Guide offers an in-depth look at different types of techniques from creating and integrating high-quality CG elements live on set to virtual location scouting to using photoreal LED walls for in-camera VFX. It provides firsthand insights from award-winning professionals who have used these techniques – including directors Kenneth Branagh and Wes Ball, producers Connie Kennedy and Ryan Stafford, cinematographers Bill Pope and Haris Zambarloukos, VFX supervisors Ben Grossmann and Sam Nicholson, virtual production supervisors Kaya Jabar and Glenn Derry, editor Dan Lebental, previs supervisor Felix Jorge, stunt coordinators Guy and Harrison Norris, production designer Alex McDowell, and grip Kim Heath.

As mentioned, the guide is dense with information, so we decided to run an excerpt to give you an idea of what it covers.

Glenn Derry

Here is an interview with Glenn Derry, founder and VP of visual effects at Fox VFX Lab, which offers a variety of virtual production services with a focus on performance capture. Derry is known for his work as a virtual production supervisor on projects like Avatar, Real Steel and The Jungle Book.

Let’s find out more.

How has performance capture evolved since projects such as The Polar Express?
In those earlier eras, there was no realtime visualization during capture. You captured everything as a standalone piece, and then you did what they called the director layout. After the fact, you would assemble the animation sequences from the motion data captured. Today, we’ve got a combo platter where we’re able to visualize in realtime.
When we bring a cinematographer in, he can start lining up shots with another device called the hybrid camera. It’s a tracked reference camera that he can handhold. I can immediately toggle between an Unreal overview or a camera view of that scene. The earlier process was minimal in terms of aesthetics. We did everything we could in MotionBuilder, and we made it look as good as it could. Now we can make a lot more mission-critical decisions earlier in the process because the aesthetics of the renders look a lot better.

What are some additional uses for performance capture?
Sometimes we’re working with a pitch piece, where the studio is deciding whether they want to make a movie at all. We use the capture stage to generate what the director has in mind tonally and how the project could feel. That could be a short little pitch piece, or, as for Call of the Wild, 20 minutes and three key scenes from the film to show the studio we could make it work.

The second the movie gets greenlit, we flip over into preproduction. Now we’re breaking down the full script and working with the art department to create concept art. Then we build the movie’s world out around those concepts.

We have our team doing environmental builds based on sketches. Or in some cases, the concept artists themselves are in Unreal Engine doing the environments. Then our virtual art department (VAD) cleans those up and optimizes them for realtime.

Are the artists modeling directly in Unreal Engine?
The artists model in Maya, Modo, 3ds Max, etc. — we’re not particular about the application as long as the output is FBX. The look development, which is where the texturing happens, is all done within Unreal. We’ll also have artists working in Substance Painter and it will auto-update in Unreal. We have to keep track of assets through the entire process, all the way through to the last visual effects vendor.

How do you handle the level of detail decimation so realtime assets can be reused for visual effects?
The same way we would work on AAA games. We begin with high-resolution detail and then use combinations of texture maps, normal maps and bump maps. That allows us to get high-texture detail without a huge polygon count. There are also some amazing LOD [level of detail] tools built into Unreal, which enable us to take a high-resolution asset and derive something that looks pretty much identical unless you’re right next to it, but runs at a much higher frame rate.
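As a rough illustration of how screen coverage can drive that decimation, here is a small hypothetical Python sketch of an LOD switch. The mesh names, triangle counts, thresholds and coverage math are assumptions for illustration, not Unreal’s actual LOD system.

from dataclasses import dataclass

@dataclass
class LODLevel:
    mesh: str            # hypothetical asset name, e.g. "keep_tower_LOD0"
    triangles: int
    min_coverage: float  # fraction of frame height at which this LOD is used

# A hypothetical LOD chain derived from one high-resolution asset.
# Fine detail beyond LOD0 lives in texture, normal and bump maps
# rather than polygons, as described above.
LOD_CHAIN = [
    LODLevel("keep_tower_LOD0", 500_000, 0.50),  # close-up; reusable for VFX
    LODLevel("keep_tower_LOD1", 80_000, 0.15),
    LODLevel("keep_tower_LOD2", 12_000, 0.04),
    LODLevel("keep_tower_LOD3", 1_500, 0.00),    # distant background
]

def select_lod(object_height, distance):
    """Pick the cheapest LOD whose coverage threshold is met.

    Coverage is approximated as object height over camera distance,
    i.e. the rough fraction of the frame the object fills.
    """
    coverage = object_height / max(distance, 1e-6)
    for level in LOD_CHAIN:
        if coverage >= level.min_coverage:
            return level
    return LOD_CHAIN[-1]

# A 30m tower seen from 400m fills roughly 7.5% of frame height -> LOD2.
print(select_lod(object_height=30.0, distance=400.0).mesh)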

Do you find there’s a learning curve for crew members more accustomed to traditional production?
We’re the team productions come to do realtime on live-action sets. That’s pretty much all we do. That said, it requires prep, and if you want it to look great, you have to make decisions. If you were going to shoot rear projection back in the 1940s or Terminator 2 with large rear projection systems, you still had to have all that material pre-shot to make it work.
It’s the same concept in realtime virtual production. If you want to see it look great in Unreal live on the day, you can’t just show up and decide. You have to pre-build that world and figure out how it’s going to integrate.

The visual effects team and the virtual production team have to be involved from day one. They can’t just be brought in at the last minute. And that’s a significant change for producers and productions in general. It’s not that it’s a tough nut to swallow, it’s just a very different methodology.

How does the cinematographer collaborate with performance capture?
There are two schools of thought: one is to work live with camera operators, shooting the tangible part of the action that’s going on, as the camera is an actor in the scene as much as any of the people are. You can choreograph it all out live if you’ve got the performers and the suits. The other version of it is treated more like a stage play. Then you come back and do all the camera coverage later. I’ve seen DPs like Bill Pope and Caleb Deschanel pick this right up.

How is the experience for actors working in suits and a capture volume?
One of the harder problems we deal with is eye lines. How do we assist the actors so that they’re immersed in this and don’t just look around at a bunch of gray-box material on a set? On any modern visual effects movie, you’re going to be standing in front of a 50-foot-tall bluescreen at some point.

Performance capture is in some ways more actor-centric than a traditional set because there aren’t all the other distractions in a volume, such as complex lighting and camera setup time. The director gets to focus on the actors. The challenge is getting the actors to interact with something unseen. We’ll project pieces of the set on the walls and use lasers for eye lines. The quality of the HMDs today is also excellent for showing the actors what they would be seeing.

How do you see performance capture tools evolving?
I think a lot of the stuff we’re prototyping today will soon be available to consumers, home content creators, YouTubers, etc. A lot of what Epic develops also gets released in the engine. Money won’t be the driver in terms of being able to use the tools; your creative vision will be.

My teenage son uses Unreal Engine to storyboard. He knows how to do fly-throughs and use the little camera tools we built — he’s all over it. As it becomes easier to create photorealistic visual effects in realtime with a smaller team and at very high fidelity, the movie business will change dramatically.

Something that used to cost $10 million to produce might be a million or less. It’s not going to take away from artists; you still need them. But you won’t necessarily need these behemoth post companies because you’ll be able to do a lot more yourself. It’s just like desktop video — what used to take hundreds of thousands of dollars’ worth of Flame artists, you can now do yourself in After Effects.

Do you see new opportunities arising as a result of this democratization?
Yes, there are a lot of opportunities. High-quality, good-looking CG assets are still expensive to produce and expensive to make look great. There are already stock sites like TurboSquid and CGTrader where you can purchase beautiful assets economically.

But with the final assembly and coalescing of environments and characters there’s still a lot of need for talented people to do it effectively. I can see companies emerging out of that necessity. We spend a lot of time talking about assets because it’s the core of everything we do. You need to have a set to shoot on and you need compelling characters, which is why actors won’t go away.

What’s happening today isn’t even the tip of the iceberg. There are going to be 50 more big technological breakthroughs along the way. There’s tons of new content being created for Apple, Netflix, Amazon, Disney+, etc. And they’re all going to leverage virtual production.
What’s changing is previs’ role and methodology in the overall scheme of production.
While you might have previously conceived of previs as focused on the pre-production phase of a project and less integral to production, that conception shifts with a realtime engine. Previs is also typically a hands-off collaboration. In a traditional pipeline, a previs artist receives creative notes and art direction, then goes off to create animation and presents it back to creatives later for feedback.

In the realtime model, because the assets are directly malleable and rendering time is not a limiting factor, creatives can be much more directly and interactively involved in the process. This leads to higher levels of agency and creative satisfaction for all involved. This also means that instead of working with just a supervisor you might be interacting with the director, editor and cinematographer to design sequences and shots earlier in the project. They’re often right in the room with you as you edit the previs sequence and watch the results together in realtime.

Previs image quality has continued to increase in visual fidelity. This means a greater relationship between previs and final pixel image quality. When the assets you develop as a previs artist are of a sufficient quality, they may form the basis of final models for visual effects. The line between pre and final will continue to blur.

The efficiency of modeling assets only once is evident to all involved. By spending the time early in the project to create models of a very high quality, post begins at the outset of a project. Instead of waiting until the final phase of post to deliver the higher-quality models, the production has those assets from the beginning. And the models can also be fed into ancillary areas such as marketing, games, toys and more.

Beecham House‘s VFX take viewers back in time

Cambridge, UK-based Vine FX was the sole visual effects vendor on Gurinder Chadha’s Beecham House, a new Sunday night drama airing on ITV in the UK. Set in the India of 1795, Beecham House is the story of John Beecham (Tom Bateman), an Englishman who resigned from military service to set up as an honorable trader of the East India Company.

The series was shot at Ealing Studios and at some locations in India, with the visual effects work focusing on the Port of Delhi, the emperor’s palace and Beecham’s house. Vine FX founder Michael Illingworth assisted during development of the series and supervised his team of artists, creating intricate set extensions, matte paintings and period assets.

To make the shots believable and true to the era, the Vine FX team consulted closely with the show’s production designer and researched the period thoroughly. All modern elements — wires, telegraph poles, cars and lamp posts — had to be removed from the shoot footage, but the biggest challenge for the team was the Port of Delhi itself, a key location in the series.

Vine FX created a digital matte painting to extend the port and added numerous 3D boats and 3D people working on the docks to create a busy working port of 1795 — a complex task achieved by the expert eye of the Vine team.

“The success of this type of VFX is in its subtlety. We had to create a Delhi of 1795 that the audience believed, and that involved a great deal of research into how it would have looked, which was essential to making it realistic,” says Illingworth. “Hopefully, we managed to do this. I’m particularly happy with the finished port sequences, as originally there were just three boats.

“I worked very closely with on-set supervisor Oliver Milburn while he was on set in India so was very much part of the production process in terms of VFX,” he continues. “Oliver would send me reference material from the shoot; this is always fundamental to the outcome of the VFX, as it allows you to plan ahead and work out any potential upcoming challenges. I was working on the VFX in Cambridge while Oliver was on set in Delhi — perfect!”

Vine FX used Photoshop and Nuke as its main tools. The artists modeled assets with Maya and ZBrush and painted them using Substance Painter. They rendered with Arnold.

Vine FX is currently working on War of the Worlds for Fox Networks and Canal+, due for release next year.

The Umbrella Academy‘s Emmy-nominated VFX supe Everett Burrell

By Iain Blair

If all ambitious TV shows with a ton of visual effects aspire to be cinematic, then Netflix’s The Umbrella Academy has to be the gold standard. The acclaimed sci-fi, superhero, adventure mash-up was just Emmy-nominated for its season-ending episode “The White Violin,” which showcased a full range of spectacular VFX. This included everything from the fully-CG Dr. Pogo to blowing up the moon and a mansion to the characters’ varied superpowers. Those VFX, mainly created by movie powerhouse Weta Digital in New Zealand and Spin VFX in Toronto, indeed rival anything in cinema. This is partly thanks to Netflix’s 4K pipeline.

The Umbrella Academy is based on the popular, Eisner Award-winning comics and graphic novels created and written by Gerard Way (“My Chemical Romance”), illustrated by Gabriel Bá, and published by Dark Horse Comics.

The story starts when, on the same day in 1989, 43 infants are born to unconnected women who showed no signs of pregnancy the day before. Seven are adopted by Sir Reginald Hargreeves, a billionaire industrialist, who creates The Umbrella Academy and prepares his “children” to save the world. But not everything went according to plan. In their teenage years, the family fractured and the team disbanded. Now, six of the surviving members reunite upon the news of Hargreeves’ death. Luther, Diego, Allison, Klaus, Vanya and Number Five work together to solve a mystery surrounding their father’s death. But the estranged family once again begins to come apart due to divergent personalities and abilities, not to mention the imminent threat of a global apocalypse.

The live-action series stars Ellen Page, Tom Hopper, Emmy Raver-Lampman, Robert Sheehan, David Castañeda, Aidan Gallagher, Cameron Britton and Mary J. Blige. It is produced by Universal Content Productions for Netflix. Steve Blackman (Fargo, Altered Carbon) is the executive producer and showrunner, with additional executive producers Jeff F. King, Bluegrass Television, and Mike Richardson and Keith Goldberg from Dark Horse Entertainment.

Everett Burrell

I spoke with senior visual effects supervisor and co-producer Everett Burrell (Pan’s Labyrinth, Altered Carbon), who has an Emmy for his work on Babylon 5, about creating the VFX and the 4K pipeline.

Congratulations on being nominated for the first season-ending episode “The White Violin,” which showcased so many impressive visual effects.
Thanks. We’re all really proud of the work.

Have you started season two?
Yes, and we’re already knee-deep in the shooting up in Canada. We shoot in Toronto, where we’re based, as well as Hamilton, which has this great period look. So we’re up there quite a bit. We’re just back here in LA for a couple of weeks working on editorial with Steve Blackman, the executive producer and showrunner. Our offices are in Encino, in a merchant bank building. I’m a co-producer as well, so I also deal a lot with editorial — more than normal.

Have you planned out all the VFX for the new season?
To a certain extent. We’re working on the scripts and have a good jump on them. We definitely plan to blow the first season out of the water in terms of what we come up with.

What are the biggest challenges of creating all the VFX on the show?
The big one is the sheer variety of VFX, which are all over the map in terms of the various types. They go from a completely animated talking CG chimpanzee Dr. Pogo to creating a very unusual apocalyptic world, with scenes like blowing up the moon and, of course, all the superpowers. One of the hardest things we had to do — which no one will ever know just watching it — was a ton of leaf replacement on trees.

Digital leaves via Montreal’s Folks.

When we began shooting, it was winter and there were no leaves on the trees. When we got to editorial we realized that the story spans just eight days, so it wouldn’t make any sense if in one scene we had no leaves and in the next we had leaves. So we had to add every single leaf to the trees for all of the first five episodes, which was a huge amount of work. The way we did it was to go back to all the locations and re-shoot all the trees from the same angles once they were in bloom. Then we had to composite all that in. Folks in Montreal did all of it, and it was very complicated. Lola did a lot of great work on Hargreeves, getting his young look for the early 1900s and cleaning up the hair and wrinkles and making it all look totally realistic. That was very tricky too.
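For a sense of how a fix like that comes together in compositing, here is a heavily simplified Nuke Python sketch of the approach Burrell describes: re-shot in-bloom trees rotoed, aligned, graded and layered over the original winter plate. The file paths and knob values are hypothetical, and the real shots involved far more matchmoving and cleanup than this.

# A simplified sketch, assuming hypothetical plate paths; runs inside Nuke.
import nuke

winter = nuke.nodes.Read(file="plates/ep101_street_winter.%04d.exr")
summer = nuke.nodes.Read(file="plates/ep101_trees_summer_same_angle.%04d.exr")

# Hand-drawn roto isolating the in-bloom canopy in the re-shot plate.
canopy = nuke.nodes.Roto(inputs=[summer])

# Nudge the re-shoot into alignment with the original framing.
align = nuke.nodes.Transform(inputs=[canopy])
align["translate"].setValue([4.0, -2.5])

# Match the summer foliage to the winter plate's lighting.
graded = nuke.nodes.Grade(inputs=[align])
graded["multiply"].setValue(0.85)

# Layer the leaves over the winter plate (input 0 is B, input 1 is A).
comp = nuke.nodes.Merge2(inputs=[winter, graded], operation="over")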

Netflix is ahead of the curve thanks to its 4K policy. Tell us about the pipeline.
For a start, we shoot with the ARRI Alexa 65, which is a very robust cinema camera that was used on The Revenant. With its 65mm sensor, it’s meant for big-scope, epic movies, and we decided to go with it to give our show that great cinema look. The depth of field is like film, and it can also emulate film grain for this fantastic look. That camera shoots natively at 5K — it won’t go any lower. That means we’re at a much higher resolution than any other show out there.

And you’re right, Netflix requires a 4K master as future-proofing for streaming and so on. Those very high standards then trickle down to us and all the VFX. We also use a very unique system developed by Deluxe and Efilm called Portal, which basically stores the entire show in the cloud on a server somewhere, and we can get background plates to the vendors within 10 minutes. It’s amazing. Back in the old days, you’d have to make a request and maybe within 24 or 48 hours, you’d get those plates. So this system makes it almost instantaneous, and that’s a lifesaver.

Method blows up the moon.

How closely do you work with Steve Blackman and the editors?
I think Steve said it best: “There’s no daylight between the two of us.” We’re linked at the hip pretty much all the time. He comes to my office if he has issues, and I go to his if we have complications; we resolve all of it together in probably the best creative relationship I’ve ever had. He relies on me and counts on me, and I trust him completely. Bottom line, if we need to write ourselves out of a sticky situation, he’s also the head writer, so he’ll just go off and rewrite a scene to help us out.

How many VFX do you average for each show?
We average between 150 and 200 per episode. Last season we did nearly 2,000 in total, so it’s a huge amount for a TV show, and there’s a lot of data being pushed. Luckily, I have an amazing team, including my production manager Misato Shinohara. She’s just the best and really takes care of all the databases, and manages all the shot data, reference, slates and so on. All that stuff we take on set has to go into this massive database, and just maintaining that is a huge job.

Who are the main VFX vendors?
The VFX are mainly created by Weta in New Zealand and Spin VFX in Toronto. Weta did all the Pogo stuff. Then we have Folks, Lola, Marz, Deluxe Toronto, DigitalFilm Tree in LA… and then Method Studios in Vancouver did great work on our end-of-the-world apocalyptic sequence. They blew up the moon and had a chunk of it hitting the Earth, along with all the surrounding imagery. We started R&D on that pretty early to get a jump on it. We gave them storyboards and they did previz. We used that as a cut to get iterations of it all. There were a lot of particle simulations, which was pretty intense.

Weta created Dr. Pogo

What have been the most difficult VFX sequences to create?
Just dealing with Pogo is obviously very demanding, and we had to come up with a fast shortcut to dealing with the photo-real look as we just don’t have the time or budget they have for the Planet of the Apes movies. The big thing is integrating him in the room as an actor with the live actors, and that was a huge challenge. We used just two witness cameras to capture our Pogo body performer. All the apocalyptic scenes were also very challenging because of the scale, and then those leaves were very hard to do and make look real. That alone took us a couple of months. And we might have the same problem this year, as we’re shooting in the summer through fall, and I’m praying that the leaves don’t start falling before we wrap.

What have been the main advances in technology that have really helped you pull off some of the show’s VFX?
I think the rendering and the graphics cards are the big ones, and the hardware talks together much more efficiently now. Even just a few years ago, it might have taken weeks and weeks to render a Pogo. Now we can do it in a day. Weta developed new software for creating the texture and fabric of Pogo's clothes. They also refined their hair programs.

 

I assume as co-producer that you’re very involved with the DI?
I am… and keeping track of all that and making sure we keep pushing the envelope. We do the DI at Company 3 with colorist Jill Bogdanowicz, who's a partner in all of this. She brings so much to the show, and her work is a big part of why it looks so good. I love the DI. It's where all the magic happens, and I get in there early with Jill and take care of the VFX tweaks. Then Steve comes in and works on contrast and color tweaks. By the time Steve gets there, we're probably 80% of the way there already.

What can fans expect from season two?
Bigger, better visual effects. We definitely pay attention to the fans. They love the graphic novel, so we’re getting more of that into the show.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

UK’s Molinare adds two to its VFX team

Molinare has boosted its visual effects team with the addition of head of VFX production Kerrie Bryant and VFX supervisor Andy Tusabe.

Bryant comes to Molinare after working at DNeg TV and Technicolor, where she oversaw all projects within the studio, as well as supervising line producers and coordinators on their projects.

Tusabe joins Molinare with over 26 years’ experience across TV, film and commercials production and post production. He knows the Molinare VFX team well, having worked with them as a freelancer over the past two years, on titles such as Good Omens, The Crown, A Discovery of Witches, King Lear and Yardie.

So far this year, Molinare has completed VFX post on high-end dramas such as Good Omens, Strike Back: Silent War, Beecham House and the next series of The Crown, as well as Gurinder Chadha‘s new feature film Blinded by the Light, which will be released internationally in August.

Meet the Artist: The Mill’s Anne Trotman

Anne Trotman is a senior Flame artist and VFX supervisor at The Mill in New York. She specializes in beauty and fashion work but gets to work on a variety of other projects as well.

A graduate of King's College in London, Trotman took on what she calls "a lot of very random temp jobs" before finally joining London's Blue Post Production as a runner.

“In those days a runner did a lot of ‘actual’ running around SoHo, dropping off tapes and picking up lunches,” she says, admitting she was also sent out for extra green for color bars and warm sake at midnight. After being promoted to the machine room, she spent her time assisting all the areas of the company, including telecine grading, offline, online, VFX and audio. “This gave me a strong understanding of the post production process as a whole.”

Trotman then joined the 2D VFX teams from Blue, Clear Post Production, The Hive and VTR to create a team at Prime Focus London. She moved into film compositing, where she headed up the 2D team as a senior Flame operator, overseeing projects, including shot allocation and VFX reviews. She then joined SFG-Technicolor's commercials facility in Shanghai. After a year in China, she joined The Mill in New York, where she is today.

We reached out to Trotman to find out more about The Mill, a technology and visual effects studio, how she works and some recent projects. Enjoy.

Bumble

Can you talk about some recent high-profile projects you’ve completed?
The most recent high-profile project I've worked on was Bumble's Super Bowl 2019 spot. It was the company's first commercial ever. Because Bumble is a female-founded company, it was important for this project to celebrate female artists and empowerment, something I strongly support. I was thrilled to lead an all-female team for this project. The agency creatives and producers were all female, and so was almost the whole post team, including the editor, colorist and all the VFX artists.

How did you first learn Flame, and how has your use of it evolved over the years?
I had been assisting artists working on a Quantel Editbox at Blue. They then installed a Flame and hired a female artist who had worked on Gladiator. That’s when I knew I had found my calling. Working with technical equipment was very attractive to me, and in those days it was a dark art, and you had to work in a company to get your hands on one. I worked nights doing a lot of conforming and rotoscoping. I also started doing small jobs for clients I knew well. I remember assisting on an Adele pop video, which is where my love of beauty started.

When I first started using Flame, the whole job was usually completed by one artist. These days, jobs are much bigger, and with so many versions for social media, much of my day can go to coordinating the team of artists. Workshare and remote artists are becoming a big part of our industry, so communicating with artists all over the world to bring everything together into the final film has become a big part of my job.

In addition to Flame, what other tools are used in your workflow?
Post production has changed so much in the past five years. My job is not just to press buttons on a Flame to get a commercial on television anymore; that’s only a small part. My job is to help the director and/or the agency position a brand and connect it with the consumer.

My workflow usually starts with bidding an agency or a director’s brief. Sometimes they need tests to sell an idea to a client. I might supervise a previz artist on Maxon Cinema 4D to help them achieve the director’s vision. I attend most of the shoots, which gives me an insight into the project while assessing the client’s goals and vision. I can take Flame on a laptop to my shoots to do tests for the director to help explain how certain shots will look after post. This process is so helpful all around in order for me to see if what we are shooting is correct and for the client to understand the director’s vision.

At The Mill, I work closely with the colorists who work on FilmLight Baselight before completing the work on Flame. All the artists at The Mill use Flame and Foundry Nuke, although my Flame skills are 100% better than my Nuke skills.

What are the most fulfilling aspects of the work you do?
I’m lucky to work with many directors and agency creatives that I now call friends. It still gives me a thrill when I’m able to interpret the vision of the creative or director to create the best work possible and convey the message of the brand.

I also love working with the next generation of artists. I especially love being able to work alongside the young female talent at The Mill. This is the first company I’ve worked at where I’ve not been “the one and only female Flame artist.”

At The Mill NY, we currently have 11 full-time female 2D artists working in our team, which has a 30/70 male-to-female ratio. There's still a way to go to get to 50/50, so if I can inspire another female intern or runner who is thinking of becoming a VFX artist or colorist, then it's a good day. Helping the cycle continue for female artists is so important to me.

What is the greatest challenge you’ve faced in your career?
Moving to Shanghai. Not only did I have the challenge of the language barrier to overcome but also the culture — from having lunch at noon to working with clients from a completely different background than mine. I had to learn all I could about the Chinese culture to help me connect with my clients.

Covergirl with Issa Rae

Out of all of the projects you’ve worked on, which one are you the most proud of?
There are many, but one that stands out is the Covergirl brand relaunch (2018) for director Matt Lambert at Prettybird. As an artist working on high-profile beauty brands, what they stand for is very important to me. I know every young girl will want to use makeup to make herself feel great, but it's so important to make sure young women are using it for the right reasons. The new tagline "I am what I make-up" — together with a very diverse group of female ambassadors — was such a positive message to put out into the world.

There was also 28 Weeks Later, a feature film from director Juan Carlos Fresnadillo. My first time working on a feature was an amazing experience, and I made lifelong friends on that project. My technical abilities as an artist grew so much that year, from learning the patience needed to work on the same shot for two months to discovering the technical difficulties of compositing fire in order to blow up parts of London. Such fun!

Finally, there was also a spot for the Target Summer 2019 campaign. It was directed by Whitelabel's Lacey, with whom I collaborate on a lot of projects. Tristan Sheridan was the DP, and the agency was Mother NY.

Target Summer Campaign

What advice do you have for a young professional trying to break into the industry?
Try everything. Don’t get pigeonholed into one area of the industry too early on. Learn about every part of the post process; it will be so helpful to you as you progress through your career.

I was lucky my first boss in the industry (Dave Cadle) was patient and gave me time to find out what I wanted to focus on. I try to be a positive mentor to the young runners and interns at The Mill, especially the young women. I was so lucky to have had female role models throughout my career, from the person that employed me to the first person that started training me on Flame. I know how important it is to see someone like you in a role you are thinking of pursuing.

Outside of work, how do you enjoy spending your free time?
I travel as much as I can. I love learning about new cultures; it keeps me grounded. I live in New York City, which is a bubble, and if you stay here too long, you start to forget what the real world looks like. I also try to give back when I can. I’ve been helping a director friend of mine with some films focusing on the issue of female homelessness around the world. We collaborated on some lovely films about women in LA and are currently working on some London-based ones.

You can find out more here.

Anne Trotman Image: Photo by Olivia Burke

Brittany Howard music video sets mood with color and VFX

The latest collaboration between Framestore and director Kim Gehrig is for Brittany Howard’s debut solo music video for Stay High, which features a color grade and subtle VFX by the studio. A tribute to the Alabama Shakes’ lead singer’s late father, the stylized music video stars actor Terry Crews (Brooklyn Nine-Nine, The Expendables) as a man finishing a day’s work and returning home to his family.

Produced by production company Somesuch, the aim of Stay High is to present a natural and emotionally driven story that honors the singer’s father, K.J. Howard. Shot in her hometown of Nashville, the music video features Howard’s family and friends while the singer pops up in several scenes throughout the video as different characters.

The video begins with Howard's father getting off work at his factory job. The camera follows him on his drive home, all the while he's singing "Stay High." As he drives, we see images of the people and locations where Howard grew up. The video ends when her dad pulls into his driveway and is met by his daughters and wife.

“Kim wanted to really highlight the innocence of the video’s story, something I kept in mind while grading the film,” says Simon Bourne, Framestore’s head of creative color, who’s graded several films for the director. “The focus needed to always be on Terry with nothing in his surroundings distracting from that and the grade needed to reflect that idea.”

Framestore’s creative director Ben Cronin, who was also a compositor on the project along with Nuke compositor Christian Baker, adds, “From a VFX point of view, our job was all about invisible effects that highlighted the beautiful job that Ryley Brown, the film’s DP, did and to complement Kim’s unique vision.”

"We've worked with Kim on several commercials and music video projects, and we love collaborating because her films are always visually interesting and she knows we'll always help achieve the ground-breaking and effortlessly cool work that she does."

Jody Madden upped to CEO at Foundry

Jody Madden, who joined Foundry in 2013 and has held positions as chief operating officer and, most recently, chief customer officer and chief product officer, has been promoted to chief executive officer. She takes over the role from Craig Rodgerson.

Madden, who has a rich background in VFX, has been with Foundry for six years. Prior to joining the company, she spent more than a decade in technology management and studio leadership roles at Industrial Light & Magic, Lucasfilm and Digital Domain after graduating from Stanford University.

"During a time of rapid change in creative industries, Foundry is committed to delivering innovations in workflow and future-looking research," says Madden. "As the company continues to grow, delivering further improvements in speed, quality and user experience remains a core focus to enable our customers to meet the demands of their markets."

“Jody is well known for her collaborative leadership style and this has been crucial in enabling our engineering, product and research teams to achieve results for our customers and build the foundation for the future,” says Simon Robinson, co-founder/chief scientist. “I have worked closely with Jody and have seen the difference she has made to the business so I am extremely excited to see where she will lead Foundry in her new role and look forward to continuing to work with her.”

Review: FXhome’s HitFilm Pro 12 for editing, compositing, VFX

By Brady Betzel

If you have ever worked in Adobe Premiere Pro, Apple FCP X or Avid Media Composer and wished you could just flip a tab and be inside After Effects, with access to 3D objects directly in your timeline, you are going to want to take a look at FXhome’s HitFilm Pro 12.

Similar to how Blackmagic brought Fusion inside of its most recent versions of DaVinci Resolve, HitFilm Pro offers a nonlinear editor, a composite/VFX suite and a finishing suite combined into one piece of software. Haven’t heard about HitFilm yet? Let me help fill in some blanks.

Editing and 3D model Import

What is HitFilm Pro 12?
Technically, HitFilm Pro 12 is a non-subscription-based nonlinear editor, compositor and VFX suite that costs $299. Not only does that price include 12 months of updates and tech support, but one license can be used on up to three computers simultaneously. In my eyes, HitFilm Pro is a great tool set for independent filmmakers, social media content generators and any editor who goes beyond editing and dives into topics like 3D modeling, tracking, keying, etc. without having to necessarily fork over money for a bunch of expensive third-party plugins. That doesn’t mean you won’t want to buy third-party plugins, but you are less likely to need them with HitFilm’s expansive list of native features and tools.

At my day job, I use Premiere, After Effects, Media Composer and Resolve. I often come home and want to work in something that has everything inside, and that is where HitFilm Pro 12 lives. Not only does it have the professional functionality that I am used to, such as trimming, color scopes and more, but it also has BorisFX’s Mocha planar tracking plugin built in for no extra cost. This is something I use constantly and love.

One of the most interesting and recent updates to HitFilm Pro 12 is the ability to use After Effects plugins. Not all plugins will work since there are so many, but in a video released after NAB 2019, HitFilm said plugins like Andrew Kramer’s Video CoPilot Element3D and ones from Red Giant are on the horizon. If you are within your support window, or you continue to purchase HitFilm, FXhome will work with you to get your favorite After Effects plugins working directly inside of HitFilm.

Timeline and 3D model editor

Some additional updates to HitFilm Pro 12 include a completely redesigned user interface that resembles Premiere Pro… kind of. Threaded rendering has also been added, so Windows users with Intel and Nvidia hardware will see increased GPU speeds. There's also the ability to add titles directly in the editor, and more.

The Review
So how does HitFilm Pro 12 compare to today's modern software packages? That is an interesting question. I have become more and more of a Resolve convert over the past two years, so I am constantly comparing everything to that. In addition, being an Avid user for over 15 years, I am used to a rock-solid NLE with only a few hiccups here and there. In my opinion, HitFilm Pro 12 lands itself right where Premiere and FCP X live.

It feels prosumer-y, in a YouTuber or content-generator capacity. Would it stand up to 10 hours of abuse with content over 45 minutes? It probably would, but much like with Premiere, I would split my edits into scenes or acts to avoid slowdowns, especially when importing things like OBJ files or composites.

The nonlinear editor portion feels like Premiere and FCP X had a baby, but left out FCP X’s Magnetic Timeline feature. The trimming in the timeline feels smooth, and after about 20 minutes of getting comfortable with it I felt like it was what I am generally used to. Cutting in footage feels good using three-point edits or simply dragging and dropping. Using effects feels very similar to the Adobe world, where you can stack them on top of clips and they each affect each other from the top down.

Mocha within HitFilm Pro

Where HitFilm Pro 12 shines is in the inclusion of typically third-party plugins directly in the timeline. From creating a scene with 3D cameras and particle generators to tracking with BorisFX's Mocha, HitFilm Pro 12 has many features that will help take your project to the next level. With its true 3D cameras, you can take flat text and enhance it with raytraced lighting, shadows and even textures. You can also use the included BorisFX Continuum 3D Objects to make great titles relatively easily. To take it a step further, you can even track and animate them.

Color Tools
By day, I am an online editor/colorist who deals with the finishing aspect of media creation. Throughout the process, from color correction to exporting files, I need tools that are not only efficient but accurate. When I started to dig into the color correction side of HitFilm Pro 12, things slowed down for me. The color correction tools are very close to what you’ll find in other NLEs, like Premiere and FCP X, but they don’t quite rise to the level of Resolve. HitFilm Pro 12 does operate inside of a 32-bit color pipeline, which really helps avoid banding and other errors when color correcting. However, I didn’t feel that the toolset was making me more efficient; in fact, it was the opposite. I felt like I had to learn FXhome’s way of doing it. It wasn’t that it totally slowed me down, but I felt it could be better.
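
To see why that 32-bit float pipeline matters, here's a minimal sketch (plain NumPy, hypothetical values): grade a smooth ramp once in an 8-bit pipeline and once in float. Quantizing before the grade merges shadow codes, and the missing levels are exactly what shows up on screen as banding.

```python
import numpy as np

# A smooth 0-1 ramp graded with a 2.2 gamma, via an 8-bit pipeline
# versus a float pipeline that only quantizes at final output.
ramp = np.linspace(0.0, 1.0, 1920, dtype=np.float32)

eight_bit = np.round(ramp * 255) / 255            # quantize before the grade
graded_8bit = np.round((eight_bit ** 2.2) * 255)  # quantize again after

graded_float = np.round((ramp ** 2.2) * 255)      # quantize once, at output

# The float path lands on far more distinct output levels; the 8-bit
# path collapses shadow values together, which reads as banding.
print(len(np.unique(graded_8bit)), "vs", len(np.unique(graded_float)))
```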

Color

Summing Up
In the end, HitFilm Pro 12 will fill a lot of holes for individual content creators. If you love learning new things (like I do), then HitFilm Pro 12 will be a good investment of your time. In fact, FXhome posts tons of video tutorials on all sorts of good and topical stuff, like how to create a Stranger Things intro title.

If you are a little more inclined to work with a layer-based workflow, like in After Effects, then HitFilm Pro 12 is the app you'll want to learn. Check out HitFilm Pro 12 on FXhome's website and definitely watch some of the company's informative tutorials.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.


Glassbox’s virtual camera toolset for Unreal, Unity, Maya

Virtual production software company Glassbox Technologies has released its virtual camera plugin DragonFly from private beta for public use. DragonFly offers professional virtual cinematography tools to filmmakers and content creators, allowing users to view character performances and scenes within computer-generated virtual environments in realtime, through the camera’s viewfinder, an external monitor or iPad.

Available for Unreal Engine, Unity 3D and Autodesk Maya, DragonFly delivers an inclusive virtual cinematography workflow that allows filmmakers and content creators to make and test creative decisions faster and earlier in the process, whittling down production cost on projects of all scopes and sizes.

This off-the-shelf toolkit allows users to create previz to postviz without the need for large teams of operators, costly hardware or proprietary tools. It is platform-agnostic and fits seamlessly into any workflow out of the box. Users can visualize and explore a CG virtual environment, then record, bookmark, create snapshots and replicate real camera movement as seamlessly as conducting a live-action shoot.

“Virtual production poses great potential for creators, but there were no off-the-shelf filming solutions available that worked out of the box,” notes co-founder/CPO Mariana Acuña. “In response, we made DragonFly: a virtual window that allows users to visualize complex sets, environments and performances through a viewfinder. Without the need for a big stage or mocap crew, it brings greater flexibility to the production and post pipeline for films, animation, immersive content, games and realtime VFX.”

The product was developed in collaboration with top Hollywood visualization and production studios, including The Third Floor for best-in-class results.

"Prior to DragonFly, each studio created its own bespoke virtual production workflow, which is costly and time-consuming per project. DragonFly makes realtime virtual production usable for all creators," says Evelyn Cover, global R&D manager for The Third Floor. "We're excited to collaborate with the Glassbox team to develop and test DragonFly in all kinds of production scenarios from previz to post, with astounding success."

Glassbox's second virtual production software solution, BeeHive, a multi-platform, multi-user tool for collaborative virtual scene syncing, editing and review, is currently in beta and slated to launch later this summer.

DragonFly is now available for purchase or can be downloaded as a free 15-day trial from the Glassbox website. Pricing and licensing options include a permanent license at $750 (including $250 for the first year of support and updates) and an annual rental at $420 per year.

Assimilate Scratch 9.1: productivity updates, updated VFX workflow

Assimilate's Scratch 9.1 dailies and finishing software now includes extensive new performance and productivity features, including integration with Foundry Nuke and Adobe After Effects. It's available now.

“A primary goal for us is to quickly respond to the needs of DITs and post artists, whether it’s for more advanced features, new format support, or realtime bug-fixes,” said Mazze Aderhold, Scratch product manager at Assimilate. “Every feature introduced in Scratch 9.1 is based on feedback we received from our users before and during the beta cycle.”

The software now features native touch controls for grading by clicking and dragging directly on the image. Thanks to this intuitive way to color and manipulate images, an artist can grade the overall image or even control curves and secondaries — all without a panel and directly where the cursor is dragging.

There is also a redesigned color management system, enabling deep control over how camera-specific gamut and gamma spaces are handled and converted. Additionally, there is a new color-space conversion plugin (any color space to any other) that can be applied at any stage of the color/mastering process.
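
As a rough illustration of what a conversion like that does under the hood (this is generic colorimetry, not Scratch's implementation), moving between two RGB gamuts is a pair of 3x3 matrix transforms routed through CIE XYZ:

```python
import numpy as np

# Published RGB -> XYZ matrices (D65 white) for linear sRGB/Rec.709
# primaries and Display P3 primaries.
SRGB_TO_XYZ = np.array([[0.4123908, 0.3575843, 0.1804808],
                        [0.2126390, 0.7151687, 0.0721923],
                        [0.0193308, 0.1191948, 0.9505322]])
P3_TO_XYZ = np.array([[0.4865709, 0.2656677, 0.1982173],
                      [0.2289746, 0.6917385, 0.0792869],
                      [0.0000000, 0.0451134, 1.0439444]])

def srgb_to_display_p3(rgb):
    """Convert linear sRGB pixels, shape (..., 3), to linear Display P3."""
    m = np.linalg.inv(P3_TO_XYZ) @ SRGB_TO_XYZ   # route through XYZ
    return rgb @ m.T

# Pure sRGB red sits inside the wider P3 gamut, so it comes back off
# the P3 red primary with small green/blue components mixed in.
print(srgb_to_display_p3(np.array([1.0, 0.0, 0.0])))
```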

Also new is integration with After Effects and Nuke. Within Scratch, users can now seamlessly send shots to and from Nuke and After Effects, including transparencies and alphas. This opens up Scratch to high-end tracking, compositing, 3D models, advanced stabilization, motion graphics and more.

Within the VFX pipeline, Scratch can act as a central hub for all finishing needs. It provides realtime tools for any format, data management, playback and all color management in a timeline with audio, including to and from After Effects and Nuke.

Other new features include:

• Integration with Avid, including all metadata in the Avid MXF. Additionally, Scratch includes all the source-shot metadata, such as the genuine Sound TC in Avid MXF, which is important later on in post for something like a Pro Tools roundtrip
• Per-frame metadata on ARRIRAW files, allowing camera departments to pass through camera roll and tilt, lens focus distance metadata items, and more. Editorial and VFX teams can benefit from per-frame info later in the post process.
• Faster playback and rendering
• Realtime, full-res Red 8K DeBayer on GPU
• A deep set of options to load media, including sizing options, LUTs and automatic audio-sync, speeding up the organizational process when dealing with large amounts of disparate media
• A LUT cycler that allows for quick preview and testing of large numbers of looks on footage
• Preset outputs for Pix, Dax, MediaSilo and Copra, simplifying the delivery of industry-standard web dailies
• Vector tool for advanced color remapping using a color grid
• Automatic installation of free Matchbox Shaders, opening Scratch up to a wealth of realtime VFX effects, including glows, lens effects, grain add/remove, as well as more advanced creative FX
• Built-in highlight glow, diffusion, de-noise and time-warp FX
• Added support for AJA’s Io 4K Plus and Kona 5 SDI output devices using the latest SDKs.
• Support for Apple’s new ProRes RAW compressed-acquisition format and Blackmagic RAW support on both OS X and Windows

Scratch 9.1 starts at $89 monthly and $695 annually.

Conductor boosts its cloud rendering with Amazon EC2

Conductor Technologies’ cloud rendering platform will now support Amazon Web Services (AWS) and Amazon Elastic Compute Cloud (Amazon EC2), bringing the virtual compute resources of AWS to Conductor customers. This new capability will provide content production studios working in visual effects, animation and immersive media access to new, secure, powerful resources that will allow them — according to the company — to quickly and economically scale render capacity. Amazon EC2 instances, including cost-effective Spot Instances, are expected to be available via Conductor this summer.

“Our goal has always been to ensure that Conductor users can easily access reliable, secure instances on a massive scale. AWS has the largest and most geographically diverse compute, and the AWS Thinkbox team, which is highly experienced in all facets of high-volume rendering, is dedicated to M&E content production, so working with them was a natural fit,” says Conductor CEO Mac Moore. “We’ve already been running hundreds of thousands of simultaneous cores through Conductor, and with AWS as our preferred cloud provider, I expect we’ll be over the million simultaneous core mark in no time.”

Simple to deploy and highly scalable, Conductor is equally effective as an off-the-shelf solution or customized to a studio's needs through its API. Conductor's intuitive UI and accessible analytics provide a wealth of insightful data for keeping studio budgets on track. Apps supported by Conductor include Autodesk Maya and Arnold; Foundry's Nuke, Cara VR, Katana, Modo and Ocula; Chaos Group's V-Ray; Pixar's RenderMan; Isotropix's Clarisse; Golaem; Ephere's Ornatrix; Yeti; and Miarmy. Additional software and plug-in support is in progress and may be available upon request.
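
For a sense of what submitting work through an API like that involves, here is a deliberately hypothetical sketch; every field name below is invented for illustration and is not Conductor's actual client schema:

```python
# Hypothetical render-job payload -- invented field names, not
# Conductor's real API. It shows the kind of parameters a cloud
# render submission typically carries.
job = {
    "software": ["maya-2019", "arnold"],   # packages the farm must provide
    "instance_cores": 64,                  # size of machine to rent
    "preemptible": True,                   # e.g. EC2 Spot, for lower cost
    "frame_range": "1-240",
    "chunk_size": 10,                      # frames rendered per instance
    "upload_paths": ["/projects/shot_010/scene.ma"],
    "output_path": "/renders/shot_010",
}
```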

Some background on Conductor: it’s a secure cloud-based platform that enables VFX, VR/AR and animation studios to seamlessly offload rendering and simulation workloads to the public cloud. As the only rendering service that is scalable to meet the exact needs of even the largest studios, Conductor easily integrates into existing workflows, features an open architecture for customization, provides data insights and can implement controls over usage to ensure budgets and timelines stay on track.

Technicolor opens prepro studio in LA

Technicolor is opening a new studio in Los Angeles dedicated to creating a seamless pipeline for feature projects — from concept art and visualization through virtual production, production and into final VFX.

As new distribution models increase the demand for content, Technicolor Pre-Production will provide the tools, the talent and the space for creatives to collaborate from day one of their project – from helping set the vision at the start of a job to ensuring that the vision carries through to production and VFX. The result is a more efficient filmmaking process.

Technicolor Pre-Production studio is headed by Kerry Shea, an industry veteran with over 20 years of experience. She is no stranger to this work, having held executive positions at Method Studios, The Third Floor, Digital Domain, The Jim Henson Company, DreamWorks Animation and Sony Pictures Imageworks.

Kerry Shea

Credited on more than 60 feature films including The Jungle Book, Pirates of the Caribbean: Dead Men Tell No Tales and Guardians of the Galaxy Vol. 2, Shea has an extensive background in VFX and post production, as well as live action, animatronics and creature effects.

While the Pre-Production studio stands apart from Technicolor’s visual effects studios — MPC Film, Mill Film, MR. X and Technicolor VFX — it can work seamlessly in conjunction with one or any combination of them.

The Technicolor Pre-Production Studio will comprise five key departments:
– The Business Development Department will work with clients, from project budgeting to consulting on VFX workflows, to help plan and prepare projects for a smooth transition into VFX.
– The VFX Supervisors Department will offer creative supervision across all aspects of VFX on client projects, whether delivered by Technicolor’s studios or third-party vendors.
– The Art Department will work with clients to understand their vision – including characters, props, technologies, and environments – creating artwork that delivers on that vision and sets the tone for the rest of the project.
– The Virtual Production Department will partner with filmmakers to bridge the gap between them and VFX through the production pipeline. Working on the ground and on location, the department will deliver a fully integrated pipeline and shooting services with the flexibility of a small, manageable team — allowing critical players in the filmmaking process to collaborate, view and manipulate media assets and scenes across multiple locations as the production process unfolds.
– The Visualization Department will deliver visualizations that will assist in achieving on screen exactly what clients envisioned.

“With the advancements of tools and technologies, such as virtual production, filmmaking has reached an inflection point, one in which storytellers can redefine what is possible on-set and beyond,” says Shea. “I am passionate about the increasing role and influence that the tools and craft of visual effects can have on the production pipeline and the even more important role in creating more streamlined and efficient workflows that create memorable stories.”

EP Nick Litwinko leads Nice Shoes’ new long-form VFX arm

NYC-based creative studio Nice Shoes has hired executive producer Nick Litwinko to lead its new film and episodic VFX division. Litwinko, who has built his career on bringing a serial entrepreneur's approach to the development of creative studios, will grow the division, recruiting talent to bring a boutique, collaborative approach to visual effects for long-form entertainment, with a focus on feature film and episodic projects.

Since coming on board with Nice Shoes, Litwinko and his team already have three long-form projects underway and will continue working to sign on new talent.

Litwinko launched his career at MTV during the height of its popularity, working as a senior producer for MTV Promos/Animation before stepping up as executive producer/director for MTV Commercials. His decade-long tenure led him to launch his own company, Rogue Creative, where he served dual roles as EP and director and oversaw a wide range of animated, live-action and VFX-driven branded campaigns. He was later named senior producer for Psyop New York before launching the New York office of Blind. He moved on to join the team at First Avenue Machine as executive producer/head of production. He was then recruited to join Shooters Inc. as managing director, leading a strategic rebrand, building the company’s NYC offices and playing an instrumental part in the rebrand to Alkemy X.

Behind the Title: Artifex VFX supervisor Rob Geddes

NAME: Rob Geddes

COMPANY: Artifex Studios (@artifexstudios)

CAN YOU DESCRIBE YOUR COMPANY?
Artifex is a small to mid-sized independent VFX studio based in Vancouver, BC. We’ve built up a solid team over the years, with very low staff turnover. We try our best to be an artist-centric shop.

That probably means something different to everyone, but for me it means ensuring that people are being challenged creatively, supported as they grow their skills and encouraged to maintain a healthy work-life balance.

WHAT’S YOUR JOB TITLE?
VFX Supervisor

WHAT DOES THAT ENTAIL?
I guess the simplest explanation is that I have to interpret the needs and requests of our clients, and then provide the necessary context and guidance to our team of artists to bring those requests to life.

Travelers – “Ave Machina” episode

I have to balance the creative and technical challenges of the work, and work within the constraints of budget, schedule and our own studio resources.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The seemingly infinite number of decisions and compromises that must be made each day, often with incomplete information.

HOW LONG HAVE YOU BEEN WORKING IN VFX?
I started out back in 2000 as a 3D generalist. My first job was building out environments in 3ds Max for a children’s animated series. I spent some years providing 3D assets, animation and programming to various military and private sector training simulations. Eventually, I made the switch over to the 2D side of things and started building up my roto, paint and compositing skills. This led me to Vancouver, and then to Artifex.

HOW HAS THE VFX INDUSTRY CHANGED IN THE TIME YOU’VE BEEN WORKING? 
The biggest change I have seen over the years is the growth in demand for content. All of the various content portals and streaming services have created this massive appetite for new stories. This has brought new opportunities for vendors and artists, but it’s not without challenges. The quality bar is always being raised, and the push to 4K for broadcast puts a lot of pressure on pipelines and infrastructure.

WHY DO YOU LIKE BEING ON SET FOR SHOTS? WHAT ARE THE BENEFITS?
As the in-house VFX supervisor for Artifex, I don’t end up on set — though there have been projects for which we were brought in prior to shooting and could help drive the creative side of the VFX in support of the storytelling. There’s really no substitute for getting all of the context behind what was shot in order to help inform the finished product.

DID A PARTICULAR FILM INSPIRE YOU ALONG THIS PATH IN ENTERTAINMENT?
When I was younger, I always assumed I would end up in classical animation. I devoured all of the Disney classics (Beauty and the Beast, The Lion King, etc.). Jurassic Park was a huge eye-opener though, and seeing The Matrix for the first time made it seem like anything was possible in VFX at that point.

DID YOU GO TO FILM SCHOOL?
Not film school specifically. Out of high school I still wasn’t certain of the path I wanted to take. I went to university first and ended up with a degree in math and computing science. By the time I left university I was convinced that animation and VFX were what I wanted. I worked through two diploma programs in 3D modeling, animation and film production.

WHAT’S YOUR FAVORITE PART OF THE JOB?
The best part of the job for me is seeing the evolution of a shot, as a group of artists come together to solve all of the creative and technical challenges.

WHAT’S YOUR LEAST FAVORITE?
Realizing the limits of what can be accomplished on any given day and then choosing what has to be deferred.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
That’s a tough one. When I wasn’t working in VFX, I was working toward it. I’m obsessed with video game development, and I like to write, so maybe in an alternate timeline I’d be doing something like that.

Zoo

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
This past year has been a pretty busy one. We’ve been on Travelers and The Order for Netflix, The Son for AMC, Project Blue Book for A&E, Kim Possible for Disney, Weird City for YouTube, and a couple of indie features for good measure!

WHAT IS THE PROJECT/S THAT YOU ARE MOST PROUD OF?
I’m a big fan of our work on Project Blue Book. It was an interesting challenge to contribute to a project with historical significance and I think our team really rose to the occasion.

WHAT TOOLS DO YOU USE DAY TO DAY?
At Artifex we run our shows through ftrack for reviews and management, so I spend a lot of time in the browser keeping tabs on things. For daily communication we use Slack and email. I use Google Docs for organizational stuff. I pop into Foundry Nuke to test out some things or to work with an artist. I use Photoshop or Affinity Photo on the iPad to do draw-overs and give notes.

WHERE DO YOU FIND INSPIRATION NOW?
It's such an incredible time to be a visual artist. I try to keep an eye on work getting posted from around the world on sites like ArtStation and Instagram. I draw on current films, but also on other visual mediums like graphic novels, video games and photography. Great ideas can come from anywhere.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I play a lot of video games, drink a lot of tea, and hang out with my daughter.

SIGGRAPH making-of sessions: Toy Story 4, GoT, more

The SIGGRAPH 2019 Production Sessions program offers attendees a behind-the-scenes look at the making of some of the year’s most impressive VFX films, shows, games and VR projects. The 11 production sessions will be held throughout the conference week of July 28 through August 1 at the Los Angeles Convention Center.

With 11 total sessions, attendees will hear from creators who worked on projects such as Toy Story 4, Game of Thrones, The Lion King and First Man.

Other highlights include:

Swing Into Another Dimension: The Making of Spider-Man: Into the Spider-Verse
This production session will explore the art and innovation behind the creation of the Academy Award-winning Spider-Man: Into the Spider-Verse. The filmmaking team behind the first-ever animated Spider-Man feature film took significant risks to develop an all-new visual style inspired by the graphic look of comic books.

Creating the Immersive World of BioWare’s Anthem
The savage world of Anthem is volatile, lush, expansive and full of unexpected characters. Bringing these aspects to life in a realtime, interactive environment presented a wealth of problems for BioWare’s technical artists and rendering engineers. This retrospective panel will highlight the team’s work, alongside reflections on innovation and the successes and challenges of creating a new IP.

The VFX of Netflix Series
From the tragic tales of orphans to a joint force of super siblings to sinister forces threatening 1980s Indiana, the VFX teams on Netflix series have delivered some of the year's best visuals. Creatives behind A Series of Unfortunate Events, The Umbrella Academy and Stranger Things will present the work and techniques that brought these worlds and characters into being.

The Making of Marvel Studios’ Avengers: Endgame
The fourth installment in the Avengers saga is the culmination of 22 interconnected films and has drawn audiences to witness the turning point of this epic journey. SIGGRAPH 2019 keynote speaker Victoria Alonso will join Marvel Studios, Digital Domain, ILM and Weta Digital as they discuss how the diverse collection of heroes, environments and visual effects was assembled into this ultimate, climactic final chapter.

Space Explorers — Filming VR in Microgravity
Felix & Paul Studios, along with collaborators from NASA and the ISS National Lab, share insights from one of the most ambitious VR projects ever undertaken. In this session, the team will discuss the background of how this partnership came to be before diving into the technical challenges of capturing cinematic virtual reality on the ISS.

Production Sessions are open to conference participants with Select Conference, Full Conference or Full Conference Platinum registrations. The Production Gallery can be accessed with an Experiences badge and above.

Review: Red Giant’s VFX Suite plugins

By Brady Betzel

If you have ever watched After Effects tutorials, you are bound to have seen the people who make up Red Giant. There is Aharon Rabinowitz, who you might mistake for a professional voiceover talent; Seth Worley, who can combine a pithy sense of humor and over-the-top creativity seamlessly; and my latest man-crush Daniel Hashimoto, better known as “Action Movie Dad” of Action Movie Kid.

In these videos, these talented pros show off some amazing things they created using Red Giant’s plugin offerings, such as the Trapcode Suite, the Magic Bullet Suite, Universe and others.

Now, Red Giant is trying to improve your visual effects workflow even further with the new VFX Suite for Adobe After Effects (although some work in Adobe Premiere as well).

The new VFX Suite is a compositing-focused toolkit that will complement many aspects of your work, from greenscreen keying to motion graphics compositing with tools such as Video Copilot's Element 3D. Whether you want to seamlessly composite light and atmospheric fog with fewer pre-composites, add a reflection to an object easily or even just have a better greenscreen keyer, the VFX Suite will help.

The VFX Suite includes Supercomp, Primatte Keyer 6, King Pin Tracker, Spot Clone Tracker, Optical Glow, Chromatic Displacement, Knoll Light Factory 3.1, Shadow and Reflection. The VFX Suite is priced at $999 unless you qualify for the academic discount, which means you can get it for $499.

In this review, I will go over each of the plugins within the VFX Suite. Up first will be Primatte Keyer 6.

Overall, I love Red Giant’s GUIs. They seem to be a little more intuitive, allowing me to work more “creatively” as opposed to spending time figuring out technical issues.

I asked Red Giant what makes VFX Suite so powerful, and Rabinowitz, head of marketing for Red Giant and general post production wizard, shared this: "Red Giant has been helping VFX artists solve compositing challenges for over 15 years. For VFX Suite, we looked at those challenges with fresh eyes and built new tools to solve them with new technologies. Most of these tools are built entirely from scratch. In the case of Primatte Keyer, we further enhanced the UI and sped it up dramatically with GPU acceleration. Primatte Keyer 6 becomes even more powerful when you combine the keying results with Supercomp, which quickly turns your keyed footage into beautifully comped footage."

Primatte Keyer 6
Primatte is a chromakey/single-color keying technology used in tons of movies and television shows. I got familiar with Primatte when BorisFX included it in its Continuum suite of plugins. Once I used Primatte and learned the intricacies of extracting detail from hair and even just using their auto-analyze function, I never looked back. On occasion, Primatte needs a little help from others, like Keylight, but I can usually pull easy and tough keys all within one or two instances of Primatte.

If you haven’t used Primatte before, you essentially pick your key color by drawing a line or rectangle around the color, adjust the detail and opacity of the matte, and — boom — you’re done. With Primatte 6 you now also get Core Matte, a new feature that draws an inside mask automatically while allowing you to refine the edges — this is a real time-saver when doing hundreds of interview greenscreen keys, especially when someone decides to wear a reflective necklace or piece of jewelry that usually requires an extra mask and tracking. Primatte 6 also adds GPU optimization, gaining even more preview and rendering speed than previous versions.
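
Primatte's own color-classification algorithm is proprietary, but the core idea of single-color keying can be sketched in a few lines: measure each pixel's distance from the sampled screen color and turn that distance into a soft matte. A minimal stand-in, assuming linear float RGB:

```python
import numpy as np

def color_distance_matte(img, key_rgb, tolerance=0.15, softness=0.1):
    """Soft matte from per-pixel distance to a sampled key color.

    img:     float32 array, shape (H, W, 3), values 0-1
    key_rgb: the screen color picked from the plate
    """
    dist = np.linalg.norm(img - np.asarray(key_rgb, np.float32), axis=-1)
    # Pixels within `tolerance` of the key go fully transparent (0),
    # then ramp to fully opaque (1) over the `softness` range.
    return np.clip((dist - tolerance) / softness, 0.0, 1.0)

plate = np.zeros((4, 4, 3), np.float32)
plate[..., 1] = 0.8                                  # stand-in greenscreen
alpha = color_distance_matte(plate, key_rgb=(0.0, 0.8, 0.0))
```

Real keyers layer spill suppression and edge refinement on top of a matte like this, which is where features such as Core Matte earn their keep.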

Supercomp
If you are an editor like me — who knows enough to be dangerous when compositing and working within After Effects — sometimes you just want (or need) a simpler interface without having to figure out all the expressions, layer order, effects and compositing modes to get something to look right. And if you are an Avid Media Composer user, you might have encountered the Paint Effect Tool, which is one of those one-for-all plugins. You can paint, sharpen, blur and much more from inside one tool, much like Supercomp. Think of the Supercomp interface as a Colorista or Magic Bullet Looks-type interface, where you can work with composite effects such as fog, glow, lights, matte chokers, edge blend and more inside of one interface with much less pre-composing.

The effects are all GPU-accelerated and are context-aware. Supercomp is a great tool to use with your results from the Primatte Keyer, adding in atmosphere and light wraps quickly and easily inside one plugin instead of multiple.

King Pin Tracker and Spot Clone Tracker
As an online editor, I am often tasked with sign replacements, paint-outs of crew or cameras in shots, as well as other cleanup. If I can't accomplish what I want with BorisFX Continuum while using Mocha inside of Media Composer or Blackmagic's DaVinci Resolve, I will jump over to After Effects and try my hand there. I don't practice as much corner pinning as I would like, so I often forget the intricacies of tracking in Mocha and copying Corner Pin or Transform data to After Effects. This is where the new King Pin Tracker can ease any difficulties, especially when you're corner pinning relatively simple objects but still need to keyframe positions or perform a planar track without juggling multiple plugins or applications.
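
Whatever the plugin, corner pinning comes down to the same projective math: solve for the 3x3 homography that maps the four source corners onto the four tracked corners, then warp the image through it. A compact sketch of that solve (illustrative only, not King Pin Tracker's internals):

```python
import numpy as np

def homography_from_corners(src, dst):
    """Solve the 3x3 projective transform sending 4 src corners to dst.

    src, dst: arrays of shape (4, 2) holding matching (x, y) corners.
    """
    rows, rhs = [], []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); rhs.append(u)
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y]); rhs.append(v)
    h = np.linalg.solve(np.asarray(rows, float), np.asarray(rhs, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, pts):
    pts = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return pts[:, :2] / pts[:, 2:3]          # perspective divide

# Pin a unit square onto a tracked quad
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
quad = np.array([[12, 10], [310, 42], [298, 256], [25, 240]], float)
H = homography_from_corners(square, quad)
print(apply_homography(H, square))           # recovers the quad corners
```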

The Spot Clone Tracker is exactly what it says it is. Much like Resolve's Patch Replace, Spot Clone Tracker allows you to track one area while replacing it with another area of the screen. In addition, Spot Clone Tracker has options to flip vertical, flip horizontal, add noise, and adjust brightness and color values. For such a seemingly simple tool, the Spot Clone Tracker is the dark horse in this race. You'd be surprised how many clone and paint tools don't have adjustments, like flipping and flopping or brightness changes. This is a great tool for quick dead-pixel fixes and painting out GoPros when you don't need to mask anything out. (Although there is an option to "Respect Alpha.")

Optical Glow and Knoll Light Factory 3.1
Have you ever been in an editing session that needed police lights amplified or a nice glow on some text but the stock plugins just couldn’t get it right? Optical Glow will solve this problem. In another amazing, simple-yet-powerful Red Giant plugin, Optical Glow can be applied and gamma-adjusted for video, log and linear levels right off the bat.

From there you can pick an inner tint, outer tint and overall glow color via the Colorize tool and set the vibrance. I really love the Falloff, Highlight Rolloff and Highlights Only functions, which let you fine-tune the glow and control just how much of the image it affects. It's so simple that it is hard to mess up, but the results speak for themselves and render out quicker than other glow plugins I've used.
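
The falloff and rolloff controls are Red Giant's own, but the skeleton of any glow is the same threshold-blur-screen chain. A minimal sketch, not Optical Glow's actual implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simple_glow(img, threshold=0.8, sigma=25.0, intensity=0.6):
    """Threshold the highlights, blur them into a halo, screen it back.

    img: float32 array, shape (H, W, 3), values roughly 0-1.
    """
    highlights = np.clip(img - threshold, 0.0, None)   # bright regions only
    # Blur in x/y but not across the channel axis
    halo = gaussian_filter(highlights, sigma=(sigma, sigma, 0))
    halo = np.clip(halo * intensity, 0.0, 1.0)
    base = np.clip(img, 0.0, 1.0)
    return 1.0 - (1.0 - base) * (1.0 - halo)           # screen blend
```

Rolloff-style controls amount to reshaping `highlights` and `halo` with curves instead of the hard clips used here.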

Knoll Light Factory has been newly GPU-accelerated in Version 3.1 to decrease render times when using its more than 200 presets or when customizing your own lens flares. Optical Glow and Knoll Light Factory really complement each other.

Chromatic Displacement
Since watching an Andrew Kramer tutorial covering displacement, I’ve always wanted to make a video that showed huge seismic blasts but didn’t really want to put the time into properly making chromatic displacement. Lucky for me, Red Giant has introduced Chromatic Displacement! Whether you want to make rain drops appear on the camera lens or add a seismic blast from a phaser, Chromatic Displacement will allow you to offset your background with a glass-, mirror- or even heatwave-like appearance quickly. Simply choose the layer you want to displace from and adjust parameters such as displacement amount, spread and spread chroma, and whether you want to render using the CPU or GPU.
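
Mechanically, this kind of effect warps each color channel through the displacement source by a slightly different amount, which is what produces the glassy color fringing. A simplified, horizontal-only sketch (illustrative, not Red Giant's implementation):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def chromatic_displace(img, disp, amount=12.0, chroma_spread=1.15):
    """Shift each channel along x by the displacement map.

    img:  float32 array, shape (H, W, 3)
    disp: float32 array, shape (H, W), e.g. the luminance of another layer
    """
    h, w, _ = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)
    out = np.empty_like(img)
    for c in range(3):
        # Each successive channel is pushed a little further -> fringing
        offset = disp * amount * (chroma_spread ** c)
        out[..., c] = map_coordinates(img[..., c], [yy, xx + offset],
                                      order=1, mode='nearest')
    return out
```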

Shadow and Reflection
Red Giant packs Shadow and Reflection plugins into the VFX Suite as well. The Shadow plugin not only makes it easy to create shadows in front of or behind an object based on alpha channel or brightness, but, best of all, it gives you an easy way to identify the point where the shadow should bend. The Shadow Bend option lets you identify where the bend exists, what color the bend axis should be, and the type and size of the seam, and it even allows for motion blur.

The Reflection plugin is very similar to the Shadow plugin and produces quick and awesome reflections without any After Effects wizardry. Just like Shadow, the Reflection plugin allows you to identify a bend. Plus, you can adjust the softness of the reflection quickly and easily.

Summing Up
In the end, Red Giant always delivers great and useful plugins. VFX Suite is no different, and the only downside some might point to is the cost. While $999 is expensive, if compositing is a large portion of your business, the efficiency you gain might outweigh the cost.

Much like Shooter Suite does for online editors, Trapcode Suite does for VFX masters and Universe does for jacks of all trades, VFX Suite will take all of your ideas and help them blend seamlessly into your work.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Lenovo intros next-gen ThinkPads

Lenovo has launched the next generation of its ThinkPad P Series with the release of five new machines: the ThinkPad P73, ThinkPad P53, ThinkPad P1 Gen 2, ThinkPad P53s and ThinkPad P43s.

The ThinkPad P53 features the Nvidia Quadro RTX 5000 GPU with RT and Tensor cores, offering realtime raytracing and AI acceleration. It now features Intel Xeon and 9th Gen Core-class CPUs with up to eight cores (including the Core i9), up to 128GB of memory and 6TB of storage.

This mobile workstation also boasts a new OLED touch display with Dolby Vision HDR for superb color and some of the deepest black levels ever. Building on the innovation behind the ThinkPad P1 power supply, Lenovo is also maximizing the portability of this workstation with a 35 percent smaller power supply. The ThinkPad P53 is designed to handle everything from augmented reality and VR content creation to the deployment of mobile AI or ISV workflows. The ThinkPad P53 will be available in July, starting at $1,799.

At 3.74 pounds and 17.2mm thin, Lenovo’s thinnest and lightest 15-inch workstation — the ThinkPad P1 Gen 2 — includes the latest Nvidia Quadro Turing T1000 and T2000 GPUs. The ThinkPad P1 also features eight-core Intel 9th Gen Xeon and Core CPUs and an OLED touch display with Dolby Vision HDR.

The ThinkPad P1 Gen 2 will be available at the end of June starting at $1,949.

With its 17.3-inch Dolby Vision 4K UHD screen and a 35% smaller power adaptor, Lenovo's ThinkPad P73 offers users maximum workspace and mobility. Like the ThinkPad P53, it features Intel Xeon and Core processors and the most powerful Nvidia Quadro RTX graphics. The ThinkPad P73 will be available in August starting at $1,849.

The ThinkPad P43s features a 14-inch chassis and will be available in July starting at $1,499.

Rounding out the line is the ThinkPad P53s which combines the latest Nvidia Quadro graphics and Intel Core processors — all in a thin and light chassis. The ThinkPad P53s will be available in June, starting at $1,499.

For the first time, Lenovo is adding new X-Rite Pantone Factory Color Calibration to the ThinkPad P1 Gen 2, ThinkPad P53 and ThinkPad P73. The unique factory color calibration profile is stored in the cloud to ensure more accurate recalibration. This profile allows for dynamic switching between color spaces, including sRGB, Adobe RGB and DCI-P3 to ensure accurate ISV application performance.

The entire ThinkPad portfolio is also equipped with advanced ThinkShield security features – from ThinkShutter to privacy screens to a self-healing BIOS that recovers when attacked or corrupted – to help protect users from every angle and give them the freedom to innovate fearlessly.

Quick Chat: Sinking Ship’s Matt Bishop on live-action/CG series

By Randi Altman

Toronto’s Sinking Ship Entertainment is a production, distribution and interactive company specializing in children’s live-action and CGI-blended programming. The company has 13 Daytime Emmys and a variety of other international awards on its proverbial mantel. Sinking Ship has over 175 employees across all its divisions, including its VFX and interactive studio.

Matt Bishop

Needless to say, the company has a lot going on. We decided to reach out to Matt Bishop, founding partner at Sinking Ship, to find out more.

Sinking Ship produces, creates visual effects and posts its own content, but are you also open to outside projects?
Yes, we do work in co-production with other companies, and we also contract our post production services to shows that are looking for cutting-edge VFX.

Have you always created your own content?
Sinking Ship has developed a number of shows and feature films, as well as worked in co-production with production companies around the world.

What came first, your post or your production services? Or were they introduced in tandem?
Both sides of the company evolved together as a way to push our creative visions. We started acquiring equipment on our first series in 2004, and we always look for new ways to push the technology.

Can you mention some of your most recent projects?
Some of our current projects include Dino Dana (Season 4), Dino Dana: The Movie, Endlings and Odd Squad Mobile Unit.

What is your typical path getting content from set to post?
We have been working with Red cameras for years, and we were the first company in Canada to shoot in 4K over a decade ago. We shoot a lot of content, so we create backups in the field before the media is sent to the studio.

Dino Dana

You work with a lot of data. How do you manage and keep all of that secure?
Backups, lots of backups. We use a massive LTO-7 tape robot, and we have over 2PB of backup storage on top of that. We recently added Qumulo to our workflow to ensure the most secure method possible.

What do you use for your VFX work? What about your other post tools?
We use a wide range of software, but our main tools in our creature department are Pixologic Zbrush and Foundry Mari, with all animation happening inside Autodesk Maya.

We also have a large renderfarm to handle the amount of shots, and our render engine of choice is Arnold, which is now an Autodesk product. In post we use an Adobe Creative Cloud pipeline, with 4K HDR color grading happening in DaVinci Resolve. Qumulo is going to be a welcome addition as we continue to grow and our outputs become more complex.
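
As a sketch of how a renderfarm carves up an Arnold job like that (the paths and frame ranges here are placeholders, and it assumes Maya's command-line Render binary is on the PATH), each node renders its own slice of the frame range:

```python
import subprocess

def render_chunk(scene, start, end, out_dir):
    """Render one slice of the frame range with Maya's CLI renderer."""
    subprocess.run(
        ["Render", "-r", "arnold",           # select the Arnold renderer
         "-s", str(start), "-e", str(end),   # this node's frame slice
         "-rd", out_dir, scene],
        check=True)

# Split a 240-frame shot into 10-frame chunks for the farm to pick up
chunks = [(s, min(s + 9, 240)) for s in range(1, 241, 10)]
# e.g. render_chunk("/projects/shot/scene.ma", *chunks[0], "/renders/shot")
```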


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Axis provides 1,000 VFX shots for the TV series Happy!

UK-based animation and visual effects house Axis Studios has delivered 1,000 shots across 10 episodes on the second series of the UCP-produced hit Syfy show Happy!.

Based on Grant Morrison and Darick Robertson’s graphic novel, Happy! follows alcoholic ex-cop turned hitman Nick Sax (Christopher Meloni), who teams up with imaginary unicorn Happy (voiced by Patton Oswalt). In the second season, the action moves from Christmastime to “the biggest holiday rebranding of all time” and a plot to “make Easter great again,” courtesy of last season’s malevolent child-kidnapper, Sonny Shine (Christopher Fitzgerald).

Axis Studios, working across its three creative sites in Glasgow, Bristol, and London, collaborated with executive producer and director Brian Taylor and showrunner Patrick Macmanus to raise the bar on the animation of the fully CG character. The studio also worked on a host of supporting characters, including a “chain-smoking man-baby,” a gimp-like Easter Bunny and even a Jeff Goldblum-shaped cloud. Alongside the extensive animation work, the team’s VFX workload greatly increased from the first season — including two additional episodes, creature work, matte painting, cloud simulations, asset building and extensive effects and clean-up work.

Building on the success of the first season, the 100-person team of artists further developed the animation of the lead character, Happy, improving the rig, giving him more nuanced emotions and continually working to integrate him more fully into the real-world environments.

Zoic in growth mode, adds VFX supervisor Wanstreet, ups Overstrom

VFX house Zoic Studios has made changes to its creative team, adding VFX supervisor Chad Wanstreet to its Culver City studio and promoting Nate Overstrom to creative director in its New York studio.

Wanstreet has nearly 15 years of experience in visual effects, working across series, feature film, commercial and video game projects. He comes to Zoic from FuseFX, where he worked on television series including NBC's Timeless, Amazon Prime's The Tick, ABC's Marvel's Agents of S.H.I.E.L.D. and Starz's Emmy-winning series Black Sails.

Overstrom has spent over 15 years of his career with Zoic, working across the Culver City and New York City studios, earning two Emmy nominations and working on top series including Banshee, Maniac and Iron Fist. He is currently the VFX supervisor on Cinemax’s Warrior.

The growth of the creative department is accompanied by the promotion of several Zoic lead artists to VFX supervisors, with Andrew Bardusk, Matt Bramante, Tim Hanson and Billy Spradlin stepping up to lead teams on a wide range of episodic work. Bardusk just wrapped Season 4 of DC’s Legends of Tomorrow, Bramante just wrapped Noah Hawley’s upcoming feature film Lucy in the Sky, Hanson just completed Season 2 of Marvel’s Cloak & Dagger, and Spradlin just wrapped Season 7 of CW’s Arrow.

This news comes on the heels of a busy start of the year for Zoic across all divisions, including the recent announcement of the company’s second development deal — optioning New York Times best-selling author Michael Johnston’s fantasy novel Soleri for feature film and television adaptation. Zoic also added Daniel Cohen as executive producer, episodic and series in New York City, and Lauren F. Ellis as executive producer, episodic and series in Culver City.

Main Image Caption: (L-R) Chad Wanstreet and Nate Overstrom

UK’s Jellyfish adds virtual animation studio and Kevin Spruce

London-based visual effects and animation studio Jellyfish Pictures is opening a new virtual animation facility in Sheffield. The new site is the company's fifth studio in the UK, in addition to its established studios in Fitzrovia, Central London; Brixton, South London; and Oval, South London. This addition is no surprise considering Jellyfish created one of Europe's first virtual VFX studios back in 2017.

With no hardware housed onsite, Jellyfish Pictures' Sheffield studio — situated in the city center within the Cooper Project Complex — will operate in a completely PC-over-IP environment. With all technology and pipeline housed in a centrally based colocation facility, the studio is able to virtualize its distributed workstations through Teradici's remote visualization solution, allowing for total flexibility and scalability.

The Sheffield site will sit on the same logical LAN as the other four studios, providing access to the company’s software-defined storage (SDS) from Pixit Media, enabling remote collaboration and support for flexible working practices. With the rest of Jellyfish Pictures’ studios all TPN-accredited, the Sheffield studio will follow in their footsteps, using Pixit Media’s container solution within PixStor 5.

The innovative studio will be headed up by Jellyfish Pictures' newest appointment, animation director Kevin Spruce. With a career spanning over 30 years, Spruce joins Jellyfish from Framestore, where he oversaw a team of 120 as the company's head of animation. During his time at Framestore, Spruce worked as animation supervisor on feature films such as Fantastic Beasts and Where to Find Them, The Legend of Tarzan and Guardians of the Galaxy. Prior to his 17-year stint at Framestore, Spruce held positions at Canadian animation company Bardel Entertainment and the Spielberg-helmed feature animation studio Amblimation.

Jellyfish Pictures' northern presence will start off with a small team of animators working on the company's original animation projects, with a view to expanding the team and taking on a large feature animation project by the end of the year.

“We have multiple projects coming up that will demand crewing up with the very best talent very quickly,” reports Phil Dobree, CEO of Jellyfish Pictures. “Casting off the constraints of infrastructure, which traditionally has been the industry’s way of working, means we are not limited to the London talent pool and can easily scale up in a more efficient and economical way than ever before. We all know London, and more specifically Soho, is an expensive place to play, both for employees working here and for the companies operating here. Technology is enabling us to expand our horizon across the UK and beyond, as well as offer talent a way out of living in the big city.”

For Spruce, the move made perfect sense: "After 30 years working in and around Soho, it was time for me to move north and settle in Sheffield to achieve a better work-life balance with family. After speaking with Phil, I was excited to discover he was interested in expanding his remote operation beyond London. With what technology can offer now, the next logical step is to bring the work to people rather than always expecting them to move south.

“As animation director for Jellyfish Pictures Sheffield, it’s my intention to recruit a creative team here to strengthen the company’s capacity to handle the expanding slate of work currently in-house and beyond. I am very excited to be part of this new venture north with Jellyfish. It’s a vision of how creative companies can grow in new ways and access talent pools farther afield.”


Amazon’s Good Omens: VFX supervisor Jean-Claude Deguara

By Randi Altman

Good versus evil. It's a story that's been told time and time again, but Amazon's Good Omens turns that trope on its head a bit. With Armageddon approaching, two unlikely heroes and centuries-long frenemies — an angel (Michael Sheen) and a demon (David Tennant) — team up to try to fight off the end of the world. Think buddy movie, but with the fate of the world at stake.

In addition to Tennant and Sheen, the Good Omens cast is enviable — featuring Jon Hamm, Michael McKean, Benedict Cumberbatch and Nick Offerman, just to name a few. The series is based on the 1990 book by Terry Pratchett and Neil Gaiman.

Jean-Claude Deguara

As you can imagine, this six-part end-of-days story features a variety of visual effects, from creatures to environments to particle effects and fire. London’s Milk was called on to provide 650 visual effects shots, and its co-founder Jean-Claude Deguara supervised all.

He was also able to talk directly with Gaiman, which he says was a huge help. “Having access to Neil Gaiman as the author of Good Omens was just brilliant, as it meant we were able to ask detailed questions to get a more detailed brief when creating the VFX and receive such insightful creative feedback on our work. There was never a question that couldn’t be answered. You don’t often get that level of detail when you’re developing the VFX.”

Let’s find out more about Deguara’s process and the shots in the show as he walks us through his collaboration and creating some very distinctive characters.

Can you talk about how early you got involved on Good Omens?
We were involved right at the beginning, pre-script. It’s always the best scenario for VFX to be involved at the start, to maximize planning time. We spent time with director Douglas Mackinnon, breaking down all six scripts to plan the VFX methodology — working out and refining how to best use VFX to support the storytelling. In fact, we stuck to most of what we envisioned and we continued to work closely with him throughout the project.

How did getting involved when you did help the process?
With the sheer volume and variety of work — 650 shots, a five-month post production turnaround and a crew of 60 — the planning and development time in preproduction was essential. The incredibly wide range of work spanned multiple creatures, environments and effects.

Having constant access to Neil as author and showrunner was brilliant as we could ask for clarification and more details from him directly when creating the VFX and receive immediate creative feedback. And it was invaluable to have Douglas working with us to translate Neil’s vision in words onto the screen and plan out what was workable. It also meant I was able to show them concepts the team were developing back in the studio while we were on set in South Africa. It was a very collaborative process.

It was important to have a strong crew across all VFX disciplines, as they worked together on multiple sequences at the same time. So you're starting tracking on one sequence, doing effects on another, and compositing and finishing everything off on a third. It was a big logistical challenge, but certainly the kind that we relish and are well versed in at Milk.

Did you do previs? If so, how did that help and what did you use?
We only used previs to work out how to technically achieve certain shots or to sell an idea to Douglas and Neil. It was generally very simple, using grayscale animation with basic geometry. We used it to do a quick layout of how to rescale the dog to be a bigger hellhound, for example.

You were on set supervising… can you talk about how that helped?
It was a fast-moving production with multiple locations in the UK over about six months, followed by three months in South Africa. It was crucial for the volume and variety of VFX work required on Good Omens that I was across all the planning and execution of filming for our shots.

Being on set allowed me to help solve various problems as we went along. I could also show Neil and Douglas various concepts that were being developed back in the studio, so that we could move forward more quickly with creative development of the key sequences, particularly the challenging ones such as Satan and the Bentley.

What were the crucial things to ensure during the shoot?
Making sure all the preparation was done meticulously for each shot — given the large volume and variety of the environments and sets. I worked very closely with Douglas on the shoot so we could have discussions to problem-solve where needed and find creative solutions.

Can you point to an example?
We had multiple options for shots involving the Bentley, so our advance planning and discussions with Douglas involved pulling out all the car sequences in the series scripts and creating a “mini script” specifically for the Bentley. This enabled us to plan which assets (the real car, the art department’s interior car shell or the CG car) were required and when.

You provided 650 VFX shots. Can you describe the types of effects?
We created everything from creatures (Satan exploding up out of the ground, a kraken, the hellhound, a demon and a snake) to environments (heaven, a penthouse with views of major world landmarks, and a busy Soho street), as well as feathered wings for Michael Sheen's angel Aziraphale and David Tennant's demon Crowley, and a CG Bentley in which Tennant's Crowley hurtles around London.

We also had a large effects team working on a whole range of effects over the six episodes — from setting the M25 and the Bentley on fire to a flaming sword to a call center filled with maggots to a sequence in which Crowley (Tennant) travels through the internet at high speed.

Despite the fantasy nature of the subject matter, it was important to Gaiman that the CG elements did not stand out too much. We needed to ensure the worlds and characters were always kept grounded in reality. A good example is how we approached heaven and hell. These key locations are essentially based around an office block. Nothing too fantastical, but they are, as you would expect, completely different and deliberately so.

Hell is the basement, which was shot in a disused abattoir in South Africa, whilst heaven is a full CG environment located in the penthouse with a panoramic view over a cityscape featuring landmarks such as the Eiffel Tower, The Shard and the Pyramids.

You created many CG creatures. Can you talk about the challenges of that and how you accomplished them?
Many of the main VFX features, such as Satan (voiced by Benedict Cumberbatch), appear only once in the six-part series as the story moves swiftly toward the apocalypse. So we had to strike a careful balance between delivering impact and ensuring they were immediately recognizable and grounded in reality. Given our fast five-month post turnaround, we had our key teams working concurrently on creatures such as the kraken; the hellhound; a small, portly demon called Usher, who meets his demise in a bath of holy water; and the infamous snake in the Garden of Eden.

We have incorporated Ziva VFX into our pipeline, which ensured our rigging and modeling teams could maximize the development and build phases within the timeframe. For example, the muscle, fat and skin simulations are all solved on the renderfarm; the animators can publish a scene and then review the creature effects in dailies the next day.
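
That overnight-solve workflow maps naturally onto a dependency chain of farm jobs: muscle, then fat, then skin, then a dailies render. The sketch below illustrates the pattern in plain Python; the Job class and the sim_solver and make_dailies commands are invented stand-ins for a real render-farm API, not Milk's actual pipeline.

```python
# A minimal sketch of the publish-then-solve idea: each simulation layer
# is queued as a farm job that depends on the previous one, so a scene
# published in the evening is solved and viewable in dailies next morning.
# Job, sim_solver and make_dailies are hypothetical stand-ins.
from dataclasses import dataclass, field

@dataclass
class Job:
    name: str
    command: list[str]
    depends_on: list["Job"] = field(default_factory=list)

def build_creature_sim_jobs(scene: str, start: int, end: int) -> list[Job]:
    frames = f"{start}-{end}"
    muscle = Job("muscle_solve", ["sim_solver", scene, "--layer", "muscle", "--frames", frames])
    fat = Job("fat_solve", ["sim_solver", scene, "--layer", "fat", "--frames", frames], [muscle])
    skin = Job("skin_solve", ["sim_solver", scene, "--layer", "skin", "--frames", frames], [fat])
    dailies = Job("dailies_render", ["make_dailies", scene], [skin])
    return [muscle, fat, skin, dailies]

for job in build_creature_sim_jobs("satan_seq010_v042.ma", 1001, 1180):
    deps = ", ".join(d.name for d in job.depends_on) or "none"
    print(f"submit {job.name} (after: {deps}): {' '.join(job.command)}")
```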

We use our proprietary software CreatureTools for rigging all our creatures. It is a modular rigging package that allows us to very quickly build animation rigs for previs or blocking, while the deformation, muscle and fat rigs are built in Ziva VFX. It means the animators can start work quickly, and there is a lot of consistency between the rigs.
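
To make the modular idea concrete, here is a purely illustrative Python sketch of a rig assembled from a library of reusable module builders. CreatureTools itself is proprietary and not shown here; every name below is invented.

```python
# Hypothetical illustration of modular rigging: each creature requests the
# modules it needs from a shared library, so rigs stay consistent and a
# blocking rig can be assembled in seconds. None of this is CreatureTools.
from typing import Callable

ModuleBuilder = Callable[[str, str], str]

def build_spine(creature: str, label: str) -> str:
    return f"{creature}:{label}_spine_ctrl"

def build_limb(creature: str, label: str) -> str:
    return f"{creature}:{label}_limb_ctrl"

def build_jaw(creature: str, label: str) -> str:
    return f"{creature}:{label}_jaw_ctrl"

MODULE_LIBRARY: dict[str, ModuleBuilder] = {
    "spine": build_spine,
    "limb": build_limb,
    "jaw": build_jaw,
}

def assemble_rig(creature: str, modules: list[tuple[str, str]]) -> list[str]:
    """Build each requested (module, label) pair in order."""
    return [MODULE_LIBRARY[kind](creature, label) for kind, label in modules]

# A quick blocking rig for the hellhound:
print(assemble_rig("hellhound", [("spine", "main"), ("limb", "front_L"),
                                 ("limb", "front_R"), ("jaw", "head")]))
```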

Can you talk about the kraken?
The kraken pays homage to Ray Harryhausen and his work on Clash of the Titans. Our team worked to create the immense scale of the kraken and take water simulations to the next level. The top half of the kraken body comes up out of the water and we used a complex ocean/water simulation system that was originally developed for our ocean work on the feature film Adrift.

Can you dig in a bit more about Satan?
Near the climax of Good Omens, Aziraphale, Crowley and Adam witness the arrival of Satan. In the early development phase, we were briefed to highlight Satan’s enormous size (about 400 feet) without making him too comical. He needed to have instant impact given that he appears on screen for just this one long sequence and we don’t see him again.

Our first concept was pretty scary, but Neil wanted him simpler and more immediately recognizable. Our concept artist created a horned crown, which along with his large, muscled, red body delivered the look Neil had envisioned.

We built the basic model, and when Cumberbatch was cast, the modeling team introduced some of his facial characteristics into Satan's FACS-based blend shape set. Video reference of the actor's voice performance, captured on a camera phone, helped inform the final keyframe animation. The final Satan was a full Ziva VFX build, complete with skeleton, muscles, fat and skin. The team set up the muscle scene and fat scene to read an Alembic cache of the skeleton animation, so that they ended up with a blended mesh of Satan with all the muscle detail on it.
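
As background, the arithmetic behind a FACS-style blend-shape set is simple and worth seeing once: every target stores per-vertex offsets from the neutral mesh, and the animated face is the neutral plus a weighted sum of those offsets. A toy Python example, with invented vertex data:

```python
# Toy blend-shape math: final vertex = neutral + sum of weighted deltas.
# The two-vertex "mesh" and the brow_raise target are invented data.
Mesh = list[tuple[float, float, float]]

def blend(neutral: Mesh, targets: dict[str, Mesh], weights: dict[str, float]) -> Mesh:
    result = [list(v) for v in neutral]
    for name, weight in weights.items():
        if weight == 0.0:
            continue
        for i, (tx, ty, tz) in enumerate(targets[name]):
            nx, ny, nz = neutral[i]
            # Add this target's weighted offset from the neutral pose.
            result[i][0] += weight * (tx - nx)
            result[i][1] += weight * (ty - ny)
            result[i][2] += weight * (tz - nz)
    return [tuple(v) for v in result]

neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
targets = {"brow_raise": [(0.0, 0.2, 0.0), (1.0, 0.1, 0.0)]}
print(blend(neutral, targets, {"brow_raise": 0.5}))
# A half-weight brow raise moves each vertex halfway to the target.
```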

We then did another skin pass on the face to add extra wrinkles and loosen things up. A key challenge for our animation team — led by Joe Tarrant — lay in animating a creature of Satan's immense scale. They needed to ensure the balance and timing of his movements felt absolutely realistic.

Our effects team — led by James Reid — layered multiple effects simulations to shatter the airfield tarmac and generate clouds of smoke and dust, optimizing setups so that only those particles visible on camera were simulated. The challenge was maintaining a focus on the enormous size and impact of Satan while still showing the explosion of the concrete, smoke and rubble as he emerges.

Extrapolating from live-action plates shot at an airbase, the VFX team built a CG environment and inserted live action of the performers into otherwise fully digital shots of the gigantic red-skinned devil bursting out of the ground.

And the hellhound?
Beelzebub (Anna Maxwell Martin) sends the antichrist (a boy named Adam) a giant hellhound. If Adam gives the giant beast a scary name, he will set Armageddon in motion. But Adam really just wants a loveable pet, and he transforms the hellhound into a miniature hound called, simply, Dog.

A Great Dane performed as the hellhound, photographed in a forest location while a grip kept pace with a small square of bluescreen. The Milk team tracked the live action and performed a digital head and neck replacement. Sam Lucas modeled the head in Autodesk Maya, matching the real dog’s anatomy before stretching its features into grotesquery. A final round of sculpting followed in Pixologic ZBrush, with artists refining 40-odd blend shapes for facial expression.

Once our rigging team got the first iteration of the blend shapes, they passed the asset off to animation for feedback. They then added an extra level of tweaking around the lips. In the creature effects phase, they used Ziva VFX to add soft body jiggle around the bottom of the lips and jowls.

What about creating the demon Usher?
One of our favorite characters was the small, rotund, quirky demon creature called Usher. He is a fully rigged CG character. Our team took a fully concepted image and adapted it to the performance and physicality of the actor. To get the weight of Usher's rotund body, the rigging team — led by Neil Roche — used Ziva VFX to run a soft body simulation on the fatty parts of the creature, which gave him a realistic jiggle. They then added a skin simulation using Ziva's cloth solver to give an extra layer of wrinkling across Usher's skin. Finally, they used nCloth in Maya to simulate his sash and medals.

Was one more challenging/rewarding than the others?
Satan, because of his huge scale and the integrated effects.

Out of all of the effects, can you talk about your favorite?
The CG Bentley, without a doubt! The digital Bentley featured in scenes showing the car tearing around London and the countryside at 90 miles per hour. Ultimately, Crowley drives through hellfire on the M25; the car catches fire and burns continuously as he heads toward the site of Armageddon. The production located a real 1934 Bentley 3.5 Derby Coupe by Thrupp & Maberly, which we photo-scanned and modeled in intricate detail. We introduced subtle imperfections to the body panels, ensuring the CG Bentley had the same handcrafted appearance as the real thing and would hold up in full-screen shots, including continuous transitions from the street through a window to the actors in an interior replica car.

To achieve the high speeds required, we shot plates on location from multiple cameras, including one mounted on a motorbike for the high-speed bursts. Later, production filled the car with smoke, and our effects team added CG fire and burning textures to the exterior of our CG car, which intensified as Crowley continued his journey.

You've talked about the tight post turnaround. How did you show the client shots for approval?
Given the volume and wide range of work required, we were working on a range of sequences concurrently to maximize the short post window — and align our teams when they were working on similar types of shot.

We had constant access to Neil and Douglas throughout the post period, which was crucial for approvals and feedback as we developed key assets and delivered key sequences. Neil and Douglas would visit Milk regularly for reviews toward delivery of the project.

What tools did you use for the VFX?
Amazon Web Services (AWS) for cloud rendering; Ziva VFX for creature rigging; Maya, Nuke and Houdini for effects; and Arnold for rendering.

What haven’t I asked that is important to touch on?
Our work on Soho, where Michael Sheen's Aziraphale has his bookshop. Production designer Michael Ralph created a set based on Soho's Berwick Street, comprising a two-block street exterior constructed up to the top of the first story, with the complete bookshop — inside and out — standing on the corner.

Four 20-x-20-foot mobile greenscreens helped our environment team complete the upper levels of the buildings and extend the road into the far distance. We photo scanned both the set and the original Berwick Street location, combining the reference to build digital assets capturing the district’s unique flavor for scenes during both day and nighttime.


Before and After: Soho

Mackinnon wanted crowds of people moving around constantly, so on shooting days crowds of extras thronged the main section of street and a steady stream of vehicles turned in from a junction part way down. Areas outside this central zone remained empty, enabling us to drop in digital people and traffic without having to do takeovers from live-action performers and cars. Milk had a 1,000-frame cycle of cars and people that it dropped into every scene. We kept the real cars always pulling in round the corner and devised it so there was always a bit of gridlock going on at the back.
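
The trick that makes a fixed-length cycle like that reusable is a per-agent frame offset: every background car or pedestrian plays the same loop, but starting from a different point, so no two agents visibly sync up. A small sketch of the idea, with invented agent names and offsets:

```python
# Each agent maps the shot frame into the shared 1,000-frame cycle with
# its own offset; offsets and agent names here are purely illustrative.
CYCLE_LENGTH = 1000

def cycle_frame(shot_frame: int, offset: int) -> int:
    """Map a shot frame to this agent's frame within the looped cycle."""
    return (shot_frame + offset) % CYCLE_LENGTH

agents = {"taxi_01": 0, "taxi_02": 337, "walker_01": 611}
for shot_frame in (1001, 1500):
    picks = {name: cycle_frame(shot_frame, off) for name, off in agents.items()}
    print(shot_frame, picks)
```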

And finally, we relished the opportunity to bring to life Neil Gaiman and Douglas Mackinnon’s awesome apocalyptic vision for Good Omens. It’s not often you get to create VFX in a comedy context. For example, the stuff inside the antichrist’s head: whatever he thinks of becomes reality. However, for a 12-year-old child, this means reality is rather offbeat.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Behind the Title: Ntropic Flame artist Amanda Amalfi

NAME: Amanda Amalfi

COMPANY: Ntropic (@ntropic)

CAN YOU DESCRIBE YOUR COMPANY?
Ntropic is a content creator producing work for commercials, music videos and feature films as well as crafting experiential and interactive VR and AR media. We have offices in San Francisco, Los Angeles, New York City and London. Some of the services we provide include design, VFX, animation, editing, color grading and finishing.

WHAT’S YOUR JOB TITLE?
Senior Flame Artist

WHAT DOES THAT ENTAIL?
Being a senior Flame artist involves a variety of tasks that span the duration of a project: from communicating with directors, agencies and production teams, to helping plan out any visual effects that might be in a project (often also acting as the VFX supervisor on set), to the actual post process of the job.

Amanda worked on this lipstick branding video for the makeup brand Morphe.

It involves client and team management (as you are often also the 2D lead on a project) and calls for a thorough working knowledge of the Flame itself, both in timeline management and that little thing called compositing. The compositing could cross multiple disciplines — greenscreen keying, 3D compositing, set extension and beauty cleanup to name a few. And it helps greatly to have a good eye for color and to be extremely detail-oriented.

WHAT MIGHT SURPRISE PEOPLE ABOUT YOUR ROLE?
How much it entails. Since this is usually a position that exists in a commercial house, we don’t have as many specialties as there would be in the film world.

WHAT’S YOUR FAVORITE PART OF THE JOB?
First is the artwork. I like that we get to work intimately with the client in the room to set looks. It’s often a very challenging position to be in — having to create something immediately — but the challenge is something that can be very fun and rewarding. Second, I enjoy being the overarching VFX eye on the project; being involved from the outset and seeing the project through to delivery.

WHAT’S YOUR LEAST FAVORITE?
We’re often meeting tight deadlines, so the hours can be unpredictable. But the best work happens when the project team and clients are all in it together until the last minute.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
The evening. I’ve never been a morning person so I generally like the time right before we leave for the day, when most of the office is wrapping up and it gets a bit quieter.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Probably a tactile art form. Sometimes I have the urge to create something that is tangible, not viewed through an electronic device — a painting or a ceramic vase, something like that.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I loved films that were animated and/or used 3D elements growing up and wanted to know how they were made. So I decided to go to a college that had a computer art program with connections in the industry and was able to get my first job as a Flame assistant in between my junior and senior years of college.

ANA Airlines

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Most recently I worked on a campaign for ANA Airlines. It was a fun, creative challenge on set and in post production. Before that I worked on a very interesting project for Facebook’s F8 conference featuring its AR functionality and helped create a lipstick branding video for the makeup brand Morphe.

IS THERE A PROJECT THAT YOU ARE MOST PROUD OF?
I worked on a spot for Vaseline that was a "through the ages" concept, and we had to create looks that would read as being from the 1880s, 1900s, 1940s, 1970s and present day, in locations that varied from the Arctic to the building of the Brooklyn Bridge to a boxing ring. To start, we sent the digitally shot footage, with our 3D and comps, to a printing house and had it printed and re-digitized. This worked perfectly for the '70s-era look. Then we did additional work to age it further for the other eras — though my favorite was the Arctic turn-of-the-century look.

NAME SOME TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Flame… first and foremost. It really is the most inclusive software — I can grade, track, comp, paint and deliver all in one program. My monitors — the 4K Eizo and a color-calibrated broadcast monitor — are also essential.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Mostly Instagram.

DO YOU LISTEN TO MUSIC WHILE YOU WORK? 
I generally have music on with clients, so I will put on some relaxing music. If I’m not with clients, I listen to podcasts. I love How Did This Get Made and Conan O’Brien Needs a Friend.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Hiking and cooking are two great de-stressors for me. I love being in nature and working out and then going home and making a delicious meal.

NYC’s The-Artery expands to larger space in Chelsea

The-Artery has expanded and moved into a new 7,500-square-foot space in Manhattan’s Chelsea neighborhood. Founded by chief creative officer Vico Sharabani, The-Artery will use this extra space while providing visual effects, post supervision, offline editorial, live action and experience design and development across multiple platforms.

According to Sharabani, the new space is not only a response to the studio’s growth, but allows The-Artery to foster better collaboration and reinforce its relationships with clients and creative partners. “As a creative studio, we recognize how important it is for our artists, producers and clients to be working in a space that is comfortable and supportive of our creative process,” he says. “The extraordinary layout of this new space, the size, the lighting and even our location, allows us to provide our clients with key capabilities and plays an important part in promoting our mission moving forward.”

Recent The-Artery projects include 2018's VR-enabled production for Mercedes-Benz, work on Under Armour's "Rush" campaign and Beyoncé's Coachella documentary, Homecoming.

They have also worked on feature films like Netflix's Beasts of No Nation, Wes Anderson's Oscar-winning The Grand Budapest Hotel and the crime caper Ocean's 8.

The-Artery’s new studio features a variety of software including Flame, Houdini, Cinema 4D, 3ds Max, Maya, the Adobe Creative Cloud suite of tools, Avid Media Composer, Shotgun for review and approval and more.

The-Artery features a veteran team of artists and creative collaborators, including a recent addition — editor and former Mad River Post owner Michael Elliot. "Whether they are agencies, commercial and film directors or studios, our clients always work directly with our creative directors and artists, collaborating closely throughout a project," says Sharabani.

Main Image: Vico Sharabani (far right) and team in their new space.

Checking In: Glassworks’ Duncan Malcolm, Flame Award winner

Back in April, during an event at NAB, Autodesk presented its 2019 Flame Award to Duncan Malcolm. This Flame artist and director of 2D at Glassworks VFX in London is being celebrated for his 20-plus years of artistic achievements.

Malcolm has been working in production and post for 33 years. At Glassworks, he works closely with the studio’s CG artists to seamlessly blend CG photoreal assets and real-world environments for high-end commercial clients. Alongside his work in commercials, Malcolm has worked closely with the creators of the television series Black Mirror on look development and compositing for the award-winning Netflix series, including the critically acclaimed Bandersnatch interactive episode.

Duncan Malcolm

Let’s find out more about Malcolm’s beginnings, and the path that led him to Glassworks. And you can check out his showreel here.

You have a rich history in this industry. How did you get started working in VFX?
I started straight out of school at 15 years old at TVP, a small production company in Scotland that made corporate films and crewed for visiting broadcast companies. It was very small so I got involved in everything — camera work, location sound, sound design, edit and even made the VHS dubs, 8mm cine film transfers and designed the tape covers. So I learned a lot by getting on and doing it. It was before the Internet was prevalent, so you couldn’t just Google it back then; it really was trial and error.

TVP are still based in Aberdeen and still doing incredible work with a tiny crew. I often tell people in London about their feature film Sawney Bean, which they self-funded and made with a complete crew of five in their "spare time," and which, for all that, is completely inspirational.

I then became an offline and online editor at Picardy Television, which was at the time the biggest and most creative edit house in Scotland. It was there that I started using Quantel's Editbox. I was focused on the offline side but also started to incorporate more sophisticated VFX into the online work. Around 1998, I made quite an abrupt move to London, I think as a reaction to my dad's death. Back then the London industry didn't really accept that one person could be good at more than one part of the filmmaking process, so I decided to focus on the VFX string on my bow.

I freelanced through Soho Editors as an Editbox artist in London and Denmark until I was offered the creative director/lead compositor position at Saatchi's in-house company, Triangle. This is where I first met the Flame, and we spent many a long day and night together making commercials and music videos.

I think my first big lead Flame job was Craig David’s Walking Away for Max and Dania. Apart from a few relatively simple commercials I hadn’t truly put the toolset to the test by then. It was quite frankly my personal VFX version of a baptism by fire. I barely left the room for weeks but felt more inspired (and tired) by the end.

Flame became my best VFX friend and my work grew in complexity. Eventually I was offered a position by Joce Capper and Bill McNamara at Rushes and spent quite a few years there working on a fair mixture of commercials and music videos.

How did you find your way to Glassworks?
Around 14 years ago, Hector Macleod offered me a Flame operator position at Glassworks. I jumped at that chance, and since then we have been building on Glassworks’ reputation for seamless VFX and innovative techniques. It’s been fun times, but also very interesting to watch the growth of our industry and the changes in expectations in projects. Even more interesting to me is that, even though on large projects we still effectively specialize, the industry in London and worldwide is much more accepting of the multi-skilled approach to filmmaking. Finally, the world is beginning to embrace the principles I first learned 33 years ago at TVP.

How did your creative process on the Bandersnatch episode of Black Mirror differ from other TV projects, and did you use Flame any differently as a result?
I should mention that Bandersnatch has been nominated for a few BAFTAs (best single drama, best editing and best special, visual and graphic effects), so everyone involved is massively excited about that.

I really like working with House of Tomorrow on the Black Mirror films, but I especially loved working on Bandersnatch with producer Russell McLean and director David Slade. It really felt like we were involved in something fresh and new. Nobody knew for sure how the audience was going to watch and engage with such a complex story told in the interactive format. This made it impossible to make any of the normal assumptions. For VFX the goal was the same as normal: to realize director David Slade’s vision and, in the process, make every shot as engaging as possible. But the fact that it didn’t play out in a single linear timeline meant that every single decision had to be considered from this new point of view.

When did you get involved in the project?
I was involved in the very early stages of Bandersnatch, helping with ideas for the viewer’s interactive choice points. These tests were more basic editorial and content tests. I shot our head of production Duncan Buxton acting out parts of the script and cut decision-point sequences to illustrate ways the choices could work. I used Flame as an offline, basic online and audio editing tool for these. Almost every stage in the VFX planning went through some look developed in Flame.

For the environmental work, we used traditional matte painting techniques and some clever CG techniques in places, but for a lot of it, I used the Flame to build and paint concept layouts. The pre-shoot Trellick concept work in fact carried through to the final shots. The moment the mirror cracks was completely built in Flame using some pictures of west London vandalism I came across by accident on the way back from a Bandersnatch preproduction meeting.

The "through the mirror" sequences were shot with three synced ARRI 65 cameras, and the footage was unwrapped and re-projected onto a 3D Stefan [the show's young programmer] to create his reflection whilst he emerged from the mirror. The VFX requirements on this section of the shoot schedule were quite significant, so on set we had to be confident of the technique used and very quick to react to changes. Since rebuilding his reflection would take many weeks, I built versions of all the shots in Flame. These were used by editor Tony Kearns to find a pace for the sequence, and this fed into our CG artists who were building the reflection.

There were all sorts of Flame tools used to look-develop and finish this show. It really was my complete VFX supervisor companion throughout.

Can you talk about your Mr-benn.com initiative and how that came about?
Mr-benn.com is an art site I set up to exhibit and sell some of what I refer to as "the other art" created by people who work in the film and television industry. A portion of every sale is donated to plasticpollutioncoalition.org, which raises awareness about and fights plastic pollution, something worth standing behind.

I talked with so many friends and colleagues, talented in their own fields, who had such an insatiable appetite for creating that even after the grueling schedules of film projects had beaten them, they still had more to create and show. Their "other" work is an amazing mixture of photography, found art, land art, fractals, infrared photography and digital design. It all could be — and often is — exhibited separately on generic art sites without much importance put on the creators' cinematic achievements. Mr-benn is about celebrating the achievement in both their day jobs and their "other art" together. It's starting to get talked about; I hope people like what they see and help support a good cause.

How has your use of Flame changed or evolved over the past 20 years? Are there any particular features that have been added that make your job easier?
Flame has changed greatly since I started with it. I think the addition of the timeline was a particular game-changer, and it's difficult to remember what it was like without 16-bit float capabilities. In terms of recent changes, the color management has made color workflows much easier. To be fair, every update makes something a little easier.
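
That point about 16-bit float is easy to demonstrate: a half-float pipeline preserves both fine gradations and values brighter than 1.0, which an 8-bit integer pipeline clips to white and can never recover. A small numpy illustration, with arbitrary pixel values:

```python
# 8-bit integer vs 16-bit float storage of linear-light pixel values.
import numpy as np

linear_light = np.array([0.001, 0.5, 1.0, 4.0])  # 4.0 is a strong highlight

as_uint8 = np.clip(np.round(linear_light * 255), 0, 255).astype(np.uint8)
as_half = linear_light.astype(np.float16)

print(as_uint8)        # [  0 128 255 255] -- shadow crushed, highlight clipped
print(as_half)         # all four values survive, including the superwhite
print(as_half * 0.25)  # grade the highlight down and the detail is still there
```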

What other tools are in your arsenal?
I have the demo of almost every type of 3D and 2D package on my laptop, but I haven’t made enough time to master any of them apart from Flame, a little Nuke and Photoshop. I do rely on my Canon DSLR a lot, and I grade stills with Lightroom.

Was there a particular film that motivated you to work in VFX?
Not one in particular. There have been some that along the way have impressed me. I’m thinking District 9 as I type, but there have been a few with a similar effect on me.

What inspires your work?
I take an interest in a lot of everyday things: what the world looks like and how it moves. Not enough to be an expert in anything, but enough to understand (on a basic level) how it could be recreated. I'm certainly not very clever, just interested enough to spend proper time finding solutions.

The other part is that I seem to have a gene that makes me feel really bad if I let people down. So I keep going until a problem shot is better, or I hit an immovable delivery date. I'd have done okay in any service industry, really.

Any tips for young people starting out?
I see a direct link between exceptional creativity in VFX work and how deeply curious people are about the real world, with all of its incredible qualities. A good place to start is getting interested in what the real world actually looks like through a real lens. Take your own pictures, as it makes you understand the relationship between the lens and objects.

Start your own projects, and make sure they're ambitious. Work out how to make them amazing. Then show these as an example of what you can do. Don't show roto for roto's sake. Once you get a job, don't get complacent and think you've made it. The next step in a career isn't automatic. It only happens with added effort.

Phosphene’s visual effects for Showtime’s Escape at Dannemora

By Randi Altman

The Showtime limited series Escape at Dannemora is based on the true story of two inmates (David Sweat and Richard Matt) who escape from an upstate New York prison. They were aided by Tilly, a female prison employee whose husband also worked at Clinton Correctional Facility. She helped run the tailor shop where the two men worked and had an intimate relationship with both of them.

Matt Griffin

As we approach Emmy season, we thought it was a good time to reach out to the studio that provided visual effects for the Ben Stiller-directed miniseries, which was nominated for a Golden Globe for best television limited series or movie. Escape at Dannemora stars Patricia Arquette, Benicio Del Toro and Paul Dano.

New York City-based Phosphene was called on to create a variety of visual effects, including turning five different locations into the Clinton Correctional Facility, the maximum-security prison where the escape took place. The series was also nominated for an Emmy for Outstanding Visual Effects in a Supporting Role.

We recently spoke with VFX producer Matt Griffin and VFX supervisor Djuna Wahlrab to find out more.

How early did you guys get involved in the project? Were there already set plans for the types of VFX needed? How much input did Phosphene have?
Matt Griffin: There were key sequences that were discussed with us very early on. The most crucial among them were Sweat’s Run, which was a nine-minute “oner” that opened Episode 5; the gruesome death scene of Broome County Sheriff’s Deputy Kevin Tarsia and an ambitious crane shot that revealed the North Yard in the prison.

Djuna Wahlrab

What were the needs of the filmmakers, and how did your studio fill them? Were you on set supervising?
Griffin: Ben Stiller and the writers had a very clear vision for these challenging sequences, and therefore had a very realistic understanding of how ambitious the VFX would be. They got us involved right at the start so we could be as collaborative as possible with production in preparing the methodology for execution.

In that same spirit, they had us supervise the majority of the shoot, which positioned us to be involved as the natural shifts and adjustments of production arose day to day. It was amazing to be creative problem-solvers with the whole team, and not just to be reacting in post to what had already happened.

I know that creating the prison was a big part — taking pieces of a few different prisons to make one?
Djuna Wahlrab: Clinton Correctional is a functioning prison, so we couldn't shoot the whole series within its premises — instead we filmed in five different locations. We shot at a decommissioned prison in Pittsburgh, the prison's tailor shop was staged in an old warehouse in Brooklyn, and the Honor Block (where our characters were housed) and parts of the prison bowels were built on a stage in Queens. The remaining pieces under the prison were shot in Yonkers, New York, in an active water treatment plant. Working closely with production designer Mark Ricker, we tackled the continuity across all these locations.

The upper courts overlook the town.

We knew the main guard tower visible from the outside of Clinton Correctional was crucial, so we always planned to carry that through to Pittsburgh. Scenes taking place just inside the prison wall were also shot in Pittsburgh, but that wall was not as long as Clinton's, so we extended the depth of those shots.

While the surrounding mountainside terrain is on beautiful display from the North Yard, it's also felt from the ground among the buildings within the prison. When looking down the length of the streets, you can see the sloping side of the mountain just over the wall. These scenes were filmed in Pittsburgh, where what you actually see beyond those walls is a bustling, hilly city with water towers, electric lines and highways, so we had to adjust the views to match the real location.

Can you talk about the shot that had David Sweat crawling through pipes in the basement of the prison?
Wahlrab: For what we call Sweat's Run — because we were creating a "oner" out of 17 discrete pieces — preproduction was crucial. The previs went far beyond a compositional guide. Using blueprints from three different locations and plans for the eventual stage set, we created orthographic views with extremely detailed planning for camera rigging and hand-off points. Drawing on this early presentation, Matt Pebler and the camera department custom-built many of the rigs required for our constricted spaces and meticulous overlapping sections.

The previs was a common language for all departments at the start, but as each piece of the run was filmed, the previs was updated with completed runs and the requirements would shift. Shooting one piece of the run would instantly lock in requirements for the other connecting pieces, and we’d have to determine a more precise plan moving forward from that point. It took a high level of collaboration and flexibility from all departments to constantly narrow the margin for what level of precision was required from everyone.

Sweat preparing for escape.

Can you talk about the scene where Sweat runs over the sheriff’s deputy Tarsia?
Wahlrab: Special effects had built a rig for a partial car that would be safe to “run over” a stunt man. A shell of a vehicle was suspended from an arm off a rigged tactical truck, so that they moved in parallel. Sweat’s stunt car floated a few feet off the ground. The shell had a roof, windows, a windshield, a hood and a driver’s seat. Below that the sides, grill and wheels of the car were constructed of a soft foam. The stunt man for Tarsia was rigged with wires so they could control his drag beneath the car.

In this way, we were able to get the broad strokes of the stunt in-camera. Though the car needed to be almost completely replaced with CG, its structure took the first steps to inform the appropriate environmental re-lighting needed for the scene. The impact moment was a particular challenge because, of course, the foam grill completely gave way to Tarsia’s body. We had to simulate the cracking of the bumper and the stamp of the blood from Tarsia’s wounds. We also had to reimagine how Tarsia’s body would have moved with this rigid impact.

Tarsia’s death: Replaced stunt car, added blood and re-animated the victim.

For Tarsia himself, in addition to augmenting the chosen take, we used alt takes from the shoot for various parts of the body to recreate a Tarsia with more appropriate physical reactions to the trauma we were simulating. There was also a considerable amount of hand-painting on this animation to help it all mesh together. We added blood on the wheels, smoke and animated pieces of the broken bumper, all of which helped ground Tarsia in the space.

You also made the characters look younger. Can you talk about what tools you used for this particular effect?
Wahlrab: Our goal was to support this jump in time, but not distract by going too far. Early on, we did tests where we really studied the face of each actor. From this research, we determined targeted areas for augmentation, and the approach really ended up being quite tailored for each character.

We broke down the individual regions of the face. First, we targeted wrinkles with tailored defocusing. Second, we reshaped recessed portions of the face, mostly with selective grading. In some cases, we retextured the skin on top of this work. At the end of all of this, we had to reintegrate this into the grainy 16mm footage.
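
Region-by-region recipes like this are often expressed as ordered data that a comp script walks through, with the regrain always applied last. The sketch below is a schematic of that idea only; the region names, operations and strengths are invented, not Phosphene's actual setup.

```python
# A de-aging pass expressed as an ordered, per-region recipe. Everything
# here (regions, operations, amounts) is hypothetical and illustrative.
from dataclasses import dataclass

@dataclass
class RegionPass:
    region: str      # which roto matte to apply the operation through
    operation: str   # e.g. defocus, grade_lift, retexture
    amount: float    # strength from 0.0 to 1.0, tuned per actor

DEAGE_RECIPE = [
    RegionPass("crows_feet", "defocus", 0.6),
    RegionPass("forehead", "defocus", 0.4),
    RegionPass("nasolabial_fold", "grade_lift", 0.5),  # lift recessed areas
    RegionPass("cheeks", "retexture", 0.3),
    RegionPass("full_face", "regrain_16mm", 1.0),      # always the last step
]

for p in DEAGE_RECIPE:
    print(f"{p.operation:>13} on {p.region:<16} @ {p.amount:.0%}")
```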

Can you talk about all the tools you used?
Griffin: At Phosphene, we use Foundry Nuke Studio and Autodesk 3ds Max. For additional support, we rely on Mocha Pro, 3DEqualizer and PFTrack, among many others.


Added snow, cook fire smoke and inmates to upper tier.

Any other VFX sequences that you can talk about?
Wahlrab: As with any project, weather continuity was a challenge. Our prison was represented by five locations, but it took many more than that to fill out the lives of Tilly and Lyle beyond their workplace. Because we shot a few scenes early on with snow, we were locked into that reality in every single location going forward. The special FX team would give us practical snow in the areas with the most interaction, and we were charged with filling out much of the middle and background. For the most part, we relied on photography, building custom digital matte paintings for each shot. We spent a lot of time upstate in the winter, so I found myself pulling off the road in random places in search of different kinds of snow coverage. It became an obsession, figuring out the best way to shoot the same patch of snow from enough angles to cover my needs for different shots, at different times of day, not entirely knowing where we’d need to use it.

What were the most challenging shots?
Wahlrab: Probably the most challenging location to shoot was the North Yard within the prison. Clinton Correctional is a real prison in Dannemora, New York. It's about 20 miles south of the Canadian border, set into the side of a hill in what is really a beautiful part of the country. The yard was the inmates' outdoor space, divided into terraces overlooking the whole town of Dannemora and the valley beyond. Though the production value of shooting in an active prison was amazing, it also presented quite a few logistical challenges. For safety (ours as well as the prisoners'), the gear allowed in was quite restricted. Many of the tools I rely on had to be left behind. Then, load-in required a military-grade inspection by the COs, who examined every piece of our equipment before it could enter or exit. The crew was afforded no special privileges for entering the prison, and we were shuffled through the standard intake. It was time consuming, and it very much limited how long we'd be able to shoot that day once inside.


Before and After: Cooking fires in the upper courts.

Production did the math and balanced the crew and cast load-in with the coverage required. We had 150 background extras for the yard, but in reality, the average number of inmates, even on the coldest of days, was 300. Also, we needed the yard to have snow on the ground for continuity. Unfortunately, it was an unseasonably warm day, and after the first few hours, the special effects snow that had been painstakingly created and placed during the night was completely melted. Special effects was also charged with creating cook fires for the stoves in each court, but they could only bring in so much fuel. Our challenge was clear — fill out the background inmate population, add snow and cook fire smoke… everywhere.

The biggest challenge in this location was the shot Ben conceived to reveal the enormity of the North Yard. It was this massive crane shot that began at the lowest part of the yard and panned to the upper courts. It slowly pulls out and cranes up to reveal the entire outdoor space. It's really a beautiful way to introduce us to the North Yard, revealing one terraced level at a time until you have the whole space in view. It's one of my favorite moments in the show.

Some shots outside the prison involved set extensions.

There’s this subtext about the North Yard and its influence on Sweat and Matt. Out in the yard, the inmates have a bit more autonomy. With good behavior, they have some ownership over the courts and are given the opportunity to curate these spaces. Some garden, many cook meals, and our characters draw and paint. For those lucky enough to be in the upper courts, they have this beautiful view beyond the walls of the prison, and you can almost forget you are locked up.

I think we’re meant to wonder, was it this autonomy or this daily reminder of the outside world beyond the prison walls that fueled their intense devotion to the escape? This location is a huge story piece, and I don’t think it would have been possible to truly render the scale of it all without the support of visual effects.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

VFX house a52 launches a52 Color

Santa Monica-based visual effects studio a52 has launched a new custom-built space called a52 Color. It focuses on color grading and finishing. a52 Color is now home to a52 colorist Paul Yacono and new hire Daniel de Vue, who joins from London where he was head of color at Glassworks. a52 Color is able to offer clients access to combined or end-to-end services from its network of affiliated companies, which include Rock Paper Scissors, a52 VFX and Elastic.

"Color has been an offering within a52 with Paul Yacono for over half a decade, so it's already an established part of the culture here," explains executive producer Thatcher Peterson, who now runs a52 Color after coming over from a four-year stint as EP at The Mill. "And with Daniel joining us from London, the distinction of a52 Color to become a separate entity thrusts our services and talent into its own spotlight."

Yacono's first major color project out of a52 was the Netflix series House of Cards, which proved that this boutique facility had the bandwidth to service high-volume 4K projects. Since that time, Yacono has established a body of work that ranges from ads for Target, Nike and BMW to the iconic title sequence for Game of Thrones. His latest work includes the feature documentaries Struggle: The Life and Art of Szukalski, 13th and Amanda Knox, the TV miniseries Five Came Back, and spots for Toyota, Prada, Samsung and Lexus.

Danish colorist de Vue has worked for directors such as Martin Werner, Martin de Thurah, Andreas Nilsson and Wally Pfister, and crafted the mood for brands such as Nike, Principal Financial, Vans, Mercedes, Toyota, Adidas, H&M and Xbox. Recently he graded an Elliot Rausch-directed TUMI spot featuring Lenny Kravitz and Zoë Kravitz on a journey to their family’s Bahamian roots.

Built for theatrical and broadcast color grading, the studio boasts two suites outfitted with FilmLight Baselight grading systems and is equipped for HDR with Dolby Vision certification. Remote grading services are also available throughout the US and internationally.

EP Peterson was at Company 3 for over 15 years, where he helped grow their core business from commercials to features and television.

As company founder Angus Wall, also an Oscar-winning editor for The Girl With the Dragon Tattoo, explains, “In adding high-end color and DI to our suite of companies, a52 Color completes our offerings for end-to-end, best of breed creative services.”

Sydney’s Fin creates CG robot for Netflix film I Am Mother

Fin Design + Effects, an Australian post production house with studios in Melbourne and Sydney, brings its VFX and visual storytelling expertise to the upcoming Netflix film I Am Mother. Directed by Grant Sputore, the post-apocalyptic film stars Hilary Swank, Rose Byrne and Clara Rugaard.

In I Am Mother, a teenage girl (Rugaard) is raised underground by the robot “Mother” (voiced by Byrne), designed to repopulate the earth following an extinction event. But their unique bond is threatened when an inexplicable stranger (Swank) arrives with alarming news.

Working closely with the director, Fin Design’s Sydney office built a CG version of the AI robot Mother to be used interchangeably with the practical robot suit built by New Zealand’s Weta Workshop. Fin was involved from the early stages of the process to help develop the look of Mother, completing extensive design work and testing, which then fed back into the practical suit.

In total, Fin produced over 220 VFX shots, including the creation of a menacing droid army as well as general enhancements to the environments and bunker where this post-apocalyptic story takes place.

According to Fin Australia’s managing director, Chris Spry, “Grant was keen on creating an homage of sorts to old-school science-fiction films and embracing practical filmmaking techniques, so we worked with him to formulate the best approach that would still achieve the wow factor — seamlessly combining CG and practical effects. We created an exact CG copy of the suit, visualizing high-action moments such as running, or big stunt scenes that the suit couldn’t perform in real life, which ultimately accounted for around 80 shots.”

Director Sputore on working with Fin: "They offer suggestions and bust expectations. In particular, they delivered visual effects magic with our CG Mother, one minute having her thunder down bunker corridors and the next moment speed-folding intricate origami creations. For the most part, the robot at the center of our film was achieved practically. But in those handful of moments where a practical solution wasn't possible, it was paramount that the audience not be bumped from the film by a sudden transition to a VFX version of one of our central characters. In the end, even I can't tell which shots of Mother are CG and which are practical, and, crucially, neither can the audience."

To create the CG replica, the Fin team paid meticulous attention to detail, ensuring the material, shaders and textures perfectly matched photographs and laser scans of the practical suit. The real challenge, however, was in interpreting the nuances of the movements.

“Precision was key,” explains VFX supervisor Jonathan Dearing. “There are many shots cutting rapidly between the real suit and CG suit, so any inconsistencies would be under a spotlight. It wasn’t just about creating a perfect CG replica but also interpreting the limitations of the suit. CG can actually depict a more seamless movement, but to make it truly identical, we needed to mimic the body language and nuances of the actor in the suit [Luke Hawker]. We did a character study of Luke and used it to build and rig a CG version of the suit that could mimic him precisely.”

Fin finessed its robust automation pipeline for this project. Built to ensure greater efficiency, the system allows animators to push their work through lighting and comp at the click of a button. For example, if a shot didn’t have a specific light rig made for it, animators could automatically apply a generic light rig that suits the whole film. This tightly controlled system meant that Fin could have one lighter and one animator working on 200 shots without compromising on quality.
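
Fin hasn’t published its pipeline code, but the fallback logic described above is easy to picture. Here is a minimal, purely hypothetical Python sketch (the rig paths, shot names and submission step are all invented for illustration):

```python
# Hypothetical sketch of a light-rig fallback in an automation pipeline.
# None of these names come from Fin's actual tools; they only illustrate
# defaulting to a generic, film-wide rig when no shot-specific rig exists.
from pathlib import Path

GENERIC_RIG = Path("rigs/generic_bunker_rig.json")  # film-wide fallback rig

def light_rig_for(shot: str, rig_root: Path = Path("rigs/shots")) -> Path:
    """Return the shot-specific rig if one was made, else the generic rig."""
    candidate = rig_root / f"{shot}_rig.json"
    return candidate if candidate.exists() else GENERIC_RIG

def submit_shot(shot: str) -> None:
    """One-click push: send the shot through lighting and comp on the farm."""
    rig = light_rig_for(shot)
    print(f"{shot}: lighting with {rig.name}, then queuing comp render")
    # a real pipeline would submit render-farm jobs here

for shot in ["mother_0010", "mother_0020", "mother_0030"]:
    submit_shot(shot)
```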

The studio used Autodesk Maya, Side Effects Houdini, Foundry Nuke and Redshift on this project.

I Am Mother premiered at the 2019 Sundance Film Festival and is set to stream on Netflix on June 7.

Behind the Title: MPC creative director Rupert Cresswell

This Brit is living in New York while working on spots, directing and playing dodgeball.

NAME: Rupert Cresswell

COMPANY: MPC

CAN YOU DESCRIBE YOUR COMPANY?
MPC has been one of the global leaders in VFX for nearly 50 years, with industry-leading facilities in London, Vancouver, Los Angeles, Bangalore, New York, Montréal, Shanghai, Amsterdam and Paris. We’re well known for creating visuals for the advertising, film and entertainment industries; some of our most famous projects include blockbuster movies such as The Jungle Book, The Martian, the Harry Potter franchise, the X-Men movies and the upcoming The Lion King, not to mention famous advertising campaigns for brands such as Samsung, BMW, Hennessy and Apple. I am based in New York.

WHAT’S YOUR JOB TITLE?
Creative Director (and Director)

WHAT DOES THAT ENTAIL?
Lots of things, depending on the project. I am repped by MPC to direct commercials, so my work often mixes live action with some form of visual effects or animation. I’m constantly pitching for jobs; if I am successful, I direct the subsequent shoot, then oversee a team of artists at MPC through the post process until delivery.

VeChain 

When I’m not directing, I work as a creative director, leading teams on animation and design projects within MPC. It’s mostly about zeroing in on a client’s needs and offering a creative solution. I critique large teams of artists’ work — sometimes up to 60 artists across our global network — ensuring a consistent creative vision. At MPC we are expected to keep the highest standards of work and make original contributions to the industry. It’s my job to make sure we do.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I feel like the lines between agency, production company and VFX studio can be blurred these days. In my job, I’m often called on for a wide range of disciplines, such as writing the creative, directing actors and even designing large-scale print and OOH (out-of-home) advertising campaigns.

WHAT’S YOUR FAVORITE PART OF THE JOB?
There’s always a purity to the concepts at the pitch stage, which I tend to get really enthusiastic about, but the best bit is getting to travel to shoot. I’ve been super-lucky to film in some awesome places like the south of France, Montreal, Cape Town and the Atacama Desert in Chile.

Additionally, the industry is full of funny, cool, creative characters, and if you can take a beat to remind yourself of that, it’s always a blast working with them. The usual things can bother you, like stress and long hours; also, no one likes it when ideas with great potential get compromised. But more often than not, I’m thankful for what I get to do.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
There’s a sweet spot in the morning after I’ve had some caffeine and before I get hungry for lunch — that’s when the heavy lifting happens.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I always knew I wanted to go to art school but never really knew what to do after that. It took years to figure out how to turn my interests into a career. There’s a lot to be said for stubbornly refusing to do something less interesting.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I finished a big campaign for Timberland, which was a great experience. I worked directly with the client, first on the creative; then I directed the shoot in Montreal and oversaw the post and the print campaign, which seemed to go up everywhere I went in the city. It was a huge technical and creative challenge, but great to be involved from the very start to the very end of the process.

I also worked on one of the first brand campaigns for the blockchain currency, VeChain. That was a huge VFX undertaking and lots of fun — we created a love letter to some classic sci-fi films like Star Wars and Blade Runner, which turned out pretty sweet.

In complete contrast, my favorite recent experience was working on the branding for the cult Hulu comedy Pen15. The show is so funny, it was a bit of a dream project. It was refreshing to go from such a large technical endeavor as Timberland, with a big VFX team, to working almost solo and mostly just illustrating. There was something really cathartic about it. The job required me to spend most of the day doodling childish pictures — I got a real kick out of the puzzled faces around the office wondering if I’d had some kind of breakdown.

Pen15

WHAT OTHER PROJECTS STAND OUT?
Some of my stuff won glittery awards, but I am super-proud that I made a short film, called Charlie Cloudhead, that got picked up by many festivals. I always wanted to try writing and directing narrative work, and I wanted something that could showcase more of my live-action direction.

It was an unusually personal film, which I still feel a little awkward about, but I am really proud that I put in the effort to make it. It was amazing to work with two fantastic actors (Paul Higgins and Daisy Haggard), and I’m still humbled by all the hard work a big team of people put in just for some kooky little idea that I dreamed up.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
The idea of no phone and no Internet gives me anxiety. Add to the horror by taking away AC during a New York summer and I’d be a weeping mess.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I’m pretty much addicted to scrolling through Instagram, but I’m lazy at posting stuff. Maybe it’ll become Myspace 2.0 and we’ll all laugh at all those folks with thousands of followers. Until then, it’s very useful for seeing inspiring new work out there.

I’m also a Brit living abroad in the US, so I’m rather masochistically glued to any news of the whole Brexit thing going down.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
I do. Music is incredibly influential. Most of the time when I’m working on a project, it will be inspired by a song. It helps me create a mood for the film, and I’ll listen to it repeatedly while I’m working on the script or walking around thinking about it. For example, my short film was inspired by a song by Cate Le Bon.

My taste is pretty random, to be honest. Recently I’ve been revisiting Missy Elliott and checking out Rosalía, John Maus and the new Karen O stuff. I’m also a bit obsessed with an artist from Mali called Oumou Sangaré. I was introduced to her by a late-night Lyft driver recently, and she’s been helping set the mood for this Q&A right now.

I should add, I work in an open-plan studio, and access to the Bluetooth speaker takes a certain restraint and responsibility to prevent arguments — I’m not necessarily the right guy for that. I usually try to turn the place into Horse Meat Disco.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I recently joined a dodgeball league. I had no idea how to play at first, and I’m actually very bad at it. I’m treating it as a personal challenge — learning to embrace being a laughable failure. I’m sure it’ll do me good.

Fox Sports promotes US women’s World Cup team with VFX-heavy spots

Santa Monica creative studio Jamm worked with Wieden+Kennedy New York on the Fox Sports campaign “All Eyes on US.” Directed by Joseph Kahn out of Supply & Demand, the four spots celebrate the US Women’s soccer team as it gears up for the 2019 FIFA Women’s World Cup in June.

The newest 60-second spot, All Eyes on US, features tens of thousands of screaming fans thanks to Jamm’s CG crowd work. On set, Jamm brainstormed with Kahn on how to achieve the immersive effect he was looking for. Much of the on-the-ground footage was shot using wide-angle lenses, which posed a unique set of challenges by revealing the entire environment as well as the close-up action. Through pacing, Jamm achieved the sense of the game occurring in realtime, with the tempo of the camera keeping step with the team moving the ball downfield.

The 30-second spot Goliath features the Jamm team’s first CG crowd shot, filling the soccer stadium with a roaring crowd. In Goliath, the entire US women’s soccer team runs toward the camera in slow motion. The shot was captured locked off, then digitally manipulated via a 3D camera to create a dolly-zoom effect replicating real-life parallax; the altered perspective conveys the unsettling feeling of being an opponent as the team literally runs straight into the camera.
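
The dolly-zoom relationship itself is simple geometry: to hold the subject the same size in frame while the camera distance changes, focal length must scale in proportion to distance, f(t) = f0 · d(t)/d0, and it is that mismatch between constant subject size and shifting background perspective that produces the effect. A quick sketch with illustrative numbers (not Jamm’s actual values):

```python
# Dolly zoom: keep subject framing constant while the camera distance
# changes, so only the background perspective warps. To hold framing,
# focal length must scale in proportion to distance.
f0, d0 = 35.0, 10.0   # illustrative start: 35mm lens, 10m from subject

def dolly_zoom_focal(d: float) -> float:
    """Focal length that holds subject size as camera distance d changes."""
    return f0 * d / d0

for frame, d in enumerate([10.0, 8.0, 6.0, 4.0]):
    print(f"frame {frame}: distance {d:.1f}m -> focal {dolly_zoom_focal(d):.1f}mm")
```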

On set, Jamm got an initial Lidar scan of the stadium as a base. From there, they used that scan, along with reference photos taken on set, to build a CG stadium with accurate seating. They also extended the stadium where there were gaps to make it a full 360-degree environment. The stadium seating tools tied in with Jamm’s in-house crowd system (based on Side Effects Houdini), allowing them to easily direct the performance of the crowd in every shot.
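
Jamm hasn’t detailed the internals of its crowd system, but a seat-driven setup like the one described boils down to instancing one agent per seat with a randomized animation clip and phase offset, plus a per-shot mood control. A conceptual Python sketch (all names here are invented, not Jamm’s Houdini tools):

```python
# Conceptual sketch of a seat-driven stadium crowd, not Jamm's actual
# Houdini system: one agent per seat, each with a random animation clip
# and a phase offset so the crowd never moves in lockstep.
import random

CLIPS = {"calm": ["sit", "clap"], "roaring": ["cheer", "jump", "wave"]}

def populate(seats, excitement="roaring", seed=7):
    rng = random.Random(seed)              # deterministic per-shot variation
    agents = []
    for seat in seats:
        agents.append({
            "position": seat,
            "clip": rng.choice(CLIPS[excitement]),
            "phase": rng.uniform(0.0, 1.0),    # desynchronize the loop
        })
    return agents

seats = [(row, col) for row in range(3) for col in range(4)]  # toy stand
for agent in populate(seats)[:3]:
    print(agent)
```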

The Warrior focuses on Megan Rapinoe standing on the field in the rain, with a roaring crowd behind her. Whereas CG crowd simulation is typically captured with fast-moving cameras, the stadium crowd remains locked in the background of this sequence. Jamm implemented motion work and elements like confetti to make the large group of characters appear lively without detracting from Rapinoe in the foreground. Because the live-action scenes were shot in the rain, Jamm used water graphing to seamlessly blend the real-world footage and the CG crowd work.

The Finisher centers on Alex Morgan, who earned the nickname because “she’s the last thing they’ll see before it’s too late.” The team ran down the field at a slow-motion pace while the cameraman, rigged with a Steadicam, sprinted backwards through the goal. The footage was then sped up by 600%, giving it a realtime quality as Morgan kicks a perfect strike into the back of the net.

Jamm used Autodesk Flame to composite the crowds and CG ball, using camera projections to rebuild and clean up parts of the environment, refine the skies and add stadium branding. They also used Foundry Nuke and Houdini for 3D.

The edit was via FinalCut and editor Spencer Campbell. The color grade was by Technicolor’s Tom Poole.

2 Chainz’s 2 Dolla Bill gets VFX from Timber

Santa Monica’s Timber, known for its VMA-winning work on the Kendrick Lamar music video “Humble,” provided visual effects and post production for “2 Dolla Bill,” the latest music video from 2 Chainz, featuring E-40 and Lil Wayne.

The video begins with a group of people in a living room with the artist singing “I’m rare” while holding a steak. It transitions to a poker game, where the song continues with “I’m rare, like a two dollar bill.” We then see a two dollar bill with Thomas Jefferson singing the phrase as well. The video takes us back to the living room, the poker game, an operating room, a kitchen and other random locations.

Artists at collaborating company Kevin provided 2D visual effects for the music video, including the scene with the third eye.

According to Timber creative director/partner Kevin Lau, “The main challenge for this project was the schedule. It was a quick turnaround initially, so it was great to be able to work in tandem with offline to get ahead of the schedule. This also allowed us to work closely with the director and implement some of his requests to enhance the video after it was shot.”

Timber got involved early on in the project and was on set while they shot the piece. The studio called on Autodesk Flame for clean-up, compositing and enhancement work, as well as the animation of the talking money.

Lau was happy Timber got the chance to be on set. “It was very useful to have a VFX supervisor on set for this project, given the schedule and scope of work. We were able to flag any concerns/issues right away so they didn’t become bigger problems in post.”

Arcade Edit’s Geoff Hounsell edited the piece. Daniel de Vue from A52 provided the color grade.

 

NAB 2019: An engineer’s perspective

By John Ferder

Last week I attended my 22nd NAB, and I’ve got the Ross lapel pin to prove it! This was a unique NAB for me. I attended my first 20 NABs with my former employer, and most of those had me setting up the booth visits for the entire contingent of my co-workers and making sure that the vendors knew we were at each booth and were ready to go. Thursday was my “free day” to go wandering and looking at the equipment, cables, connectors, test gear, etc., that I was looking for.

This year, I’m part of a new project, so I went with a shopping list and a rough schedule with the vendors we needed to see. While I didn’t get everywhere I wanted to go, the three days were very full and very rewarding.

Beck Video IP panel

Sessions and Panels
I also got the opportunity to attend the technical sessions on Saturday and Sunday. I spent my time at the BEITC in the North Hall and the SMPTE Future of Cinema Conference in the South Hall. Beck TV gave an interesting presentation on constructing the IP-based facilities of the future. While SMPTE ST 2110 has been completed and issued, there are still implementation issues, as NMOS is still being developed. Today’s systems are, and for the time being will remain, hybrid facilities. The decision to be made is whether a facility will be built on an IP routing switcher core with gateways to SDI, or on an SDI routing switcher core with gateways to IP.

Although more expensive, building around an IP core would be more efficient and future-proof. Fiber infrastructure design, test equipment and finding engineers who are proficient in both IP and broadcast (the “Purple Squirrels”) are large challenges as well.

A lot of attention was also paid to cloud production and distribution, both in the BEITC and the FoCC. One such presentation, at the FoCC, was on VFX in the cloud with an eye toward the development of 5G. Nathaniel Bonini of BeBop Technology reported that BeBop has a new virtual studio partnership with Avid, and that the cloud allows tasks to be performed in a “massively parallel” way. He expects that 5G mobile technology will facilitate virtualization of the network.

VFX in the Cloud panel

Ralf Schaefer, of the Fraunhofer Heinrich-Hertz Institute, expressed his belief that all devices will be attached to the cloud via 5G, resulting in no cables and no mobile storage media. 5G for AR/VR distribution will render the scene in the network and transmit it directly to the viewer. Denise Muyco of StratusCore provided a link to a virtual workplace: https://bit.ly/2RW2Vxz. She felt that 5G would assist in the speed of the collaboration process between artist and client, making it nearly “friction-free.” While there are always security concerns, 5G would also help the prosumer creators to provide more content.

Chris Healer of The Molecule stated that 5G should help to compress VFX and production workflows, enable cloud computing to work better and perhaps provide realtime feedback for more perfect scene shots, showing line composites of VR renders to production crews in remote locations.

The Floor
I was very impressed with a number of manufacturers this year. Ross Video demonstrated new capabilities of Inception and OverDrive. Ross also showed its new Furio SkyDolly three-wheel rail camera system. In addition, 12G single-link capability was announced for Acuity, Ultrix and other products.

ARRI AMIRA (Photo by Cotch Diaz)

ARRI showed a cinematic multicam system built using the AMIRA camera with a DTS FCA fiber camera adapter back and a base station controllable by Sony RCP1500 or Skaarhoj RCP. The Sony panel will make broadcast-centric people comfortable, but I was very impressed with the versatility of the Skaarhoj RCP. The system is available using either EF, PL, or B4 mount lenses.

During the show, I learned from one of the manufacturers that one of my favorite OLED evaluation monitors is going to be discontinued. This was bad news for the new project I’ve embarked on. Then we came across the Plura booth in the North Hall. Plura was showing a new OLED monitor, the PRM-224-3G. It is a 24.5-inch diagonal OLED featuring two 3G/HD/SD-SDI and three analog inputs, built-in waveform monitors and vectorscopes, LKFS audio measurement, PQ and HLG support, 10-bit color depth, 608/708 closed caption monitoring and more, at a very attractive price.

Sony showed the new HDC-3100/3500 3xCMOS HD cameras with global shutter. These have an upgrade program to UHD/HDR with an optional processor board and signal format software, as well as a 12G-SDI extension kit. There is an optional single-mode fiber connector kit to extend the maximum distance between camera and CCU to 10 kilometers. The CCUs work with the established 1000/1500 series of remote control panels and master setup units.

Sony’s HDC-3100/3500 3xCMOS HD camera

Canon showed its new line of 4K UHD lenses. One of my favorites has been the HJ14ex4.3B HD wide-angle portable lens, which I have installed in many of the studios I’ve worked in. Canon showed the CJ14ex4.3B at NAB, and I was even more impressed with it. The 96.3-degree horizontal angle of view is stunning, and the minimization of chromatic aberration is carried over and perhaps improved from the HJ version. It features correction data that support the BT.2020 wide color gamut, and it works with the existing zoom and focus demand controllers for earlier lenses, so it’s easily integrated into existing facilities.
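
That figure is consistent with basic lens geometry. Assuming the standard 2/3-inch broadcast image format (roughly 9.6mm wide), the horizontal angle of view at the lens’ 4.3mm wide end works out to 2·atan(w/2f):

```python
# Quick check of the quoted angle of view, assuming a 2/3-inch broadcast
# sensor; the 9.59mm image width is the standard figure for that format.
import math

w, f = 9.59, 4.3   # image width (mm) and wide-end focal length (mm)
fov = 2 * math.degrees(math.atan(w / (2 * f)))
print(f"horizontal angle of view: {fov:.1f} degrees")  # ~96.2, matching the 96.3 spec
```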

Foot Traffic
The official total of registered attendees was 91,460, down from 92,912 in 2018. The Evertz booth was actually easy to walk through at 10 a.m. on Monday, which I found surprising given the breadth of interesting new products and technologies Evertz had to show this year. The South Hall had the big crowds, but Wednesday seemed emptier than usual, almost like a Thursday.

The NAB announced that next year’s exhibition will begin on Sunday and end on Wednesday. That change might boost overall attendance, but I wonder how adversely it will affect the attendance at the conference sessions themselves.

I still enjoy attending NAB every year, seeing the new technologies and meeting with colleagues and former co-workers and clients. I hope that next year’s NAB will be even better than this year’s.

Main Image: Barbie Leung.


John Ferder is the principal engineer at John Ferder Engineer, currently Secretary/Treasurer of SMPTE, an SMPTE Fellow, and a member of IEEE. Contact him at john@johnferderengineer.com.

Autodesk’s Flame 2020 features machine learning tools

Autodesk’s new Flame 2020 offers a new machine-learning-powered feature set with a host of new capabilities for Flame artists working in VFX, color grading, look development or finishing. This latest update will be showcased at the upcoming NAB Show.

Advancements in computer vision, photogrammetry and machine learning have made it possible to extract motion vectors, Z depth and 3D normals based on software analysis of digital stills or image sequences. The Flame 2020 release adds built-in machine learning analysis algorithms to isolate and modify common objects in moving footage, dramatically accelerating VFX and compositing workflows.

New creative tools include:
· Z-Depth Map Generator — Enables Z-depth map extraction using machine learning analysis of live-action footage. This allows artists doing color grading or look development to quickly analyze a shot and apply effects accurately based on distance from camera (see the depth sketch after this list).
· Human Face Normal Map Generator — Since all human faces have common recognizable features (relative distance between eyes, nose, location of mouth), machine learning algorithms can be trained to find these patterns. This tool can be used to simplify accurate color adjustment, relighting and digital cosmetic/beauty retouching.
· Refraction — With this feature, a 3D object can now refract light, distorting background objects based on its surface material characteristics. To achieve convincing transparency through glass, ice, windshields and more, the index of refraction can be set to an accurate approximation of real-world material behavior.
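
Autodesk hasn’t published the model behind the Z-Depth Map Generator, so purely as an illustration of the general technique (single-image depth estimation), here is a sketch using the open-source MiDaS model. It assumes PyTorch and OpenCV are installed and uses a hypothetical frame.png; the model downloads via torch.hub on first run:

```python
# Sketch of single-image depth estimation with the open-source MiDaS
# model -- the general technique, not Autodesk's proprietary analyzer.
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")  # lightweight variant
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

img = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)
batch = transforms.small_transform(img)       # resize/normalize for the model

with torch.no_grad():
    pred = midas(batch)                       # relative inverse-depth map
    depth = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze()

d = depth.numpy()
d = (d - d.min()) / (d.max() - d.min())       # normalize for a preview matte
cv2.imwrite("zdepth.png", (d * 65535).astype("uint16"))
```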

Productivity updates include:
· Automatic Background Reactor — Triggered immediately after a shot is modified, this mode sends jobs off to process. Accelerated, automated background rendering allows Flame artists to keep projects moving, using GPU and system capacity to the fullest. This feature is available on Linux only and can function on a single GPU.
· Simpler UX in Core Areas — A new expanded full-width UX layout for MasterGrade, Image surface and several Map User interfaces is now available, making key tools easier to discover and access.
· Manager for Action, Image, Gmask — Manager, a simplified list schematic view, makes it easier to add, organize and adjust video layers and objects in the 3D environment.
· Open FX Support — Flame, Flare and Flame Assist 2020 now include comprehensive support for industry-standard OpenFX creative plugins, whether used as Batch/BFX nodes or on the Flame timeline.
· Cryptomatte Support — Available in Flame and Flare, support for the open-source Cryptomatte rendering technique offers a new way to pack alpha channels for every object in a 3D rendered scene.

For single-user licenses, Linux customers can now opt for monthly, yearly and three-year options. Customers with an existing Mac-only single-user license can transfer it to run Flame on Linux.

Flame, Flare, Flame Assist and Lustre 2020 will be available on April 16, 2019, at no additional cost to customers with a current Flame Family 2019 subscription. Pricing details can be found on the Autodesk website.

VFX and color for new BT spot via The Mill

UK telco BT wanted to create a television spot that showcased the WiFi capabilities of its broadband hub and underlined its promise of “whole home coverage.” Sonny director Fredrik Bond visualized a fun and fast-paced spot for agency AMV BBDO, and The Mill London was brought on board to help with VFX and color. The spot is called Complete WiFi.

In the piece, the hero comes home to find it full of soldiers, angels, dancers, fairies, a giant and a horse — characters from the myriad games and movies the family is watching simultaneously. Obviously, the look depends upon multiple layers of compositing, which have to be carefully scaled to be convincing.

They also need to be very carefully color matched, with similar lighting applied, so all the layers sit together. In a traditional workflow, this would have meant a lot of loops between VFX and grading to get the best from each layer, and a certain amount of compromise as the colorist imposed changes on virtual elements to make the final grade.

To avoid this, and to speed progress, The Mill recently started using BLG for Flame, a FilmLight plugin that allows Baselight grades to be rendered identically within Flame — with no back and forth to the color suite to render out new versions of shots. It means the VFX supervisor is continually seeing the latest grade, and the colorist can access the latest Flame elements to match them in.

“Of course it was frustrating to grade a sequence and then drop the VFX on top,” explains VFX supervisor Ben Turner. “To get the results our collaborators expect, we were constantly pushing material to and fro. We could end up with more than a hundred publishes on a single job.”

With the BLG for Flame plugin, the VFX artist sees the latest Baselight grade automatically applied, either from FilmLight’s BLG format files or directly from a Baselight scene, even while the scene is still being graded — although Turner says he prefers to be warned when updates are coming.

This works because all systems have access to the raw footage. Baselight grades non-destructively, by building up layers of metadata that are imposed in realtime. The metadata includes all the grading information, multiple windows and layers, effects and relights, textures and more – the whole process. This information can be imposed on the raw footage by any BLG-equipped device (there are Baselight Editions software plugins for Avid and Nuke, too) for realtime rendering and review.
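
FilmLight’s BLG format itself is proprietary, but the principle is easy to sketch: the raw frame is never touched, and a small stack of grade operations rides along as metadata, applied at view time. A conceptual Python sketch with generic operators (not FilmLight’s actual math or schema):

```python
# Conceptual sketch of non-destructive, metadata-driven grading: the raw
# frame is never modified; a stack of grade ops is applied at view time.
# Generic math for illustration -- not FilmLight's BLG format or operators.
import numpy as np

grade_metadata = [                       # the "BLG-like" sidecar: pure metadata
    {"op": "exposure", "stops": 0.5},
    {"op": "gamma", "value": 1.1},
    {"op": "saturation", "value": 0.9},
]

def apply_grade(raw: np.ndarray, ops) -> np.ndarray:
    img = raw.astype(np.float32)         # work on a copy; raw stays untouched
    for op in ops:
        if op["op"] == "exposure":
            img = img * (2.0 ** op["stops"])
        elif op["op"] == "gamma":
            img = np.clip(img, 0, None) ** (1.0 / op["value"])
        elif op["op"] == "saturation":
            luma = img.mean(axis=-1, keepdims=True)  # simple average, not Rec.709
            img = luma + (img - luma) * op["value"]
    return img

raw = np.random.rand(4, 4, 3).astype(np.float32)   # stand-in raw frame
graded = apply_grade(raw, grade_metadata)          # rendered view; raw intact
```

Because only this small description travels between systems, any device that understands it can reproduce the grade identically on the same raw footage.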

That is important because it also allows remote viewing. For this BT spot, director Bond was back in Los Angeles by the time of the post. He sat in a calibrated room in The Mill in LA and could see the graded images at every stage. He could react quickly to the first animation tests.

“I can render a comp and immediately show it to a client with the latest grade from The Mill’s colorist, Dave Ludlam,” says Turner. “When the client really wants to push a certain aspect of the image, we can ensure through both comp and grade that this is done sympathetically, maintaining the integrity of the image.”

(L-R) VFX supervisor Ben Turner and colorist Dave Ludlam.

Turner admits that it means more to-ing and fro-ing, but that is a positive benefit. “If I need to talk to Dave then I can pop in and solve a specific challenge in minutes. By creating the CGI to work with the background, I know that Dave will never have to push anything too hard in the final grade.”

Ludlam agrees that this is a complete change, but extremely beneficial. “With this new process, I am setting looks but I am not committing to them,” he says. “Working together I get a lot more creative input while still achieving a much slicker workflow. I can build the grade and only lock it down when everyone is happy.

“It is a massive speed-up, but more importantly it has made our output far superior. It gives everyone more control and — with every job under huge time pressure — it means we can respond quickly.”

The spot was offlined by Patric Ryan from Marshall Street. Audio post was via 750mph with sound designers Sam Ashwell and Mike Bovill.

Veteran VFX supervisor Lindy De Quattro joins MPC Film

Long-time visual effects supervisor Lindy De Quattro has joined MPC Film in Los Angeles. Over the last two and a half decades, which included 21 years at ILM, De Quattro has worked with directors such as Guillermo Del Toro, Alexander Payne and Brad Bird. She also currently serves on the Executive Committee for the VFX branch of the Academy of Motion Picture Arts and Sciences.

De Quattro’s VFX credits include Iron Man 2, Mission Impossible: Ghost Protocol, Downsizing and Pacific Rim, for which she won a VES Award for Outstanding Visual Effects. In addition to supervising visual effects teams, she has also provided on-set supervision.

De Quattro says she was attracted to MPC because of “their long history of exceptional high-quality visual effects, but I made the decision to come on board because of their global commitment to inclusion and diversity in the VFX industry. I want to be an active part of the change that I see beginning to happen all around me, and MPC is giving me the opportunity to do just that. They say, ‘If you can see it, you can be it.’ Girls need role models, and women and other underrepresented groups in the industry need mentors. In my new role at MPC I will strive to be both while contributing to MPC’s legacy of outstanding visual effects.”

The studio’s other VFX supervisors include Richard Stammers (Dumbo, The Martian, X-Men: Days of Future Past), Erik Nash (Avengers Assemble, Titanic), Nick Davis (The Dark Knight, Edge of Tomorrow) and Adam Valdez (The Lion King, Maleficent, The Jungle Book).

MPC Film is currently working on The Lion King, Godzilla: King of the Monsters, Detective Pikachu, Call of the Wild and The New Mutants.

Behind the Title: Nice Shoes animator Yandong Dino Qiu

This artist/designer has taken to sketching people on the subway to keep his skills fresh and mind relaxed.

NAME: Yandong Dino Qiu

COMPANY: New York’s Nice Shoes

CAN YOU DESCRIBE YOUR COMPANY?
Nice Shoes is a full-service creative studio. We offer design, animation, VFX, editing, color grading and VR/AR, working with agencies, brands and filmmakers to help realize their creative vision.

WHAT’S YOUR JOB TITLE?
Designer/Animator

WHAT DOES THAT ENTAIL?
Helping our clients to explore different looks in the pre-production stage, while aiding them in getting as close as possible to the final look of the spot. There’s a lot of exploration and trial and error as we try to deliver beautiful still frames that inform the look of the moving piece.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Not so much for the title, but for myself: design and animation can be quite broad. People may assume you’re only 2D, but the role also involves a lot of other skill sets, such as 3D lighting and rendering. It’s pretty close to a generalist role that requires you to know nearly every piece of software and to turn things around very quickly.

WHAT TOOLS DO YOU USE?
Photoshop, After Effects, Illustrator, InDesign — the full Adobe Creative Suite — and Maxon Cinema 4D.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Pitch and exploration. At that stage, all possibilities are open. The job is alive… like a baby. You’re seeing it form and helping to make new life. Before this, you have no idea what it’s going to look like. After this phase, everyone has an idea. It’s very challenging, exciting and rewarding.

WHAT’S YOUR LEAST FAVORITE?
Revisions. Especially toward the end of a project. Everything is set up. One little change will affect everything else.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
2:15pm. It’s right after lunch. You know you have the whole afternoon. The sun is bright. The mood is light. It’s not too late for anything.

Sketching on the subway.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I would be a Manga artist.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
La Mer. Frontline. Friskies. I’ve also been drawing during my commute every day, sketching the people I see on the subway. I’m trying to post every week on Instagram. I think it’s important for artists to keep to a routine. I started up with this at the beginning of 2019, and there’ve been about 50 drawings already. Artists need to keep their pen sharp all the time. By doing these sketches, I’m not only benefiting my drawing skills, but I’m improving my observation of shapes and compositions, which is extremely valuable for work. Being able to break down shapes and components is a key principle of design, and honing that skill helps me in responding to client briefs.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
TED-Ed What Is Time? We had a lot of freedom in figuring out how to animate Einstein’s theories in a fun and engaging way. I worked with our creative director Harry Dorrington to establish the look and then with our CG team to ensure that the feel we established in the style frames was implemented throughout the piece.

TED-Ed What Is Time?

The film was extremely well received. There was a lot of excitement at Nice Shoes when it premiered, and TED-Ed’s audience seemed to respond really warmly as well. It’s rare to see so much positivity in the YouTube comments.

NAME SOME TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My Wacom tablet for drawing and my iPad for reading.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I take time and draw for myself. I love that drawing and creating is such a huge part of my job, but it can get stressful and tiring only creating for others. I’m proud of that work, but when I can draw something that makes me personally happy, any stress or exhaustion from the work day just melts away.

FilmLight offers additions to Baselight toolkit

FilmLight will be at NAB showing updates to its Baselight toolkit, including T-Cam v2. This is FilmLight’s new and improved color appearance model, which allows the user to render an image for all formats and device types with confidence of color.

It combines with the Truelight Scene Looks and ARRI Look Library, now implemented within the Baselight software. “T-CAM color handling with the updated Looks toolset produces a cleaner response compared to creative, camera-specific LUTs or film emulations,” says Andrea Chlebak, senior colorist at Deluxe’s Encore in Hollywood. “I know I can push the images for theatrical release in the creative grade and not worry about how that look will translate across the many deliverables.”

FilmLight has added what it calls “a new approach to color grading” with the addition of Texture Blend tools, which allow the colorist to apply any color grading operation dependent on image detail. This gives the colorist fine control over the interaction of color and texture.

Other workflow improvements aimed at speeding the process include enhanced cache management; a new client view that displays a live web-based representation of a scene showing current frame and metadata; and multi-directory conform for a faster and more straightforward conform process.

The latest version of Baselight software also includes per-pixel alpha channels, eliminating the need for additional layer mattes when compositing VFX elements. Tight integration with VFX suppliers, including Foundry Nuke and Autodesk, means that new versions of sequences can be automatically detected, with the colorist able to switch quickly between versions within Baselight.

VFX house Rodeo FX acquires Rodeo Production

Visual effects studio Rodeo FX, whose high-profile projects include Dumbo, Aquaman and Bumblebee, has purchased Rodeo Production and added its roster of photographers and directors to its offerings.

The two companies, whose common name is just a coincidence, will continue to operate as distinct entities. Rodeo Production’s 10-year-old Montreal office will continue to manage photo and video production, but will now also offer RodeoFX’s post production services and technical expertise.

In Toronto, Rodeo FX plans to open an Autodesk Flame editing suite in the Rodeo Production studio and expand its Toronto roster of photographers and directors, with the goal of developing stronger production and post services for clients in the city’s advertising, television and film industries.

“This is a milestone in our already incredible history of growth and expansion,” says Sébastien Moreau, founder/president of Rodeo FX, which has offices in LA and Munich in addition to Montreal.

“I have always worked hard to give our artists the best possible opportunities, and this partnership was the logical next step,” says Rodeo Production’s founder Alexandra Saulnier. “I see this as a fusion of pure creativity and innovative technology. It’s the kind of synergy that Montreal has become famous for; it’s in our DNA.”

Rodeo Production clients include Ikea, Under Armour and Mitsubishi.

Quick Chat: Lord Danger takes on VFX-heavy Devil May Cry 5 spot

By Randi Altman

Visual effects for spots have become more and more sophisticated, and the recent Capcom trailer promoting the availability of its game Devil May Cry 5 is a perfect example.

The Mike Diva-directed Something Greater starts off like it might be a commercial for an antidepressant, with images of a woman cooking dinner for guests, people working at a construction site, a bored guy trimming hedges… but suddenly each of our “Everyday Joes” turns into a warrior fighting baddies in a video game.

Josh Shadid

The hedge trimmer’s right arm turns into a futuristic weapon, the construction worker evokes a panther to fight a monster, and the lady cooking is seen with guns a blazin’ in both hands. When she runs out of ammo, and to the dismay of her dinner guests, her arms turn into giant saws. 

Lord Danger’s team worked closely with Capcom USA to create this over-the-top experience, and they provided everything from production to VFX to post, including sound and music.

We reached out to Lord Danger founder/EP Josh Shadid to learn more about their collaboration with Capcom, as well as their workflow.

How much direction did you get from Capcom? What was their brief to you?
Capcom’s director of brand marketing for fighting games, Charlene Ingram, came to us with a simple request — make a memorable TV commercial that did not use gameplay footage but still illustrated the intensity and epic-ness of the DMC series.

What was it shot on and why?
We shot on both Arri Alexa Mini and Phantom Flex 4K using Zeiss Super Speed MkII prime lenses, thanks to our friends at Antagonist Camera, and a Technodolly motion control crane arm. We used the Phantom on the Technodolly to capture the high-speed shots, speed ramping through character actions while maintaining 4K resolution for post in both the garden and kitchen transformations.

We used the Alexa Mini on the rest of the spot. It’s our preferred camera for most of our shoots because we love the combination of its size and image quality. The Technodolly allowed us to create frame-accurate, repeatable camera movements around the characters so we could seamlessly stitch together multiple shots as one. We also needed to cue the fight choreography to sync up with our camera positions.

You had a VFX supervisor on set. Can you give an example of how that was beneficial?
We did have a VFX supervisor on site for this production. Our usual VFX supervisor is one of our lead animators — having him on site to work with means we’re often starting elements in our post production workflow while we’re still shooting.

Assuming some of it was greenscreen?
We shot elements of the construction site and gardening scene on greenscreen. We used pop-ups to film these elements on set so we could mimic camera moves and lighting perfectly. We also took photogrammetry scans of our characters to help rebuild parts of their bodies during transition moments, and to emulate flying without requiring wire work — which would have been difficult to control outside during windy and rainy weather.

Can you talk about some of the more challenging VFX?
The shot of the gardener jumping into the air while the camera spins around him twice was particularly difficult. The camera starts on a 45-degree frontal, swings behind him and then returns to a 45-degree frontal once he’s in the air.

We had to digitally recreate the entire street, so we used the technocrane at its highest position to capture data from a slow pan across the neighborhood in order to rebuild the world. We also had to shoot this scene in several pieces and stitch it together. Since we didn’t use wire work to suspend the character, we also had to recreate the lower half of his body in 3D to achieve a natural-looking jump position. That, combined with the CG weapon elements, made for a challenging composite — but in the end, it turned out really dramatic (and pretty cool).

Were any of the assets provided by Capcom? All created from scratch?
We were provided with the character and weapons models from Capcom — but these were in-game assets, and if you’ve played the game you’ll see that the environments are often dark and moody, so the textures and shaders really didn’t apply to a real-world scenario.

Our character modeling team had to recreate and re-interpret what these characters and weapons would look like in the real world — and they had to nail it — because game culture wouldn’t forgive a poor interpretation of these iconic elements. So far the feedback has been pretty darn good.

In what ways did being the production company and the VFX house on the project help?
The separation of creative from production and post production is an outdated model. The time it takes to bring each team up to speed, to manage the communication of ideas between creatives and to ensure there is a cohesive vision from start to finish increases both the costs and the time it takes to deliver a final project.

We shot and delivered all of Devil May Cry’s Something Greater in four weeks total, all in-house. We find that working as the production company and VFX house reduces the ratio of managers per creative significantly, putting more of the money into the final product.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Review: Yanobox Nodes 3 — plugins for Premiere, AE, FCPX, Motion

By Brady Betzel

Did you ever see a plugin preview and immediately think, “I need to have that?” Well, Nodes 3 by Yanobox is that plugin for me. Imagine if Video CoPilot’s Element 3D and Red Giant’s Trapcode and Form had a baby — you would probably end up with something like Nodes 3.

Nodes 3 is a macOS-only plugin for Adobe’s After Effects and Premiere Pro and Apple’s Final Cut Pro X and Motion. I know what you are thinking: Why isn’t this made for Windows? Good question, but I don’t think it will ever be ported over.

Final Cut Pro

What is it? Nodes 3 is a particle, text, .obj and point cloud replicator, as well as overall mind-blower. With just one click in their preset library you can create stunning fantasy user interfaces (FUIs), such as HUDs or the like. From Transformer-like HUDs to visual data representations interconnected with text and bar graphs, Nodes 3 needs to be seen to be believed. Ok, enough gloating and fluff, let’s get to the meat and potatoes.

A Closer Look
Nodes 3 features a new replicator, animation module and preset browser. The replicator not only lets you create your HUD or data representation, but also replicates it onto other 2D and 3D primitive shapes (like circles or rectangles) and animates those replications individually or as a group. One thing I really love is the ability to randomize node and/or line values — Yanobox labels this “Probabilities.” You can immediately throw together multiple variations of your work with a few mouse clicks instead of lines of scripting.

As I mentioned earlier, Nodes 3 is essentially a mix of Element 3D and Trapcode — it’s part replicator/part particle generator, and it works easily with After Effects’ 3D cameras (obviously, if you are working inside of After Effects) to affect rotations, scale and orientation. The result is particle replication that feels organic and fresh instead of static and stale. The Auto-Animations offering allows you to quickly animate up to four parts of a structure you’ve built, with 40 parameter choices under each of the four slots. You can animate the clockwise rotation of an ellipse with a point on it while also rotating the entire structure in toward the z-axis.

Replicator

The newly updated preset browser allows you to save a composition as a preset and open it from within any other compatible host. This allows you to make something with Nodes 3 inside of After Effects and then work with it inside of Final Cut Pro X. That can be super handy and help streamline VFX work. From an imported .obj file to real video, you can generate point clouds from unlimited objects and literally explode them into hundreds of interconnecting points and lines, all animated randomly. It’s amazing.

If you are seeing this and thinking about using Nodes for data representation, that is one of the more beautiful functions of this plugin. First, check out how to turn seemingly boring bar graphs into mesmerizing creations.

For me, Nodes really began to click when I understood that each node is defined by an index number. Each node is assigned an even or odd number, which allows for some computer-science geekiness, like skipping even or odd rows and adding animated oscillations for some really engrossing graph work.
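
Nodes 3 drives this through its UI rather than scripting, but the indexing idea is easy to show in plain Python (a conceptual sketch, not Yanobox code): skip the odd-numbered nodes and give the rest a sine oscillation whose phase depends on the index.

```python
# Conceptual sketch of index-driven node animation, in the spirit of
# Nodes 3's even/odd index tricks (not actual Yanobox code).
import math

def node_heights(count: int, t: float, skip_odd: bool = True):
    """Bar height per node: odd indices skipped, the rest oscillate."""
    heights = []
    for i in range(count):
        if skip_odd and i % 2 == 1:
            heights.append(0.0)                  # skipped row
        else:
            heights.append(1.0 + 0.5 * math.sin(t + i * 0.7))  # phase by index
    return heights

for t in (0.0, 0.5, 1.0):
    print([f"{h:.2f}" for h in node_heights(8, t)])
```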

When I reviewed Nodes 2 back in 2014, what really gave me a “wow” moment was a map of the United States along with text for each state and its capital. From there you could animate an After Effects 3D camera to reproduce a fly-over, but with this futuristic HUD/FUI.

Adobe Premiere

On a motion graphics primal level, this really changed and evolved my way of thinking. Maps of the United States no longer had to be plain graphics with animated dotted lines; they could be reimagined with sine-wave-based animations or even gently oscillating data points. Nodes 3 really can turn boring into mesmerizing quickly. The only limiting factor is your mind and some motion graphic design creativity.

To get a relatively quick look into the new replicator options inside of Nodes 3, go to FxFactory Plugins’ YouTube page for great tutorials and demos.

If you get even a tiny bit excited when seeing work from HUD masters like Jayse Hansen or plugins like Element 3D, run over to fxfactory.com and download their plugin app to use Yanobox Nodes 3. You can even get a fully working trial to just test out some of their amazing presets. And if you like what you see, you should definitely hand them $299 for the Nodes 3 plugin.

One slight negative for me — I’m not a huge fan of the FxFactory installer. Not because it messes anything up, but because I have to download a plugin loader for the plugin — double download and potential bloating. Not that I see any slowdown on my system, but it would be nice if I could just download Nodes 3 and nothing else. That is small potatoes though; Nodes 3 is really an interesting and unbridled way to visualize 2D and 3D data quickly.

Oh, and if you are curious, Yanobox has been used on big-name projects from The Avengers to Rise of the Planet of the Apes — HUDs, FUIs and GUIs have been created using Yanobox Nodes.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

VFX supervisor Christoph Schröer joins NYC’s Artjail

New York City-based VFX house Artjail has added Christoph Schröer as VFX supervisor. Previously a VFX supervisor/senior compositor at The Mill, Schröer brings over a decade of experience to his new role at Artjail. His work has been featured in spots for Mercedes-Benz, Visa, Volkswagen, Samsung, BMW, Hennessy and Cartier.

Combining his computer technology expertise and a passion for graffiti design, Schröer applied his degree in Computer and Media Sciences to begin his career in VFX. He started off working at visual effects studios in Germany and Switzerland where he collaborated with a variety of European auto clients. His credits from his tenure in the European market include lead compositor for multiple Mercedes-Benz spots, two global Volkswagen campaign launches and BMW’s “Rev Up Your Family.”

In 2016, Schröer made the move to New York to take on a role as senior compositor and VFX supervisor at The Mill. There, he teamed with directors such as Tarsem Singh and Derek Cianfrance, and worked on campaigns for Hennessy, Nissan Altima, Samsung, Cartier and Visa.

Roper Technologies set to acquire Foundry

Roper Technologies, a technology company and a constituent of the S&P 500, Fortune 1000 and Russell 1000 indices, is expected to purchase Foundry; the deal is expected to close in April 2019, subject to regulatory approval and customary closing conditions. Foundry makes software tools used to create visual effects and 3D content for the media and entertainment world, including Nuke, Modo, Mari and Katana.

Craig Rodgerson

It’s a substantial move that enables Foundry to remain an independent company, with Roper assuming ownership from Hg. Roper has a successful history of acquiring well-run technology companies in niche markets that have strong, sustainable growth potential.

“We’re excited about the opportunities this partnership brings. Roper understands our strategy and chose to invest in us to help us realize our ambitious growth plans,” says Foundry CEO Craig Rodgerson. “This move will enable us to continue investing in what really matters to our customers: continued product improvement, R&D and technology innovation and partnerships with global leaders in the industry.”