Company 3 buys Sixteen19, offering full-service post in NYC

Company 3 has acquired Sixteen19, a creative editorial, production and post company based in New York City. The deal includes Sixteen19’s visual effects wing, PowerHouse VFX, and a mobile dailies operation with international reach.

The acquisition helps Company 3 further serve NYC’s booming post market for feature film and episodic TV. As part of the acquisition, industry veterans and Sixteen19 co-founders Jonathan Hoffman and Pete Conlin, along with their longtime collaborator, EVP of business development and strategy Alastair Binks, will join Company 3’s leadership team.

“With Sixteen19 under the Company 3 umbrella, we significantly expand what we bring to the production community, addressing a real unmet need in the industry,” says Company 3 president Stefan Sonnenfeld. “This infusion of talent and infrastructure will allow us to provide a complete suite of services for clients, from the start of production through the creative editing process to visual effects, final color, finishing and mastering. We’ve worked in tandem with Sixteen19 many times over the years, so we know that they have always provided strong client relationships, a best-in-class team and a deeply creative environment. We’re excited to bring that company’s vision into the fold at Company 3.”

Sonnenfeld will continue to serve as president of Company 3 and will oversee operations of Sixteen19. As a subsidiary of Deluxe, Company 3 is part of a broad portfolio of post services. Bringing together the complementary services and geographic reach of Company 3, Sixteen19 and PowerHouse VFX will expand Company 3’s overall portfolio of post offerings and reach new markets in the US and internationally.

Sixteen19’s New York location includes 60 large editorial suites; two 4K digital cinema grading theaters; and a number of comfortable open spaces and common areas. Sixteen19’s mobile dailies services will be a natural companion to Company 3’s existing offerings in that arena. PowerHouse VFX includes dedicated teams of experienced supervisors, producers and artists in 2D and 3D visual effects and compositing.

“The New York film community initially recognized the potential for a Company 3 and Sixteen19 partnership,” says Sixteen19’s Hoffman. “It’s not just the fact that a significant majority of the projects we work on are finished at Company 3, it’s more that our fundamental vision about post has always been aligned with Stefan’s. We value innovation; we’ve built terrific creative teams; and above all else, we both put clients first, always.”

Sixteen19 and PowerHouse VFX will retain their company names.

Game of Thrones’ Emmy-nominated visual effects

By Iain Blair

Once upon a time, only glamorous movies could afford the time and money it took to create truly imaginative and spectacular visual effects. Meanwhile, television shows either tried to avoid them altogether or had to rely on hand-me-downs. But the digital revolution changed all that, with technological advances and new tools quickly leveling the playing field. Today, television is giving the movies a run for their money when it comes to sophisticated visual effects, as evidenced by HBO’s blockbuster series, Game of Thrones.

Mohsen Mousavi

This fantasy series was recently Emmy-nominated a record-busting 32 times for its eighth and final season — including one for its visually ambitious VFX in the penultimate episode, “The Bells.”

The epic mass destruction presented Scanline’s VFX supervisor, Mohsen Mousavi, and his team with many challenges. But his expertise in high-end visual effects, and his reputation for constant innovation in advanced methodology, made him a perfect fit to oversee Scanline’s VFX for the crucial last three episodes of the final season of Game of Thrones.

Mousavi started his VFX career in the field of artificial intelligence and advanced physics-based simulations. He spearheaded the design and development of many proprietary toolsets and pipelines for crowd, fluid and rigid-body simulation, including FluidIT, BehaveIT and CardIT, a node-based crowd choreography toolset.

Prior to joining Scanline VFX Vancouver, Mousavi rose through the ranks of top visual effects houses, working in jobs that ranged from lead effects technical director to CG supervisor and, ultimately, VFX supervisor. He’s been involved in such high-profile projects as Hugo, The Amazing Spider-Man and Sucker Punch.

In 2012, he began working with Scanline, acting as digital effects supervisor on 300: Rise of an Empire, for which Scanline handled almost 700 water-based sea battle shots. He then served as VFX supervisor on San Andreas, helping develop the company’s proprietary city-generation software. That software and pipeline were further developed and enhanced for scenes of destruction in director Roland Emmerich’s Independence Day: Resurgence. In 2017, he served as the lead VFX supervisor for Scanline on the Warner Bros. shark thriller, The Meg.

I spoke with Mousavi about creating the VFX and about Scanline’s pipeline.

Congratulations on being Emmy-nominated for “The Bells,” which showcased so many impressive VFX. How did all your work on Season 4 prepare you for the big finale?
We were heavily involved in the finale of Season 4; however, the scope was far smaller. What we learned was how the collaboration worked, the nature of the show and what HBO expected in terms of the quality of the work.

You were brought onto the project by lead VFX supervisor Joe Bauer, correct?
Right. Joe was the “client VFX supervisor” on the HBO side and had been involved since Season 3. Together with my producer, Marcus Goodwin, we worked closely with HBO’s lead visual effects producer, Steve Kullback, who I’d worked with before on a different show and in a different capacity. We all had daily sessions and conversations, a lot of back and forth, and Joe would review the entire body of work, give us feedback and manage everything between us and other vendors, like Weta, Image Engine and Pixomondo. This was done both technically and creatively, so no one stepped on each other’s toes if we were sharing a shot and assets. But it was so well-planned that there wasn’t much overlap.

[Editor’s Note: Here is the full list of those nominated for their VFX work on Game of Thrones — Joe Bauer, lead visual effects supervisor; Steve Kullback, lead visual effects producer; Adam Chazen, visual effects associate producer; Sam Conway, special effects supervisor; Mohsen Mousavi, visual effects supervisor; Martin Hill, visual effects supervisor; Ted Rae, visual effects plate supervisor; Patrick Tiberius Gehlen, previz lead; and Thomas Schelesny, visual effects and animation supervisor.]

What were you tasked with doing on Season 8?
We were involved as one of the lead vendors on the last three episodes and covered a variety of sequences. In episode four, “The Last of the Starks,” we worked on the confrontation between Daenerys and Cersei in front of the King’s Landing gate, which included a full CG environment of the gate and the landscape around it, as well as Missandei’s death sequence, which featured a full CG Missandei. We also did the animated Drogon outside the gate while the negotiations took place.

Then for “The Bells,” we were responsible for most of the Battle of King’s Landing, which included the full digital city; Daenerys’ army campsite outside the walls of King’s Landing; the gathering of soldiers in front of the King’s Landing walls; and Dany’s attack on the scorpions, the city gate, the streets and the Red Keep, which had some very close-up set extensions, close-up fire and destruction simulations and full CG crowds of various factions — armies and civilians. We also did the iconic Cleganebowl fight between The Hound and The Mountain, and Jaime Lannister’s fight with Euron on the beach beneath the Red Keep. For that episode, we also received raw animation caches of the dragon from Image Engine and did the full look-dev, lighting and rendering of the final dragon in our composites.

For the final episode, “The Iron Throne,” we were responsible for the entire Daenerys speech sequence, which included a full 360-degree digital environment of the city aftermath and the Red Keep plaza filled with digital Unsullied, Dothraki and CG horses, leading into the majestic confrontation between Jon and Drogon, where the dragon reveals itself from underneath a huge pile of snow outside the Red Keep. We were also responsible for the iconic throne-melt sequence, which included some advanced simulation of highly viscous fluid and destruction of the area around the throne, and we finished the dramatic sequence with Drogon carrying Dany out of the throne room and away from King’s Landing into the unknown.

Where was all this work done?
The majority of the work was done here in Vancouver, which is the biggest Scanline office. Additionally, we had teams working in our Munich, Montreal and LA offices. We’re a 100% connected company, all working under the same infrastructure in the same pipeline. So if I work with the team in Munich, it’s like they’re sitting in the next room. That allows us to set up and attack the project with a larger crew and get the benefit of a 24/7 scenario; as we go home, they can continue working, and it makes us far more productive.

How many VFX did you have to create for the final season?
We worked on over 600 shots across the final three episodes, which gave us approximately an hour of screen time of high-end, consistent visual effects.

Isn’t an hour of screen time unusual for 600 shots?
Yes, but we had a number of shots that were really long, including some ground-coverage shots of Arya in the streets of King’s Landing that ran four or five minutes. So we had the complexity along with the long duration.

How many people were on your team?
At the height, we had about 350 artists on the project, and we began in March 2018 and didn’t wrap till nearly the end of April 2019 — so it took us over a year of very intense work.

Tell us about the pipeline specific to Game of Thrones.
Scanline has an industry-wide reputation for delivering very complex, full CG environments combined with complex simulation scenarios of all sorts of fluid dynamics and destruction, based on our simulation framework, Flowline. We had a high-end digital character and hero creature pipeline that gave the final three episodes a boost up front. What was new were the additions to our procedural city-generation pipeline for the recreation of King’s Landing, making sure it could deliver both in wide-angle shots and in some extreme close-up set extensions.

How did you do that?
We used a framework we developed back on Independence Day: Resurgence, which is a module-based procedural city-generation system leveraging some incredible scans of the historic city of Dubrovnik as a blueprint and foundation for King’s Landing. Instead of doing the modeling conventionally, you model a lot of small modules, kind of like Lego blocks. You create various windows, stones, doors, shingles and so on, and once they’re encoded in the system, you can semi-automatically generate variations of buildings on the fly. The same goes for texturing. We had procedurally generated layers of façade textures, which gave us a lot of flexibility in texturing the entire city, with full control over the level of aging and damage. We could easily decide to make a block look older without going back to square one. That’s how we could create King’s Landing with its hundreds of thousands of unique buildings.

The same technology was applied to the aftermath of the city in Episode 6. We took the intact King’s Landing and ran a number of procedural collapsing simulations on the buildings to get the correct weight based on references from the bombed city of Dresden during WWII, and then we added procedurally created CG snow on the entire city.
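
To make the Lego-block idea concrete, here is a minimal Python sketch of module-based building generation with an aging control. It is purely illustrative: the module names, the generate_building helper and the aging bias are invented for this example and are not Scanline’s actual tools.

```python
import random

# Hypothetical module library: each category holds interchangeable,
# pre-modeled pieces, ordered roughly from pristine to weathered.
MODULES = {
    "wall":   ["wall_plain", "wall_patched", "wall_cracked"],
    "window": ["window_arched", "window_shuttered", "window_broken"],
    "door":   ["door_wood", "door_iron", "door_splintered"],
    "roof":   ["roof_tile", "roof_shingle", "roof_collapsed"],
}

def generate_building(seed, floors=3, bays=4, age=0.0):
    """Assemble one building description from the module library.

    `age` (0.0 to 1.0) biases picks toward the weathered end of each
    list, mimicking the procedural aging/damage control described
    above: raising it makes a whole block look older without
    remodeling anything.
    """
    rng = random.Random(seed)  # same seed always yields the same building

    def pick(category):
        options = MODULES[category]
        u = rng.random() ** (1.0 - 0.9 * age)  # higher age skews u toward 1.0
        return options[min(int(u * len(options)), len(options) - 1)]

    floors_out = []
    for floor in range(floors):
        bays_out = []
        for bay in range(bays):
            feature = "door" if floor == 0 and bay == bays // 2 else "window"
            bays_out.append({"wall": pick("wall"), "feature": pick(feature)})
        floors_out.append(bays_out)
    return {"roof": pick("roof"), "floors": floors_out}

# Semi-automatically generate a street of unique-but-consistent buildings.
street = [generate_building(seed=i, age=0.3) for i in range(12)]
```

In a real pipeline the string names would be actual geometry assets, but the principle is the same: encode the modules once, then vary seed and age to generate endless unique buildings.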

It didn’t look like the usual matte paintings were used at all.
You’re right, and there were a lot of shots that normally would be done that way, but to Joe’s credit, he wanted to make sure the environments weren’t cheated in any way. That was a big challenge, to keep everything consistent and accurate. Even if we used traditional painting methods, it was all done on top of an accurate 3D representation with correct lighting and composition.

What other tools did you use?
We use Autodesk Maya for all our front-end departments, including modeling, layout, animation, rigging and creature effects, and we bridge the results to Autodesk 3ds Max, which encapsulates our look-dev, FX and rendering departments, powered by Flowline, with Chaos Group’s V-Ray as our primary render engine. Foundry’s Nuke is our main compositing package.

At the heart of our crowd pipeline is Massive, and our creature department is driven by Ziva muscles, a collaboration we started with Ziva Dynamics for the creation of the hero megalodon in The Meg.

Fair to say that your work on Game of Thrones was truly cutting-edge?
Game of Thrones has pushed the limit above and beyond and has effectively erased the TV/feature line. In terms of environment and effects and the creature work, this is what you’d do for a high-end blockbuster for the big screen. No difference at all.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Behind the Title: MPC Senior Compositor Ruairi Twohig

After studying hand-drawn animation, this artist found his way to visual effects.

NAME: NYC-based Ruairi Twohig

COMPANY: Moving Picture Company (MPC)

CAN YOU DESCRIBE YOUR COMPANY?
MPC is a global creative and visual effects studio with locations in London, Los Angeles, New York, Shanghai, Paris, Bangalore and Amsterdam. We work with clients and brands across a range of different industries, handling everything from original ideas through to finished production.

WHAT’S YOUR JOB TITLE?
I work as a 2D lead/senior compositor.

Cadillac

WHAT DOES THAT ENTAIL?
The tasks and responsibilities can vary depending on the project. My involvement with a project can begin before there’s even a script or storyboard, when we need to estimate how much VFX will be involved and how long the work will take. As the project develops and the direction becomes clearer, with scripts, storyboards and concept art, we refine this estimate and schedule, and we work with our clients to plan the shoot and make sure we have all the information and assets we need.

Once the commercial is shot and we have an edit, the bulk of the post work begins. This can involve anything from compositing fully CG environments, dragons or spaceships to beauty and product/pack-shot touch-ups or rig removal. So my role involves a combination of overall project management and planning, but I also get into the detailed shot work and, ultimately, delivering the final picture. The majority of the work I do can require a large team of people with different specializations, and those are usually the projects I find the most fun and rewarding due to the collaborative nature of the work.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I think the variety of the work would surprise most people unfamiliar with the industry. In a single day, I could be working on two or three completely different commercials with completely different challenges while also bidding future projects or reviewing prep work in the early stages of a current project.

HOW LONG HAVE YOU BEEN WORKING IN VFX?
I’ve been working in the industry for over 10 years.

HOW HAS THE VFX INDUSTRY CHANGED IN THE TIME YOU’VE BEEN WORKING?
The VFX industry is always changing. I find it exciting to see how quickly the technology is advancing and becoming more widely accessible, cost-effective and faster.

I still find it hard to comprehend the idea of using optical printers for VFX back in the day … before my time. Some of the most interesting areas for me at the moment are the developments in realtime rendering from engines such as Unreal and Unity, and the implementation of AI/machine learning tools that might be able to automate some of the more time-consuming tasks in the future.

DID A PARTICULAR FILM INSPIRE YOU ALONG THIS PATH IN ENTERTAINMENT?
I remember when I was 13, my older brother — who was studying architecture at the time — introduced me to 3ds Max, and I started playing around with some very simple modeling and rendering.

I would buy these monthly magazines like 3D World, which came with demo discs for different software and some CG animation compilations. One of the issues included the short CG film Fallen Art by Tomek Baginski. At the time I was mostly familiar with Pixar’s feature animation work like Toy Story and A Bug’s Life, so watching this short film created using similar techniques but with such a dark, mature tone and story really blew me away. It was this film that inspired me to pursue animation and, ultimately, visual effects.

DID YOU GO TO FILM SCHOOL?
I studied traditional hand-drawn animation at the Dun Laoghaire Institute of Art, Design and Technology in Dublin. This was a really fun course in which we spent the first two years focusing on the craft of animation and the fundamental principles of art and design, followed by another two years in which we had a lot of freedom to make our own films. It was during these final two years of experimentation that I started to move away from traditional animation and focus more on learning CG and VFX.

I really owe a lot to my tutors, who were really supportive during that time. I also had the opportunity to learn from visiting animation masters such as Andreas Deja, Eric Goldberg and John Canemaker. Although on the surface the work I do as a compositor is very different from animation, understanding those fundamental principles has really helped my compositing work; any additional disciplines or skills you develop in your career that require an eye for detail and aesthetics will always make you a better overall artist.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Even after 10 years in the industry, I still get satisfaction from the problem-solving aspect of the job, even on the smaller tasks. I love getting involved on the more creative projects, where I have the freedom to develop the “look” of the commercial/film. But, day to day, it’s really the team-based nature of the work that keeps me going. Working with other artists, producers, directors and clients to make a project look great is what I find really enjoyable.

WHAT’S YOUR LEAST FAVORITE?
Sometimes even if everything is planned and scheduled accordingly, a little hiccup along the way can easily impact a project, especially on jobs where you might only have a limited amount of time to get the work done. So it’s always important to work in such a way that allows you to adapt to sudden changes.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I used to draw all day, every day as a kid. I still sketch occasionally, but maybe I would have pursued a more traditional fine art or illustration career if I hadn’t found VFX.

Tiffany & Co.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Over the past year, I’ve worked on projects for clients such as Facebook, Adidas, Samsung and Verizon. I also worked on the Tiffany & Co. campaign “Believe in Dreams” directed by Francis Lawrence, as well as the company’s holiday campaign directed by Mark Romanek.

I also worked on Cadillac’s “Rise Above” campaign for the 2019 Oscars, which was challenging since we had to deliver four spots within a short timeframe. But it was a fun project. There was also the Michelob Ultra Robots Super Bowl spot earlier this year. That was an interesting project, as the work was completed between our LA, New York and London studios.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Last year, I had the chance to work with my friend and director Sofia Astrom on the music video for the song “Bone Dry” by Eels. It was an interesting project since I’d never done visual effects for stop-motion animation before. This had its own challenges, and the style of the piece was very different from what I’m used to working on day to day. It had a much more handmade feel to it, and the visual effects design had to reflect that, which was such a change from the work I usually do in commercials, which generally leans more toward photorealistic visual effects.

WHAT TOOLS DO YOU USE DAY TO DAY?
I mostly work with Foundry Nuke for shot compositing. When leading a job that requires a broad overview of the project and timeline management/editorial tasks, I use Nuke Studio or Autodesk Flame, depending on the requirements of the project. I also use ftrack daily for project management.

WHERE DO YOU FIND INSPIRATION NOW?
I follow a lot of incredibly talented concept artists and photographers/filmmakers on Instagram. Viewing these images/videos on a tiny phone doesn’t always do justice to the work, but the platform is so active that it’s a great resource for inspiration and finding new artists.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I like to run and cycle around the city when I can. During the week it can be easy to get stuck in a routine of sitting in front of a screen, so getting out and about is a much-needed break for me.

Shipping + Handling adds Jerry Spivack, Mike Pethel, Matthew Schwab

VFX creative director Jerry Spivack and colorists Michael Pethel and Matthew Schwab have joined Shipping + Handling, the LA-based VFX, color grading, animation and finishing arm and sister company of editorial house Spot Welders.

Alongside executive producer Scott Friske and current creative director Casey Price, Spivack will help lead the company’s creative team. As creative director and co-founder of Ring of Fire, Spivack was responsible for crafting and spearheading VFX on commercials for brands including FedEx, Nike and Jaguar; episodic work for series television, including Netflix’s Wormwood and 12 seasons of FX’s It’s Always Sunny in Philadelphia; promos for NBC’s The Voice and The Titan Games; and feature films such as Sony Pictures’ Spider-Man 2, Bold Films’ Drive and Warner Bros.’ The Bucket List.

Colorist Pethel was a founding partner of Company 3 and for the past five years has served clients and directors under his BeachHouse Color brand, which he will continue to maintain. Pethel’s body of work includes campaigns for Carl’s Jr., Chase, Coke, Comcast/Xfinity, Hyundai, Jeep, Netflix and Southwest Airlines.

Commenting on the move, Pethel says, “I’m thrilled to be joining such a fantastic group of highly regarded and skilled professionals at Shipping + Handling. There is so much creativity here; the people are awesome to work with and the technology they are able to offer clientele at the facility is top-notch.”

Schwab formally joins the Shipping + Handling roster after working closely with the company over the past two years on multiple campaigns for Apple, Acura, QuickBooks and many others. Aside from his role at Shipping + Handling, Schwab will also continue his work through Roving Picture Company. Having worked with a number of internationally recognized brands, Schwab has collaborated on projects for Amazon, Honda, Mercedes-Benz, National Geographic, Netflix, Nike, PlayStation and Smirnoff.

“It’s exciting to be part of a team that approaches every project with such energy. This partnership represents a shared commitment to always deliver outstanding color and technical results for our clients,” says Schwab.

“Pethel is easily amongst the best colorists in our industry. As a longtime client of his, I have a real understanding of the professionalism he brings to every session. He is a delight in the room and wickedly talented. Schwab’s talent has just been realized in the last few years, and we are pleased to offer his skill to our clients. If our experience working with him over the last couple of years is any indication, we’re going to make a lot of clients happy he’s on our roster,” adds Friske.

Spivack, Pethel and Schwab will operate out of Shipping + Handling’s West Coast office on the creative campus it shares with its sister company, editorial post house Spot Welders.

Image: (L-R) Mike Pethel, Matthew Schwab, Jerry Spivack

Matthew Bristowe joins Jellyfish as COO

UK-based VFX and animation studio Jellyfish Pictures has hired Matthew Bristowe as chief operating officer. With a career spanning over 20 years, Bristowe joins Jellyfish Pictures after a stint as head of production at Technicolor.

During his 20 years in the industry, Bristowe has overseen hundreds of productions, including Aladdin (Disney), Star Wars: The Last Jedi (Lucasfilm/Disney), Avengers: Age of Ultron (Marvel) and Guardians of the Galaxy (Marvel). In 2014, he was honored with the Advanced Imaging Society’s Lumiere Award for his work on Alfonso Cuarón’s Academy Award-winning Gravity.

Bristowe led the One of Us VFX team to success in the category of Special, Visual and Graphic Effects at the BAFTAs and Best Digital Effects at the Royal Television Society Awards for The Crown Season 1. Another RTS award and a BAFTA nomination followed in 2018 for The Crown Season 2. Prior to working with Technicolor and One of Us, Bristowe held senior positions at MPC and Prime Focus.

“Matt joining Jellyfish Pictures is a substantial hire for the company,” explains CEO Phil Dobree. “2019 has seen us focus on our growth, following the opening of our newest studio in Sheffield, and Matt’s extensive experience of bringing together creativity and strategy will be instrumental in our further expansion.”

Maxon intros Cinema 4D R21, consolidates versions into one offering

By Brady Betzel

At SIGGRAPH 2019, Maxon introduced the next release of its graphics software, Cinema 4D R21. Maxon also announced a subscription-based pricing structure as well as a very welcome consolidation of its Cinema 4D versions into a single offering, aptly titled Cinema 4D.

That’s right, no more Studio, Broadcast or BodyPaint. It all comes in one package at one price, and that pricing will now be subscription-based — but don’t worry, the online anxiety over this change seems to have been misplaced.

The cost of Cinema 4D R21 has been substantially reduced, kicking off what Maxon is calling its “3D for the Real World” initiative. Maxon wants it to be the tool you choose for your graphics needs.

If you plan on upgrading every year or two, the new subscription-based model seems to be a great deal:

– Cinema 4D subscription paid annually: $59.99/month
– Cinema 4D subscription paid monthly: $94.99/month
– Cinema 4D subscription with Redshift paid annually: $81.99/month
– Cinema 4D subscription with Redshift paid monthly: $116.99/month
– Cinema 4D perpetual pricing: $3,495 (upgradeable)
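
To put those prices in perspective: the annual subscription works out to just under $720 a year ($59.99 × 12 = $719.88), so the $3,495 perpetual license only pays for itself if you stay on the same version for roughly five years.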

Maxon did mention that if you have previously purchased Cinema 4D, there will be subscription-based upgrade/crossgrade deals coming.

The Updates
Cinema 4D R21 includes some great updates that will be welcomed by many users, both new and experienced. The new Field Force dynamics object allows the use of dynamic forces in modeling and animation within the MoGraph toolset. Caps and bevels have an all-new system that not only allows the extrusion of 3D logos and text effects but also integrates them on all spline-based objects.

Furthering Cinema 4D’s integration with third-party apps, there is an all-new Mixamo Control rig allowing you to easily control any Mixamo characters. (If you haven’t checked out the models from Mixamo, you should. It’s a great way to find character rigs fast.)

An all-new Intel Open Image Denoise integration has been added to R21 in what seems like part of a rendering revolution for Cinema 4D. From the acquisition of Redshift to this integration, Maxon is expanding its third-party reach and doesn’t seem scared.

There is a new Node Space, which shows what materials are compatible with chosen render engines, as well as a new API available to third-party developers that allows them to integrate render engines with the new material node system. R21 has overall speed and efficiency improvements, with Cinema 4D supporting the latest processor optimizations from both Intel and AMD.

All this being said, my favorite update — or map toward the future — was actually announced last week. Unreal Engine added Cinema 4D .c4d file support via the Datasmith plugin, which is featured in the free Unreal Studio beta.

Today, Maxon is also announcing its integration with yet another game engine: Unity. In my opinion, the future lies in this mix of real-time rendering alongside real-world television and film production as well as gaming. With Cinema 4D, Maxon is bringing all sides to the table with a mix of 3D modeling, motion-graphics-building support, motion tracking, integration with third-party apps like Adobe After Effects via Cineware, and now integration with real-time game engines like Unreal Engine. Now I just have to learn it all.

Cinema 4D R21 will be available on both macOS and Windows on Tuesday, Sept. 3. In the meantime, watch out for some great SIGGRAPH presentations, including one from my favorite, Mike Winkelmann, better known as Beeple. You can find some past presentations on how he uses Cinema 4D to create his “Everydays.”

Virtual Production Field Guide: Fox VFX Lab’s Glenn Derry

Just ahead of SIGGRAPH, Epic Games has published a resource guide called “The Virtual Production Field Guide” — a comprehensive look at how virtual production impacts filmmakers, from directors to the art department to stunt coordinators to VFX teams and more. The guide is workflow-agnostic.

The use of realtime game engine technology has the potential to impact every aspect of traditional filmmaking, and it is increasingly being used in productions ranging from films like Avengers: Endgame and the upcoming Artemis Fowl to TV series like Game of Thrones.

The Virtual Production Field Guide offers an in-depth look at different types of techniques from creating and integrating high-quality CG elements live on set to virtual location scouting to using photoreal LED walls for in-camera VFX. It provides firsthand insights from award-winning professionals who have used these techniques – including directors Kenneth Branagh and Wes Ball, producers Connie Kennedy and Ryan Stafford, cinematographers Bill Pope and Haris Zambarloukos, VFX supervisors Ben Grossmann and Sam Nicholson, virtual production supervisors Kaya Jabar and Glenn Derry, editor Dan Lebental, previs supervisor Felix Jorge, stunt coordinators Guy and Harrison Norris, production designer Alex McDowell, and grip Kim Heath.

As mentioned, the guide is dense with information, so we decided to run an excerpt to give you an idea of what it covers.

Glenn Derry

Here is an interview with Glenn Derry, founder and VP of visual effects at Fox VFX Lab, which offers a variety of virtual production services with a focus on performance capture. Derry is known for his work as a virtual production supervisor on projects like Avatar, Real Steel and The Jungle Book.

Let’s find out more.

How has performance capture evolved since projects such as The Polar Express?
In those earlier eras, there was no realtime visualization during capture. You captured everything as a standalone piece, and then you did what they called the director layout. After the fact, you would assemble the animation sequences from the captured motion data. Today, we’ve got a combo platter where we’re able to visualize in realtime.

When we bring a cinematographer in, he can start lining up shots with another device called the hybrid camera. It’s a tracked reference camera that he can handhold. I can immediately toggle between an Unreal overview and a camera view of that scene.

The earlier process was minimal in terms of aesthetics. We did everything we could in MotionBuilder, and we made it look as good as it could. Now we can make a lot more mission-critical decisions earlier in the process because the aesthetics of the renders look a lot better.

What are some additional uses for performance capture?
Sometimes we’re working with a pitch piece, where the studio is deciding whether they want to make a movie at all. We use the capture stage to generate what the director has in mind tonally and how the project could feel. That could be a short little pitch piece, or something bigger: for Call of the Wild, we created 20 minutes and three key scenes from the film to show the studio we could make it work.

The second the movie gets greenlit, we flip over into preproduction. Now we’re breaking down the full script and working with the art department to create concept art. Then we build the movie’s world out around those concepts.

We have our team doing environmental builds based on sketches. Or in some cases, the concept artists themselves are in Unreal Engine doing the environments. Then our virtual art department (VAD) cleans those up and optimizes them for realtime.

Are the artists modeling directly in Unreal Engine?
The artists model in Maya, Modo, 3ds Max, etc. — we’re not particular about the application as long as the output is FBX. The look development, which is where the texturing happens, is all done within Unreal. We’ll also have artists working in Substance Painter, and their work will auto-update in Unreal. We have to keep track of assets through the entire process, all the way through to the last visual effects vendor.
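
As a toy illustration of that kind of asset bookkeeping, here is a hypothetical Python sketch. It is not Fox VFX Lab’s actual pipeline; the Asset record and the stage names are invented. The key idea is that the originating application doesn’t matter as long as every published FBX is versioned and logged at each stage:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """One tracked asset; the source app can be anything that exports FBX."""
    name: str
    source_app: str               # e.g. "Maya", "Modo", "3ds Max"
    fbx_path: str
    version: int = 1
    history: list = field(default_factory=list)

    def publish(self, stage: str) -> None:
        # Bump the version and log which pipeline stage consumed it,
        # so the asset stays traceable through to the last VFX vendor.
        self.version += 1
        self.history.append((self.version, stage))

tower = Asset("env_tower_a", "Modo", "/show/assets/env_tower_a_v001.fbx")
tower.publish("vad_cleanup")       # virtual art department optimization
tower.publish("lookdev_unreal")    # texturing/look-dev inside Unreal
tower.publish("final_vfx_vendor")  # handoff at the end of the chain
print(tower.history)
```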

How do you handle the level of detail decimation so realtime assets can be reused for visual effects?
The same way we would work on AAA games. We begin with high-resolution detail and then use combinations of texture maps, normal maps and bump maps. That allows us to get high-texture detail without a huge polygon count. There are also some amazing LOD [level of detail] tools built into Unreal, which enable us to take a high-resolution asset and derive something that looks pretty much identical unless you’re right next to it, but runs at a much higher frame rate.
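
A generic distance-based LOD switch captures the principle. This is an illustrative sketch only; the tier thresholds and file names are invented, and in practice Unreal’s built-in LOD tools handle the switching automatically:

```python
# Each tier pairs a maximum camera distance (meters) with the mesh to
# render; detail dropped from the geometry is carried instead by
# texture, normal and bump maps on the lighter meshes.
LOD_TIERS = [
    (10.0,  "asset_lod0.fbx"),   # hero: full-resolution geometry
    (50.0,  "asset_lod1.fbx"),   # mid: decimated mesh plus normal maps
    (200.0, "asset_lod2.fbx"),   # far: silhouette-only geometry
]

def select_lod(distance_m: float) -> str:
    """Pick the cheapest mesh that still looks identical at this distance."""
    for max_distance, mesh in LOD_TIERS:
        if distance_m <= max_distance:
            return mesh
    return "asset_card.png"  # beyond the last tier, a flat billboard

for d in (4.0, 35.0, 150.0, 800.0):
    print(f"{d:>6} m -> {select_lod(d)}")
```

Swapping meshes by distance like this is what keeps an environment with hundreds of thousands of buildings renderable at interactive frame rates.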

Do you find there’s a learning curve for crew members more accustomed to traditional production?
We’re the team productions come to when they want to do realtime on live-action sets. That’s pretty much all we do. That said, it requires prep, and if you want it to look great, you have to make decisions. If you were going to shoot rear projection back in the 1940s, or Terminator 2 with large rear-projection systems, you still had to have all that material pre-shot to make it work.

It’s the same concept in realtime virtual production. If you want to see it look great in Unreal live on the day, you can’t just show up and decide. You have to pre-build that world and figure out how it’s going to integrate.

The visual effects team and the virtual production team have to be involved from day one. They can’t just be brought in at the last minute. And that’s a significant change for producers and productions in general. It’s not that it’s a tough pill to swallow; it’s just a very different methodology.

How does the cinematographer collaborate with performance capture?
There are two schools of thought: one is to work live with camera operators, shooting the tangible part of the action that’s going on, as the camera is an actor in the scene as much as any of the people are. You can choreograph it all out live if you’ve got the performers and the suits. The other version of it is treated more like a stage play. Then you come back and do all the camera coverage later. I’ve seen DPs like Bill Pope and Caleb Deschanel pick this right up.

How is the experience for actors working in suits and a capture volume?
One of the harder problems we deal with is eye lines. How do we assist the actors so that they’re immersed in this and don’t just look around at a bunch of gray box material on a set? On any modern visual effects movie, you’re going to be standing in front of a 50-foot-tall bluescreen at some point.

Performance capture is in some ways more actor-centric than a traditional set because there aren’t all the other distractions in a volume, such as complex lighting and camera setup time. The director gets to focus in on the actors. The challenge is getting the actors to interact with something unseen. We’ll project pieces of the set on the walls and use lasers for eye lines. The quality of today’s HMDs is also excellent for showing the actors what they would be seeing.

How do you see performance capture tools evolving?
I think a lot of the stuff we’re prototyping today will soon be available to consumers, home content creators, YouTubers, etc. A lot of what Epic develops also gets released in the engine. Money won’t be the driver in terms of being able to use the tools; your creative vision will be.

My teenage son uses Unreal Engine to storyboard. He knows how to do fly-throughs and use the little camera tools we built — he’s all over it. As it becomes easier to create photorealistic visual effects in realtime with a smaller team and at very high fidelity, the movie business will change dramatically.

Something that used to cost $10 million to produce might be a million or less. It’s not going to take away from artists; you still need them. But you won’t necessarily need these behemoth post companies because you’ll be able to do a lot more yourself. It’s just like desktop video — what used to take hundreds of thousands of dollars’ worth of Flame artists, you can now do yourself in After Effects.

Do you see new opportunities arising as a result of this democratization?
Yes, there are a lot of opportunities. High-quality, good-looking CG assets are still expensive to produce and expensive to make look great. There are already stock sites like TurboSquid and CGTrader where you can purchase beautiful assets economically.

But the final assembly and coalescing of environments and characters still demand a lot of talented people to do them effectively. I can see companies emerging out of that necessity. We spend a lot of time talking about assets because they’re the core of everything we do. You need to have a set to shoot on, and you need compelling characters, which is why actors won’t go away.

What’s happening today isn’t even the tip of the iceberg. There are going to be 50 more big technological breakthroughs along the way. There’s tons of new content being created for Apple, Netflix, Amazon, Disney+, etc. And they’re all going to leverage virtual production.

What’s changing is previs’ role and methodology in the overall scheme of production.
While you might have previously conceived of previs as focused on the pre-production phase of a project and less integral to production, that conception shifts with a realtime engine. Previs is also typically a hands-off collaboration. In a traditional pipeline, a previs artist receives creative notes and art direction, then goes off to create animation and present it back to creatives later for feedback.

In the realtime model, because the assets are directly malleable and rendering time is not a limiting factor, creatives can be much more directly and interactively involved in the process. This leads to higher levels of agency and creative satisfaction for all involved. This also means that instead of working with just a supervisor you might be interacting with the director, editor and cinematographer to design sequences and shots earlier in the project. They’re often right in the room with you as you edit the previs sequence and watch the results together in realtime.

Previs image quality has continued to increase in visual fidelity. This means a closer relationship between previs and final-pixel image quality. When the assets you develop as a previs artist are of sufficient quality, they may form the basis of final models for visual effects. The line between previs and final will continue to blur.

The efficiency of modeling assets only once is evident to all involved. By spending the time early in the project to create models of a very high quality, post begins at the outset of a project. Instead of waiting until the final phase of post to deliver the higher-quality models, the production has those assets from the beginning. And the models can also be fed into ancillary areas such as marketing, games, toys and more.

Review: Dell UltraSharp 27 4K InfinityEdge monitor

By Sophia Kyriacou

The Dell UltraSharp U2718Q monitor did not disappoint. Getting started requires minimal effort. You are up and running in no time — from taking it out of the box to switching it on. The stand, the standard Dell mount, is simple to assemble and intuitive, so you barely need to look at any instructions. But if you do, there is a step-by-step guide to help you set up within minutes.

The monitor comes in a well-designed package, which ensures it gets to you safely and securely. The Dell stand is easily adjustable without fuss and stays in place to your liking, with a swivel of 45 degrees to the left or right, a 90-degree pivot clockwise and counterclockwise, and a height adjustment of up to 130mm. This adjustability means it will certainly meet all your comfort and workflow needs, with the pivot being incredibly useful when working in portrait formats.

The InfinityEdge display not only makes the screen look attractive but, more importantly, gives you extra surface area. When working with more than one monitor, having the ultra-thin edge makes the viewing experience less of a distraction, especially when monitors are butted up together. For me, the InfinityEdge is what makes it … in addition to the image quality and resolution, of course!

The Dell UltraSharp U2718Q has a flicker-free screen, making it comfortable on the eyes. It also has a 3840×2160 resolution and boasts a color depth of 1.07 billion colors. The anti-glare coating works very well and meets all the needs of work environments with multiple and varied lighting conditions.

There are several connectors to choose from: one DP (v 1.2), one mDP (v 1.2), one HDMI (v 2.0), one USB 3.0 port (upstream), four USB 3.0 ports (including two USB 3.0 BC 1.2) with charging capability at 2A (max), and an audio line out. You are certainly not going to be short of inputs. I found the on-screen navigation incredibly easy to use. The overall casing design is minimal and subtle, with tones of black and dark silver. With the addition of the InfinityEdge, this monitor looks attractive. There is also a matching keyboard and mouse available.

Summing Up
Personally, I like to set my main monitor at a comfortable distance, with the second monitor butted up to my left at an angle of -35 degrees. Being left-handed, this setup works for me ergonomically, keeping my browser, timeline and editing window on that side, so I’m free to focus on the larger-scale composition in front of me.

The two Dell UltraSharp U2718Q monitors I use are great, as they give me the breathing space to focus on creating without having to constantly move windows around, breaking my flow. And thanks to InfinityEdge, the overall experience feels seamless. I have both monitors set up exactly the same, so the color matches perfectly and retains the same maximum quality.


Sophia Kyriacou is an award-winning conceptual creative motion designer and animator with over 22 years’ experience in the broadcast design industry. She splits her time between working at the BBC in London and taking on freelance jobs. She is a full voting member at BAFTA and is currently working on a script for a 3D animated short film.

Brittany Howard music video sets mood with color and VFX

The latest collaboration between Framestore and director Kim Gehrig is the music video for “Stay High,” Brittany Howard’s debut solo single, which features a color grade and subtle VFX by the studio. A tribute to the Alabama Shakes lead singer’s late father, the stylized music video stars actor Terry Crews (Brooklyn Nine-Nine, The Expendables) as a man finishing a day’s work and returning home to his family.

Produced by production company Somesuch, “Stay High” aims to present a natural and emotionally driven story that honors the singer’s father, K.J. Howard. Shot in her hometown of Nashville, the music video features Howard’s family and friends, while the singer pops up in several scenes as different characters.

The video begins with Howard’s father getting off work at his factory job. The camera follows him on his drive home as he sings “Stay High,” and along the way we see the people and places where Howard grew up. The video ends when her dad pulls into his driveway and is met by his daughters and wife.

“Kim wanted to really highlight the innocence of the video’s story, something I kept in mind while grading the film,” says Simon Bourne, Framestore’s head of creative color, who’s graded several films for the director. “The focus needed to always be on Terry with nothing in his surroundings distracting from that and the grade needed to reflect that idea.”

Framestore’s creative director Ben Cronin, who was also a compositor on the project along with Nuke compositor Christian Baker, adds, “From a VFX point of view, our job was all about invisible effects that highlight the beautiful work of Ryley Brown, the film’s DP, and complement Kim’s unique vision.”

“We’ve worked with Kim on several commercials and music video projects, and we love collaborating with her because her films are always visually interesting and she knows we’ll always help achieve the groundbreaking and effortlessly cool work that she does.”

Jody Madden upped to CEO at Foundry

Jody Madden, who joined Foundry in 2013 and has held positions as chief operating officer and, most recently, chief customer officer and chief product officer, has been promoted to chief executive officer. She takes over the role from Craig Rodgerson.

Madden, who has a rich background in VFX, has been with Foundry for six years. Prior to joining the company, she spent more than a decade in technology management and studio leadership roles at Industrial Light & Magic, Lucasfilm and Digital Domain after graduating from Stanford University.

“During a time of rapid change in creative industries, Foundry is committed to delivering innovations in workflow and future-looking research,” says Madden. “As the company continues to grow, delivering further improvements in speed, quality and user experience remains a core focus to enable our customers to meet the demands of their markets.”

“Jody is well known for her collaborative leadership style, and this has been crucial in enabling our engineering, product and research teams to achieve results for our customers and build the foundation for the future,” says Simon Robinson, co-founder/chief scientist. “I have worked closely with Jody and have seen the difference she has made to the business, so I am extremely excited to see where she will lead Foundry in her new role and look forward to continuing to work with her.”

Technicolor opens prepro studio in LA

Technicolor is opening a new studio in Los Angeles dedicated to creating a seamless pipeline for feature projects — from concept art and visualization through virtual production, production and into final VFX.

As new distribution models increase the demand for content, Technicolor Pre-Production will provide the tools, the talent and the space for creatives to collaborate from day one of their project – from helping set the vision at the start of a job to ensuring that the vision carries through to production and VFX. The result is a more efficient filmmaking process.

The Technicolor Pre-Production studio is headed by Kerry Shea, an industry veteran with over 20 years of experience. She is no stranger to this work, having held executive positions at Method Studios, The Third Floor, Digital Domain, The Jim Henson Company, DreamWorks Animation and Sony Pictures Imageworks.

Kerry Shea

Credited on more than 60 feature films including The Jungle Book, Pirates of the Caribbean: Dead Men Tell No Tales and Guardians of the Galaxy Vol. 2, Shea has an extensive background in VFX and post production, as well as live action, animatronics and creature effects.

While the Pre-Production studio stands apart from Technicolor’s visual effects studios — MPC Film, Mill Film, MR. X and Technicolor VFX — it can work seamlessly in conjunction with one or any combination of them.

The Technicolor Pre-Production Studio will comprise five key departments:
– The Business Development Department will work with clients, from project budgeting to consulting on VFX workflows, to help plan and prepare projects for a smooth transition into VFX.
– The VFX Supervisors Department will offer creative supervision across all aspects of VFX on client projects, whether delivered by Technicolor’s studios or third-party vendors.
– The Art Department will work with clients to understand their vision – including characters, props, technologies, and environments – creating artwork that delivers on that vision and sets the tone for the rest of the project.
– The Virtual Production Department will partner with filmmakers to bridge the gap between them and VFX through the production pipeline. Working on the ground and on location, the department will deliver a fully integrated pipeline and shooting services with the flexibility of a small, manageable team — allowing critical players in the filmmaking process to collaborate, view and manipulate media assets and scenes across multiple locations as the production process unfolds.
– The Visualization Department will deliver visualizations that assist in achieving onscreen exactly what clients envision.

“With the advancements of tools and technologies, such as virtual production, filmmaking has reached an inflection point, one in which storytellers can redefine what is possible on-set and beyond,” says Shea. “I am passionate about the increasing role and influence that the tools and craft of visual effects can have on the production pipeline and the even more important role in creating more streamlined and efficient workflows that create memorable stories.”

EP Nick Litwinko leads Nice Shoes’ new long-form VFX arm

NYC-based creative studio Nice Shoes has hired executive producer Nick Litwinko to lead its new film and episodic VFX division. Litwinko, who has built a career on bringing a serial-entrepreneur approach to the development of creative studios, will grow the division, recruiting talent to bring a boutique, collaborative approach to visual effects for long-form feature film and episodic projects.

Since coming on board with Nice Shoes, Litwinko and his team already have three long-form projects underway and will continue working to sign on new talent.

Litwinko launched his career at MTV during the height of its popularity, working as a senior producer for MTV Promos/Animation before stepping up as executive producer/director for MTV Commercials. His decade-long tenure led him to launch his own company, Rogue Creative, where he served dual roles as EP and director and oversaw a wide range of animated, live-action and VFX-driven branded campaigns. He was later named senior producer for Psyop New York before launching the New York office of Blind. He moved on to join the team at First Avenue Machine as executive producer/head of production. He was then recruited to join Shooters Inc. as managing director, leading a strategic rebrand, building the company’s NYC offices and playing an instrumental part in the rebrand to Alkemy X.

Behind the Title: Artifex VFX supervisor Rob Geddes

NAME: Rob Geddes

COMPANY: Artifex Studios (@artifexstudios)

CAN YOU DESCRIBE YOUR COMPANY?
Artifex is a small to mid-sized independent VFX studio based in Vancouver, BC. We’ve built up a solid team over the years, with very low staff turnover. We try our best to be an artist-centric shop.

That probably means something different to everyone, but for me it means ensuring that people are being challenged creatively, supported as they grow their skills and encouraged to maintain a healthy work-life balance.

WHAT’S YOUR JOB TITLE?
VFX Supervisor

WHAT DOES THAT ENTAIL?
I guess the simplest explanation is that I have to interpret the needs and requests of our clients, and then provide the necessary context and guidance to our team of artists to bring those requests to life.

Travelers – “Ave Machina” episode

I have to balance the creative and technical challenges of the work while staying within the constraints of budget, schedule and our own studio resources.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The seemingly infinite number of decisions and compromises that must be made each day, often with incomplete information.

HOW LONG HAVE YOU BEEN WORKING IN VFX?
I started out back in 2000 as a 3D generalist. My first job was building out environments in 3ds Max for a children’s animated series. I spent some years providing 3D assets, animation and programming to various military and private sector training simulations. Eventually, I made the switch over to the 2D side of things and started building up my roto, paint and compositing skills. This led me to Vancouver, and then to Artifex.

HOW HAS THE VFX INDUSTRY CHANGED IN THE TIME YOU’VE BEEN WORKING? 
The biggest change I have seen over the years is the growth in demand for content. All of the various content portals and streaming services have created this massive appetite for new stories. This has brought new opportunities for vendors and artists, but it’s not without challenges. The quality bar is always being raised, and the push to 4K for broadcast puts a lot of pressure on pipelines and infrastructure.

WHY DO YOU LIKE BEING ON SET FOR SHOTS? WHAT ARE THE BENEFITS?
As the in-house VFX supervisor for Artifex, I don’t end up on set — though there have been projects for which we were brought in prior to shooting and could help drive the creative side of the VFX in support of the storytelling. There’s really no substitute for getting all of the context behind what was shot in order to help inform the finished product.

DID A PARTICULAR FILM INSPIRE YOU ALONG THIS PATH IN ENTERTAINMENT?
When I was younger, I always assumed I would end up in classical animation. I devoured all of the Disney classics (Beauty and the Beast, The Lion King, etc.). Jurassic Park was a huge eye-opener, though, and seeing The Matrix for the first time made it seem like anything was possible in VFX.

DID YOU GO TO FILM SCHOOL?
Not film school specifically. Out of high school I still wasn’t certain of the path I wanted to take. I went to university first and ended up with a degree in math and computing science. By the time I left university I was convinced that animation and VFX were what I wanted. I worked through two diploma programs in 3D modeling, animation and film production.

WHAT’S YOUR FAVORITE PART OF THE JOB?
The best part of the job for me is seeing the evolution of a shot, as a group of artists come together to solve all of the creative and technical challenges.

WHAT’S YOUR LEAST FAVORITE?
Realizing the limits of what can be accomplished on any given day and then choosing what has to be deferred.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
That’s a tough one. When I wasn’t working in VFX, I was working toward it. I’m obsessed with video game development, and I like to write, so maybe in an alternate timeline I’d be doing something like that.

Zoo

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
This past year has been a pretty busy one. We’ve been on Travelers and The Order for Netflix, The Son for AMC, Project Blue Book for A&E, Kim Possible for Disney, Weird City for YouTube, and a couple of indie features for good measure!

WHAT IS THE PROJECT/S THAT YOU ARE MOST PROUD OF?
I’m a big fan of our work on Project Blue Book. It was an interesting challenge to contribute to a project with historical significance, and I think our team really rose to the occasion.

WHAT TOOLS DO YOU USE DAY TO DAY?
At Artifex we run our shows through ftrack for reviews and management, so I spend a lot of time in the browser keeping tabs on things. For daily communication we use Slack and email. I use Google Docs for organizational stuff. I pop into Foundry Nuke to test out some things or to work with an artist. I use Photoshop or Affinity Photo on the iPad to do draw-overs and give notes.

WHERE DO YOU FIND INSPIRATION NOW?
It’s such an incredible time to be a visual artist. I try to keep an eye on work getting posted from around the world on sites like ArtStation and Instagram. Current films, but also any other visual mediums like graphic novels, video games, photography, etc. Great ideas can come from anywhere.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I play a lot of video games, drink a lot of tea, and hang out with my daughter.

SIGGRAPH making-of sessions: Toy Story 4, GoT, more

The SIGGRAPH 2019 Production Sessions program offers attendees a behind-the-scenes look at the making of some of the year’s most impressive VFX films, shows, games and VR projects. The 11 production sessions will be held throughout the conference week of July 28 through August 1 at the Los Angeles Convention Center.

Attendees will hear from creators who worked on projects such as Toy Story 4, Game of Thrones, The Lion King and First Man.

Other highlights include:

Swing Into Another Dimension: The Making of Spider-Man: Into the Spider-Verse
This production session will explore the art and innovation behind the creation of the Academy Award-winning Spider-Man: Into the Spider-Verse. The filmmaking team behind the first-ever animated Spider-Man feature film took significant risks to develop an all-new visual style inspired by the graphic look of comic books.

Creating the Immersive World of BioWare’s Anthem
The savage world of Anthem is volatile, lush, expansive and full of unexpected characters. Bringing these aspects to life in a realtime, interactive environment presented a wealth of problems for BioWare’s technical artists and rendering engineers. This retrospective panel will highlight the team’s work, alongside reflections on innovation and the successes and challenges of creating a new IP.

The VFX of Netflix Series
From the tragic tales of orphans to a joint force of super siblings to sinister forces threatening 1980s Indiana, the VFX teams on Netflix series have delivered some of the year’s best visuals. Creatives behind A Series of Unfortunate Events, The Umbrella Academy and Stranger Things will present the work and techniques that brought these worlds and characters into being.

The Making of Marvel Studios’ Avengers: Endgame
The fourth installment in the Avengers saga is the culmination of 22 interconnected films and has drawn audiences to witness the turning point of this epic journey. SIGGRAPH 2019 keynote speaker Victoria Alonso will join Marvel Studios, Digital Domain, ILM and Weta Digital as they discuss how the diverse collection of heroes, environments, and visual effects were assembled into this ultimate, climactic final chapter.

Space Explorers — Filming VR in Microgravity
Felix & Paul Studios, along with collaborators from NASA and the ISS National Lab, share insights from one of the most ambitious VR projects ever undertaken. In this session, the team will discuss the background of how this partnership came to be before diving into the technical challenges of capturing cinematic virtual reality on the ISS.

Production Sessions are open to conference participants with Select Conference, Full Conference or Full Conference Platinum registrations. The Production Gallery can be accessed with an Experiences badge and above.

Axis provides 1,000 VFX shots for the TV series Happy!

UK-based animation and visual effects house Axis Studios has delivered 1,000 shots across 10 episodes on the second series of the UCP-produced hit Syfy show Happy!.

Based on Grant Morrison and Darick Robertson’s graphic novel, Happy! follows alcoholic ex-cop turned hitman Nick Sax (Christopher Meloni), who teams up with imaginary unicorn Happy (voiced by Patton Oswalt). In the second season, the action moves from Christmastime to “the biggest holiday rebranding of all time” and a plot to “make Easter great again,” courtesy of last season’s malevolent child-kidnapper, Sonny Shine (Christopher Fitzgerald).

Axis Studios, working across its three creative sites in Glasgow, Bristol, and London, collaborated with executive producer and director Brian Taylor and showrunner Patrick Macmanus to raise the bar on the animation of the fully CG character. The studio also worked on a host of supporting characters, including a “chain-smoking man-baby,” a gimp-like Easter Bunny and even a Jeff Goldblum-shaped cloud. Alongside the extensive animation work, the team’s VFX workload greatly increased from the first season — including two additional episodes, creature work, matte painting, cloud simulations, asset building and extensive effects and clean-up work.

Building on the success of the first season, the 100-person team of artists further developed the animation of the lead character, Happy, improving the rig, giving him more nuanced emotions and continually working to integrate him more fully into his real-world environments.

UK’s Jellyfish adds virtual animation studio and Kevin Spruce

London-based visual effects and animation studio Jellyfish Pictures is opening a new virtual animation facility in Sheffield. The new site is the company’s fifth studio in the UK, joining its established studios in Fitzrovia, Central London; Brixton, South London; and Oval, South London. This addition is no surprise considering Jellyfish created one of Europe’s first virtual VFX studios back in 2017.

With no hardware housed onsite, Jellyfish Pictures’ Sheffield studio — situated in the city center within the Cooper Project Complex — will operate in a completely PC-over-IP environment. With all technology and pipeline housed in a centrally-based co-location, the studio is able to virtualize its distributed workstations through Teradici’s remote visualization solution, allowing for total flexibility and scalability.

The Sheffield site will sit on the same logical LAN as the other four studios, providing access to the company’s software-defined storage (SDS) from Pixit Media, enabling remote collaboration and support for flexible working practices. With the rest of Jellyfish Pictures’ studios all TPN-accredited, the Sheffield studio will follow in their footsteps, using Pixit Media’s container solution within PixStor 5.

The new studio will be headed up by Jellyfish Pictures’ newest appointment, animation director Kevin Spruce. With a career spanning over 30 years, Spruce joins Jellyfish from Framestore, where he oversaw a team of 120 as the company’s head of animation. During his time at Framestore, Spruce worked as animation supervisor on feature films such as Fantastic Beasts and Where to Find Them, The Legend of Tarzan and Guardians of the Galaxy. Prior to his 17-year stint at Framestore, Spruce held positions at Canadian animation company Bardel Entertainment and the Spielberg-founded feature animation studio Amblimation.

Jellyfish Pictures’ northern presence will start off with a small team of animators working on the company’s original animation projects, with a view to expanding the team and taking on a large feature animation project by the end of the year.

“We have multiple projects coming up that will demand crewing up with the very best talent very quickly,” reports Phil Dobree, CEO of Jellyfish Pictures. “Casting off the constraints of infrastructure, which traditionally has been the industry’s way of working, means we are not limited to the London talent pool and can easily scale up in a more efficient and economical way than ever before. We all know London, and more specifically Soho, is an expensive place to play, both for employees working here and for the companies operating here. Technology is enabling us to expand our horizon across the UK and beyond, as well as offer talent a way out of living in the big city.”

For Spruce, the move made perfect sense: “After 30 years working in and around Soho, it was time for me to move north and settle in Sheffield to achieve a better work life balance with family. After speaking with Phil, I was excited to discover he was interested in expanding his remote operation beyond London. With what technology can offer now, the next logical step is to bring the work to people rather than always expecting them to move south.

“As animation director for Jellyfish Pictures Sheffield, it’s my intention to recruit a creative team here to strengthen the company’s capacity to handle the expanding slate of work currently in-house and beyond. I am very excited to be part of this new venture north with Jellyfish. It’s a vision of how creative companies can grow in new ways and access talent pools farther afield.”

 

Amazon’s Good Omens: VFX supervisor Jean-Claude Deguara

By Randi Altman

Good versus evil. It’s a story that’s been told time and time again, but Amazon’s Good Omens turns that trope on its head a bit. With Armageddon approaching, two unlikely heroes and centuries-long frenemies — an angel (Michael Sheen) and a demon (David Tennant) — team up to try to fight off the end of the world. Think buddy movie, but with the fate of the world at stake.

In addition to Tennant and Sheen, the Good Omens cast is enviable — featuring Jon Hamm, Michael McKean, Benedict Cumberbatch and Nick Offerman, just to name a few. The series is based on the 1990 book by Terry Pratchett and Neil Gaiman.

Jean-Claude Deguara

As you can imagine, this six-part end-of-days story features a variety of visual effects, from creatures to environments to particle effects and fire. London’s Milk was called on to provide 650 visual effects shots, and its co-founder Jean-Claude Deguara supervised all.

He was also able to talk directly with Gaiman, which he says was a huge help. “Having access to Neil Gaiman as the author of Good Omens was just brilliant, as it meant we were able to ask detailed questions to get a more detailed brief when creating the VFX and receive such insightful creative feedback on our work. There was never a question that couldn’t be answered. You don’t often get that level of detail when you’re developing the VFX.”

Let’s find out more about Deguara’s process and the shots in the show as he walks us through his collaboration and creating some very distinctive characters.

Can you talk about how early you got involved on Good Omens?
We were involved right at the beginning, pre-script. It’s always the best scenario for VFX to be involved at the start, to maximize planning time. We spent time with director Douglas Mackinnon, breaking down all six scripts to plan the VFX methodology — working out and refining how to best use VFX to support the storytelling. In fact, we stuck to most of what we envisioned and we continued to work closely with him throughout the project.

How did getting involved when you did help the process?
With the sheer volume and variety of work — 650 shots, a five-month post production turnaround and a crew of 60 — the planning and development time in preproduction was essential. The incredibly wide range of work spanned multiple creatures, environments and effects work.

Having constant access to Neil as author and showrunner was brilliant as we could ask for clarification and more details from him directly when creating the VFX and receive immediate creative feedback. And it was invaluable to have Douglas working with us to translate Neil’s vision in words onto the screen and plan out what was workable. It also meant I was able to show them concepts the team were developing back in the studio while we were on set in South Africa. It was a very collaborative process.

It was important to have a strong crew across all VFX disciplines, as they worked together on multiple sequences at the same time. So you’re starting in tracking on one, in effects on another, and compositing and finishing everything off on a third. It was a big logistical challenge, but certainly the kind that we relish and are well versed in at Milk.

Did you do previs? If so, how did that help and what did you use?
We only used previs to work out how to technically achieve certain shots or to sell an idea to Douglas and Neil. It was generally very simple, using grayscale animation with basic geometry. We used it to do a quick layout of how to rescale the dog to be a bigger hellhound, for example.

You were on set supervising… can you talk about how that helped?
It was a fast-moving production with multiple locations in the UK over about six months, followed by three months in South Africa. It was crucial for the volume and variety of VFX work required on Good Omens that I was across all the planning and execution of filming for our shots.

Being on set allowed me to help solve various problems as we went along. I could also show Neil and Douglas various concepts that were being developed back in the studio, so that we could move forward more quickly with creative development of the key sequences, particularly the challenging ones such as Satan and the Bentley.

What were the crucial things to ensure during the shoot?
Making sure all the preparation was done meticulously for each shot — given the large volume and variety of the environments and sets. I worked very closely with Douglas on the shoot so we could have discussions to problem-solve where needed and find creative solutions.

Can you point to an example?
We had multiple options for shots involving the Bentley, so our advance planning and discussions with Douglas involved pulling out all the car sequences in the series scripts and creating a “mini script” specifically for the Bentley. This enabled us to plan which assets (the real car, the art department’s interior car shell or the CG car) were required and when.

You provided 650 VFX shots. Can you describe the types of effects?
We created everything from creatures (Satan exploding up out of the ground, a kraken, the hellhound, a demon and a snake) to environments (heaven, a penthouse with views of major world landmarks, and a busy Soho street), plus feathered wings for Michael Sheen’s angel Aziraphale and David Tennant’s demon Crowley, and a CG Bentley in which Tennant’s Crowley hurtles around London.

We also had a large effects team working on a whole range of effects over the six episodes — from setting the M25 and the Bentley on fire to a flaming sword to a call center filled with maggots to a sequence in which Crowley (Tennant) travels through the internet at high speed.

Despite the fantasy nature of the subject matter, it was important to Gaiman that the CG elements did not stand out too much. We needed to ensure the worlds and characters were always kept grounded in reality. A good example is how we approached heaven and hell. These key locations are essentially based around an office block. Nothing too fantastical, but they are, as you would expect, completely different and deliberately so.

Hell is the basement, which was shot in a disused abattoir in South Africa, whilst heaven is a full CG environment located in the penthouse with a panoramic view over a cityscape featuring landmarks such as the Eiffel Tower, The Shard and the Pyramids.

You created many CG creatures. Can you talk about the challenges of that and how you accomplished them?
Many of the main VFX features, such as Satan (voiced by Benedict Cumberbatch), appear only once in the six-part series as the story moves swiftly toward the apocalypse. So we had to strike a careful balance between delivering impact and ensuring they were immediately recognizable and grounded in reality. Given our fast five-month post turnaround, we had our key teams working concurrently on creatures such as a kraken; the hellhound; a small, portly demon called Usher who meets his demise in a bath of holy water; and the infamous snake in the Garden of Eden.

We incorporated Ziva VFX into our pipeline, which ensured our rigging and modeling teams could make the most of the development and build phases in the timeframe. For example, the muscle, fat and skin simulations are all solved on the renderfarm; the animators can publish a scene and then review the creature effects in dailies the next day.

We use our proprietary software CreatureTools for rigging all our creatures. It is a modular rigging package that allows us to very quickly build animation rigs for previs or blocking, and we build our deformation muscle and fat rigs in Ziva VFX. It means the animators can start work quickly, and there is a lot of consistency between the rigs.
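To make that publish-and-review loop concrete, here is a minimal sketch of how such a workflow can be wired up in Maya Python. The shot name, paths and submit_to_farm() helper are hypothetical stand-ins for illustration; this is not Milk’s pipeline code, and the Ziva solve itself is assumed to happen on the farm.

```python
# Minimal sketch of a publish-then-simulate loop (hypothetical paths/names).
import maya.cmds as cmds

def submit_to_farm(**job):
    """Hypothetical stand-in for a studio's real farm-submission API."""
    print("queued:", job)

def publish_animation(shot, rig_root, start, end, out_dir="/jobs/publish"):
    """Export the animator's approved blocking as an Alembic cache."""
    cache_path = "%s/%s_anim.abc" % (out_dir, shot)
    cmds.loadPlugin("AbcExport", quiet=True)
    cmds.AbcExport(j="-frameRange %d %d -root %s -file %s"
                     % (start, end, rig_root, cache_path))
    return cache_path

def queue_creature_solve(shot, cache_path):
    """Queue the overnight muscle/fat/skin solve; results land in dailies."""
    submit_to_farm(task="creature_solve", shot=shot, input_cache=cache_path)

cache = publish_animation("satan_0420", "|satan_anim_rig", 1001, 1120)
queue_creature_solve("satan_0420", cache)
```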

Can you talk about the kraken?
The kraken pays homage to Ray Harryhausen and his work on Clash of the Titans. Our team worked to create the immense scale of the kraken and take water simulations to the next level. The top half of the kraken body comes up out of the water and we used a complex ocean/water simulation system that was originally developed for our ocean work on the feature film Adrift.

Can you dig in a bit more about Satan?
Near the climax of Good Omens, Aziraphale, Crowley and Adam witness the arrival of Satan. In the early development phase, we were briefed to highlight Satan’s enormous size (about 400 feet) without making him too comical. He needed to have instant impact given that he appears on screen for just this one long sequence and we don’t see him again.

Our first concept was pretty scary, but Neil wanted him simpler and more immediately recognizable. Our concept artist created a horned crown, which along with his large, muscled, red body delivered the look Neil had envisioned.

We built the basic model, and when Cumberbatch was cast, the modeling team introduced some of his facial characteristics into Satan’s FACS-based blend shape set. Video reference of the actor’s voice performance, captured on a camera phone, helped inform the final keyframe animation. The final Satan was a full Ziva VFX build, complete with skeleton, muscles, fat and skin. The team set up the muscle and fat scenes to read from an Alembic cache of the skeleton, so that they ended up with a blended mesh of Satan with all the muscle detail on it.

We then did another skin pass on the face to add extra wrinkles and loosen things up. A key challenge for our animation team — led by Joe Tarrant — lay in animating a creature of the immense scale of Satan. They needed to ensure the balance and timing of his movements felt absolutely realistic.

Our effects team — led by James Reid — layered multiple effects simulations to shatter the airfield tarmac and generate clouds of smoke and dust, optimizing setups so that only those particles visible on camera were simulated. The challenge was maintaining a focus on the enormous size and impact of Satan while still showing the explosion of the concrete, smoke and rubble as he emerges.

Extrapolating from live-action plates shot at an airbase, the VFX team built a CG environment and inserted live action of the performers into otherwise fully digital shots of the gigantic red-skinned devil bursting out of the ground.

And the hellhound?
Beelzebub (Anna Maxwell Martin) sends the antichrist (a boy named Adam) a giant hellhound. If Adam gives the giant beast a scary name, he will set Armageddon in motion. But Adam really just wants a loveable pet, and he transforms the hellhound into a miniature hound called, simply, Dog.

A Great Dane performed as the hellhound, photographed in a forest location while a grip kept pace with a small square of bluescreen. The Milk team tracked the live action and performed a digital head and neck replacement. Sam Lucas modeled the head in Autodesk Maya, matching the real dog’s anatomy before stretching its features into grotesquery. A final round of sculpting followed in Pixologic ZBrush, with artists refining 40-odd blend shapes for facial expression.

Once our rigging team got the first iteration of the blend shapes, they passed the asset off to animation for feedback. They then added an extra level of tweaking around the lips. In the creature effects phase, they used Ziva VFX to add soft body jiggle around the bottom of the lips and jowls.
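As an aside for readers unfamiliar with the mechanics, here is a minimal sketch of how a facial blend shape set like this is commonly wired up in Maya Python. The mesh and target names are invented for illustration; this is not Milk’s rigging code.

```python
# Minimal blend shape setup sketch (mesh/target names are invented).
import maya.cmds as cmds

# Each target is a duplicate of the head sculpted into one expression.
TARGETS = ["snarl_L", "snarl_R", "jaw_open", "brow_furrow"]  # ...up to ~40

def build_face_shapes(base_mesh, targets):
    """Create one blendShape node holding every sculpted expression."""
    return cmds.blendShape(targets, base_mesh, name="face_BS")[0]

def key_expression(bs_node, target, frame, weight):
    """Set and key a single expression weight for the animators."""
    cmds.setAttr("%s.%s" % (bs_node, target), weight)
    cmds.setKeyframe(bs_node, attribute=target, time=frame)

bs = build_face_shapes("hellhound_head_GEO", TARGETS)
key_expression(bs, "snarl_L", 1012, 0.8)
```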

What about creating the demon Usher?
One of our favorite characters was the small, rotund, quirky demon creature called Usher. He is a fully rigged CG character. Our team took a fully concepted image and adapted it to the performance and physicality of the actor. To get the weight of Usher’s rotund body, the rigging team — led by Neil Roche — used Ziva VFX to run a soft body simulation on the fatty parts of the creature, which gave him a realistic jiggle. They then added a skin simulation using Ziva’s cloth solver to give an extra layer of wrinkling across Usher’s skin. Finally, they used nCloth in Maya to simulate his sash and medals.

Was one more challenging/rewarding than the others?
Satan, because of his huge scale and the integrated effects.

Out of all of the effects, can you talk about your favorite?
The CG Bentley, without a doubt! The digital Bentley featured in scenes showing the car tearing around London and the countryside at 90 miles per hour. Ultimately, Crowley drives through hellfire on the M25; the car catches fire and burns continuously as he heads toward the site of Armageddon. The production located a real 1934 Bentley 3.5 Derby Coupe by Thrupp & Maberly, which we photo-scanned and modeled in intricate detail. We introduced subtle imperfections to the body panels, ensuring the CG Bentley had the same handcrafted appearance as the real thing and would hold up in full-screen shots, including continuous transitions from the street through a window to the actors in an interior replica car.

To get the speed required, we shot plates on location from multiple cameras, including one mounted on a motorbike for the high-speed bursts. Later, production filled the car with smoke, and our effects team added CG fire and burning textures to the exterior of our CG car, which intensified as he continued his journey.

You’ve talked about the tight post turnaround. How did you show the client shots for approval?
Given the volume and wide range of work required, we were working on a range of sequences concurrently to maximize the short post window — and align our teams when they were working on similar types of shot.

We had constant access to Neil and Douglas throughout the post period, which was crucial for approvals and feedback as we developed key assets and delivered key sequences. Neil and Douglas would visit Milk regularly for reviews toward delivery of the project.

What tools did you use for the VFX?
Amazon Web Services (AWS) for cloud rendering; Ziva VFX for creature rigging; Maya, Nuke and Houdini for effects; and Arnold for rendering.

What haven’t I asked that is important to touch on?
Our work on Soho, where Michael Sheen’s Aziraphale has his bookshop. Production designer Michael Ralph created a set based on Soho’s Berwick Street, comprising a two-block street exterior constructed up to the top of the first story, with the complete bookshop — inside and out — standing on the corner.

Four 20-x-20-foot mobile greenscreens helped our environment team complete the upper levels of the buildings and extend the road into the far distance. We photo scanned both the set and the original Berwick Street location, combining the reference to build digital assets capturing the district’s unique flavor for scenes during both day and nighttime.


Before and After: Soho

Mackinnon wanted crowds of people moving around constantly, so on shooting days crowds of extras thronged the main section of the street, and a steady stream of vehicles turned in from a junction partway down. Areas outside this central zone remained empty, enabling us to drop in digital people and traffic without having to do takeovers from live-action performers and cars. Milk had a 1,000-frame cycle of cars and people that it dropped into every scene. We kept the real cars always pulling in round the corner and devised it so there was always a bit of gridlock going on at the back.

And finally, we relished the opportunity to bring to life Neil Gaiman and Douglas Mackinnon’s awesome apocalyptic vision for Good Omens. It’s not often you get to create VFX in a comedy context. For example, the stuff inside the antichrist’s head: whatever he thinks of becomes reality. However, for a 12-year-old child, this means reality is rather offbeat.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Behind the Title: Ntropic Flame artist Amanda Amalfi

NAME: Amanda Amalfi

COMPANY: Ntropic (@ntropic)

CAN YOU DESCRIBE YOUR COMPANY?
Ntropic is a content creator producing work for commercials, music videos and feature films as well as crafting experiential and interactive VR and AR media. We have offices in San Francisco, Los Angeles, New York City and London. Some of the services we provide include design, VFX, animation, editing, color grading and finishing.

WHAT’S YOUR JOB TITLE?
Senior Flame Artist

WHAT DOES THAT ENTAIL?
Being a senior Flame artist involves a variety of tasks that really span the duration of a project: communicating with directors, agencies and production teams; helping plan out any visual effects that might be in a project (often also serving as VFX supervisor on set); and seeing through the actual post process of the job.

Amanda worked on this lipstick branding video for the makeup brand Morphe.

It involves client and team management (as you are often also the 2D lead on a project) and calls for a thorough working knowledge of the Flame itself, both in timeline management and that little thing called compositing. The compositing could cross multiple disciplines — greenscreen keying, 3D compositing, set extension and beauty cleanup to name a few. And it helps greatly to have a good eye for color and to be extremely detail-oriented.

WHAT MIGHT SURPRISE PEOPLE ABOUT YOUR ROLE?
How much it entails. Since this is usually a position that exists in a commercial house, we don’t have as many specialties as there would be in the film world.

WHAT’S YOUR FAVORITE PART OF THE JOB?
First is the artwork. I like that we get to work intimately with the client in the room to set looks. It’s often a very challenging position to be in — having to create something immediately — but the challenge is something that can be very fun and rewarding. Second, I enjoy being the overarching VFX eye on the project; being involved from the outset and seeing the project through to delivery.

WHAT’S YOUR LEAST FAVORITE?
We’re often meeting tight deadlines, so the hours can be unpredictable. But the best work happens when the project team and clients are all in it together until the last minute.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
The evening. I’ve never been a morning person so I generally like the time right before we leave for the day, when most of the office is wrapping up and it gets a bit quieter.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Probably a tactile art form. Sometimes I have the urge to create something that is tangible, not viewed through an electronic device — a painting or a ceramic vase, something like that.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I loved films that were animated and/or used 3D elements growing up and wanted to know how they were made. So I decided to go to a college that had a computer art program with connections in the industry and was able to get my first job as a Flame assistant in between my junior and senior years of college.

ANA Airlines

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Most recently I worked on a campaign for ANA Airlines. It was a fun, creative challenge on set and in post production. Before that I worked on a very interesting project for Facebook’s F8 conference featuring its AR functionality and helped create a lipstick branding video for the makeup brand Morphe.

IS THERE A PROJECT THAT YOU ARE MOST PROUD OF?
I worked on a spot for Vaseline that was a “through the ages” concept, and we had to create looks that would read as the 1880s, 1900, the 1940s, the 1970s and present day, in locations that varied from the Arctic to the building of the Brooklyn Bridge to a boxing ring. To start, we sent the digitally shot footage with our 3D and comps to a printing house and had it printed and re-digitized. This worked perfectly for the ’70s-era look. Then we did additional work to age it further for the other eras — though my favorite was the Arctic turn-of-the-century look.

NAME SOME TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Flame… first and foremost. It really is the most inclusive software — I can grade, track, comp, paint and deliver all in one program. My monitors — a 4K Eizo and a color-calibrated broadcast monitor — are also essential.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Mostly Instagram.

DO YOU LISTEN TO MUSIC WHILE YOU WORK? 
I generally have music on with clients, so I will put on some relaxing music. If I’m not with clients, I listen to podcasts. I love How Did This Get Made and Conan O’Brien Needs a Friend.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Hiking and cooking are two great de-stressors for me. I love being in nature and working out and then going home and making a delicious meal.

NYC’s The-Artery expands to larger space in Chelsea

The-Artery has expanded and moved into a new 7,500-square-foot space in Manhattan’s Chelsea neighborhood. Founded by chief creative officer Vico Sharabani, The-Artery will use this extra space while providing visual effects, post supervision, offline editorial, live action and experience design and development across multiple platforms.

According to Sharabani, the new space is not only a response to the studio’s growth, but allows The-Artery to foster better collaboration and reinforce its relationships with clients and creative partners. “As a creative studio, we recognize how important it is for our artists, producers and clients to be working in a space that is comfortable and supportive of our creative process,” he says. “The extraordinary layout of this new space, the size, the lighting and even our location, allows us to provide our clients with key capabilities and plays an important part in promoting our mission moving forward.”

Recent The-Artery projects include 2018’s VR-enabled production for Mercedes-Benz, work on Under Armour’s “Rush” campaign and Beyoncé’s Coachella documentary, Homecoming.

They have also worked on feature films like Netflix’s Beasts of No Nation, Wes Anderson’s Oscar-winning The Grand Budapest Hotel and the crime caper Ocean’s 8.

The-Artery’s new studio features a variety of software including Flame, Houdini, Cinema 4D, 3ds Max, Maya, the Adobe Creative Cloud suite of tools, Avid Media Composer, Shotgun for review and approval and more.

The-Artery features a veteran team of artists and creative collaborators, including a recent addition — editor and former Mad River Post owner Michael Elliot. “Whether they are agencies, commercial and film directors or studios, our clients always work directly with our creative directors and artists, collaborating closely throughout a project,” says Sharabani.

Main Image: Vico Sharabani (far right) and team in their new space.

FXhome, Vegas Creative Software partner on Vegas Post

HitFilm creator FXhome has partnered with Vegas Creative Software to launch a new suite of editing, VFX, compositing and imaging tools for video pros, editors and VFX artists called Vegas Post.

Vegas Post will combine the editing tools of Vegas Pro with FXhome’s expertise in compositing and visual effects to offer an array of features and capabilities.

FXhome is developing customized effects and compositing tools specifically for Vegas Post. The new software suite will also integrate a custom-developed version of FXhome’s new non-destructive RAW image compositor that will enable video editors to work with still-image and graphical content and incorporate it directly into their final productions. All tools will work together seamlessly in an integrated, end-to-end workflow to accelerate and streamline the post production process for artists.

The new software suite is ideally suited for video pros in post facilities of all sizes and requirements — from individual artists to large post studios, broadcasters and small/medium enterprise installations. It will be available in the third quarter, with pricing to be announced.

Meanwhile, FXhome has teamed up with Filmstro, which offers a royalty-free music library, to provide HitFilm users with access to the entire Filmstro music library for 12 months. With Filmstro available directly from the FXhome store, HitFilm users can use Filmstro soundtracks on unlimited projects and get access to weekly new music updates.

Offering more than just a royalty-free music library, Filmstro has developed a user interface that gives artists flexibility and control over selected music tracks for use in their HitFilm projects. HitFilm users can control the momentum, depth and power of any Filmstro track, using sliders to perfectly match any sequence in a HitFilm project. Users can also craft soundtracks to perfectly fit images by using a keyframe graph editor within Filmstro. Moving sliders automatically create keyframes for each element and can be edited at any point.

Filmstro offers over 60 albums’ worth of music with weekly music releases. All tracks are searchable using keywords, film and video genre, musical style, instrumental palette or mood. All Filmstro music is licensed for usage worldwide and in perpetuity. The Filmstro dynamic royalty-free music library is available now on the FXhome Store for $249 and can be purchased here.

Checking In: Glassworks’ Duncan Malcolm, Flame Award winner

Back in April, during an event at NAB, Autodesk presented its 2019 Flame Award to Duncan Malcolm. The Flame artist and director of 2D at Glassworks VFX in London was celebrated for his 20-plus years of artistic achievement.

Malcolm has been working in production and post for 33 years. At Glassworks, he works closely with the studio’s CG artists to seamlessly blend CG photoreal assets and real-world environments for high-end commercial clients. Alongside his work in commercials, Malcolm has worked closely with the creators of the television series Black Mirror on look development and compositing for the award-winning Netflix series, including the critically acclaimed Bandersnatch interactive episode.

Duncan Malcolm

Let’s find out more about Malcolm’s beginnings, and the path that led him to Glassworks. And you can check out his showreel here.

You have a rich history in this industry. How did you get started working in VFX?
I started straight out of school at 15 years old at TVP, a small production company in Scotland that made corporate films and crewed for visiting broadcast companies. It was very small so I got involved in everything — camera work, location sound, sound design, edit and even made the VHS dubs, 8mm cine film transfers and designed the tape covers. So I learned a lot by getting on and doing it. It was before the Internet was prevalent, so you couldn’t just Google it back then; it really was trial and error.

TVP are still based in Aberdeen and still doing incredible work with a tiny crew. I often tell people in London about their feature film Sawney Bean, which they self-funded and made with a complete crew of five in their “spare time,” and which, for all that, is completely inspirational.

I then became an offline and online editor at Picardy Television, which was at the time the biggest and most creative edit house in Scotland. It was there that I started using Quantel’s Editbox. I was focused on the offline work but also started to incorporate more sophisticated VFX into the online work. Around 1998, I made quite an abrupt move to London, I think as a reaction to my dad’s death. Back then the London industry didn’t really accept that one person could be good at more than one part of the filmmaking process, so I decided to focus on the VFX string to my bow.

I freelanced through Soho Editors as an Editbox artist in London and Denmark until I was offered the creative director/lead compositor position at Saatchi’s in-house company, Triangle. This is where I first met the Flame, and we spent many a long day and night together making commercials and music videos.

I think my first big lead Flame job was Craig David’s Walking Away for Max and Dania. Apart from a few relatively simple commercials I hadn’t truly put the toolset to the test by then. It was quite frankly my personal VFX version of a baptism by fire. I barely left the room for weeks but felt more inspired (and tired) by the end.

Flame became my best VFX friend and my work grew in complexity. Eventually I was offered a position by Joce Capper and Bill McNamara at Rushes and spent quite a few years there working on a fair mixture of commercials and music videos.

How did you find your way to Glassworks?
Around 14 years ago, Hector Macleod offered me a Flame operator position at Glassworks. I jumped at that chance, and since then we have been building on Glassworks’ reputation for seamless VFX and innovative techniques. It’s been fun times, but also very interesting to watch the growth of our industry and the changes in expectations in projects. Even more interesting to me is that, even though on large projects we still effectively specialize, the industry in London and worldwide is much more accepting of the multi-skilled approach to filmmaking. Finally, the world is beginning to embrace the principles I first learned 33 years ago at TVP.

For the Bandersnatch episode of Black Mirror, how did your creative process on this episode differ from other TV projects, and did you use Flame any differently as a result?
I should mention that Bandersnatch has been nominated for a few BAFTAs (best single drama, best editing and best special, visual and graphic effects), so everyone involved is massively excited about that.

I really like working with House of Tomorrow on the Black Mirror films, but I especially loved working on Bandersnatch with producer Russell McLean and director David Slade. It really felt like we were involved in something fresh and new. Nobody knew for sure how the audience was going to watch and engage with such a complex story told in the interactive format. This made it impossible to make any of the normal assumptions. For VFX the goal was the same as always: to realize Slade’s vision and, in the process, make every shot as engaging as possible. But the fact that it didn’t play out in a single linear timeline meant that every single decision had to be considered from this new point of view.

When did you get involved in the project?
I was involved in the very early stages of Bandersnatch, helping with ideas for the viewer’s interactive choice points. These tests were more basic editorial and content tests. I shot our head of production Duncan Buxton acting out parts of the script and cut decision-point sequences to illustrate ways the choices could work. I used Flame as an offline, basic online and audio editing tool for these. Almost every stage in the VFX planning went through some look developed in Flame.

For the environmental work, we used traditional matte painting techniques and some clever CG techniques in places, but on a lot of it, I used Flame to build and paint concept layouts. The pre-shoot Trellick concept work in fact carried through to the final shots. The moment the mirror cracks was completely built in Flame using some pictures of west London vandalism I came across by accident on the way back from a Bandersnatch preproduction meeting.

The “through the mirror” sequences were shot with 3x-synced ARRI 65 cameras and the footage was unwrapped and used to re-project onto a 3D Stefan [the show’s young programmer] to make his reflection whilst he emerged from the mirror. The VFX requirements on this section of the shoot schedule were quite significant, so on set we had to be confident of the technique used and very quick to react to changes. Since rebuilding his reflection would take many weeks, I built versions of all the shots in Flame. These were used by editor Tony Kearns to find a pace for the sequence, and this fed into our CG artists who were building the reflection.

There were all sorts of Flame tools used to look-develop and finish this show. It really was my complete VFX supervisor companion throughout.

Can you talk about your Mr-benn.com initiative and how that came about?
Mr-benn.com is an art site I set up to exhibit and sell some of what I refer to as “the other art,” created by people who work in the film and television industry. A portion of every sale is donated to plasticpollutioncoalition.org, which raises awareness about and fights plastic pollution, something worth standing behind.

I talked with so many friends and colleagues, talented in their own fields, who had such an insatiable appetite for creating that even after the grueling schedules of film projects had beaten them, they still had more to create and show. Their “other” is an amazing mixture of photography, found art, land art, fractals, infrared photography and digital design. It all could be — and often is — exhibited separately on generic art sites without much importance put on the creators’ cinematic achievements. Mr-benn is about the achievement in both their day jobs and their “other art” together. It’s starting to get talked about; I hope people like what they see and help support a good cause.

How has your use of Flame changed or evolved over the past 20 years? Are there any particular features that have been added that make your job easier?
Flame has changed greatly since I started with it. I think the addition of the timeline was a particular game-changer, and it’s difficult to remember what it was like without 16-bit float capabilities. In terms of recent changes, the color management has made color workflow much easier. To be fair, every update makes something a little easier.

What other tools are in your arsenal?
I have the demo of almost every type of 3D and 2D package on my laptop, but I haven’t made enough time to master any of them apart from Flame, a little Nuke and Photoshop. I do rely on my Canon DSLR a lot, and I grade stills with Lightroom.

Was there a particular film that motivated you to work in VFX?
Not one in particular. There have been some that along the way have impressed me. I’m thinking District 9 as I type, but there have been a few with a similar effect on me.

What inspires your work?
I take an interest in a lot of everyday things, what the world looks and moves like. Not enough to be an expert in anything, but enough to understand (on a basic level) how it could be recreated. I’m certainly not very clever, just interested enough to spend proper time to find solutions.

The other part is that I seem to have a gene that makes me feel really bad if I let people down. So I keep going until a problem shot is better, or I hit an immovable delivery date. I’d have done okay in any service industry really.

Any tips for young people starting out?
I see a direct link between exceptional creativity in VFX work and how deeply curious people are about the real world, with all of its incredible qualities. A good place to start is getting interested in what the real world actually looks like through a real lens. Take your own pictures, as it makes you understand the relationship between lens and objects.

Start your own projects, and make sure they’re ambitious. Work out how to make them amazing. Then show these as an example of what you can do. Don’t show roto for roto’s sake. Once you get a job, don’t get complacent and think you’ve made it. The next step in a career isn’t automatic. It only happens with added effort.

Sydney’s Fin creates CG robot for Netflix film I Am Mother

Fin Design + Effects, an Australian-based post production house with studios in Melbourne and Sydney, brings its VFX and visual storytelling expertise to the upcoming Netflix film I Am Mother. Directed by Grant Sputore, the post-apocalyptic film stars Hilary Swank, Rose Byrne and Clara Rugaard.

In I Am Mother, a teenage girl (Rugaard) is raised underground by the robot “Mother” (voiced by Byrne), designed to repopulate the earth following an extinction event. But their unique bond is threatened when an inexplicable stranger (Swank) arrives with alarming news.

Working closely with the director, Fin Design’s Sydney office built a CG version of the AI robot Mother to be used interchangeably with the practical robot suit built by New Zealand’s Weta Workshop. Fin was involved from the early stages of the process to help develop the look of Mother, completing extensive design work and testing, which then fed back into the practical suit.

In total, Fin produced over 220 VFX shots, including the creation of a menacing droid army as well as general enhancements to the environments and bunker where this post-apocalyptic story takes place.

According to Fin Australia’s managing director, Chris Spry, “Grant was keen on creating an homage of sorts to old-school science-fiction films and embracing practical filmmaking techniques, so we worked with him to formulate the best approach that would still achieve the wow factor — seamlessly combining CG and practical effects. We created an exact CG copy of the suit, visualizing high-action moments such as running, or big stunt scenes that the suit couldn’t perform in real life, which ultimately accounted for around 80 shots.”

Director Sputore on working with Fin: “They offer suggestions and bust expectations. In particular, they delivered visual effects magic with our CG Mother, one minute having her thunder down bunker corridors and in the next moment speed-folding intricate origami creations. For the most part, the robot at the center of our film was achieved practically. But in those handful of moments where a practical solution wasn’t possible, it was paramount that the audience not be bumped from the film by a sudden transition to a VFX version of one of our central characters. In the end, even I can’t tell which shots of Mother are CG and which are practical, and, crucially, neither can the audience.”

To create the CG replica, the Fin team paid meticulous attention to detail, ensuring the material, shaders and textures perfectly matched photographs and laser scans of the practical suit. The real challenge, however, was in interpreting the nuances of the movements.

“Precision was key,” explains VFX supervisor Jonathan Dearing. “There are many shots cutting rapidly between the real suit and CG suit, so any inconsistencies would be under a spotlight. It wasn’t just about creating a perfect CG replica but also interpreting the limitations of the suit. CG can actually depict a more seamless movement, but to make it truly identical, we needed to mimic the body language and nuances of the actor in the suit [Luke Hawker]. We did a character study of Luke and rigged it to build a CG version of the suit that could mimic him precisely.”

Fin finessed its robust automation pipeline for this project. Built to ensure greater efficiency, the system allows animators to push their work through lighting and comp at the click of a button. For example, if a shot didn’t have a specific light rig made for it, animators could automatically apply a generic light rig that suits the whole film. This tightly controlled system meant that Fin could have one lighter and one animator working on 200 shots without compromising on quality.
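As a rough illustration of that fallback logic (not Fin’s actual code), the sketch below shows the core idea in Python: resolve a shot-specific light rig if one exists, otherwise fall back to the generic show rig, then submit every shot in one pass. The paths and the submit_render() helper are hypothetical.

```python
# Sketch of a generic-light-rig fallback (hypothetical paths and helper).
import os

GENERIC_RIG = "/show/mother/lighting/rigs/generic_show.lightrig"

def resolve_light_rig(shot):
    """Prefer a shot-specific rig; otherwise use the show-wide default."""
    shot_rig = "/show/mother/shots/%s/lighting/%s.lightrig" % (shot, shot)
    return shot_rig if os.path.exists(shot_rig) else GENERIC_RIG

def submit_render(shot, rig):
    """Hypothetical stand-in for the farm call that runs lighting and comp."""
    print("submitting %s with %s" % (shot, rig))

def push_through_lighting_and_comp(shots):
    """One click from the animator fans out across every shot."""
    for shot in shots:
        submit_render(shot, resolve_light_rig(shot))

push_through_lighting_and_comp(["mth_0310", "mth_0320", "mth_0330"])
```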

The studio used Autodesk Maya, Side Effects Houdini, Foundry Nuke and Redshift on this project.

I Am Mother premiered at the 2019 Sundance Film Festival and is set to stream on Netflix on June 7.

Behind the Title: MPC creative director Rupert Cresswell

This Brit is living in New York while working on spots, directing and playing dodgeball.

NAME: Rupert Cresswell

COMPANY: MPC

CAN YOU DESCRIBE YOUR COMPANY?
MPC has been one of the global leaders in VFX for nearly 50 years, with industry-leading facilities in London, Vancouver, Los Angeles, Bangalore, New York, Montréal, Shanghai, Amsterdam and Paris. Well-known for its work across the advertising, film and entertainment industries, MPC counts among its most famous projects blockbuster movies such as The Jungle Book, The Martian, the Harry Potter franchise, the X-Men movies and the upcoming The Lion King, not to mention famous advertising campaigns for brands such as Samsung, BMW, Hennessy and Apple. I am based in New York.

WHAT’S YOUR JOB TITLE?
Creative Director (and Director)

WHAT DOES THAT ENTAIL?
Lots of things, depending on the project. I am repped by MPC to direct commercials, so my work often mixes live action with some form of visual effects or animation. I’m constantly pitching for jobs; if I am successful, I direct the subsequent shoot, then oversee a team of artists at MPC through the post process until delivery.

VeChain 

When I’m not directing, I work as a creative director, leading teams on animation and design projects within MPC. It’s mostly about zeroing in on a client’s needs and offering a creative solution. I critique large teams of artists’ work — sometimes up to 60 artists across our global network — ensuring a consistent creative vision. At MPC we are expected to keep the highest standards of work and make original contributions to the industry. It’s my job to make sure we do.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I feel like the lines between agency, production company and VFX studio can be blurred these days. In my job, I’m often called on for a wide range of disciplines, such as writing the creative, directing actors and even designing large-scale print and OOH (out-of-home) advertising campaigns.

WHAT’S YOUR FAVORITE PART OF THE JOB?
There’s always a purity to the concepts at the pitch stage, which I tend to get really enthusiastic about, but the best bit is to get to travel to shoot. I’ve been super-lucky to film in some awesome places like the south of France, Montreal, Cape Town and the Atacama Desert in Chile.

Additionally, the industry is full of funny, cool, creative characters, and if you can take a beat to remind yourself of that, it’s always a blast working with them. The usual things can bother you, like stress and long hours; also, no one likes it when ideas with great potential get compromised. But more often than not, I’m thankful for what I get to do.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
There’s a sweet spot in the morning after I’ve had some caffeine and before I get hungry for lunch — that’s when the heavy lifting happens.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I always knew I wanted to go to art school but never really knew what to do after that. It took years to figure out how to turn my interests into a career. There’s a lot to be said for stubbornly refusing to do something less interesting.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I finished a big campaign for Timberland, which was a great experience. I worked directly with the client, first on the creative, then directing the shoot in Montreal. I then oversaw the post and the print campaign, which seemed to go up everywhere I went in the city. It was a huge technical and creative challenge, but great to be involved from the very start to the very end of the process.

I also worked on one of the first brand campaigns for the blockchain currency VeChain. That was a huge VFX undertaking and lots of fun — we created a love letter to some classic sci-fi films like Star Wars and Blade Runner, which turned out pretty sweet.

In complete contrast, my favorite recent experience was working on the branding for the cult Hulu comedy Pen15. The show is so funny, it was a bit of a dream project. It was refreshing to go from such a large technical endeavor as Timberland with a big VFX team to working almost solo, and mostly just illustrating. There was something really cathartic about it. The job required me to spend most of the day doodling childish pictures — I got a real kick out of the puzzled faces around the office wondering if I’d had some kind of breakdown.

Pen15

WHAT OTHER PROJECTS STAND OUT?
Some of my stuff won glittery awards, but I am super-proud that I made a short film, called Charlie Cloudhead, that got picked up by many festivals. I always wanted to try writing and directing narrative work, and I wanted something that could showcase more of my live-action direction.

It was an unusually personal film, which I still feel a little awkward about, but I am really proud that I put in the effort to make it. It was amazing to work with two fantastic actors (Paul Higgins and Daisy Haggard), and I’m still humbled by all the hard work a big team of people put in just for some kooky little idea that I dreamed up.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
The idea of no phone and no Internet gives me anxiety. Add to the horror by taking away AC during a New York summer and I’d be a weeping mess.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I’m pretty much addicted to scrolling through Instagram, but I’m lazy at posting stuff. Maybe it’ll become Myspace 2.0 and we’ll all laugh at all those folks with thousands of followers. Until then, it’s very useful for seeing inspiring new work out there.

I’m also a Brit living abroad in the US, so I’m rather masochistically glued to any news of the whole Brexit thing going down.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
I do. Music is incredibly influential. Most of the time when I’m working on a project, it will be inspired by a song. It helps me create a mood for the film and I’ll listen to it repeatedly while I’m working on script or walking around thinking about it. For example, my short film was inspired by a song by Cate Le Bon.

My taste is pretty random to be honest. Recently I’ve been re-visiting Missy Elliott and checking out Rosalia, John Maus and the new Karen O stuff. I’m also a bit obsessed with an artist from Mali called Oumou Sangaré. I was introduced to her by a late-night Lyft driver recently, and she’s been helping set the mood for this Q&A right now.

I should add, I work in an open-plan studio and access to the Bluetooth speaker takes a certain restraint and responsibility to prevent arguments — I’m not necessarily the right guy for that. I usually try and turn the place into Horse Meat Disco.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I recently joined a dodgeball league. I had no idea how to play at first, and I’m actually very bad at it. I’m treating it as a personal challenge — learning to embrace being a laughable failure. I’m sure it’ll do me good.

Fox Sports promotes US women’s World Cup team with VFX-heavy spots

Santa Monica creative studio Jamm worked with Wieden+Kennedy New York on the Fox Sports campaign “All Eyes on US.” Directed by Joseph Kahn out of Supply & Demand, the four spots celebrate the US Women’s soccer team as it gears up for the 2019 FIFA Women’s World Cup in June.

The newest 60-second spot, All Eyes on US, features tens of thousands of screaming fans, thanks to Jamm’s CG crowd work. On set, Jamm brainstormed with Kahn on how to achieve the immersive effect he was looking for. Much of the on-the-ground footage was shot using wide-angle lenses, which posed a unique set of challenges by revealing the entire environment as well as the close-up action. Through pacing, Jamm achieved the sense of the game occurring in realtime, as the tempo of the camera keeps in step with the team moving the ball downfield.

The 30-second spot Goliath features the Jamm team’s first CG crowd shot, filling the soccer stadium with a roaring crowd. In Goliath, the entire US women’s soccer team runs toward the camera in slow motion. The shot was captured locked off but digitally manipulated via a 3D camera to create a dolly-zoom effect replicating real-life parallax; the altered perspective translates the unsettling feeling of being an opponent as the team literally runs straight into the camera.
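The geometry behind that shot is the classic dolly-zoom relationship: to hold a subject’s framed size constant, focal length must scale linearly with subject distance, and it is that scaling that warps the background parallax. A small illustrative calculation (the numbers are invented, not taken from the spot):

```python
# Dolly-zoom relation: subject image size ~ focal_length / distance,
# so focal length must scale with distance to keep framing constant.
def dolly_zoom_focal(f_start, d_start, d_current):
    """Focal length that holds subject size fixed at the new distance."""
    return f_start * (d_current / d_start)

f0, d0 = 35.0, 4.0  # start: 35mm lens, subject 4m from camera
for d in (4.0, 6.0, 8.0, 12.0):
    print("distance %5.1f m -> focal %5.1f mm" % (d, dolly_zoom_focal(f0, d0, d)))
```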

On set, Jamm got an initial Lidar scan of the stadium as a base. From there, they used that scan, along with reference photos taken on set, to build a CG stadium that included accurate seating. They also extended the stadium where there were gaps to make it a full 360-degree stadium. The stadium seating tools tied in with Jamm’s in-house crowd system (based on Side Effects Houdini) and allowed them to easily direct the performance of the crowd in every shot.
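The principle behind a seat-based crowd setup like that is easy to sketch even outside Houdini: each seat position gets an agent with a randomized animation clip, time offset and scale, so the stands never move in lockstep. The clip names and data layout below are invented for illustration; this is not Jamm’s system.

```python
# Illustrative seat-based crowd population (invented clip names/layout).
import random

CLIPS = ["cheer_big", "cheer_small", "clap", "wave_flag", "sit_idle"]

def populate_stands(seat_positions, seed=1234):
    """Assign each seat an agent with de-synced motion for natural variety."""
    random.seed(seed)  # deterministic, so every render of a shot matches
    agents = []
    for i, pos in enumerate(seat_positions):
        agents.append({
            "id": i,
            "position": pos,
            "clip": random.choice(CLIPS),
            "offset": random.uniform(0.0, 48.0),  # frames of de-sync
            "scale": random.uniform(0.95, 1.05),  # subtle size variation
        })
    return agents

seats = [(col * 0.6, row * 0.4, row * 0.9)   # a toy 4x10 block of seats
         for row in range(4) for col in range(10)]
print(populate_stands(seats)[0])
```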

The Warrior focuses on Megan Rapinoe standing on the field in the rain, with a roaring crowd behind her. Whereas CG crowd simulation is typically captured with fast-moving cameras, the stadium crowd remains locked in the background of this sequence. Jamm implemented motion work and elements like confetti to make the large group of characters appear lively without detracting from Rapinoe in the foreground. Because the live-action scenes were shot in the rain, Jamm used water graphing to seamlessly blend the real-world footage and the CG crowd work.

The Finisher centers on Alex Morgan, who earned the nickname because “she’s the last thing they’ll see before it’s too late.” The team ran down the field at a slow-motion pace while the cameraman, rigged with a Steadicam, sprinted backwards through the goal. The footage was then sped up by 600%, giving it a realtime quality as Morgan kicks a perfect strike to the back of the net.

Jamm used Autodesk Flame for compositing the crowds and the CG ball, with camera projections to rebuild and clean up parts of the environment, refine the skies and add in stadium branding. They also used Foundry Nuke and Houdini for 3D.

The edit was via FinalCut and editor Spencer Campbell. The color grade was by Technicolor’s Tom Poole.

2 Chainz’s 2 Dolla Bill gets VFX from Timber

Santa Monica’s Timber, known for its VMA-winning work on the Kendrick Lamar music video “Humble,” provided visual effects and post production for the latest music video from 2 Chainz featuring E-40 and Lil Wayne — 2 Dolla Bill.

The video begins with a group of people in a living room with the artist singing, “I’m rare” while holding a steak. It transitions to a poker game where the song continues with “I’m rare, like a two dollar bill.” We then see a two dollar bill with Thomas Jefferson singing the phrase as well. The video takes us back to the living room, the poker game, an operating room, a kitchen and other random locations.

Artists at collaborating company Kevin provided 2D visual effects for the music video, including the scene with the third eye.

According to Timber creative director/partner Kevin Lau, “The main challenge for this project was the schedule. It was a quick turnaround initially, so it was great to be able to work in tandem with offline to get ahead of the schedule. This also allowed us to work closely with the director and implement some of his requests to enhance the video after it was shot.”

Timber got involved early on in the project and was on set while they shot the piece. The studio called on Autodesk Flame for clean-up, compositing and enhancement work, as well as the animation of the talking money.

Lau was happy Timber got the chance to be on set. “It was very useful to have a VFX supervisor on set for this project, given the schedule and scope of work. We were able to flag any concerns/issues right away so they didn’t become bigger problems in post.”

Arcade Edit’s Geoff Hounsell edited the piece. Daniel de Vue from A52 provided the color grade.

 

Marvel Studios’ Victoria Alonso to keynote SIGGRAPH 2019

Marvel Studios executive VP of production Victoria Alonso has been named keynote speaker for SIGGRAPH 2019, which will run from July 28 through August 1 in downtown Los Angeles. Registration is now open. The annual SIGGRAPH conference is a melting pot for researchers, artists and technologists, among other professionals.

“Victoria is the ultimate symbol of where the computer graphics industry is headed and a true visionary for inclusivity,” says SIGGRAPH 2019 conference chair Mikki Rose. “Her outlook reflects the future I envision for computer graphics and for SIGGRAPH. I am thrilled to have her keynote this summer’s conference and cannot wait to hear more of her story.”

One of the few women in Hollywood to hold such a prominent title, Alonso has long been admired for her dedication to the industry, which has earned her multiple awards and honors, including the 2015 New York Women in Film & Television Muse Award for Outstanding Vision and Achievement, the Advanced Imaging Society’s Harold Lloyd Award (she was its first female recipient) and the 2017 VES Visionary Award (another female first). A native of Buenos Aires, she began her career in visual effects, including a four-year stint at Digital Domain.

Alonso’s film credits include productions such as Ridley Scott’s Kingdom of Heaven, Tim Burton’s Big Fish, Andrew Adamson’s Shrek, and numerous Marvel titles — Iron Man, Iron Man 2, Thor, Captain America: The First Avenger, Iron Man 3, Captain America: The Winter Soldier, Captain America: Civil War, Thor: The Dark World, Avengers: Age of Ultron, Ant-Man, Guardians of the Galaxy, Doctor Strange, Guardians of the Galaxy Vol. 2, Spider-Man: Homecoming, Thor: Ragnarok, Black Panther, Avengers: Infinity War, Ant-Man and the Wasp and, most recently, Captain Marvel.

“I’ve been attending SIGGRAPH since before there was a line at the ladies’ room,” says Alonso. “I’m very much looking forward to having a candid conversation about the state of visual effects, diversity and representation in our industry.”

She adds, “At Marvel Studios, we have always tried to push boundaries with both our storytelling and our visual effects. Bringing our work to SIGGRAPH each year offers us the opportunity to help shape the future of filmmaking.”

The 2019 keynote session will be presented as a fireside chat, allowing attendees the opportunity to hear Alonso discuss her life and career in an intimate setting.

Review: Maxon Cinema 4D Release 20

By Brady Betzel

Last August, Maxon made its Cinema 4D Release 20 available. From the new node-based Material Editor to the all-new console used to debug and develop scripts, Maxon has really upped the ante.

At the recent NAB show, Maxon announced that it had acquired Redshift Rendering Technologies, makers of the Redshift rendering engine. The acquisition will hopefully tie an industry-standard GPU-based rendering engine directly into Cinema 4D R20’s workflow and speed up rendering. For now, the same licensing fees apply to Redshift as before the acquisition: Node-Locked is $500 and Floating is $600.

Digging In
The first update to Cinema 4D R20 that I wanted to touch on is the new node-based Material Editor. If you are familiar with Blackmagic’s DaVinci Resolve or Foundry’s Nuke, then you have seen how nodes work. I love node workflows because they let you wire up anything from layered effects to — in Cinema 4D R20’s case — diffusion driven by camera distance. There are over 150 nodes inside the Material Editor to build textures with.

One small change that I noticed inside the updated Material Editor was the new gradient settings. When you are working with gradient knots, you can now select multiple knots at once and then right-click to double the selected knots, invert the knots, choose different knot interpolations (including stepped, smooth, cubic, linear and blend) and even distribute the knots to clean up your pattern. A really nice and convenient update to gradient workflows.
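
The knot operations are easier to picture as simple math on (position, value) pairs. A tiny Python sketch of what invert and distribute do to a gradient (my illustration of the idea, not Maxon’s code):

```python
# Gradient knots as (position, value) pairs, positions normalized to 0..1.
def invert(knots):
    # Mirror every knot around the center of the gradient.
    return sorted((1.0 - pos, val) for pos, val in knots)

def distribute(knots):
    # Space the knots evenly across 0..1, preserving their order.
    knots = sorted(knots)
    if len(knots) < 2:
        return knots
    return [(i / (len(knots) - 1), val) for i, (_, val) in enumerate(knots)]

knots = [(0.0, "black"), (0.25, "red"), (1.0, "white")]
print(invert(knots))      # [(0.0, 'white'), (0.75, 'red'), (1.0, 'black')]
print(distribute(knots))  # [(0.0, 'black'), (0.5, 'red'), (1.0, 'white')]
```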

In Cinema 4D R20, not only can you add new nodes from the search menu, but you can also click the node dots in the Basic properties window and route nodes through there. When you are happy with your materials made in the node editor, you can save them as assets in the scene file or even compress them in a .zip file to share with others.

In a related update, Cinema 4D Release 20 has introduced the Uber Material. In simple terms (and I mean real simple), the Uber Material is a node-based material that differs from standard or physical materials in that it can be edited inside the Attribute Manager or Material Editor while retaining the properties available in the Node Editor.

The Camera Tracking and 2D Camera View have also been updated. While the Camera Tracking mode has been improved, the new 2D Camera View mode combines the Film Move mode with the Film Zoom mode, adding the ability to use standard shortcuts to move around a scene instead of messing with the Film Offset or Focal Length in the Camera Object Properties dialogue. For someone like me who isn’t a certified pro in Cinema 4D, these little shortcuts really make me feel at home, much more like the apps I’m used to, such as Mocha Pro or After Effects. Maxon has also improved the 2D tracking algorithm for much tighter tracks and added virtual keyframes. The virtual keyframes are an extreme help when you don’t have time for minute adjustments.

Volume Modeling
What seems to be one of the largest updates in Cinema 4D R20 is the addition of Volume Modeling with the OpenVDB-based Volume Builder. According to www.openvdb.org, “OpenVDB is an Academy Award-winning C++ library comprising a hierarchical data structure and a suite of tools for the efficient manipulation of sparse, time-varying, volumetric data discretized on three-dimensional grids.” It was developed by Ken Museth at DreamWorks Animation. Instead of polygons, it uses 3D pixels called voxels. When using the Volume Builder, you can combine multiple polygon and primitive objects using Boolean operations: Union, Subtract or Intersect. Furthermore, you can smooth your volume using multiple techniques, including one that made me do some extra Google work: Laplacian Flow.
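
Those Boolean modes are less mysterious than they sound: on signed distance volumes (negative inside a surface, positive outside), union, intersect and subtract reduce to per-voxel min/max operations. A minimal NumPy sketch of that logic, illustrating the concept rather than OpenVDB’s sparse implementation:

```python
import numpy as np

# Sample two spheres as signed distance fields (SDFs) on a dense voxel grid:
# negative values are inside the surface, positive values are outside.
res = 64
axis = np.linspace(-1.0, 1.0, res)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")

def sphere_sdf(cx, cy, cz, r):
    return np.sqrt((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2) - r

a = sphere_sdf(-0.25, 0.0, 0.0, 0.5)
b = sphere_sdf(+0.25, 0.0, 0.0, 0.5)

# Boolean volume modeling is per-voxel min/max on the distance values.
union     = np.minimum(a, b)
intersect = np.maximum(a, b)
subtract  = np.maximum(a, -b)  # a minus b

# Count the voxels inside each result (SDF < 0 means inside).
for name, grid in (("union", union), ("intersect", intersect), ("subtract", subtract)):
    print(name, int((grid < 0).sum()))
```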

Fields
When going down the voxel rabbit hole in Cinema 4D R20, you will run into another new update: Fields. Prior to Cinema 4D R20, we would use Effectors to affect strength values of an object. You would stack and animate multiple effectors to achieve different results. In Cinema 4D R20, under the Falloff tab you will now see a Fields list along with the types of Field Objects to choose from.

Imagine a MoGraph object whose opacity you want controlled by a box object moving through it, while a capsule poking through physically modifies it at the same time. You can combine these different field objects using compositing functions in the Fields list. In addition, you can animate or alter these new fields straight away in the Objects window.
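
Conceptually, a field is just a function that returns a strength for any point in space, and the Fields list composites those strengths like layers. A toy Python sketch of that idea (my own model of the concept, not Maxon’s API):

```python
# A field maps a 3D point to a strength in 0..1; the Fields list blends them.
def box_field(point, center=(0, 0, 0), size=1.0):
    inside = all(abs(p - c) <= size / 2 for p, c in zip(point, center))
    return 1.0 if inside else 0.0

def capsule_field(point, center=(0, 0, 0), radius=0.5):
    dist = sum((p - c) ** 2 for p, c in zip(point, center)) ** 0.5
    return max(0.0, 1.0 - dist / radius)  # linear falloff out to the radius

def composite(point, layers):
    """layers: (field, blend_mode) pairs applied bottom-up, like the Fields list."""
    strength = 0.0
    for field, mode in layers:
        s = field(point)
        if mode == "add":
            strength = min(1.0, strength + s)
        elif mode == "multiply":
            strength *= s
        elif mode == "max":
            strength = max(strength, s)
    return strength

# Opacity of one clone, driven by a box field plus a poking capsule field.
print(composite((0.2, 0.0, 0.0), [(box_field, "add"), (capsule_field, "max")]))
```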

Summing Up
Cinema 4D Release 20 has some amazing updates that will greatly improve the efficiency and quality of your work. From tracking updates to Fields, there are plenty of exciting tools to dive into. And if you are reading this as an After Effects user who isn’t sure about Cinema 4D, now is the time to dive in. Once you learn the basics, whether from YouTube tutorials or classes at www.cineversity.com, you will immediately see an increase in the quality of your work.

Combining Adobe After Effects, Element 3D and Cinema 4D R20 is the ultimate in 3D motion graphics and 2D compositing — accessible to almost everyone. And I didn’t even touch on the dozens of other updates to Cinema 4D R20, like the multitude of ProRender updates, FBX import/export options, new node materials and CAD import support for Catia, IGES, JT, SolidWorks and STEP formats. Check out Cinema 4D Release 20’s newest features on YouTube and on Maxon’s website.

And, finally, I think it’s safe to assume that Maxon’s acquisition of the Redshift renderer points to a bright future for Cinema 4D users.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

NAB 2019: Maxon acquires Redshift Rendering Technologies

Maxon, maker of Cinema 4D, has purchased Redshift Rendering Technologies, developer of the Redshift rendering engine. Redshift is a flexible, GPU-accelerated renderer targeting high-end production, with an extensive suite of features that make rendering complicated 3D projects faster. It is available as a plugin for Maxon’s Cinema 4D and other industry-standard 3D applications.

“Rendering can be the most time-consuming and demanding aspect of 3D content creation,” said David McGavran, CEO of Maxon. “Redshift’s speed and efficiency combined with Cinema 4D’s responsive workflow make it a perfect match for our portfolio.”

“We’ve always admired Maxon and the Cinema 4D community, and are thrilled to be a part of it,” said Nicolas Burtnyk, co-founder/CEO, Redshift. “We are looking forward to working closely with Maxon, collaborating on seamless integration of Redshift into Cinema 4D and continuing to push the boundaries of what’s possible with production-ready GPU rendering.”

Redshift is used by post companies, including Technicolor, Digital Domain, Encore Hollywood and Blizzard. Redshift has been used for VFX and motion graphics on projects such as Black Panther, Aquaman, Captain Marvel, Rampage, American Gods, Gotham, The Expanse and more.

Autodesk’s Flame 2020 features machine learning tools

Autodesk’s new Flame 2020 offers a new machine-learning-powered feature set with a host of new capabilities for Flame artists working in VFX, color grading, look development or finishing. This latest update will be showcased at the upcoming NAB Show.

Advancements in computer vision, photogrammetry and machine learning have made it possible to extract motion vectors, Z depth and 3D normals based on software analysis of digital stills or image sequences. The Flame 2020 release adds built-in machine learning analysis algorithms to isolate and modify common objects in moving footage, dramatically accelerating VFX and compositing workflows.

New creative tools include:
· Z-Depth Map Generator — Enables Z-depth map extraction analysis using machine learning for live-action scene depth reclamation. This allows artists doing color grading or look development to quickly analyze a shot and apply effects accurately based on distance from camera.
· Human Face Normal Map Generator — Since all human faces have common recognizable features (relative distance between eyes, nose, location of mouth), machine learning algorithms can be trained to find these patterns. This tool can be used to simplify accurate color adjustment, relighting and digital cosmetic/beauty retouching.
· Refraction — With this feature, a 3D object can now refract, distorting background objects based on its surface material characteristics. To achieve convincing transparency through glass, ice, windshields and more, the index of refraction can be set to an accurate approximation of real-world material light refraction.

Productivity updates include:
· Automatic Background Reactor — Immediately after modifying a shot, this mode is triggered, sending jobs to process. Accelerated, automated background rendering allows Flame artists to keep projects moving, using GPU and system capacity to the fullest. This feature is available on Linux only and can function on a single GPU.
· Simpler UX in Core Areas — A new expanded full-width UX layout for MasterGrade, Image surface and several Map user interfaces is now available, allowing for easier discoverability of and accessibility to key tools.
· Manager for Action, Image, Gmask — A simplified list schematic view, Manager makes it easier to add, organize and adjust video layers and objects in the 3D environment.
· Open FX Support — Flame, Flare and Flame Assist version 2020 now include comprehensive support for industry-standard Open FX creative plugins, usable as Batch/BFX nodes or on the Flame timeline.
· Cryptomatte Support — Available in Flame and Flare, support for the Cryptomatte open-source advanced rendering technique offers a new way to pack alpha channels for every object in a 3D rendered scene.

For single-user licenses, Linux customers can now opt for monthly, yearly and three-year licensing options. Customers with an existing Mac-only single-user license can transfer it to run Flame on Linux.

Flame, Flare, Flame Assist and Lustre 2020 will be available on April 16, 2019, at no additional cost to customers with a current Flame Family 2019 subscription. Pricing details can be found on the Autodesk website.

VFX and color for new BT spot via The Mill

UK telco BT wanted to create a television spot that showcased the WiFi capabilities of its broadband hub and underlined its promise of “whole home coverage.” Sonny director Fredrik Bond visualized a fun and fast-paced spot for agency AMV BBDO, and The Mill London was brought on board to help with VFX and color. The spot is called Complete WiFi.

In the piece, the hero comes home to find the house full of soldiers, angels, dancers, fairies, a giant and a horse — characters from the myriad games and movies the family is watching simultaneously. Obviously, the look depends on multiple layers of compositing, which have to be carefully scaled to be convincing.

They also need to be very carefully color matched, with similar lighting applied, so all the layers sit together. In a traditional workflow, this would have meant a lot of loops between VFX and grading to get the best from each layer, and a certain amount of compromise as the colorist imposed changes on virtual elements to make the final grade.

To avoid this, and to speed progress, The Mill recently started using BLG for Flame, a FilmLight plugin that allows Baselight grades to be rendered identically within Flame — with no back and forth to the color suite to render out new versions of shots. It means the VFX supervisor is continually seeing the latest grade, and the colorist can access the latest Flame elements to match them in.

“Of course it was frustrating to grade a sequence and then drop the VFX on top,” explains VFX supervisor Ben Turner. “To get the results our collaborators expect, we were constantly pushing material to and fro. We could end up with more than a hundred publishes on a single job.”

With the BLG for Flame plugin, the VFX artist sees the latest Baselight grade automatically applied, either from FilmLight’s BLG format files or directly from a Baselight scene, even while the scene is still being graded — although Turner says he prefers to be warned when updates are coming.

This works because all systems have access to the raw footage. Baselight grades non-destructively, by building up layers of metadata that are imposed in realtime. The metadata includes all the grading information, multiple windows and layers, effects and relights, textures and more – the whole process. This information can be imposed on the raw footage by any BLG-equipped device (there are Baselight Editions software plugins for Avid and Nuke, too) for realtime rendering and review.

That is important because it also allows remote viewing. For this BT spot, director Bond was back in Los Angeles by the time post began. He sat in a calibrated room at The Mill in LA and could see the graded images at every stage, reacting quickly to the first animation tests.

“I can render a comp and immediately show it to a client with the latest grade from The Mill’s colorist, Dave Ludlam,” says Turner. “When the client really wants to push a certain aspect of the image, we can ensure through both comp and grade that this is done sympathetically, maintaining the integrity of the image.”

(L-R) VFX supervisor Ben Turner and colorist Dave Ludlam.

Turner admits that it means more to-ing and fro-ing, but that is a positive benefit. “If I need to talk to Dave then I can pop in and solve a specific challenge in minutes. By creating the CGI to work with the background, I know that Dave will never have to push anything too hard in the final grade.”

Ludlam agrees that this is a complete change, but extremely beneficial. “With this new process, I am setting looks but I am not committing to them,” he says. “Working together I get a lot more creative input while still achieving a much slicker workflow. I can build the grade and only lock it down when everyone is happy.

“It is a massive speed-up, but more importantly it has made our output far superior. It gives everyone more control and — with every job under huge time pressure — it means we can respond quickly.”

The spot was offlined by Patric Ryan from Marshall Street. Audio post was via 750mph with sound designers Sam Ashwell and Mike Bovill.

FilmLight offers additions to Baselight toolkit

FilmLight will be at NAB showing updates to its Baselight toolkit, including T-Cam v2, the company’s new and improved color appearance model, which allows the user to render an image for all formats and device types with confidence that the color will hold.

It combines with the Truelight Scene Looks and ARRI Look Library, now implemented within the Baselight software. “T-CAM color handling with the updated Looks toolset produces a cleaner response compared to creative, camera-specific LUTs or film emulations,” says Andrea Chlebak, senior colorist at Deluxe’s Encore in Hollywood. “I know I can push the images for theatrical release in the creative grade and not worry about how that look will translate across the many deliverables.”

FilmLight has also added what it calls “a new approach to color grading” with the addition of Texture Blend tools, which allow the colorist to apply any color grading operation dependent on image detail. This gives the colorist fine control over the interaction of color and texture.

Other workflow improvements aimed at speeding the process include enhanced cache management; a new client view that displays a live web-based representation of a scene showing current frame and metadata; and multi-directory conform for a faster and more straightforward conform process.

The latest version of Baselight software also includes per-pixel alpha channels, eliminating the need for additional layer mattes when compositing VFX elements. Tight integration with VFX suppliers, including Foundry Nuke and Autodesk, means that new versions of sequences can be automatically detected, with the colorist able to switch quickly between versions within Baselight.

VFX house Rodeo FX acquires Rodeo Production

Visual effects studio Rodeo FX, whose high-profile projects include Dumbo, Aquaman and Bumblebee, has purchased Rodeo Production and added its roster of photographers and directors to its offerings.

The two companies, whose common name is just a coincidence, will continue to operate as distinct entities. Rodeo Production’s 10-year-old Montreal office will continue to manage photo and video production, but will now also offer Rodeo FX’s post production services and technical expertise.

In Toronto, Rodeo FX plans to open an Autodesk Flame editing suite in Rodeo Production’s studio and expand its Toronto roster of photographers and directors, with the goal of developing stronger production and post services for clients in the city’s advertising, television and film industries.

“This is a milestone in our already incredible history of growth and expansion,” says Sébastien Moreau, founder/president of Rodeo FX, which has offices in LA and Munich in addition to Montreal.

“I have always worked hard to give our artists the best possible opportunities, and this partnership was the logical next step,” says Rodeo Production’s founder Alexandra Saulnier. “I see this as a fusion of pure creativity and innovative technology. It’s the kind of synergy that Montreal has become famous for; it’s in our DNA.”

Rodeo Production clients include Ikea, Under Armour and Mitsubishi.

Quick Chat: Lord Danger takes on VFX-heavy Devil May Cry 5 spot

By Randi Altman

Visual effects for spots have become more and more sophisticated, and the recent Capcom trailer promoting the availability of its game Devil May Cry 5 is a perfect example.

The Mike Diva-directed Something Greater starts off like it might be a commercial for an antidepressant, with images of a woman cooking dinner for some guests, people working at a construction site, a bored guy trimming hedges… but suddenly each of our “Everyday Joes” turns into a warrior fighting baddies in a video game.

Josh Shadid

The hedge trimmer’s right arm turns into a futuristic weapon, the construction worker evokes a panther to fight a monster, and the lady cooking is seen with guns a blazin’ in both hands. When she runs out of ammo, and to the dismay of her dinner guests, her arms turn into giant saws. 

Lord Danger’s team worked closely with Capcom USA to create this over-the-top experience, and they provided everything from production to VFX to post, including sound and music.

We reached out to Lord Danger founder/EP Josh Shadid to learn more about their collaboration with Capcom, as well as their workflow.

How much direction did you get from Capcom? What was their brief to you?
Capcom’s director of brand marketing for fighting games, Charlene Ingram, came to us with a simple request — make a memorable TV commercial that did not use gameplay footage but still illustrated the intensity and epic scale of the DMC series.

What was it shot on and why?
We shot on both the ARRI Alexa Mini and the Phantom Flex 4K using Zeiss Super Speed MKII prime lenses, thanks to our friends at Antagonist Camera, and a Technodolly motion control crane arm. We used the Phantom on the Technodolly to capture the high-speed shots, speed ramping through character actions while maintaining 4K resolution for post in both the garden and kitchen transformations.

We used the Alexa Mini on the rest of the spot. It’s our preferred camera for most of our shoots because we love the combination of its size and image quality. The Technodolly allowed us to create frame-accurate, repeatable camera movements around the characters so we could seamlessly stitch together multiple shots as one. We also needed to cue the fight choreography to sync up with our camera positions.

You had a VFX supervisor on set. Can you give an example of how that was beneficial?
We did have a VFX supervisor on site for this production. Our usual VFX supervisor is one of our lead animators — having him on site to work with means we’re often starting elements in our post production workflow while we’re still shooting.

Assuming some of it was greenscreen?
We shot elements of the construction site and gardening scene on greenscreen. We used pop-ups to film these elements on set so we could mimic camera moves and lighting perfectly. We also took photogrammetry scans of our characters to help rebuild parts of their bodies during transition moments, and to emulate flying without requiring wire work — which would have been difficult to control outside during windy and rainy weather.

Can you talk about some of the more challenging VFX?
The shot of the gardener jumping into the air while the camera spins around him twice was particularly difficult. The camera starts on a 45-degree frontal, swings behind him and then returns to a 45-degree frontal once he’s in the air.

We had to digitally recreate the entire street, so we used the technocrane at the highest position possible to capture data from a slow pan across the neighborhood in order to rebuild the world. We also had to shoot this scene in several pieces and stitch it together. Since we didn’t use wire work to suspend the character, we also had to recreate the lower half of his body in 3D to achieve a natural-looking jump position. That, combined with the CG weapon elements, made for a challenging composite — but in the end, it turned out really dramatic (and pretty cool).

Were any of the assets provided by Capcom? All created from scratch?
We were provided with the character and weapons models from Capcom — but these were in-game assets, and if you’ve played the game you’ll see that the environments are often dark and moody, so the textures and shaders really didn’t apply to a real-world scenario.

Our character modeling team had to recreate and re-interpret what these characters and weapons would look like in the real world — and they had to nail it — because game culture wouldn’t forgive a poor interpretation of these iconic elements. So far the feedback has been pretty darn good.

In what ways did being the production company and the VFX house on the project help?
The separation of creative from production and post production is an outdated model. The time it takes to bring each team up to speed, to manage the communication of ideas between creatives and to ensure there is a cohesive vision from start to finish, increases both the costs and the time it takes to deliver a final project.

We shot and delivered all of Devil May Cry’s Something Greater in four weeks total, all in-house. We find that working as the production company and VFX house reduces the ratio of managers per creative significantly, putting more of the money into the final product.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Review: Yanobox Nodes 3 — plugins for Premiere, AE, FCPX, Motion

By Brady Betzel

Did you ever see a plugin preview and immediately think, “I need to have that”? Well, Nodes 3 by Yanobox is that plugin for me. Imagine if Video Copilot’s Element 3D and Red Giant’s Trapcode suite had a baby — you would probably end up with something like Nodes 3.

Nodes 3 is a macOS-only plugin for Adobe’s After Effects and Premiere Pro and Apple’s Final Cut Pro X and Motion. I know what you are thinking: Why isn’t this made for Windows? Good question, but I don’t think it will ever be ported over.

Final Cut Pro

What is it? Nodes 3 is a particle, text, .obj and point cloud replicator, as well as overall mind-blower. With just one click in their preset library you can create stunning fantasy user interfaces (FUIs), such as HUDs or the like. From Transformer-like HUDs to visual data representations interconnected with text and bar graphs, Nodes 3 needs to be seen to be believed. Ok, enough gloating and fluff, let’s get to the meat and potatoes.

A Closer Look
Nodes 3 features a new replicator, animation module and preset browser. The replicator allows you to not only create your HUD or data representation, but also replicate it onto other 2D and 3D primitive shapes (like circles or rectangles) and animate those replications individually or as a group. One thing I really love is the ability to randomize node and/or line values — Yanobox labels this “Probabilities.” You can immediately throw together multiple variations of your work with a few mouse clicks instead of lines of scripting.

As I mentioned earlier, Nodes 3 is essentially a mix of Element 3D and Trapcode — it’s part replicator/part particle generator, and it works easily with After Effects’ 3D cameras (obviously if you are working inside of After Effects) to affect rotations, scale and orientation. The result is a particle replication that feels organic and fresh instead of static and stale. The Auto-Animations offering allows you to quickly animate up to four parts of a structure you’ve built, with 40 parameter choices under each of the four slots. You can animate the clockwise rotation of an ellipse with a point on it, while also rotating the entire structure in toward the z-axis.

Replicator

The newly updated preset browser allows you to save a composition as a preset and open it from within any other compatible host. This allows you to make something with Nodes 3 inside of After Effects and then work with it inside of Final Cut Pro X. That can be super handy and help streamline VFX work. From importing an .obj file to real video, you can generate point clouds from unlimited objects and literally explode them into hundreds of interconnecting points and lines, all animated randomly. It’s amazing.

If you are seeing this and thinking about using Nodes for data representation, that is one of the more beautiful functions of this plugin. First, check out how to turn seemingly boring bar graphs into mesmerizing creations.

For me, Nodes really began to click when I learned that each node is defined by an index number. Because every node carries an index, you can indulge in some computer-science geekiness, like skipping even or odd rows and adding animated oscillations, for some really engrossing graph work.
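
Here is a tiny sketch of why that indexing matters: hiding odd rows and phasing a sine oscillation by index makes a wave ripple across a bar graph (my illustration of the idea, not Yanobox’s internals):

```python
import math

def node_height(index, base_value, time, skip_odd=True):
    """Height of one graph node at a given time; None means the node is hidden."""
    if skip_odd and index % 2 == 1:
        return None  # odd-indexed rows skipped
    # Phase the oscillation by index so the wave ripples across the graph.
    return base_value + 0.2 * math.sin(time + index * 0.5)

values = [1.0, 1.2, 0.8, 1.5, 1.1]  # the "boring" bar graph data
for i, v in enumerate(values):
    print(i, node_height(i, v, time=0.75))
```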

When I reviewed Nodes 2 back in 2014, what really gave me a “wow” moment was a demo showing a map of the United States along with text for each state and its capital. From there you could animate an After Effects 3D camera to produce a fly-over, but with this futuristic HUD/FUI.

Adobe Premiere

On a primal motion graphics level, this really changed and evolved my way of thinking. A United States graphic no longer had to be a plain map with animated dotted lines; it could be reimagined with sine-wave-based animations or even gently oscillating data points. Nodes 3 really can turn boring into mesmerizing quickly. The only limiting factor is your mind and some motion graphics design creativity.

To get a relatively quick look into the new replicator options inside of Nodes 3, go to FxFactory Plugins’ YouTube page for great tutorials and demos.

If you get even a tiny bit excited when seeing work from HUD masters like Jayse Hansen or plugins like Element 3D, run over to fxfactory.com and download their plugin app to use Yanobox Nodes 3. You can even get a fully working trial to just test out some of their amazing presets. And if you like what you see, you should definitely hand them $299 for the Nodes 3 plugin.

One slight negative for me: I’m not a huge fan of the FxFactory installer. Not because it messes anything up, but because I have to download a plugin loader just to get the plugin — a double download and potential bloat. Not that I see any slowdown on my system, but it would be nice if I could download Nodes 3 and nothing else. That’s small potatoes, though; Nodes 3 is really an interesting and unbridled way to visualize 2D and 3D data quickly.

Oh, and if you are curious, Yanobox has been used on big-name projects from The Avengers to Rise of the Planet of the Apes — HUDs, FUIs and GUIs have been created using Yanobox Nodes.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

VFX supervisor Christoph Schröer joins NYC’s Artjail

New York City-based VFX house Artjail has added Christoph Schröer as VFX supervisor. Previously a VFX supervisor/senior compositor at The Mill, Schröer brings over a decade of experience to his new role at Artjail. His work has been featured in spots for Mercedes-Benz, Visa, Volkswagen, Samsung, BMW, Hennessy and Cartier.

Combining his computer technology expertise and a passion for graffiti design, Schröer applied his degree in Computer and Media Sciences to begin his career in VFX. He started off working at visual effects studios in Germany and Switzerland where he collaborated with a variety of European auto clients. His credits from his tenure in the European market include lead compositor for multiple Mercedes-Benz spots, two global Volkswagen campaign launches and BMW’s “Rev Up Your Family.”

In 2016, Schröer made the move to New York to take on a role as senior compositor and VFX supervisor at The Mill. There, he teamed with directors such as Tarsem Singh and Derek Cianfrance, and worked on campaigns for Hennessy, Nissan Altima, Samsung, Cartier and Visa.

Roper Technologies set to acquire Foundry

Roper Technologies, a technology company and a constituent of the S&P 500, Fortune 1000 and Russell 1000 indices, is set to purchase Foundry. The deal is expected to close in April 2019, subject to regulatory approval and customary closing conditions. Foundry makes software tools used to create visual effects and 3D content for the media and entertainment world, including Nuke, Modo, Mari and Katana.

Craig Rodgerson

It’s a substantial move that enables Foundry to remain an independent company, with Roper assuming ownership from Hg. Roper has a successful history of acquiring well-run technology companies in niche markets that have strong, sustainable growth potential.

“We’re excited about the opportunities this partnership brings. Roper understands our strategy and chose to invest in us to help us realize our ambitious growth plans,” says Foundry CEO Craig Rodgerson. “This move will enable us to continue investing in what really matters to our customers: continued product improvement, R&D and technology innovation and partnerships with global leaders in the industry.”

Alkemy X: A VFX guide to pilot season

Pilot season is an important time for visual effects companies that work in television. Pilots offer an opportunity to establish the look of key aspects of a show and, if the show gets picked up, present the potential of a long-term gig. But pilots also offer unique challenges.

Time is always short and budgetary resources are often in even shorter supply, yet expectations may be sky high. Alkemy X, which operates visual effects studios in New York and Los Angeles, has experienced the trials as well as enjoyed the fruits of pilot season, delivering effects for shows that have gone on to successful runs, including Frequency, Time After Time, Do No Harm, The Leftovers, Flesh and Bone, Outcast, Mr. Robot, Deception and The Marvelous Mrs. Maisel.

Mark Miller

We recently reached out to Mark Miller, executive producer of business development at Alkemy X, to find out how his company overcomes the obstacles of time and budget to produce great effects for hopeful new shows.

How does visual effects production for pilots differ from a regular series?
The biggest difference between a series and a pilot is that with a pilot you are establishing the look of the show. You work in concert with the director to implement his or her vision and offer ideas on how to get there.

Typically, we work on pilots with serious needs for VFX to drive the stories. We are not often told how to get there; we simply listen to the producers, interpret their vision and do our best to give it to them on screen. The quality of the visuals we create is often the difference between a pick-up and a pass.

In the case of one show I was involved with, the time and budget available made it impossible to complete all the required visual effects. As a result, the VFX supervisor decided to put the time and money they had into the most important plot points in the script and use simple tests as placeholders for less important VFX. That sold the studio and the show went to series.

Had we attempted to complete the show in its entirety, it may not have seemed as viable to the studio. Again, that was a collaborative decision made by the director, studio, VFX supervisor and VFX company.

Mr. Robot

What should studios consider in selecting a visual effects provider for a pilot?
Often the deciding factors in choosing a VFX vendor are cost and location within an incentivized region. Usually the final arbiter is the VFX supervisor, occasionally with restrictions as to which company he or she may use. I find that good-quality VFX companies, shops with strong creative vision and the ability to deliver the shots with little pain, are often unable to meet a production’s budget, even if they are in a favorable region. That drives productions to smaller shops and results in less-polished shows.

Shots may not be delivered on time or may not have the desired creative impact. We are all aware that, even if a pilot you work on goes to series, there is no guarantee you will get the work. These days, many pilots employ feature directors and their crew. So, when one is picked up, it usually has a whole new crew.

The other issue with pilots is time. When the shoot runs longer than anticipated, it delays the director’s cut and VFX work can’t begin until that is done. Even a one-day delay in turnover can impact the quality of the visual effects. And it’s not a matter of throwing more artists at a shot. Many shots are not shareable among multiple artists so adding more artists won’t shorten the time to completion. Visual effects are like fine-art painting; one artist can’t create the sky while another works on the background. Under the best circumstances, it is hard to deliver polished work for pilots and such delays add to the problem. With pilots, our biggest enemy is time.

The Leftovers

How do you handle staffing and workflow issues in managing short-term projects like pilots?
You need to be very smart and nimble. A big issue for New York-based studios is infrastructure. Many buildings lack the electrical capacity for high power demands, the high-speed connectivity and even the physical space that visual effects studios require.

New York studios therefore have to be as efficient as possible with streamlined pipelines built to push work through. We are addressing this issue by increasingly relying on cloud solutions for software and infrastructure. It helps us maximize flexibility.

Staffing is also an ongoing issue. Qualified artists are in short supply. More and more, we look to schools whose programs are designed by VFX supervisors, artists and producers for junior artists with the skills to hit the ground running.

Disney Channel’s Fast Layne director Hasraf ‘HaZ’ Dulull

By Randi Altman

London-based Hasraf “HaZ” Dulull is a man with a rich industry background. He started out in this business as a visual effects artist (The Dark Knight, Hellboy 2) and VFX supervisor (America: The Story of the US), and has expanded his resume in recent years to include producer, screenwriter and feature film director of his own projects (The Beyond, 2036 Origin Unknown).

HaZ (left) on set directing Disney’s Fast Layne.

Even more recently, he added television series director to that long list, thanks to his work on Disney Channel’s action-comedy miniseries Fast Layne, where he directed Episodes 1, 2, 7 and 8. He is currently developing a slate of feature and TV projects with his next film being a sci-fi/horror offering called Lunar, which is scheduled to start shooting later in the year.

Fast Layne focuses on a very bright 12-year-old girl named Layne and her eccentric neighbor, who find V.I.N., a self-driving, talking car, in an abandoned shed. The car, the girls and a classmate with experience fixing cars embark on high-speed adventures while trying to figure out why V.I.N. was created, all the while tangling with bad guys and secret agents. You can watch Fast Layne on Sundays at 7:00pm ET/PT on Disney Channel.

We reached out to Dulull to find out more about establishing the look of the show, as well as his process, and how he uses his post background to inform his directing.

As the pilot director, what was your process in establishing the look for the show?
My process was very similar to how I work on my feature films, since my filmmaking style is very visually driven and hands-on. As a director, I usually do a lot of look development on my end anyway, which for Fast Layne involved creating style frames in Photoshop with direction notes and ideas. These eventually became a look bible for the show.

I worked closely with the Disney Channel’s development team and the showrunners Matt Dearborn, Tom Burkhard and Travis Braun (the creator of the show). We would discuss and further develop the ideas from the early style frames I had created, along with a set of rules for the color palette, the graphics and even the framing style for the key sequences.

By the end of the process, we firmly set the tone and mood of the show as having a saturated and punchy look, while feeling slick and cinematic with a lot of energy. Since we were shooting in Vancouver during the time of year that it gets overcast/grey very quickly, we made sure the art department had many colorful objects in the environment/sets to help — including the cast’s wardrobes.

How did you work with the DP and colorist? Who did the color, and do you know the tools they used?
We had a great DP — Neil Cervin and his team of camera ninjas! They are super-fast and so collaborative in pushing the shots further.

During the prep stage, I worked closely with Neil on the look of the show, and he was really into what we wanted to do, something punchy, so he made sure we retained that throughout.

Our A camera was always the ARRI Alexa during the pilot shoot. We had a DIT, Jay Rego, who would quickly apply looks on the frames we had shot using DaVinci Resolve. During this on-set color process, we would see how far we could push it with the grade and what additional lighting we would need to achieve the look we were after. This really helped us nail the look very quickly and get it approved by the showrunners and the Disney Channel team on set before we continued shooting.

We then saved those looks as DPX frames along with CDLs (color decision lists) and sent them over to colorist Lionel Barton at Vancouver’s Omnifilm Entertainment to work from in Blackmagic Resolve. This saved time in the grading process since the look was set early during the shoot. Lionel and his team at Omnifilm took the look we had established and pushed it further with each shot across all the episodes.
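
A CDL is a deliberately tiny, portable grade: per-channel slope, offset and power values that any conforming system can reapply identically, which is what makes this DIT-to-colorist handoff work. A minimal Python sketch, assuming the standard ASC CDL transfer function:

```python
# ASC CDL per-channel transfer: out = clamp(in * slope + offset) ** power,
# with all values normalized to the 0..1 range.
def apply_cdl(rgb, slope, offset, power):
    out = []
    for value, s, o, p in zip(rgb, slope, offset, power):
        v = min(1.0, max(0.0, value * s + o))  # slope, offset, then clamp
        out.append(v ** p)                     # per-channel power (gamma)
    return tuple(out)

# A warm, slightly lifted look applied to a mid-gray pixel.
pixel = (0.42, 0.42, 0.42)
print(apply_cdl(pixel,
                slope=(1.10, 1.00, 0.92),
                offset=(0.02, 0.01, 0.00),
                power=(0.95, 1.00, 1.05)))
```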

Colorist Lionel Barton during grading session.

Can you talk about the car sequences? They are fun!
On the first days of prepping the show, I cut a mood reel of car chase action scenes, making clear that I love well-designed car chases and that we need to give the kids that cinematic experience they get in movies. Plus, Travis came from a NASCAR racing family, so he backed this up.

We designed the car action scenes to be fun and energetic with cool camera angles — not violent and frenetic (like the Bourne films). We were not doing crazy camera shake and motion blur action scenes; this is slick and cool action — we want the kids to experience those key action moments and go “wow.”

You are known for directing your own feature films. What was it like to direct your first TV series for a studio as big as Disney Channel?
Firstly, I’m incredibly grateful for Disney Channel giving me the opportunity to be on this journey. I have to thank Rafael Garcia at Disney Channel, who lobbied hard for me early in the process.

The first thing I quickly picked up and made sure stayed in my mind is that feature film is a director’s medium, whereas TV is a writer’s medium. So with that in mind, I ensured I collaborated very closely with Matt, Tom, and Travis on everything. Those guys were such a bundle of joy to work with. They were continually pushing the show with additional writing, and they supported me and the other directors (Joe Menendez, Rachel Leiterman) on our episodes throughout, making sure we hit those essential comedy and drama moments they wanted for the show. In fact, I would be in the same car as Matt (some days with Tom) to the shoot location every morning and back to our hotel every evening, going through things on the script, the shoot, etc. — this was a very tight collaboration, and I loved it.

The big difference between the feature films I had done and this TV series is the sheer amount of people involved from an executive and creative level. We had the writing team/execs/showrunners, then we had the executives at the Disney Channel, and we also had the team from the production company Omnifilm.

Therefore, we all had to be in sync with the vision and decisions taken. So once a decision was made, it was tough to go back and retract, so that ensured we were all making the right decisions throughout. I have to say the Fast Layne team were all very collaborative and respectful to each other, which made the “network studio” experience a very pleasant and creative one.

You are also credited as creative consultant on all the episodes. What did that entail?
I fell into that role almost automatically after shooting my first block (Episodes 1 and 2). I think it’s due to my filmmaking nature — being so hands-on technically and creatively and having that know-how from my previous projects on creating high-concept content (which usually involves a lot of visual effects) on a tight budget and schedule.

I had also done a lot of work in advance regarding how we would shoot stuff fast to allow things to be taken further in VFX. The network wanted to have someone that knew the show intimately to oversee that during the post production stage. So once production wrapped, I flew back home to London and continued working on the show by reviewing dailies, cuts and VFX shots and providing notes and creative solutions and being on conference calls with Disney and Omnifilm.

What tools were used for review and approval?
I used Evernote to keep all my notes neat and organized, and we would use Aspera for transferring files securely while Pix was the primary platform for reviewing cuts and shots.

Most of the time I would provide my notes visually rather than writing long emails, so a screen grab of the shot and then lots of arrows and annotations. I was in this role (while doing other stuff) right up to the end of the show’s post, so at the time of answering these questions I just signed off on the last episode grade (Episode 8) last week. I am now officially off the show.

You mostly shot on Alexa. Can you talk about what else you used during production?
Yes, we shot on Alexa with a variety of lenses at 3K to allow us to pan and scan later for the HD deliverable. We also used GoPros and DJI Osmos (4K) for V.I.N.’s POV, and some DJI drone shots too.

The biggest camera tech toy we had on the show was the Russian Arm! (It didn’t help that I kept quoting Michael Bay during the prep of the car chase scenes.) Somehow the production team managed to get us a Russian Arm for the day, and what we achieved with it was phenomenal.

We got so much bang for our buck. The team operating it, along with the stunt driving team, worked on films like Deadpool 2, so there was a moment during second unit when we almost forgot this was a kids’ show because it had the energy of an action feature film.

Russian Arm

Stylistically, we always kept the camera moving, even during drama scenes — a slow move helped give perspective and depth. All the camera moves had to be slick; there was no handheld-style in this show.

For earlier scenes in Episode 1 with Layne, we used the idea of a single camera move/take, which was choreographed slickly and timed with precision. This was to reflect the perfect nature of Layne’s character being super-organized, like a planner. Most of these camera moves were achieved simply with a dolly/track and slider. Later in the show, as Layne’s character breaks out of her comfort zone of being safe and organized, she begins to be more spontaneous, so the camera language reflects that too, with looser shots and whip pans.

You are a post/VFX guy at heart, how did that affect the way you directed Fast Layne?
Oh yes, it had a massive influence on the way I directed my episodes, but only from a technical side of things, not creatively in the way I worked with the actors.

With my VFX background, I had the instinct to be sensible about things, such as how to frame shots to make VFX life smoother, where to stage my actors to avoid them crossing over tracking markers (to save money on paint-outs) and, of course, to use minimal green/blue screen for the car scenes.

I knew the spill coming from the greenscreens would be a nightmare in VFX, so to avoid that as much as I could, we shot driving plates and then used a lot of rear/side projections playing them back.

Previs

The decision to go that route was partly based on my experience as a compositor back in the day, crying in the late hours de-spilling greenscreen on reflection and dealing with horrible hair mattes. The only time we shot greenscreen was for scenes where the camera was moving around areas we didn’t have screen projection space for. We did shoot car greenscreen for some generic interior plates to allow us to do things later in post if we needed to create an insert shot with a new background.

Did you use previs?
As you know from our conversations about my previous projects, I love previs and find that previs can save so much money later on in production if used right.

So the car chase sequences, along with a big action scene in the series finale, had to be prevised, mainly because we had to end big but only had limited time to shoot. The previs was also instrumental in getting the first VFX budgets in for the sequences and helping the 1st AD create the schedule.

Vancouver’s Atmosphere VFX was kind enough to let me come in and work closely with one of the previs artists to map out these key scenes in 3D, while I also did some previs myself using the assets they generated for me. The previs also dictated what lens we needed and how much real estate we needed on the location.

Being a former VFX supervisor certainly helped when communicating with the show’s on-set VFX supervisors Andrew Karr and Greg Behrens. We had a shorthand with each other, which sped things up massively on set with decisions made quickly regarding shooting plates to work with VFX later.

Before and After

On set I would show the actors, via mockups and previs on my iPad, what was going to happen, why I wanted them to be staged in a certain way, and why they should look at this reference, etc. So I think that gave the actors (both the kids and adults) confidence in the scenes that involved VFX.

My personal approach to VFX is that it’s part of the arsenal of tools required to tell the story and, if possible, it’s best used in combination with the other crafts as opposed to relying on it solely to achieve things.

Atmosphere created the visual effects?
Yes. I have been a fan of their work from the first season of The Expanse. They were the only main VFX house on the show and handled the CG V.I.N. shots, steering wheel transformation, and V.I.N.’s front grill, as well as other shots involving digital cloth, a robotic arm and a helicopter that appears in later episodes.

We also had a team of internal VFX artists (Mike Jackson and Richard Mintak) working for Omnifilm who were on throughout the post schedule. They handled the smaller VFX, compositing and graphics type shots, such as the windshield graphics, V.I.N.’s internal visual screen and other screen graphics as well as Layne’s Alonzo watch graphics.

How many VFX in total?
There were 1,197 VFX shots delivered, with Atmosphere VFX providing the main bulk of around 600, while the rest were graphics VFX shots done by our internal VFX team at Omnifilm.

Most of the visual effects involving CGI in the show involved V.I.N. doing cool things and his front grill communicating his emotion.

During my pitch for getting the job, I referenced my film 2036 Origin Unknown as an example of visual communication I had explored when it came to AI and characters.

From that we explored further and knew we wanted something with personality, but not with a face. We were very clear at the start that this was not going to be cartoony or gimmicky; it had to feel technologically cool, yet fresh and unique. We didn’t want the typical LED screen displaying graphics or emoji. Instead, we went for something resembling a pushpin cushion to give it a little organic touch — showing that this was advanced tech, yet one that used simple arrangements of pins moving in and out to create the shape of the eyes and communicate emotion.

It was important that we went with a visual approach that was simple for our core audience to read, so V.I.N. could come across as a personality with comedy beats. I remember being in my hotel room, drawing emotive sketches on paper to see how simple we could get V.I.N. to be and then emailing them to the writers for their thoughts.

Atmosphere spent some time on R&D in Maya and Python scripting to create a system that could feed off the sound files to help generate the animation of the pins. The passes were rendered out of Maya and V-Ray and then composited, with the final look established in Foundry Nuke.
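
Atmosphere hasn’t published that tool, but the core idea — reducing a dialogue track to one amplitude per frame and keying pin extension from it — is straightforward to sketch. A standalone Python version under that assumption (16-bit mono audio; the file name is hypothetical):

```python
import struct
import wave

def per_frame_amplitude(path, fps=24):
    """Peak amplitude (0..1) of a 16-bit mono .wav, one value per film frame."""
    w = wave.open(path, "rb")
    rate, nframes = w.getframerate(), w.getnframes()
    raw = w.readframes(nframes)
    w.close()
    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
    step = rate // fps  # audio samples per film frame
    amps = []
    for i in range(0, len(samples), step):
        chunk = samples[i:i + step]
        amps.append(max(abs(s) for s in chunk) / 32768.0)
    return amps

# Each value could then drive a pin's extension attribute, e.g. keyed in Maya
# with cmds.setKeyframe(pin, attribute="translateZ", time=frame, value=amp).
for frame, amp in enumerate(per_frame_amplitude("vin_line.wav")):
    print(frame, round(amp, 3))
```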

To ensure we didn’t end up with a show where every shot needed VFX, V.I.N.’s emotive visuals on the front grill pop on and off only when required. That meant that during the car chase sequences, V.I.N.’s face would only pop up when needed (like when it was angry at being chased, or to show its competitive face during a race). Having this rule in place allowed us to stick to our budget and schedule as closely as possible without extreme overages (which tend to happen after editorial).

For the scenes that involved a CGI V.I.N., we shot the live-action plates with a special buggy developed exclusively for the show. This allowed our stunt driver to do cool car maneuvers and tricks, while also providing a body frame with lots of space for rigging cameras to capture the HDRI of the environment. It also had tracking markers across it to allow for full object tracking. (See the before-and-after image of the buggy and the CGI V.I.N.)

The other big bulk of the VFX was all the UI/heads-up display graphics on V.I.N.’s windshield, which was how the car’s system displayed information. During Transformed mode, the windshield became a navigation system to help support Layne. It couldn’t be too crazy, since we were dealing with pop-up windows overlaid so we could still see the driving action outside.

Most of those graphics were done by our internal team at Omnifilm, by graphic designers and compositors using Adobe After Effects, with render passes, such as wireframes of V.I.N., provided by Atmosphere. We wanted to show that the car was technologically cool without using any tech speak in the script, so we researched what automated cars are doing now and what developments are coming, and depicted that in the show.

Before and After

Can you provide an example?
In Episode 1, when the windshield presents a trajectory for the jump across the construction bridge, a wireframe of the bridge based on V.I.N.’s LIDAR scan capabilities is shown as a safe jump option. Another example came during the first big motorway chase sequence: V.I.N. recognized the bad guys chasing them in the SUV, so we featured facial-recognition tracking to show how V.I.N. could scan their vitals and read them as hostile.

We used this same grounded-tech approach to create the POV of the car, using the graphics style we had created for the windshield, to show what V.I.N. was seeing and thinking and that it was essentially a sentient being. This also helped, editorially, to mix things up visually during the drama scenes inside the car.

The show was shot in Vancouver, what was that like?
I love Vancouver!! There is such a buzz in that city, and that’s because you can feel the filmmaking vibe every day; there were something like 30 other shows happening at the same time we were shooting Fast Layne! I can’t wait to go back and shoot there again.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Posting director Darren Lynn Bousman’s horror film, St. Agatha

Atlanta’s Moonshine Post helped create a total post production pipeline — from dailies to finishing — for the film St. Agatha, directed by Darren Lynn Bousman (Saw II, Saw III, Saw IV, Repo! The Genetic Opera).

The project, from producers Seth and Sara Michaels, was co-edited by Moonshine’s Gerhardt Slawitschka and Patrick Perry and colored by Moonshine’s John Peterson.

St. Agatha is a horror film that shot in the town of Madison, Georgia. “The house we needed for the convent was perfect, as the area was one of the few places that had not burned down during the Civil War,” explains Seth Michaels. “It was our first time shooting in Atlanta, and the number one reason was because of the tax incentive. But we also knew Georgia had an infrastructure that could handle our production.”

What the producers didn’t know during production was that Moonshine Post could handle all aspects of post; the company had initially been brought in only for dailies. With the opportunity to do a producer’s cut, they returned to Moonshine Post.

Time and budget dictated everything, and Moonshine Post was able to offer two editors working in tandem to edit a final cut. “Why not cut in collaboration?” suggested Drew Sawyer, founder of Moonshine Post and executive producer. “It will cut the time in half, and you can explore different ideas faster.”

“We quite literally split the movie in half,” reports Perry, who, along with Slawitschka, cut on Adobe Premiere. “It’s a 90-minute film, and there was a clear break. It’s a little unusual, I will admit, but almost always when we are working on something, we don’t have a lot of time, so splitting it in half works.”

Patrick Perry

Gerhardt Slawitschka

“Since it was a producer’s cut, when it came to us it was in Premiere, and it didn’t make sense to switch over to Avid,” adds Slawitschka. “Patrick and I can use both interchangeably, but prefer Premiere; it offers a lot of flexibility.”

“The editors, Patrick and Gerhardt, were great,” says Sara Michaels. “They watched every single second of footage we had, so when we recut the movie, they knew exactly what we had and how to use it.”

“We have the same sensibilities,” explains Gerhardt. “On long-form projects we take a feature in tandem, maybe split it in half or in reels. Or, on a TV series, we each take a few episodes, compare notes and arrive at a ‘group mind,’ which is our shared language for how a project is working. On St. Agatha, Patrick and I took a bit of a risk and generated a four-page document of proposed thoughts and changes. Some very macro, some very micro.”

Colorist John Peterson, a partner at Moonshine Post, worked closely with the director on final color using Blackmagic’s Resolve. “From day one, the first looks we got from camera raw were beautiful.” Typically, projects shot in Atlanta ship back to a post house in a bigger city, “and maybe you see it and maybe you don’t. This one became a local win: we processed dailies, and it came back to us for a chance to finish it here,” he says.

Peterson liked working directly with the director on this film. “I enjoyed having him in session because he’s an artist. He knew what he was looking for. On the flashbacks, we played with a variety of looks to define which one we liked. We added a certain amount of film grain, and stylistically, for some scenes, we used heavy vignetting and heavy keys with isolation windows. Darren is a director, but he also knows the terminology, which gave me the opportunity to take his words and put them on the screen for him. At the end of the week, we had a successful film.”

John Peterson

The recent expansion of Moonshine Post, which included a partnership with audio company Bare Knuckles Creative and visual effects company Crafty Apes, “was necessary so we could take on the kind of movies and series we wanted to work with,” explains Sawyer. “But we were very careful about what we took and how we expanded.”

They recently secured two AMC series, along with projects from Netflix. “We are not trying to do all the post in town, but we want to foster and grow the post production scene here so that we can continue to win people’s trust and solidify the Atlanta market,” he says.

Uncork’d Entertainment’s St. Agatha was in theaters and became available on-demand starting February 8. Look for it on iTunes, Amazon, Google Play, Vudu, Fandango Now, Xbox, Dish Network and local cable providers.

Behind the Title: ATK PLN Technical Supervisor Jon Speer

NAME: Jon Speer

COMPANY: ATK PLN (@atkpln_studio) in Dallas

CAN YOU DESCRIBE YOUR COMPANY?
We are a strategic creative group that specializes in design and animation for commercials and short-form video productions.

WHAT’S YOUR JOB TITLE?
Technical Supervisor

WHAT DOES THAT ENTAIL?
In general, a technical supervisor is responsible for leading the technical director team and making sure that the pipeline enables our artists’ effort of fulfilling the client’s vision.

Day-to-day responsibilities include:
– Reviewing upcoming jobs and making sure we have the necessary hardware resources to complete them
– Working with our producers and VFX supervisors to bid and plan future work
– Working with our CG/VFX supervisors to develop and implement new technologies that make our pipeline more efficient
– When problems arise in production, I am there to determine the cause, find a solution and help implement the fix
– Developing junior technical directors so they can be effective in mitigating pipeline issues that crop up during production

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I would say the most surprising thing that falls under the title is the amount of people and personality management that you need to employ.

As a technical supervisor, you have to represent every single person’s different perspectives and goals. Making everyone from artists, producers, management and, most importantly, clients happy is a tough balancing act. That balancing act needs to be constantly evaluated to make sure you have both the short-term and long-term interests of the company, clients and artists in mind.

WHAT TOOLS DO YOU USE?
Maya, Houdini and Nuke are the main tools we support for shot production. We also integrate with our own internal tracking software.

From text editors for coding, to content creation programs and even budgeting programs, I typically use it all.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Starting the next project. Each new project offers the chance for us to try out a new or revamped pipeline tool that we hope will make things that much better for our team. I love efficiencies, so getting to try new tools, whether they are internally or externally developed, is always fun.

WHAT’S YOUR LEAST FAVORITE?
I know it sounds cliché, but I don’t really have one. My entire job is based on figuring out why things don’t work or how they could work better. So when things are breaking or getting technically difficult, that is why I am here. If I had to pick one thing, I suppose it would be looking at spreadsheets of any kind.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Early morning when no one else is in. This is the time of day that I get to see what new tools are out there and try them. This is when I get to come up with the crazy ideas and plans for what we do next from a pipeline standpoint. Most of the rest of my day usually includes dealing with issues that crop up during production, or being in meetings.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I think I would have to give teaching a try. Having studied architecture in school, I always thought it would be fun to teach architectural history.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We just wrapped on a set of Lego spots for the new Lego 2 movie.

Fallout 76

We also did an E3 piece for Fallout 76 this year that was a lot of fun. We are currently helping out with a spot for the big game this year that has been a blast.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
I think I am most proud of our Lego spots we have created over the last three years. We have really experimented with pipeline on those spots. We saw a new technology out there — rendering in Octane — and decided to jump in head first. While it wasn’t the easiest thing to do, we forced ourselves to become even more efficient in all aspects of production.

NAME PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Houdini really makes the difficult things simple to do. I also love Nuke. It does what it does so well, and is amazingly fast and simple to program in.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Mainly I’ll listen to soundtracks when I am working; the lack of words is best when I am programming.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Golf is something I really enjoy on the weekends. However, like a lot of people, I find travel is easily the best way for me to hit the reset button.

Cinesite recreates Nottingham for Lionsgate’s Robin Hood

The city of Nottingham perpetually exists in two states: the metropolitan center that it is today, and the fictional home of one of the world’s most famous outlaws. So when the filmmakers behind Robin Hood, which is now streaming and on DVD, looked to recreate the fictional Nottingham, they needed to build it from scratch with help from London’s Cinesite Studio. The film stars Taron Egerton, Jamie Foxx, Ben Mendelsohn, Eve Hewson, and Jamie Dornan.

Working closely with Robin Hood’s VFX supervisor Simon Stanley-Clamp and director Otto Bathurst, Cinesite created a handful of settings and backgrounds for the film, starting with a digital model of Nottingham built to scale. Given its modern look and feel, the Nottingham of today wouldn’t do, so the team used Dubrovnik, Croatia, as its template. The Croatian city — best known to TV fans around the world as the model for Game of Thrones’ King’s Landing — has become a popular spot for filming historical fiction, thanks to its famed stone walls and medieval structures. That made it an ideal starting point for a film set around the time of the Crusades.

“Robin’s Nottingham is a teeming industrial city dominated by global influences, politics and religion. It’s also full of posh grandeur but populated by soot-choked mines and sprawling slums reflecting the gap between haves and have-nots, and we needed to establish that at a glance for audiences,” says Cinesite’s head of assets, Tim Potter. “With so many buildings making up the city, the Substance Suite allowed us to achieve the many variations and looks that were required for the large city of Nottingham in a very quick and easy manner.”

Using Autodesk Maya for the builds and Pixologic ZBrush for sculpting and displacement, the VFX team then relied on Allegorithmic Substance Designer (recently acquired by Adobe) to customize the city, creating detailed materials that would give life and personality to the stone and wood structures. From the slums inspired by Brazilian favelas to the gentry and nobility’s grandiose environments, the texturing and materials helped to provide audiences with unspoken clues about the outlaw archer’s world.

Creating these swings from the oppressors to the oppressed was often a matter of dirt, dust and grime, which were added to the RGB channels over the textures to add wear and tear to the city. Once the models and layouts were finalized, Cinesite then added even more intricate details using Substance Painter, giving an already realistic recreation additional touches to reflect the sometimes messy lives of the people that would inhabit a city like Nottingham.

At its peak, Cinesite had around 145 artists working on the project, including around 10 artists focusing on texturing and look development. The team spent six months alone creating the reimagined Nottingham, with another three months spent on additional scenes. Although the city of Dubrovnik informed many of the design choices, one of the pieces that had to be created from scratch was a massive cathedral, a focal point of the story. To fit with the film’s themes, Cinesite took inspiration from several real churches around the world to create something original, with a brutalist feel.

Using models and digital texturing, the team also created Robin’s childhood home of Loxley Manor, which was loosely based on a real structure in Završje, Croatia. There were two versions of the manor: one meant to convey the Loxley family in better times, and another seen after years of neglect and damage. Cinesite also helped to create one of the film’s most integral and complex moments, which saw Robin engage in a wagon chase through Nottingham. The scene was far too dangerous to use real animals in most shots, requiring Cinesite to dip back into its toolbox to create the texturing and look of the horse and its groom, along with the rigging and CFX.

“To create the world that the filmmakers wanted, we started by going through the process of understanding the story. From there we saw what the production had filmed and where the action needed to take place within the city, then we went about creating something unique,” Potter says. “The scale was massive, but the end result is a realistic world that will feel somewhat familiar, and yet still offer plenty of surprises.”

Robin Hood was released on home media on February 19.

Behind the Title: Carousel’s Head of VFX/CD Jeff Spangler

This creative has been an artist for as long as he can remember. “I’ve always loved the process of creation and can’t imagine any career where I’m not making something,” he says.

Name: Jeff Spangler

Company: NYC’s Carousel

Can you describe your company?
Carousel is a “creative collective” that was a response to this rapidly changing industry we all know and love. Our offerings range from agency creative services to editorial, design, animation (including both motion design and CGI), retouching, color correction, compositing, music licensing, content creation, and pretty much everything that falls between.

We have created a flexible workflow that covers everything from concept to execution (and delivery), while also allowing for clients whose needs are less all-encompassing to step on or off at any point in the process. That’s just one of the reasons we called ourselves Carousel — our clients have the freedom to climb on board for as much of the ride as they desire. And with the different disciplines all living under the same roof, we find that a lot of the inefficiencies and miscommunications that can get in the way of achieving the best possible result are eliminated.

What’s your job title?
Head of VFX/Creative Director

What does that entail?
That’s a really good question. There is the industry standard definition of that title as it applies to most companies. But it’s quite different if you are talking about a collective that combines creative with post production, animation and design. So for me, the dual role of CD and head of VFX works in a couple of ways. Where we have the opportunity to work with agencies, I am able to bring my experience and talents as a VFX lead to bear, communicating with the agency creatives and ensuring that the different Carousel artists involved are all able to collaborate and communicate effectively to get the work done.

Alternatively, when we work direct-to-client, I get involved much earlier in the process, collaborating with the Carousel creative directors to conceptualize and pitch new ideas, design brand elements, visualize concept art, storyboard and write copy, or even work with strategists to help hone the direction and target of a campaign.

That’s the true strength of Carousel — getting creatives from different backgrounds involved early on in the process where their experience and talent can make a much bigger impact in the long run. Most importantly, my role is not about dictating direction as much as it is about guiding and allowing for people’s talents to shine. You have to give artists the room to flourish if you really want to serve your clients and are serious about getting them something more than what they expected.

What would surprise people the most about what falls under that title?
I think that there is this misconception that it’s one creative sitting in a room who comes up with the “Big Idea” and he or she just dictates that idea to everyone. My experience is that any good idea started out as a lot of different ideas that were merged, pruned, refined and polished until they began to resemble something truly great.

Then after 24 hours, you look at that idea again and tear it apart because all of the flaws have started to show and you realize it still needs to be pummeled into shape. That process is generally a collaboration within a group of talented people who all look at the world very differently.

What tools do you use?
Anything that I can get my hands on (and my brain wrapped around). My foundation is as a traditional artist and animator and I find that those core skills are really the strength behind what I do everyday. I started out after college as a broadcast designer and later transitioned into a Flame artist where I spent many years working as a beauty retouch artist and motion designer.

These days, I primarily use Adobe Creative Suite, as my role has become more creative in nature. I use Photoshop for digital painting and concept art, Illustrator for design and InDesign for layouts and decks. I also have a lot of experience in After Effects and Autodesk Maya and will use those tools for any animation or CGI that requires me to be hands-on, even if just to communicate the initial concept or design.

What’s your favorite part of the job?
Coming up with new ideas at the very start. At that point, the gloves are off and everything is possible.

What’s your least favorite?
Navigating politics within the industry that can sometimes get in the way of people doing their best work.

What is your favorite time of the day?
I’m definitely more of a night person. But if I had to choose a favorite time of day, it would be early morning — before everything has really started and there’s still a ton of anticipation and potential.

If you didn’t have this job, what would you be doing instead?
Working as a full-time concept artist. Or a logo designer. While I frequently have the opportunity to do both of those things in my role at Carousel, they are, for me, the most rewarding expression of being creative.

A&E’s Scraps

How early on did you know this would be your path?
I’ve been an artist for as long as I can remember and never really had any desire (or ability) to set it aside. I’ve always loved the process of creation and can’t imagine any career where I’m not “making” something.

Can you name some recent projects you have worked on?
We are wrapping up Season 2 of an A&E food show titled Scraps that has allowed us to flex our animation muscles. We’ve also been doing some in-store work with Victoria’s Secret for some of their flagship stores that has been amazing in terms of collaboration and results.

What is the project that you are most proud of?
It’s always hard to pick a favorite and my answer would probably change if you asked me more than once. But I recently had the opportunity to work with an up-and-coming eSports company to develop their logo. Collaborating with their CD, we landed on a design and aesthetic that makes me smile every time I see it out there. The client has taken that initial work and continues to surprise me with the way they use it across print, social media, swag, etc. Seeing their ability to be creative and flexible with what I designed is just validation that I did a good job. That makes me proud.

Name pieces of technology you can’t live without.
My iPad Pro. It’s my portable sketch tablet and presentation device that also makes for a damn good movie player during long commutes.

What do you do to de-stress from it all?
Muay Thai. Don’t get me wrong. I’m no serious martial artist and have never had the time to dedicate myself properly. But working out by punching and kicking a heavy bag can be very cathartic.

Method Studios adds Bill Tlusty as global head of production

Method Studios has brought veteran production executive and features VFX producer Bill Tlusty on board in the new role of global head of production. Reporting to Erika Burton, EVP of global features VFX, Tlusty will oversee Method’s global feature film and episodics production operation, leading teams worldwide.

Tlusty’s career as both a VFX producer and executive spans two decades. Most recently, as an executive with Universal Pictures, he managed more than 30 features, including First Man and The Huntsman: Winter’s War. His new role marks a return to Method Studios, as he served as head of studio in Vancouver prior to his gig at Universal. Tlusty also spent eight years as a VFX producer and executive producer at Rhythm & Hues.

In that capacity he was lead executive on Snow White and the Huntsman and the VFX Oscar-winning Life of Pi. His other VFX producer credits include Night at the Museum: Battle of the Smithsonian, The Mummy: Tomb of the Dragon Emperor and Yogi Bear, and he served as production manager on Hulk and Peter Pan and coordinator on A.I. Artificial Intelligence. Early in his career, Tlusty worked as a production assistant at American Zoetrope, working for its iconic filmmaker founders, Francis Ford Coppola and George Lucas. His VFX career began at Industrial Light & Magic, where he worked in several capacities on the Star Wars prequel trilogy, first as a VFX coordinator and later as production manager on the series. He is a member of the Producers Guild of America.

“Method has pursued intelligent growth, leveraging the strength across all of its studios, gaining presence in key regions and building on that to deliver high-quality work on a massive scale,” says Tlusty. “Coming from the client side, I understand how important it is to have the flexibility to grow as needed for projects.”

Tlusty is based in Los Angeles and will travel extensively among Method’s global studios.

Mortal Engines: Weta creates hell on wheels

By Karen Moltenbrey

Over the years, Weta Digital has made a name for itself, creating vast imaginative worlds for highly acclaimed feature film franchises such as The Lord of the Rings and The Hobbit. However, for the recently released Mortal Engines, not only did the studio have to construct wide swaths of land the size of countries, but the crew also had to build supercities that move at head-spinning speed.

Mortal Engines, produced by Universal Pictures and MRC, takes place centuries after a cataclysmic event known as the Sixty Minute War destroys civilization as we know it, leaving behind few resources. Eventually, survivors learn to adapt, and a deadly, mobile society emerges whereby gigantic moving cities roam the earth, preying on smaller towns they hunt down across a landscape called the Great Hunting Ground, basically the size of Europe. It is now a period of pre-revival, as the earth begins to renew itself, and the survivors become nomads on wheels.

Eventually, London, a traction city, emerges at the top of this vicious food chain, consuming resources from other cities and towns it devours, including fuel, food and human labor. It’s a dog-eat-dog world. But there are those who want to end this vicious cycle; they are members of the Anti-Traction League, who advocate for static, self-sustaining homelands.

Based on a book by Philip Reeve, the film is directed by Oscar-winning visual effects artist Christian Rivers (King Kong). Simon Raby (Elysium, District 9) served as cinematographer, while Weta created the visual effects, led by Ken McGaugh, Kevin Andrew Smith and Luke Millar, with Dennis Yoo as animation supervisor.

Ken McGaugh

A New World Order
In all, Weta delivered 1,682 VFX shots for the feature film, most of which pertained to the environments.

How did this work compare to some of Weta’s other world builds? “I can’t speak to The Hobbit because I didn’t work on that. But on The Lord of the Rings, New Zealand’s landscape was used for Middle-earth, so there was a lot of location work, and most of the world building was in camera,” says McGaugh. “On Mortal Engines, because earth has been destroyed and manipulated by these giant cities moving over it, there’s nothing left that resembles the earth that we know. So, there was no location for us to shoot; we had to build it from scratch.”

How does one go about building such a world — and then setting it in motion? “In a book there is a lot of metaphor, but film has to be fully literal,” says Rivers. Fortunately, he and the crew had the vast experience as well as the technological genius to get it done.

Such a goal, however, required new rules and workflows, even for a veteran studio like Weta, which has a history of breaking new ground, especially when it comes to animated characters and amazing landscapes. Here, those diverse elements would converge like never before.

“We have quite a bit of experience doing computer-generated vehicles as well as digital environments, but most of our workflows assume that an environment is not a vehicle, that it doesn’t move. Trying to bridge that gap was a challenge; when we first started, we had nothing that would allow us to do it,” says McGaugh. “So, we had to invent some new workflows and technology internally to bridge that gap, so the animators could animate the city as if it were a vehicle, but we could build the city and dress it as if it were an environment.”

The Land
The environments in Mortal Engines are CG — built and animated using Autodesk’s Maya and composited in Foundry’s Nuke — with practical set pieces used for filming embedded into them.

In addition to the unique cities, there are some large tracts of land, including the Great Hunting Ground, scarred with massive tread marks left by traction cities over the centuries. Here, the once-organic environment had been reshaped and now appears man-made, but life is establishing a foothold in this once-barren landscape.

“It has all these layered plateaus with hard edges and embedded track shapes that we placed everywhere,” says McGaugh. “Our rule of thumb was that the higher the level of the plain, the more foliage there was, since it had been a long time since it had been driven over. However, on the lower level, at the bottom of the trenches, it was also green, but more marshy and full of reeds, since that is where water accumulates.”

Some survivors of the war pushed into the mountains and founded settlements there, rather than living a nomadic existence. One such settlement is Shan Guo in the East on the Asian Steppes, protected from the mobile cities by mountain ranges. In addition, there is a massive two-kilometer-high (6,561-foot) shield wall situated between two of the mountains that protects Shan Guo and the static cities in the Himalayas. This environment alone was daunting to create, as it covers 5,000 square kilometers (about 1,930 square miles).

On one side of the shield wall, the environment is very lush, fertile and green, and the buildings influenced by Bhutan monasteries. On the other side of the wall, there is a lack of foliage, with the landscape strewn with decayed ruins of traction cities that have unsuccessfully attacked the wall. And while the shield wall is massive, it had to appear smaller in comparison to the mountains surrounding it.

While constructing these mountains, the Weta artists used available geographical data, increasing the resolution through erosion simulations that would shape the mountains more naturally. “That gave us extra detail that we could use to make it look more organic,” McGaugh says. The simulation also was used to embed the ruined traction cities into the crater environment as well as situate the crater environment into the surrounding landscape.

Cities on the Move
The moving cities can cover a great deal of ground in very little time, gouging and scarring the earth in their wake with deep crevices; above, airships dot the skies. The relentless ploughing of the traction cities over the landscape has driven layers and layers of mud, debris and waste into the ground. Weta re-created this effect by starting with a precisely coupled fluid simulation with multiple viscosities; this could accurately simulate the combination of solid and liquid layers of the mud. They then began laying tracks and eroding them, then laying more tracks and eroding, repeating the process until the desired result was achieved.
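
As a toy illustration of that lay-then-erode loop (the production version was a full multi-viscosity fluid simulation; this one-dimensional sketch only shows the iteration idea, and every number in it is invented):

```python
def carve_track(height, center, width, depth):
    """Press a tread depression of the given width into the cross-section."""
    for i in range(max(0, center - width), min(len(height), center + width)):
        height[i] -= depth

def erode(height, passes=3):
    """Cheap smoothing stand-in for an actual erosion solve."""
    for _ in range(passes):
        smoothed = height[:]
        for i in range(1, len(height) - 1):
            smoothed[i] = (height[i - 1] + 2 * height[i] + height[i + 1]) / 4.0
        height[:] = smoothed

ground = [10.0] * 200            # flat cross-section of the hunting ground
for n in range(5):               # centuries of traffic, compressed
    carve_track(ground, center=40 + n * 25, width=8, depth=1.5)
    erode(ground)
```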

In the film, there are numerous homelands, including Airhaven, a fantastical city in the clouds with a jellyfish look that is home to the Anti-Tractionists.

“Airhaven didn’t have a lot of movement, so we didn’t have to use our new layout puppet technology. But when it crashes, that had to be animation-driven, so we built a lightweight puppet with a large section of the city on each piece of the puppet, so animators could choreograph the crash,” explains McGaugh. “Then they handed that off to our effects department, and they would simulate all the individual pieces breaking apart and exploding, and then add the explosions, fire and all the dynamics on the balloons and the cloth.”

So many of the traction cities have multiple moving parts that it was impractical to animate them by hand. Instead, Weta developed a tool called Gumby, a vehicle-ground interaction toolset that allows animators to move a city from point A to point B over uneven terrain along a curve. The Gumby system then made sure that all the wheels stuck to the ground, thereby driving the suspension system that causes the infrastructure to move appropriately.
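
Gumby is Weta’s in-house tool, so the following is only a minimal sketch of the general idea: sample the ground height under each wheel contact and derive the chassis pose from those contacts as the city travels along a path. The heightfield, path and dimensions are all invented.

```python
import math

def terrain_height(x, z):
    """Stand-in heightfield; production would ray-cast the ground mesh."""
    return 0.5 * math.sin(x * 0.01) + 0.3 * math.cos(z * 0.02)

def chassis_pose(x, z, half_length, half_width):
    """Return (height, pitch, roll) for a four-contact base at (x, z)."""
    front = terrain_height(x, z + half_length)
    back = terrain_height(x, z - half_length)
    left = terrain_height(x - half_width, z)
    right = terrain_height(x + half_width, z)
    height = (front + back + left + right) / 4.0
    pitch = math.atan2(front - back, 2.0 * half_length)  # radians
    roll = math.atan2(right - left, 2.0 * half_width)
    return height, pitch, roll

# Drive the city from A to B along a parameterized path:
for step in range(101):
    t = step / 100.0
    x, z = 1000.0 * t, 250.0 * t  # straight path for brevity
    pose = chassis_pose(x, z, half_length=400.0, half_width=250.0)
```

A production rig would track many wheels and drive per-wheel suspension from the contacts; the dynamic caching described next layers bounce and wobble on top of a base pose like this.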

A dynamic caching system allowed for secondary bounce and wobble to occur on various pieces of a city in response to the motion from the Gumby system. “It wasn’t perfect, but it allowed for very complex animation in the blocking stage and made the motion more believable and closer to what the final version would look like,” explains McGaugh. Once blocking was approved, then the animators would refine the motion as needed.

London Lives!
According to McGaugh, the single biggest challenge was constructing London and executing it in a way that maintains its enormous size while keeping it in the realm of believability. “Concept artist Nick Keller came up with a design for London that looked like it could be self-supporting and was scalable, so we could make it as big as it needed to be in order to house 200,000 people, and when it moved, we could sell that as believable, too,” he explains.

London is the largest of the traction cities. It incorporates approximately 17 live-action sets and is a mile wide, a mile and a half long and over a half-mile high. It is divided into seven tiers, with life aboard London progressively more desirable farther up each tier.

“This is a place where the glass is gone but stone statues have survived,” Rivers says. “We decided to make anything we see in our world today archaeological and then skew and twist things from there.” As a result, some iconic landmarks are recognizable but have an altered appearance.

“The design had to lend itself to believability for being so large and moving, but it also had to evoke a sense of contemporary London through recognizable features, such as the Trafalgar Square lions acting as sentinels on top of the outriggers, so they’re visible from a distance,” McGaugh points out. London was then crowned with a reconstructed St. Paul’s Cathedral.

A contemporary feel was evoked through the architectural style. As McGaugh notes, London is known for its diverse and contrasting architectural styles juxtaposed against each other. So, the designers followed that style when laying out the buildings atop the digital London. “That was also carried out through the front façade of London and at a much larger scale, so that from a distance, you could still feel that diversity where it’s kind of rusty and brutalist at the bottom with a layer of architecture that is reminiscent of the houses of Parliament, and then is topped with chrome and steel construction shaped like a coat of arms,” he adds.

Because of this diversity of architectural styles, the group was able to source from its library of existing buildings — whether Victorian, Georgian, contemporary office buildings, tower blocks, row houses, Buckingham Palace — and mix them together without having to maintain uniformity from building to building.

But with so much detail, the city became prohibitively difficult to render, and that’s where Weta’s Cake technology came into play. Cake intelligently breaks geometric and material detail down into a format that can be streamed into the renderer, using just the level of detail required. “Before that, it wasn’t viable to render London,” says McGaugh. “But Cake allowed us to process all the data into a format that enabled us to render it, and render it quite efficiently.” Rendering was done within Weta’s proprietary Manuka renderer.
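
Cake itself is proprietary, but the general technique, streaming only the level of detail a shot needs, can be sketched as picking an LOD from an asset’s projected screen size. The camera values and pixel thresholds below are assumptions for illustration only.

```python
def pick_lod(bound_radius, distance, focal=35.0, filmback=24.0,
             image_height=2160):
    """Choose a detail level from an asset's approximate on-screen height."""
    # Pinhole model: projected height fraction of frame is
    # (object height * focal length) / (distance * sensor height).
    screen_frac = (2.0 * bound_radius * focal) / (distance * filmback)
    pixels = screen_frac * image_height
    if pixels > 500:
        return 0  # full-detail geometry and materials
    if pixels > 100:
        return 1
    if pixels > 20:
        return 2
    return 3      # coarse proxy for distant buildings

# A 30m-tall building seen from 2km away only needs a low level of detail:
lod = pick_lod(bound_radius=15.0, distance=2000.0)
```

Applied across tens of thousands of buildings, that kind of per-asset decision is what turns an unrenderable city into a manageable one.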

Lighting was also tricky, as the team was following the lighting direction from Raby, who used backlighting — which is not easy to do in CGI when using hard edges, especially when there is shiny glass and metal involved. As a result, the CG lighters, using the studio’s Foundry Katana-based pipeline, had to do tests on almost every shot to find the appropriate angle that sold the backlighting and kept the visuals interesting and not too flat, while maintaining continuity with the camera shots.

London on the Move
A city constantly on the move, London can travel at approximately 300 kilometers (186 miles) per hour, bolstered by massive engines. While that speed sounds ridiculously fast according to real-world physics, it was necessary to hold audiences’ attention, as physics and cinema were often at odds on the film. “There was a lot of testing, and we tried 100 kilometers per hour when London is chasing down [the mining traction city of] Salthook across a vast landscape, but it looked like a couple of snails racing. It was too boring,” says McGaugh. “Indeed, 300 kilometers sounds ludicrous, and if you think about it, it is. But that is what allowed us to keep the chase exciting while constantly selling that there is movement.”

Indeed, London had to move faster than physics would allow, yet just how fast depended on the camera shot. Nevertheless, this wreaked havoc on the effects that simulated natural phenomena, such as dust. The key, however, was to use visual cues to make sure the cities felt massive, and other cues to make sure audiences were not distracted by the fact that the cities are moving so fast.

When constructing the massive city of London, Weta devised the concept of so-called “lily pads,” representing 113 sections of London. Each was rigged and animated independently and contained millions of components that had to be tracked and moved. Each lily pad was constructed modularly, enabling artists to add clusters of buildings, parks, shops and so forth on each platform. More and more detail was then added to areas as needed.

These lily pads were supported by complex suspension systems for individual movement; at times there was some inter-movement among them, as well. “[The movement] was pretty subliminal at times, but if it wasn’t there, you’d have noticed it and everything would have felt static and locked,” McGaugh says.

Shrike
While Weta’s work on the film was heavily focused on environments, Mortal Engines does contain one digital character, Shrike, who had raised the movie’s heroine, Hester Shaw, after her mother’s murder. Half-man/half-machine, Shrike is a dead soldier resurrected by technology. He stands seven feet tall and weighs close to 1,000 pounds.

Shrike’s anatomy is not human — he has extra appendages and extra mechanical bits that had to be rigged to move differently from a typical human’s. “It was determined early on that we could not use motion capture because we needed him to be inhuman, so we had to invest quite a bit of effort into finding his motion through keyframe techniques,” McGaugh notes.

Shrike’s face comprises metal parts and human skin. To achieve a realistic tug and stretch of the skin against the metal, Weta developed a custom facial-muscle rig so animators could use the visible muscles and skin to allow him to emote in some particularly dramatic moments in the movie, inspired by the performance from actor Stephen Lang.

A New Day
While the scale of the world building for Mortal Engines was not at the level of The Hobbit, it was not without big challenges for the VFX veterans at Weta. Initially, the concept of massive cities on the move was difficult to wrap one’s head around. But, as always, Weta’s artists and animators were able to bring that unique visual to life in a realistic way.

Now with Mortal Engines in theaters, the studio remains on the move with a number of other mega projects in the works, including the Avatar sequels and more on the big screen as well as the final season of Game of Thrones for the small screen. All resulting in more expansive, unique worlds brought to cinematic life.


Karen Moltenbrey is a veteran VFX and post writer.

Behind the Title: We Are Royale CD Chad Howitt

NAME: Chad Howitt

COMPANY: We Are Royale in Los Angeles

CAN YOU DESCRIBE YOUR COMPANY?
We Are Royale is a creative and digital agency looking to partner with brands to create unique experiences across platforms. In the end, we make pretty things for lots of different applications, depending on the creative problem our clients are looking to solve. We’re a full-service production studio that directs live-action commercials, creates full 3D worlds, designs 2D character spots, and develops immersive AR and VR experiences.

WHAT’S YOUR JOB TITLE?
Creative Director

WHAT DOES THAT ENTAIL?
Anything and everything needed to get the job done. On the service side, I’ll work directly with clients and agencies to address their wide variety of needs. So whether that’s creating an idea from scratch or curating a look around an already developed script, I try to figure out how we can help.

Then in-house, I’ll work with our talented team of art directors, CG supervisors and producers to help execute those ideas.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
While the title stays the same, the responsibilities vary by location, person and company culture. So don’t think there’s a hard-and-fast rule about what a creative director is and does.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Seeing a finished project out in the world knowing the hard work the team put in to get it there. Whether it’s on TV, in a space as a part of an installation or online as a part of a pre-roll… it’s a proud moment whenever I see it in the wild.

WHAT’S YOUR LEAST FAVORITE?
Seeing the results of a job we lost or had to pass on knowing that the creative we were planning will never see the light of day.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I’d probably be in the video game industry, but that wasn’t really a feasible career path back then.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
As a little kid, I was obsessed with drawing and computers. So merging those into a profession always seemed like the most natural course. That said, as an LA native, working on film sets just seemed like what out-of-towners wanted to do. So I never saw that coming.

Under Armour spot for its UA HOVR running line

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I’ve wrapped a few projects with Under Armour, a trio of spots for NASDAQ, and a promo for Billy Bob Thornton’s series on Amazon called Goliath.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
It’d probably be the first project I worked on at We Are Royale, which was an Under Armour spot for its UA HOVR running shoe line. It allowed me to merge live-action, CG and beautiful type design.

NAME THREE THINGS YOU CAN’T LIVE WITHOUT.
Fire, indoor plumbing and animal husbandry

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
The last social media I had was MySpace, unless you count LinkedIn…which you really shouldn’t.

CARE TO SHARE YOUR FAVORITE MUSIC TO WORK TO?
Some of my current go-to tracks are “We Were So Young” by Hammock, “Galang” by Vijay Iyer Trio, “Enormous” by Llgl Tndr, “Almost Had to Start a Fight” by Parquet Courts and “Pray” by Jungle.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I stress eat. Cake, cookies and pizza make most problems go away. Diabetes could become a new problem, but that’s tomorrow.

Artifex provides VFX for Jordan Peele’s Weird City

Vancouver-based VFX house Artifex Studios was the primary visual effects vendor for Weird City, Oscar-winner Jordan Peele’s first foray into scripted OTT content. The dystopian sci-fi/comedy Weird City — from Peele and Charlie Sanders — premieres on YouTube Premium on February 13. A first trailer has been released, featuring a variety of Artifex’s visual effects work.

Artifex’s CG team created the trailer’s opening aerial shots of the futuristic city. The studio was also tasked with the video/holographic screens, user interfaces, graphics, icons and other interactive surfaces that the characters engage with.

Artifex’s team, led by VFX supervisor Rob Geddes, provided 250 visual effects shots in all, including Awkwafina’s and Yvette Nicole Brown’s outfit swapping (our main image), LeVar Burton’s tube traveling and a number of additional environment shots.

Artifex called on Autodesk Maya, V-Ray, Foundry’s Nuke and Adobe Photoshop, along with a mix of Dell, HP and generic PC workstations, plus Dell and HP render nodes. They also used SideFX Houdini for procedural generation of the “below the line” buildings in the opening city shot. Qumulo was called on for storage.


VFX editor Warren Mazutinec on life, work and Altered Carbon

By Jeremy Presner

Long-time assistant editor Warren Mazutinec’s love of film began when he saw Star Wars as an eight-year-old in a small town near Edmonton, Alberta. Unlike many other Lucas-heads, however, this one got to live out his dream, grinding away in cutting rooms from Vancouver to LA and working with some of the biggest editors in the galaxy.

We met back in 1998 when he assisted me on the editing of the Martin Sheen “classic” Voyage of Terror. We remain friends to this day. One of Warren’s more recent projects was Netflix’s VFX-heavy Altered Carbon, which got a lot of love from critics and audiences alike.

My old friend, who is now based in Vancouver, has an interesting story to tell, moving from assistant editor to VFX editor on films like Underworld 4, Tomorrowland, Elysium and Chappie, so I threw some questions at him. Enjoy!

Warren Mazutinec

How did you get into the business?
I always wanted to work in the entertainment industry, but that was hard to find in Alberta. No film school-type programs were even offered, so I took the closest thing at a local college: audiovisual communications. While there, I studied photography, audio and video, but nothing like actual filmmaking. After that I attended Vancouver Film School. After film school, and with the help of some good friends, I got an opportunity to be a trainee at Shavick Entertainment.

What was it like working at a “film factory” that cranked out five to six pictures a year?
It was fun, but the product ultimately became intolerable. Movies for nine-year-olds can only be so interesting… especially low-budget ones.

What do your parents think of your career option?
Being from Alberta, everyone thought it wasn’t a real job — just a Hollywood dream. It took some convincing; my dad still tells me to look for work between gigs.

How did you learn Avid? Were you self-taught?
I was handed the manual by a post supervisor on day one. I never read it. I just asked questions and played around on any machine available. So I did have a lot of help, but I also went into work during my free time and on weekends to sit and learn what I needed to do.

Over the years I’ve been lucky enough to have cool people to work with and to learn with and from. I did six movies before I had an email address, more before I even owned a computer.

As media strayed away from film into digital, how did your role change in the cutting room? How did you refine your techniques with a changing workflow?
My first non-film movie was Underworld 4. It was shot with a Red One camera. I pretty much lied and said I knew how to deal with it. There was no difference really; just had to say goodbye to lab rolls, Keykode, etc. It was also a 3D stereo project, so that was a pickle, but not too hard to figure out.

How did you figure out the 3D stereo post?
It was basically learning to do everything twice. During production we really only played back in 3D for the novelty. I think most shows are 3D-ified in post. I’m not sure, though; I’ve only done the one.

Do you think VR/AR will be something you work with in the future?
Yes, I want to be involved in VR at some point. It’s going to be big. Even just doing sound design would be cool. I think it’s the next step, and I want in.

Who are some of your favorite filmmakers?
David Lynch is my number one, by far. I love his work in all forms. A real treasure for sure. David Fincher is great too. Scorsese, Christopher Nolan. There are so many great filmmakers working right now.

Is post in your world constantly changing, or have things more or less leveled off?
Both. But usually someone has dailies figured out, so Avid is pretty much the same. We cut in DNxHD 115 or DNxHD 36, so nothing like 4K-type stuff. Conform at the end is always fun, but there are tests we do at the start to figure it all out. We are rarely treading in new water.

What was it like transitioning to VFX editor? What tools did you need to learn to do that role?
FileMaker. And Jesus, son, I didn’t learn it. It’s a tough beast but it can do a lot. I managed to wrangle it to do what I was asked for, but it’s a hugely powerful piece of software. I picked up a few things on Tomorrowland and went from there.

I like the pace of the VFX editor. It’s different than assisting and is a nice change. I’d like to do more of it. I’d like to learn and use After Effects more. On the film I was VFX editor for, I was able to just use the Avid, as it wasn’t that complex. Mostly set extensions, etc.

How many VFX shot revisions would a typical shot go through on Elysium?
On Elysium, the shot version numbers got quite high, but part of that would be internal versioning by the vendor. Director Neill Blomkamp is a VFX guy himself, so he was pretty involved and knew what he wanted. The robots kept looking cooler and cooler as the show went on. Same for Chappie. That robot was almost perfect, but it took a while to get there.

You’ve worked with a vast array of editors, including Walter Murch, Lee Smith, Julian Clarke, Nancy Richardson and Bill Steinkamp. Can you talk about that, and have any of them let you cut material?
I’ll assemble scenes if asked to, just to help the editor out so he isn’t starting from scratch. If I get bored, I start cutting scenes as well. On Altered Carbon, when Julian (Clarke) was busy with Episodes 2 and 3, I’d try to at least string together a scene or two for Episode 8. Not fine-cutting, mind you, just laying out the framework.

Walter asked a lot of us — the workload was massive. Lee Smith didn’t ask for much. Everyone asks for scene cards that they never use, ha!

Walter hadn’t worked on the Avid for five years or so prior to Tomorrowland, so there was a lot of him walking out of his room asking, “How do I?” It was funny because a lot of the time I knew what he was asking, but I had to actually do it on my machine because it’s so second nature.

What is Walter Murch like in the cutting room? Was learning his organizational process something you carried over into future cutting rooms?
I was a bit intimidated prior to meeting him. He’s awesome though. We got along great and worked well together. There was Walter, a VFX editor and four assistants. We all shared in the process. Of course, Walter’s workflow is unlike any other so it was a huge adjustment, but within a few weeks we were a well-oiled machine.

I’d come in at 6:30am to get dailies sorted and would usually finish around lunch. Then we’d screen in our theater and make notes, all of us. I really enjoyed screening the dailies that way. Then he would go into his room and do his thing. I really wish all films followed his workflow. As tough as it is, it all makes sense and nothing gets lost.

I have seen photos with the colored boxes and triangles on the wall. What does all that mean, and how often was that board updated?
Ha. That’s Walter’s own version of scene cards. It makes way better sense. The colors and shapes mean a particular thing — the longer the card the longer the scene. He did all that himself, said it helps him see the picture. I would peek into his room and watch him do this. He seemed so happy doing it, like a little kid.

Do you always add descriptions and metadata to your shots in Avid Media Composer?
We add everything possible. Usually there is a codebook the studios want, so we generate that with FileMaker on almost all the bigger shows. Walter’s is the same, just way bigger and better. It made the VFX database look like a toy.

What is your workflow for managing/organizing footage?
A lot of times you have to follow someone else’s procedure, but if left to my own devices I try to make it the simplest it can be so anyone can figure out what was done.

How do you organize your timeline?
It’s specific to the editor, but I like to use as many audio tracks as possible and as few video tracks as possible, but when it’s a VFX-heavy show, that isn’t possible due to stacking various shot versions.

What did you learn from Lee Smith and Julian Clarke?
Lee Smith is a suuuuuper nice guy. He always had great stories from past films, and he’s a very good editor. I’m glad he got the Oscar for Dunkirk; he’s done a lot of great work.

Julian is also great to work with. I’ve worked with him on Elysium, Chappie and Altered Carbon. He likes to cut with a lot of sound, so it’s fun to work with him. I love cutting sound, and on Altered Carbon we had over 60 tracks. It was an alternating stereo setup, and we used all the tracks possible.

Altered Carbon

It was such a fun world to create sound for. Everything that could make a sound we put in. We also invented signature sounds for the tech we hoped they’d use in the final. And they did for some things.

Was that a 5.1 temp mix? Have you ever done one?
No. I want to do a 5.1 Avid mix. Looks fun.

What was the schedule like on Altered Carbon? How was that different than some of the features you’ve worked on?
It was six-day weeks and 12 hours a day. Usually one week per month I’d trade off with the second assistant and she’d let me have an actual weekend. It was a bit of a grind. I worked on Episodes 2, 3 and 8, and the schedules for those were tight, but somehow we got through it all. We had a great editorial team up here in Vancouver, and they were also cutting in LA. It was pretty much non-stop editing the whole way through.

How involved was Netflix in terms of the notes process? Were you working with the same editors on the episodes you assisted?
Yes, all episodes were with Julian. First it went through Skydance notes, then Netflix. Skydance usually had more as they were the first to see the cuts. There were many versions for sure.

What was it like working with Neill Blomkamp?
It was awesome. He makes cool films, and it’s great to see footage like that. I love shooting guns, explosions, swords and swearing. I beat him in ping-pong once. I danced around in victory and he demanded we play again. I retired. One of the best environments I’ve ever worked in. Elysium was my favorite gig.

What’s the largest your crew has gotten in post?
Usually one or two editors, up to four assistants, a PA, a post super — so eight or nine, depending.

Do you prefer working with a large team or do you like smaller films?
I like the larger team. It can all be pretty overwhelming, and the more people there are to help out, the easier it is to get through. The more the merrier!

Altered Carbon

How do you handle long-ass days?
Long days aren’t bad when you have something to do. On Altered Carbon I kept a skateboard in my car for those times. I just skated around the studio waiting for a text. Recently I purchased a Onewheel (a skateboard with one wheel) and plan to use it to commute to work as much as possible.

How do you navigate the politics of a cutting room?
Politics can be tricky. I usually try to keep out of things unless I’m asked, but I do like to have a sit down or a discussion of what’s going on privately with the editor or post super. I like to be aware of what’s coming, so the rest of us are ready.

Do you prefer features to TV?
It doesn’t matter anymore because the good filmmakers work in both mediums. It used to be that features were one thing and TV was another, with less complex stories. Now that’s different and at times it’s the opposite. Features usually pay more though, but again that’s changing. I still think features are where it’s at, but that’s just vanity talking.

Sometimes your project posts in Vancouver but moves to LA for finishing. Why? Does it ever come back?
Mostly I think it’s because that’s where the director/producers/studio lives. After it’s shot everyone just goes back home. Home is usually LA or NY. I wish they’d stay here.

How long do you think you’ll continue being an AE? Until you retire? What age do you think that’ll be?
No idea; I just want to keep working on projects that excite me.

Would you ever want to be an editor or do you think you’d like to pivot to VFX, or are you happy where you are?
I only hope to keep learning and doing more. I like the VFX editing, I like assisting and I like being creative. As far as cutting goes, I’d like to get on a cool series as a junior editor, or at least start doing a few scenes to get better. I just want to keep advancing; I’d love to do some VR stuff.

What’s next for you project wise?
I’m on a Disney show called Timmy Failure. I can’t say anything more at this point.

What advice do you have for other assistant editors trying to come up?
It’s going to take a lot longer than you think to become good at the job. Being the only assistant does not make you a qualified first assistant; it took me 10 years to get there. Also, you never stop learning, so always be open to another approach. Everyone does things differently. With Murch on Tomorrowland, it was a whole new way of doing things that I had never seen before, so it was interesting to learn, although it was very intimidating at the start.


Jeremy Presner is an Emmy-nominated film and television editor residing in New York City. Twenty years ago, Warren was AE on his first film. Since then he has cut such diverse projects as Carrie, Stargate Atlantis, Love & Hip Hop and Breaking Amish.

VFX studio Electric Theatre Collective adds three to London team

London visual effects studio Electric Theatre Collective has added three to its production team: Elle Lockhart, Polly Durrance and Antonia Vlasto.

Lockhart brings with her extensive CG experience, joining from Touch Surgery where she ran the Johnson & Johnson account. Prior to that she worked at Analog as a VFX producer where she delivered three global campaigns for Nike. At Electric, she will serve as producer on Martini and Toyota.

Vlasto joins Electric, working on clients such as Mercedes, Tourism Ireland and TUI. She joins from 750MPH where, over a four-year period, she served as producer on Nike, Great Western Railway, VW and Amazon, to name but a few.

At Electric, Polly Durrance will serve as producer on H&M, TK Maxx and Carphone Warehouse. She joins from Unit, where she helped launch their in-house Design Collective and worked with clients such as Lush, Pepsi and Thatchers Cider. Prior to Unit, Durrance was at Big Buoy, where she produced work for Jaguar Land Rover, giffgaff and Red Bull.

Recent projects at the studio, which also has an office in Santa Monica, California, include Tourism Ireland’s “Capture Your Heart” and Honda’s “Palindrome.”

Main Image: (L-R) Elle Lockhart, Antonia Vlasto and Polly Durrance.

Rodeo VFX supe Arnaud Brisebois on the Fantastic Beasts sequel

By Randi Altman

Fantastic Beasts: The Crimes of Grindelwald, directed by David Yates and written by J.K. Rowling, is the sequel to 2016’s Fantastic Beasts and Where to Find Them. It follows Newt Scamander (Eddie Redmayne) and a young Albus Dumbledore (Jude Law) as they attempt to take down the dark wizard Gellert Grindelwald (Johnny Depp).

Arnaud Brisebois

As you can imagine, the film features a load of visual effects, and once again the team at Rodeo FX was called on to help. Their work included establishing the period in which the film is set and helping tell the history of the Obscurus and Credence Barebone, and more.

Rodeo FX visual effects supervisor Arnaud Brisebois and team worked with the film’s VFX supervisors — Tim Burke and Christian Manz — to create digital environments, including detailed recreations of Paris in the 1920s and iconic wizarding locations like the Ministry of Magic.

Beyond these settings, the Montreal-based Brisebois was also in charge of creating the set pieces showcasing the Obscurus’ destructive powers and a scene depicting its backstory. In all, the team produced approximately 200 shots across a dozen sequences. While Brisebois visited the film’s set in Leavesden to get a better feel for the practical environments, he was not involved in principal photography.

Let’s find out more…

How early did you get involved, and how much input did you have?
Rodeo got involved in May 2017, at the time mainly working on pre-production creatures, design and concept art. I had a few calls with the film’s VFX supervisors, Tim Burke and Christian Manz, to discuss the creatures and the main creative directions for us to play with. From there we tried various ideas.
At that point in pre-production, the essence of what the creatures were was clear, but their visual representation could still swing between extremes. That was the time to invent, study and propose directions for the design.

Can you talk about creating the Ministry of Magic, which was partially practical, yes?
Correct, the London Ministry of Magic was indeed partially built as a practical set; in this case, a simple curved corridor with a ceramic-tiled wall. We still had to build the whole environment in CG in order to directly extend that practical set, but, most importantly, we extended the environment itself, with its immense circular atrium filled with thousands of busy offices.

For this build, we were provided with original Harry Potter set plans from production designer Stuart Craig, as well as plan revisions made specifically for The Crimes of Grindelwald. We also had access to LIDAR scans and cross-polarized photography from areas of the Harry Potter tour in Leavesden, which was extremely useful.

Every single architectural element was precisely built as an individual unit, with each unit composed of individual pieces. The single office variants were procedurally laid out on a flat grid over the set plan elevations and then wrapped into a cylinder using an expression.
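To illustrate the idea, here is a minimal sketch (in Python, not the production expression) of the kind of mapping such a wrap uses: a point’s horizontal position on the flat layout grid becomes arc length around the cylinder, so the spacing between offices is preserved.

```python
import math

def wrap_to_cylinder(x, y, radius):
    """Map a point (x, y) on a flat layout grid onto a cylinder of the
    given radius: x (distance along the wall) becomes arc length, so
    spacing between offices is preserved; y (height) is unchanged."""
    theta = x / radius  # arc length divided by radius gives the angle
    return (radius * math.cos(theta), y, radius * math.sin(theta))

# Example: an office 40 meters along a 60-meter-radius atrium wall, 12 meters up.
print(wrap_to_cylinder(40.0, 12.0, 60.0))
```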

The use of a procedural approach for this asset allowed for faster turnarounds and for changes to be made even at the 11th hour. A crowd library was built to populate the offices and various areas of the Ministry, helping give it life and supporting the sense of scale.

So you were able to use assets from previous films?
What really links these movies together is production designer Stuart Craig. This is definitely his world, at least in visual terms. Also, as with all the Potter films, there are a large number of references and guidelines available for inspiration. This world has its own mythology, history and visual language. One does not need to look for long before finding a hint, something to link or ground a new effect in the wizarding world.

What about the scenes involving the Obscurus? Was any of the destruction it caused practical?
Apart from a few fans blowing a bit of wind on the actors, all destruction was full-frontal CG. A complex model of Irma’s house was built with precise architectural details required for its destruction. We also built a wide library of high-resolution hero debris, which was scattered on points and simulated for the very close-up shots. In the end, only the actors were preserved from live photography.

What was the most challenging sequence you worked on?
It was definitely Irma’s death. This sequence involved such a wide variety of effects, ranging from cloth and RBD levitation and tearing cloth to huge RBD destruction simulations and, of course, the Obscurus itself, a very abstract and complex cloth setup driving FLIP fluid simulations. The challenge also came from the range of shot sizes, which meant everything we built or simulated had to hold up in tight close-ups as well as wide shots.

Can you talk about the tools you used for VFX, management and review and approval?
All our tracking and review is done in Autodesk Shotgun. Artists worked up versions that they would then submit for dailies. All of these submissions got in front of me at one point or another; I reviewed them and entered notes and directives to guide the artists in the right direction.
For a project the size of The Crimes of Grindelwald, over the course of 10 months, I reviewed and commented on approximately 6,000 versions across about 500 assets and 200 shots.
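As a rough illustration of what that review loop looks like from the pipeline side, here is a minimal sketch using the Shotgun Python API (shotgun_api3). The site URL, script credentials, project ID and status value are placeholders, not Rodeo’s actual setup.

```python
# A minimal sketch (not Rodeo's actual setup) of pulling dailies
# submissions with the Shotgun Python API. The site URL, credentials,
# project id and status value below are placeholders.
import shotgun_api3

sg = shotgun_api3.Shotgun(
    "https://yourstudio.shotgunstudio.com",  # placeholder site
    script_name="dailies_review",            # placeholder script user
    api_key="REPLACE_ME",
)

# Find versions submitted for review on one project, newest first.
versions = sg.find(
    "Version",
    filters=[
        ["project", "is", {"type": "Project", "id": 123}],  # placeholder id
        ["sg_status_list", "is", "rev"],  # "rev" = pending review by default
    ],
    fields=["code", "entity", "user", "created_at"],
    order=[{"field_name": "created_at", "direction": "desc"}],
)

for v in versions:
    print(v["code"], v["created_at"])
```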

We work mainly in a Maya-based pipeline, using it for modeling, rigging and shading. ZBrush is, of course, our main tool for organic modeling. We mostly use Mari and Substance Designer for textures. FX and CFX are handled in Houdini, and our lighting pipeline is Katana-based, using Arnold as the renderer. Our compositing pipeline is Nuke, with a little use of Flame/Flare for very specific cases. We also have proprietary tools that help us extend the potential of these great software packages and offer custom solutions.

How did the workflow differ on this film from previous films?
It didn’t really differ. Working with the same team and the same crew, it really just felt like a continuation of our collaboration. These films are great to work on, not only because of their subject matter, but also thanks to the terrific people involved.

VFX Supervision: The Coens’ Western The Ballad of Buster Scruggs

By Randi Altman

The writing and directing duo of Joel and Ethan Coen have taken on the American Western with their new Netflix film, The Ballad of Buster Scruggs. This offering features six different vignettes that follow outlaws and settlers on the American frontier.

It stars the Coen brothers’ favorite Tim Blake Nelson as Buster, along with Liam Neeson, James Franco, Brendan Gleeson and many other familiar faces, even Tom Waits! It’s got dark humor and a ton of Coen quirkiness.

Alex Lemke (middle) on set with the Coen brothers.

For their visual effects needs, the filmmakers turned to New York-based East Side Effects co-founders and VFX supervisors Alexander Lemke and Michael Huber to help make things look authentic.

We reached out to visual effects supervisors Lemke and Huber to find out more about their process on the film and how they worked with these acclaimed filmmakers. East Side Effects created two-thirds of the visual effects in-house, while other houses, such as The Mill and Method, provided shots as well.

How many VFX shots were there in total?
Alexander Lemke: In the end, 704 shots had digital effects in them. This has to be a new record for the Coens. Joel at one point jokingly called it their “Marvel movie.”

How early did you get involved? Can you talk about that process?
Michael Huber: Alex and I were first approached in January 2017 and had our first meetings shortly thereafter. We went through the script with the Coens and designed what we call a “VFX bible,” which outlined how we thought certain effects could be achieved. We then started collecting references from other films and real-life footage.

Did you do previs? 
Lemke: The Coens have been making movies their own way for so long that previs never really came into play. For the Indian battles, we tried to interest them in the Ncam virtual camera system in combination with pre-generated assets, but that is not their way of making a film.

The whole project was storyboarded by J. Todd Anderson, who has been their go-to storyboard guy since Raising Arizona. These storyboards gave a pretty good indication of what to expect, but there were still a lot of changes due to the nature of the project, such as weather and shooting with animals.

What were some of the challenges of the process and can you talk about creating the digital characters that were needed?
Huber: Every story had its own challenge, ranging from straightforward paintouts and continuity fixes to CG animals and complex head replacements using motion control technology. In order to keep the work as close to the directors as possible, we assembled a group of artists to serve as an extended in-house team, creating the majority of shots while also acting as a hub for external vendor work.

In addition, a color workflow using ACES and FilmLight Baselight was established to match VFX shots seamlessly to the dailies look established by cinematographer Bruno Delbonnel and senior colorist Peter Doyle. All VFX pulls were handled in-house.
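As a rough sketch of the kind of transform such an ACES pipeline applies to every pull, here is what a conversion into the working space might look like with the OpenColorIO Python bindings (v1-style API). The config path and colorspace names are placeholders from a generic ACES OCIO config, not the production setup.

```python
# Minimal OpenColorIO (v1-style API) sketch: convert pulled plate pixels
# into the ACES working space. Config path and colorspace names are
# placeholders from a generic ACES OCIO config, not the show's setup.
import PyOpenColorIO as OCIO

config = OCIO.Config.CreateFromFile("config.ocio")  # hypothetical ACES config
processor = config.getProcessor("ACES - ACES2065-1", "ACES - ACEScg")

# applyRGB takes a flat [R, G, B, R, G, B, ...] float list and
# returns the converted values.
pixels = [0.18, 0.18, 0.18]
converted = processor.applyRGB(pixels)
print(converted)
```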

Lemke: The Coens like to keep things in-camera as much as possible, so animals like the owl in “All Gold Canyon” or the dog in “Gal” were real. Very early on it was clear that some horse falls wouldn’t be possible as practical stunts, so Joel and Ethan had a reel compiled of various digital horse stunts, including the “Battle of the Bastards” episode of Game of Thrones, which was done by Iloura (now Method). We liked that so much that we decided to just go for it and reach out to those guys, and we were thrilled when we got them on board. They did the “dog hole” horse falls in “The Gal Who Got Rattled,” as well as the carriage horses in “Mortal Remains.”

Huber: For the deer in “All Gold Canyon,” the long-standing plan was to shoot a real deer against bluescreen, but it became clear that we might not get the very specific actions Joel and Ethan wanted to see. They were constantly referring to the opening of Shane, which has this great shot of the title character appearing through the antlers of a deer. So it became more and more clear that it would have to be a digital solution, and we were very happy to get The Mill in New York to work on that for us. Eventually, they would also handle all the other critters in the opening sequence.

Can you talk about Meal Ticket’s “artist” character, who is missing limbs?
Lemke: The “Wingless Thrush” — as he is referred to on a poster in the film — was a combined effort of the art department, special effects, costume design, VFX and, of course, actor Harry Melling’s incredible stamina. He was performing this poetry while standing in a hole in the ground with his hands behind his back, and went for it take after take, sometimes in the freezing cold.

Huber: It was clear that 98% of the shots would involve painting out his arms and legs, so SFX supervisor Steve Cremin had to devise a way to cut holes into the set and his chair to make it appear he was resting on his stumps. Our costume designer, Mary Zophres, had the great idea of having him wear a regular shirt with the long sleeves simply folded up, which helped hide his arms. He wasn’t wearing any blue garment, just black, which helped avoid unnecessary color spill on the set.

Alex was on set to make sure we would shoot clean plates after each setup. Luckily, the Coen brothers’ approach to these shots was really focused on Harry’s performance in long locked-off takes, so we didn’t have to deal with a lot of camera motion. We also helped Harry’s look by warping his shoulders closer to his body in some shots.

Was there a particular scene with this character that was most challenging or that you are most proud of?
Lemke: While most of the paintout shots were pretty straightforward (we just had to deal with the sheer number of shots and edit changes), the most challenging part was when Liam Neeson carries Harry in a backpack up the stairs in a brothel. He then puts him on the ground and eventually turns him away from the “action” that is about to happen.

We talked about different approaches early on. At some point, a rig was considered to help with him being carried up the stairs, but this would have meant an enormous amount of paint work, not to mention the setup time on a very tight shooting schedule. A CG head might have worked for the stairs, but for the long close-up shots of Harry (both over a minute long, and with only very subtle facial expressions) it would have been cost-prohibitive and maybe not successful in the end. So a head replacement seemed like the best solution, which comes with its own set of problems; in our case, shooting a head element of Harry that would exactly match what the dummy on Liam’s back and on the ground was doing in the production plates.

We came up with a very elaborate setup, in which we tracked the backpack and the dummy in the live-action photography in 3DEqualizer. We then reengineered this data into Kuper move files that would drive a motion control/motion base combo.

Basically, Harry would sit on a computerized motion base that performed the turning motion so he could react to being pushed around, while the motion control camera took care of all the translations. This also meant our DP, Bruno, had to create animated lighting for the staircase shot to make the head element really sit in the plate.

We worked with Pacific Motion for the motion control, with Mike Leben as our operator, while Nic Nicholson took care of the NAC motion base. Special thanks goes out to Christoph Gaudl for his camera and object tracking, Stefan Galleithner for taking on the task of converting all that data into something the camera and base would understand, and Kelly Chang and Mike Viscione for on-set Maya support.

Of course, you only get an element that works 80% of the way; the rest was laborious compositing work. Since we pushed the motion base to its speed limits on the staircase shot, we actually had to shoot it at half speed and then speed it up in post. This meant a lot of warping and tracking was needed to make sure there was no slippage.
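For context, speeding a half-speed element back up is a straightforward 2x retime. Below is a minimal sketch of how that could be set up with Nuke’s Python API; the file path and frame range are hypothetical, and in practice this would sit alongside the warping and tracking mentioned above.

```python
# Minimal Nuke Python sketch: restore real-time speed on an element
# that was photographed at half speed. Path and frame range are hypothetical.
import nuke

# The half-speed head element as shot (480 frames covering a 240-frame action).
read = nuke.nodes.Read(file="elements/harry_head.%04d.exr", first=1001, last=1480)

# A 2x retime brings the duration back to the intended 240 frames.
retime = nuke.nodes.Retime(inputs=[read], speed=2.0)
```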

Michael Huber

The dummy we used for the live-action photography didn’t have any breathing movement in it, so we used parts of Harry’s bluescreen plates as a guide for how his chest should move. These tricky tasks were expertly performed mainly by Danica Parry, Euna Kho and Sabrina Tenore.

Can you talk about how valuable it is being on set?
Huber: It’s especially valuable to be on set when the call sheet calls for a greenscreen and we really need a bluescreen! But joking aside, Joel and Ethan were very happy to have someone there all the time during the main shoot in case something came up, which happened a lot because we were shooting outdoors so much and were dependent on the weather.

For the opening shot of Buster riding through Monument Valley, they were thinking of a very specific view, something they had seen in a picture on the Internet. Through Google Maps and research, Alex was able to find the exact location where that picture was taken. So, on a weekend when we weren’t shooting, he packed up his family and drove up to the Valley to shoot the photographs that would serve as the basis for the matte painting for the first shot of the film, instead of going there with a whole crew.

Another instance where being on set helped was the scene with Tom Waits in the tree: the backgrounds for those bluescreen shots were a mixture of B-camera footage and Alex’s location photography from Colorado. The same goes for the owl tree backgrounds.

What tools did East Side use on the film?
Huber: For software, we called on Foundry Nuke (NukeX and Nuke Studio), Boris FX Mocha Pro and Side Effects Houdini. For hardware, we used HP and SuperMicro workstations running Linux. There were also proprietary tools, such as Houdini Digital Assets for blood simulations.

We were using Autodesk Shotgun with a proprietary connection to Nuke that handled all our artist interaction and versioning, including automatically applying the correct Baselight grade when creating a version. This also allowed us to use the RV-Shotgun integration for reviewing.

Can you talk about the turnaround times and deadlines?
Lemke: Working on a Coen brothers film means you don’t have a lot of things you normally have to deal with — studio screenings, trailers, and such. At the same time, they insisted on working through the stories chronologically, so that meant that the later segments would come in late in the schedule. But, it is always a great experience working with filmmakers who have a clear vision and know what they are doing.