

Roger and Big Machine merge, operate as Roger

Creative agency Roger and full-service production company Big Machine have merged — a move that will expand the creative capabilities for their respective agency, brand and entertainment clients. The studios will retain the Roger name and operate at Roger’s newly renovated facility in Los Angeles.

The combined management team includes CD Terence Lee, CD Dane Macbeth, EP Josh Libitsky, director Steve Petersen, CD Ken Carlson and Sean Owolo, who focuses on business development.

Roger now offers expanded talent and resources for projects that require branding, design, animation, VFX, VR/AR, live action and content development. Roger uses Adobe Creative Cloud for most of its workflows. The tools vary from project to project, but outside of the Adobe Suite, they also use Maxon Cinema 4D, Autodesk Maya, Blackmagic DaVinci Resolve and Foundry Nuke.

Since the merger, the studio has already embarked on a number of projects, including major creative campaigns for Disney and Sony Pictures.

Roger’s new 6,500-square-foot studio includes four private offices, three editing suites, two conference rooms, an empty shooting space for greenscreen work, a kitchen and a lounge.

Technicolor adds Patrick Smith, Steffen Wild to prepro studio

Technicolor has added Patrick Smith to head its visualization department, partnering with filmmakers to help them realize their vision in a digital environment before they hit the set. By helping clients define lensing, set dimensions, asset placement and even precise on-set camera moves, Smith and his team will play a vital role in helping clients plan their shoots in the virtual environment in ways that feel completely natural and intuitive to them. He reports to Kerry Shea, who heads Technicolor’s Pre-Production Studio.

“By enabling clients to leverage the latest visualization technologies and techniques while using hardware similar to what they are already familiar with, Patrick and his team will empower filmmakers by ensuring their creative visions are clearly defined at the very start of their projects — and remain at the heart of everything they do from their first day on set to take their stories to the next level,” explains Shea. “Bringing visualization and the other areas of preproduction together under one roof removes redundancy from the filmmaking process which, in turn, reduces stress on the storytellers and allows them as much time as possible to focus on telling their story. Until now, the process of preproduction has been a divided and inefficient process involving different vendors and repeated steps. Bringing those worlds together and making it a seamless, start-to-finish process is a game changer.”

Smith has held a number of senior positions within the industry, including most recently as creative director/senior visualization supervisor at The Third Floor. He has worked on titles such as Bumblebee, Avengers: Infinity War, Spider-Man: Homecoming, Guardians of the Galaxy Vol. 2 and The Secret Life of Walter Mitty.

“Visualization used to involve deciding roughly what you plan to do on set. Today, you can plan out precisely how to achieve your vision on set down to the inch – from the exact camera lens to use, to exactly how much dolly track you’ll need, to precisely where to place your actors,” he says. “Visualization should be viewed as the director’s paint brush. It’s through the process of visualization that directors can visually explore and design their characters and breathe life into their story. It’s a sandbox where they can experiment, play and perfect their vision before the pressure of being on set.”

In other Technicolor news, last week the studio announced that Steffen Wild has joined as head of its virtual production department. “As head of virtual production, Wild will help drive the studio’s approach to efficient filmmaking by bringing previously separate departments together into a single pipeline,” says Shea. “We currently see what used to be separate departments merge together. For example, previz, techviz and postviz, which were all separate ways to find answers to production questions, are now in the process of collaborating together in virtual production.”

Wild has over 20 years of experience, including 10 years spearheading Jim Henson’s Creature Shop’s expanding efforts in innovative animation technologies, virtual studio productions and new ways of visual storytelling. As SVP of digital puppetry and visual effects at the Creature Shop, Wild crafted new production techniques using proprietary game engine technologies. He brings with him in-depth knowledge of global and local VFX and animation production, rapid prototyping and cloud-based entertainment projects. In addition to his role in the development of next-generation cinematic technologies, he has set up VFX/animation studios in the US, China and southeast Europe.

Main Image: (L-R) Patrick Smith and Steffen Wild


An artist’s view of SIGGRAPH 2019

By Andy Brown

While I’ve been lucky enough to visit NAB and IBC several times over the years, this was my first SIGGRAPH. Of course, there are similarities. There are lots of booths, lots of demos, lots of branded T-shirts, lots of pairs of black jeans and a lot of beards. I fit right in. I know we’re not all the same, but we certainly looked like it. (The stats regarding women and diversity in VFX are pretty poor, but that’s another topic.)

Andy Brown

You spend your whole career in one industry and I guess you all start to look more and more like each other. That’s partly the problem for the people selling stuff at SIGGRAPH.

There were plenty of compositing demos from all sorts of software. (Blackmagic was running a hands-on class for 20 people at a time.) I’m a Flame artist, so I think that Autodesk’s offering is best, obviously. Everyone’s compositing tool can play back large files and color correct, composite, edit, track and deliver, so in the midst of a buzzy trade show, the differences feel far fewer than the similarities.

Mocap
Take the world of tracking and motion capture as another example. There were more booths demonstrating tracking and motion capture than anything else in the main hall, and all that tech came in different shapes and sizes, with an interesting mix of hardware and software.

The motion capture solution required for a Hollywood movie isn’t the same as the one to create a live avatar on your phone, however. That’s where it gets interesting. There are solutions that can capture and translate the movement of everything from your fingers to your entire body using hardware from an iPhone X to a full 360-camera array. Some solutions used tracking ball markers, some used strips in the bodysuit and some used tiny proximity sensors, but the results were all really impressive.

Vicon

Some tracking solution companies had different versions of their software and hardware. If you don’t need all of the cameras and all of the accuracy, then there’s a basic version for you. But if you need everything to be perfectly tracked in real time, then go for the full-on pro version with all the bells and whistles. I had a go at live-animating a monkey using just my hands, and apart from ending with him licking a banana in a highly inappropriate manner, I think it worked pretty well.

AR/VR
AR and VR were everywhere, too. You couldn’t throw a peanut across the room without hitting someone wearing a VR headset. They’d probably be able to bat it away whilst thinking they were Joe Root or Max Muncy (I had to Google him), with the real peanut being replaced with a red or white leather projectile. Haptic feedback made a few appearances, too, so expect to be able to feel those virtual objects very soon. Some of the biggest queues were at the North stand, where the company had glasses that looked like the ones everyone was wearing already (like mine, obviously), except these incorporated a head-up display. I have mixed feelings about this. Google Glass didn’t last very long for a reason, although I don’t think North’s glasses have a camera in them, which makes things feel a bit more comfortable.

Nvidia

Data
One of the central themes for me was data, data and even more data. Whether you are interested in how to capture it, store it, unravel it, play it back or distribute it, there was a stand for you. This mass of data was being managed by really intelligent components and software. I was expecting to be writing all about artificial intelligence and machine learning from the show, and it’s true that there was a lot of software that used machine learning and deep neural networks to create things that looked really cool. Environments created using simple tools looked fabulously realistic because of deep learning. Basic pen strokes could be translated into beautiful pictures because of the power of neural networks. But most of that machine learning is in the background; it’s just doing the work that needs to be done to create the images, lighting and physical reactions that go to make up convincing and realistic images.

The Experience Hall
The Experience Hall was really great because no one was trying to sell me anything. It felt much more like an art gallery than a trade show. There were long waits for some of the exhibits (although not for the golf swing improver that I tried), and it was all really fascinating. I didn’t want to take part in the experiment that recorded your retina scan and made some art out of it, because, well, you know, it’s my retina scan. I also felt a little reluctant to check out the booth that made light-based animated artwork derived from your date of birth, time of birth and location of birth. But maybe all of these worries are because I’ve just finished watching the Netflix documentary The Great Hack. I can’t help but think that a better source of the data might be something a little less sinister.

The walls of posters back in the main hall described research projects that hadn’t yet made it into full production and gave more insight into what the future might bring. It was all about refinement, creating better algorithms, creating more realistic results. These uses of deep learning and virtual reality were applied to subjects as diverse as translating verbal descriptions into character design, virtual reality therapy for post-stroke patients, relighting portraits and haptic feedback anesthesia training for dental students. The range of the projects was wide. Yet everyone started from the same place, analyzing vast datasets to give more useful results. That brings me back to where I started. We’re all the same, but we’re all different.

Main Image Credit: Mike Tosti


Andy Brown is a Flame artist and creative director of Jogger Studios, a visual effects studio with offices in Los Angeles, New York, San Francisco and London.


Autodesk intros Bifrost for Maya at SIGGRAPH

At SIGGRAPH, Autodesk announced a new visual programming environment in Maya called Bifrost, which makes it possible for 3D artists and technical directors to create serious effects quickly and easily.

“Bifrost for Maya represents a major development milestone for Autodesk, giving artists powerful tools for building feature-quality VFX quickly,” says Chris Vienneau, senior director, Maya and Media & Entertainment Collection. “With visual programming at its core, Bifrost makes it possible for TDs to build custom effects that are reusable across shows. We’re also rolling out an array of ready-to-use graphs to make it easy for artists to get 90% of the way to a finished effect fast. Ultimately, we hope Bifrost empowers Maya artists to streamline the creation of anything from smoke, fire and fuzz to high-performance particle systems.”

Bifrost highlights include:

  • Ready-to-Use Graphs: Artists can quickly create state-of-the-art effects that meet today’s quality demands.
  • One Graph: In a single visual programming graph, users can combine nodes ranging from math operations to simulations.
  • Realistic Previews: Artists can see exactly how effects will look after lighting and rendering right in the Arnold Viewport in Maya.
  • Detailed Smoke, Fire and Explosions: New physically-based solvers for aerodynamics and combustion make it easy to create natural-looking fire effects.
  • The Material Point Method: The new MPM solver helps artists tackle realistic granular, cloth and fiber simulations.
  • High-Performance Particle System: A new particle system crafted entirely using visual programming adds power and scalability to particle workflows in Maya.
  • Artistic Effects with Volumes: Bifrost comes loaded with nodes that help artists convert between meshes, points and volumes to create artistic effects.
  • Flexible Instancing: High-performance, rendering-friendly instancing empowers users to create enormous complexity in their scenes.
  • Detailed Hair, Fur and Fuzz: Artists can now model things consisting of multiple fibers (or strands) procedurally.

Bifrost is available for download now and works with any version of Maya 2018 or later. It will also be included in the installer for Maya 2019.2 and later versions. Updates to Bifrost between Maya releases will be available for download from Autodesk AREA.

In addition to the release of Bifrost, Autodesk highlighted the latest versions of Shotgun, Arnold, Flame and 3ds Max. The company gave a tech preview of a new secure enterprise Shotgun that supports network segregation and customer-managed media isolation on AWS, making it possible for the largest studios to collaborate in a closed-network pipeline in the cloud. Shotgun Create, now out of beta, delivers a cloud-connected desktop experience, making it easier for artists and reviewers to see which tasks demand attention while providing a collaborative environment to review media and exchange feedback accurately and efficiently. Arnold 5.4 adds important updates to the GPU renderer, including OSL and OpenVDB support, while Flame 2020.1 introduces more uses of AI with new Sky Extraction tools and specialized image segmentation features. Also on display, the 3ds Max 2020.1 update features modernized procedural tools for 3D modeling.


Maxon intros Cinema 4D R21, consolidates versions into one offering

By Brady Betzel

At SIGGRAPH 2019, Maxon introduced the next release of its graphics software, Cinema 4D R21. Maxon also announced a subscription-based pricing structure as well as a very welcome consolidation of its Cinema 4D versions into a single version, aptly titled Cinema 4D.

That’s right, no more Studio, Broadcast or BodyPaint. It all comes in one package at one price, and that pricing will now be subscription-based — but don’t worry, the online anxiety over this change seems to have been misplaced.

The cost of Cinema 4D R21 has dropped substantially, leading the way for what Maxon is calling its “3D for the Real World” initiative. Maxon wants it to be the tool you choose for your graphics needs.

If you plan on upgrading every year or two, the new subscription-based model seems to be a great deal:

– Cinema 4D subscription paid annually: $59.99/month
– Cinema 4D subscription paid monthly: $94.99/month
– Cinema 4D subscription with Redshift paid annually: $81.99/month
– Cinema 4D subscription with Redshift paid monthly: $116.99/month
– Cinema 4D perpetual pricing: $3,495 (upgradeable)

Maxon did mention that if you have previously purchased Cinema 4D, there will be subscription-based upgrade/crossgrade deals coming.

The Updates
Cinema 4D R21 includes some great updates that will be welcomed by many users, both new and experienced. The new Field Force dynamics object allows the use of dynamic forces in modeling and animation within the MoGraph toolset. Caps and bevels have an all-new system that not only allows the extrusion of 3D logos and text effects but also means caps and bevels are integrated on all spline-based objects.

Furthering Cinema 4D’s integration with third-party apps, there is an all-new Mixamo Control rig allowing you to easily control any Mixamo character. (If you haven’t checked out the models from Mixamo, you should. It’s a great way to find character rigs fast.)

An all-new Intel Open Image Denoise integration has been added to R21 in what seems like part of a rendering revolution for Cinema 4D. From the acquisition of Redshift to this integration, Maxon is expanding its third-party reach and doesn’t seem scared.

There is a new Node Space, which shows what materials are compatible with chosen render engines, as well as a new API available to third-party developers that allows them to integrate render engines with the new material node system. R21 has overall speed and efficiency improvements, with Cinema 4D supporting the latest processor optimizations from both Intel and AMD.

All this being said, my favorite update — or map toward the future — was actually announced last week. Unreal Engine added Cinema 4D .c4d file support via the Datasmith plugin, which is featured in the free Unreal Studio beta.

Today, Maxon is also announcing its integration with yet another game engine: Unity. In my opinion, the future lies in this mix of real-time rendering alongside real-world television and film production as well as gaming. With Cinema 4D, Maxon is bringing all sides to the table with a mix of 3D modeling, motion-graphics-building support, motion tracking, integration with third-party apps like Adobe After Effects via Cineware, and now integration with real-time game engines like Unreal Engine. Now I just have to learn it all.

Cinema 4D R21 will be available on both Mac OS and Windows on Tuesday, Sept. 3. In the meantime, watch out for some great SIGGRAPH presentations, including one from my favorite, Mike Winkelmann, better known as Beeple. You can find some past presentations on how he uses Cinema 4D to cover his “Everydays.”


Virtual Production Field Guide: Fox VFX Lab’s Glenn Derry

Just ahead of SIGGRAPH, Epic Games has published a resource guide called “The Virtual Production Field Guide” — a comprehensive look at how virtual production impacts filmmakers, from directors to the art department to stunt coordinators to VFX teams and more. The guide is workflow-agnostic.

The use of realtime game engine technology has the potential to impact every aspect of traditional filmmaking, and the trend is increasingly being used in productions ranging from films like Avengers: Endgame and the upcoming Artemis Fowl to TV series like Game of Thrones.

The Virtual Production Field Guide offers an in-depth look at different types of techniques from creating and integrating high-quality CG elements live on set to virtual location scouting to using photoreal LED walls for in-camera VFX. It provides firsthand insights from award-winning professionals who have used these techniques – including directors Kenneth Branagh and Wes Ball, producers Connie Kennedy and Ryan Stafford, cinematographers Bill Pope and Haris Zambarloukos, VFX supervisors Ben Grossmann and Sam Nicholson, virtual production supervisors Kaya Jabar and Glenn Derry, editor Dan Lebental, previs supervisor Felix Jorge, stunt coordinators Guy and Harrison Norris, production designer Alex McDowell, and grip Kim Heath.

As mentioned, the guide is dense with information, so we decided to run an excerpt to give you an idea of what it covers.

Glenn Derry

Here is an interview with Glenn Derry, founder and VP of visual effects at Fox VFX Lab, which offers a variety of virtual production services with a focus on performance capture. Derry is known for his work as a virtual production supervisor on projects like Avatar, Real Steel and The Jungle Book.

Let’s find out more.

How has performance capture evolved since projects such as The Polar Express?
In those earlier eras, there was no realtime visualization during capture. You captured everything as a standalone piece, and then you did what they called the director layout. After the fact, you would assemble the animation sequences from the motion data captured. Today, we’ve got a combo platter where we’re able to visualize in realtime.
When we bring a cinematographer in, he can start lining up shots with another device called the hybrid camera. It’s a tracked reference camera that he can handhold. I can immediately toggle between an Unreal overview and a camera view of that scene. The earlier process was minimal in terms of aesthetics. We did everything we could in MotionBuilder, and we made it look as good as it could. Now we can make a lot more mission-critical decisions earlier in the process because the aesthetics of the renders look a lot better.

What are some additional uses for performance capture?
Sometimes we’re working with a pitch piece, where the studio is deciding whether they want to make a movie at all. We use the capture stage to generate what the director has in mind tonally and how the project could feel. We might do a short little pitch piece, or, for something like Call of the Wild, we created 20 minutes covering three key scenes from the film to show the studio we could make it work.

The second the movie gets greenlit, we flip over into preproduction. Now we’re breaking down the full script and working with the art department to create concept art. Then we build the movie’s world out around those concepts.

We have our team doing environmental builds based on sketches. Or in some cases, the concept artists themselves are in Unreal Engine doing the environments. Then our virtual art department (VAD) cleans those up and optimizes them for realtime.

Are the artists modeling directly in Unreal Engine?
The artists model in Maya, Modo, 3ds Max, etc. — we’re not particular about the application as long as the output is FBX. The look development, which is where the texturing happens, is all done within Unreal. We’ll also have artists working in Substance Painter and it will auto-update in Unreal. We have to keep track of assets through the entire process, all the way through to the last visual effects vendor.
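To make that handoff concrete, here is a minimal sketch (not Fox VFX Lab’s actual pipeline code) of the Maya-to-FBX export step Derry describes, using Maya’s standard Python commands; the asset name and output path are hypothetical.

```python
# Minimal Maya-to-Unreal handoff sketch: export a selected asset as FBX.
# Asset name and path are hypothetical; assumes the stock fbxmaya plugin.
import maya.cmds as cmds

cmds.loadPlugin("fbxmaya", quiet=True)     # make sure the FBX exporter is loaded
cmds.select("heroProp_GEO")                # hypothetical asset name

cmds.file(
    "/projects/show/assets/heroProp.fbx",  # hypothetical output path
    force=True,
    options="v=0;",                        # default FBX option string
    type="FBX export",
    preserveReferences=True,
    exportSelected=True,
)
```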

How do you handle the level of detail decimation so realtime assets can be reused for visual effects?
The same way we would work on AAA games. We begin with high-resolution detail and then use combinations of texture maps, normal maps and bump maps. That allows us to get high-texture detail without a huge polygon count. There are also some amazing LOD [level of detail] tools built into Unreal, which enable us to take a high-resolution asset and derive something that looks pretty much identical unless you’re right next to it, but runs at a much higher frame rate.
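In the same spirit, here is a rough sketch of generating LODs by decimating a high-resolution mesh with Maya’s built-in polyReduce command. The mesh name and reduction percentages are assumptions, and in production the high-res detail would be baked into normal and bump maps as Derry describes.

```python
# Sketch of LOD generation: duplicate a high-res mesh and reduce triangles.
# Mesh name and percentages are made up; polyReduce ver=1 removes roughly the
# given percentage of geometry.
import maya.cmds as cmds

SOURCE = "heroProp_GEO"                    # hypothetical high-res mesh
for lod, percent in enumerate([50, 75, 90], start=1):
    dup = cmds.duplicate(SOURCE, name="{}_LOD{}".format(SOURCE, lod))[0]
    cmds.polyReduce(dup, ver=1, percentage=percent)
```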

Do you find there’s a learning curve for crew members more accustomed to traditional production?
We’re the team productions come to do realtime on live-action sets. That’s pretty much all we do. That said, it requires prep, and if you want it to look great, you have to make decisions. If you were going to shoot rear projection back in the 1940s or Terminator 2 with large rear projection systems, you still had to have all that material pre-shot to make it work.
It’s the same concept in realtime virtual production. If you want to see it look great in Unreal live on the day, you can’t just show up and decide. You have to pre-build that world and figure out how it’s going to integrate.

The visual effects team and the virtual production team have to be involved from day one. They can’t just be brought in at the last minute. And that’s a significant change for producers and productions in general. It’s not that it’s a tough nut to swallow, it’s just a very different methodology.

How does the cinematographer collaborate with performance capture?
There are two schools of thought: one is to work live with camera operators, shooting the tangible part of the action that’s going on, as the camera is an actor in the scene as much as any of the people are. You can choreograph it all out live if you’ve got the performers and the suits. The other version of it is treated more like a stage play. Then you come back and do all the camera coverage later. I’ve seen DPs like Bill Pope and Caleb Deschanel pick this right up.

How is the experience for actors working in suits and a capture volume?
One of the harder problems we deal with is eye lines. How do we assist the actors so that they’re immersed in this, and they don’t just look around at a bunch of gray box material on a set? On any modern visual effects movie, you’re going to be standing in front of a 50-foot-tall bluescreen at some point.

Performance capture is in some ways more actor-centric versus a traditional set because there aren’t all the other distractions in a volume such as complex lighting and camera setup time. The director gets to focus in on the actors. The challenge is getting the actors to interact with something unseen. We’ll project pieces of the set on the walls and use lasers for eye lines. The quality of the HMDs today is also excellent for showing the actors what they would be seeing.

How do you see performance capture tools evolving?
I think a lot of the stuff we’re prototyping today will soon be available to consumers, home content creators, YouTubers, etc. A lot of what Epic develops also gets released in the engine. Money won’t be the driver in terms of being able to use the tools; your creative vision will be.

My teenage son uses Unreal Engine to storyboard. He knows how to do fly-throughs and use the little camera tools we built — he’s all over it. As it becomes easier to create photorealistic visual effects in realtime with a smaller team and at very high fidelity, the movie business will change dramatically.

Something that used to cost $10 million to produce might be a million or less. It’s not going to take away from artists; you still need them. But you won’t necessarily need these behemoth post companies because you’ll be able to do a lot more yourself. It’s just like desktop video — what used to take hundreds of thousands of dollars’ worth of Flame artists, you can now do yourself in After Effects.

Do you see new opportunities arising as a result of this democratization?
Yes, there are a lot of opportunities. High-quality, good-looking CG assets are still expensive to produce and expensive to make look great. There are already stock sites like TurboSquid and CGTrader where you can purchase beautiful assets economically.

But with the final assembly and coalescing of environments and characters, there’s still a lot of need for talented people to do it effectively. I can see companies emerging out of that necessity. We spend a lot of time talking about assets because it’s the core of everything we do. You need to have a set to shoot on and you need compelling characters, which is why actors won’t go away.

What’s happening today isn’t even the tip of the iceberg. There are going to be 50 more big technological breakthroughs along the way. There’s tons of new content being created for Apple, Netflix, Amazon, Disney+, etc. And they’re all going to leverage virtual production.
What’s changing is previs’ role and methodology in the overall scheme of production.
While you might have previously conceived of previs as focused on the pre-production phase of a project and less integral to production, that conception shifts with a realtime engine. Previs is also typically a hands-off collaboration. In a traditional pipeline, a previs artist receives creative notes and art direction then goes off to create animation and present it back to creatives later for feedback.

In the realtime model, because the assets are directly malleable and rendering time is not a limiting factor, creatives can be much more directly and interactively involved in the process. This leads to higher levels of agency and creative satisfaction for all involved. This also means that instead of working with just a supervisor you might be interacting with the director, editor and cinematographer to design sequences and shots earlier in the project. They’re often right in the room with you as you edit the previs sequence and watch the results together in realtime.

Previs image quality has continued to increase in visual fidelity. This means a closer relationship between previs and final-pixel image quality. When the assets you develop as a previs artist are of a sufficient quality, they may form the basis of final models for visual effects. The line between previs and final will continue to blur.

The efficiency of modeling assets only once is evident to all involved. By spending the time early in the project to create models of a very high quality, post begins at the outset of a project. Instead of waiting until the final phase of post to deliver the higher-quality models, the production has those assets from the beginning. And the models can also be fed into ancillary areas such as marketing, games, toys and more.


Conductor boosts its cloud rendering with Amazon EC2

Conductor Technologies’ cloud rendering platform will now support Amazon Web Services (AWS) and Amazon Elastic Compute Cloud (Amazon EC2), bringing the virtual compute resources of AWS to Conductor customers. This new capability will provide content production studios working in visual effects, animation and immersive media access to new, secure, powerful resources that will allow them — according to the company — to quickly and economically scale render capacity. Amazon EC2 instances, including cost-effective Spot Instances, are expected to be available via Conductor this summer.

“Our goal has always been to ensure that Conductor users can easily access reliable, secure instances on a massive scale. AWS has the largest and most geographically diverse compute, and the AWS Thinkbox team, which is highly experienced in all facets of high-volume rendering, is dedicated to M&E content production, so working with them was a natural fit,” says Conductor CEO Mac Moore. “We’ve already been running hundreds of thousands of simultaneous cores through Conductor, and with AWS as our preferred cloud provider, I expect we’ll be over the million simultaneous core mark in no time.”

Simple to deploy and highly scalable, Conductor is equally effective as an off-the-shelf solution or customized to a studio’s needs through its API. Conductor’s intuitive UI and accessible analytics provide a wealth of insightful data for keeping studio budgets on track. Apps supported by Conductor include Autodesk Maya and Arnold; Foundry’s Nuke, Cara VR, Katana, Modo and Ocula; Chaos Group’s V-Ray; Pixar’s RenderMan; Isotropix’s Clarisse; Golaem; Ephere’s Ornatrix; Yeti; and Miarmy. Additional software and plug-in support is in progress and may be available upon request.

Some background on Conductor: it’s a secure cloud-based platform that enables VFX, VR/AR and animation studios to seamlessly offload rendering and simulation workloads to the public cloud. As the only rendering service that is scalable to meet the exact needs of even the largest studios, Conductor easily integrates into existing workflows, features an open architecture for customization, provides data insights and can implement controls over usage to ensure budgets and timelines stay on track.


Technicolor opens prepro studio in LA

Technicolor is opening a new studio in Los Angeles dedicated to creating a seamless pipeline for feature projects — from concept art and visualization through virtual production, production and into final VFX.

As new distribution models increase the demand for content, Technicolor Pre-Production will provide the tools, the talent and the space for creatives to collaborate from day one of their project – from helping set the vision at the start of a job to ensuring that the vision carries through to production and VFX. The result is a more efficient filmmaking process.

Technicolor Pre-Production studio is headed by Kerry Shea, an industry veteran with over 20 years of experience. She is no stranger to this work, having held executive positions at Method Studios, The Third Floor, Digital Domain, The Jim Henson Company, DreamWorks Animation and Sony Pictures Imageworks.

Kerry Shea

Credited on more than 60 feature films including The Jungle Book, Pirates of the Caribbean: Dead Men Tell No Tales and Guardians of the Galaxy Vol. 2, Shea has an extensive background in VFX and post production, as well as live action, animatronics and creature effects.

While the Pre-Production studio stands apart from Technicolor’s visual effects studios — MPC Film, Mill Film, MR. X and Technicolor VFX — it can work seamlessly in conjunction with one or any combination of them.

The Technicolor Pre-Production Studio will comprise five key departments:
– The Business Development Department will work with clients, from project budgeting to consulting on VFX workflows, to help plan and prepare projects for a smooth transition into VFX.
– The VFX Supervisors Department will offer creative supervision across all aspects of VFX on client projects, whether delivered by Technicolor’s studios or third-party vendors.
– The Art Department will work with clients to understand their vision – including characters, props, technologies, and environments – creating artwork that delivers on that vision and sets the tone for the rest of the project.
– The Virtual Production Department will partner with filmmakers to bridge the gap between them and VFX through the production pipeline. Working on the ground and on location, the department will deliver a fully integrated pipeline and shooting services with the flexibility of a small, manageable team — allowing critical players in the filmmaking process to collaborate, view and manipulate media assets and scenes across multiple locations as the production process unfolds.
– The Visualization Department will deliver visualizations that will assist in achieving on screen exactly what clients envisioned.

“With the advancements of tools and technologies, such as virtual production, filmmaking has reached an inflection point, one in which storytellers can redefine what is possible on-set and beyond,” says Shea. “I am passionate about the increasing role and influence that the tools and craft of visual effects can have on the production pipeline and the even more important role in creating more streamlined and efficient workflows that create memorable stories.”


EP Nick Litwinko leads Nice Shoes’ new long-form VFX arm

NYC-based creative studio Nice Shoes has hired executive producer Nick Litwinko to lead its new film and episodic VFX division. Litwinko, who has built his career on bringing a serial entrepreneur’s approach to the development of creative studios, will grow the division, recruiting talent to bring a boutique, collaborative approach to visual effects for long-form feature film and episodic projects.

Since coming on board with Nice Shoes, Litwinko and his team already have three long-form projects underway and will continue working to sign on new talent.

Litwinko launched his career at MTV during the height of its popularity, working as a senior producer for MTV Promos/Animation before stepping up as executive producer/director for MTV Commercials. His decade-long tenure led him to launch his own company, Rogue Creative, where he served dual roles as EP and director and oversaw a wide range of animated, live-action and VFX-driven branded campaigns. He was later named senior producer for Psyop New York before launching the New York office of Blind. He moved on to join the team at First Avenue Machine as executive producer/head of production. He was then recruited to join Shooters Inc. as managing director, building the company’s NYC offices and playing an instrumental part in its strategic rebrand to Alkemy X.

Behind the Title: Neko founder Lirit Rosenzweig Topaz

NAME: Lirit Rosenzweig Topaz

COMPANY: Burbank’s Neko Productions

CAN YOU DESCRIBE YOUR COMPANY?
We are an animation studio working on games, TV, film, digital, AR, VR and promotional projects in a variety of styles, including super-cartoony and hyper-realistic CG and 2D. We believe in producing the best product for the budget, and giving our clients and partners peace of mind.

WHAT’S YOUR JOB TITLE?
Founder/Executive Producer

WHAT DOES THAT ENTAIL?
I established the company and built it from scratch. I am the face of the company and the force behind it. I am in touch with our clients and potential clients to make sure all are getting the best service possible.

Dr. Ruth doc

I am a part of the hiring process, making sure our team meets the standards of creativity, communication ability, responsibility and humanness. It is important for me to make sure all of our team members are great human beings, as well as being amazing and talented artists. I oversee all projects and make sure the machine is working smoothly to everyone’s satisfaction.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I am always looking at the big picture, but at the micro level as well. I need to be aware of so many smaller details, making sure everything is running smoothly for both sides: employees and clients.

WHAT HAVE YOU LEARNED OVER THE YEARS ABOUT RUNNING A BUSINESS?
I have learned that it is a roller coaster and one should enjoy the ride, and that one day doesn’t look like another day. I learned that if you are true to yourself, stick to your objectives and listen to your inner voice while doing a great job, things will work out. I always remember we are all human beings; you can succeed as a business person and have people and clients love working with you at the same time.

A LOT OF IT MUST BE ABOUT TRYING TO KEEP EMPLOYEES AND CLIENTS HAPPY. HOW DO YOU BALANCE THAT?
For sure! That is the key for everything. When employees are happy, they give their heart and soul. As a result, the workplace becomes a place they appreciate, not just a place they need to go to earn a living. Happy clients mean that you did your job well. I balance it by checking in with my team to make sure all is well by asking them to share with me any concerns they may have. At the end of the day, when the team is happy, they do a good job, and that results in satisfied clients.

WHAT’S YOUR FAVORITE PART OF THE JOB?
It is important to me that everybody comes to work with a smile on their face and that we are a united team with the goal of creating great projects. This usually results in thinking out of the box and looking for ways to be efficient, to push the envelope and to make sure creativity is always at the highest level. I also enjoy working on projects similar to ones we’ve done in the past while taking on projects and styles we haven’t tried before.

Dr. Ruth doc

I like the fact that I am a woman running a company. Being a woman allows me to juggle well, be on top of a few things at the same time and still be caring and loving.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I have two. One is the beginning of the day when I know I have a full day ahead of me to create work, influence, achieve and do many things. Two is the evening, when I am back home with my family.

CAN YOU NAME SOME RECENT CLIENTS?
Sega, Wayforward and the recent Ask Dr Ruth documentary.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My iPhone, my iPad and my computer.

Bipolar Studio gives flight to Uber Air campaign

Tech company Uber has announced its latest transportation offering — an aerial ride-sharing service. With plans to begin service within cities as soon as 2023, Uber Elevate launched its marketing campaign today at the Uber Elevate Summit. Uber Elevate picked LA-based creative and production boutique Bipolar Studio to create its integrated campaign, which includes a centerpiece film, an experiential VR installation, print stills and social content.

The campaign’s centerpiece film, Airborne, features stunning imagery that is 100 percent CGI. Beginning on an aerial mass transit platform at Mission Bay HQ, the flight travels across the city of San Francisco, above landmarks like the arena where the Warriors play and the Transamerica Tower. Lights blink on the ground below and buildings appear as silhouettes in the far background. The Uber Air flight lands in Santa Clara on Uber’s skytower with a total travel time of 18 minutes — compared to an hour or more driving through rush-hour traffic. Multi-floor docking will allow Uber Air to land up to 1,000 eVTOLs (those futuristic-looking vehicles that hover, take off and land vertically) per hour.

At the Uber Elevate Summit, attendees had the opportunity to experience a full flight inside a built-out physical cabin via a high-fidelity four-minute VR installation. After the Summit, the installation will travel to Uber events globally. Still images and social media content will further extend the campaign’s reach.

Uber Elevate’s head of design, John Badalamenti, explains, “We worked shoulder-to-shoulder with Bipolar Studio to create an entirely photoreal VR flight experience, detailed at a high level of accuracy from the physics of flight and advanced flight patterns, down to the dust on the windows. This work represents a powerful milestone in communicating our vision through immersive storytelling and creates a foundation for design iteration that advances our perspective on the rider experience. Bipolar took things a step beyond that as well, creating Airborne, our centerpiece 2D film, enabling future Uber Air passengers to take in the breadth and novelty of the journey outside the cabin from the perspective of the skyways.”

Bipolar developed a bespoke AI-fueled pipeline that could capture, manage and process miles and miles of real-world data, then faithfully mirror the actual terrain, buildings, traffic and scattered people in cities. They then reused the all-digital assets, which gave them full freedom to digitally scout the city for “Airborne” locations. When shooting the spot, as with live-action production, they were able to place the CG camera anywhere in the digital city to capture the aircraft. This gave the team a lot of room to play.

For the animation work, they built a new system in Side Effects Houdini where the flight of the vehicle wasn’t animated but rather simulated with real physics. The team coded a custom plugin that lets them punch in numbers for the aircraft’s speed, weight and direction, then have the AI do everything else. This allowed them to see it turn on the flight path, respond to wind turbulence and even oscillate when taking off. It also allowed them to easily iterate, change variables and get precise dynamics. They could then watch the simulations play out and see everything in realtime.
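Bipolar’s Houdini plugin is proprietary, but a toy point-mass simulation illustrates the simulate-rather-than-animate idea: punch in a speed, weight and destination, integrate forces every frame, and let random turbulence perturb the path. All values below are invented for illustration.

```python
# Toy flight simulation: steer a point mass toward a target at a set speed
# while random "wind" forces knock it around. Purely illustrative numbers.
import numpy as np

dt = 1.0 / 24                                  # one frame at 24fps (s)
mass, cruise_speed = 900.0, 67.0               # kg, m/s (roughly 150 mph)
pos = np.array([0.0, 0.0, 300.0])              # start position (m)
vel = np.zeros(3)
target = np.array([60000.0, 0.0, 300.0])       # hypothetical destination

rng = np.random.default_rng(0)
for _ in range(int(18 * 60 / dt)):             # ~18 minutes of flight
    heading = target - pos
    heading /= np.linalg.norm(heading)
    thrust = mass * (cruise_speed * heading - vel) / 2.0  # simple speed controller
    turbulence = rng.normal(0.0, 40.0, size=3)            # random wind force (N)
    vel += (thrust + turbulence) / mass * dt
    pos += vel * dt

print("final position (m):", pos.round(1))
```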

City Buildings
To bring this to life, Bipolar had to digitize San Francisco in its entirety. They spent a lot of time creating a pipeline and built the entire city from miles and miles of real data that matched the terrain and buildings precisely. They then detailed the buildings and used AI to generate moving traffic — and even people, if you can spot them — to fill the streets. Some of the areas required a LIDAR scan for rebuilding. The end result is an incredibly detailed digital recreation of San Francisco. Each of the houses is a full model with windows, walls and doors. Each of the lights in the distance is a car. Even Alcatraz is there. They took the same approach to Santa Clara.

Data Management
Bipolar rendered out 32-bit EXRs in 4K, with each frame having multiple layers for maximum control by the client in the comp stage. That left them with a ton of data and a huge number of files to deal with. Thankfully, it wasn’t the studio’s first time dealing with massive amounts of data — its internal infrastructure is already set up to handle a high volume of data being worked on simultaneously. In certain cases, they were also able to use the SSDs on their servers for faster rendering of comps and pre-comps.
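Some back-of-envelope math shows why the file volume is a challenge. The figures below are illustrative assumptions (a UHD raster, RGBA float, eight layers per frame), not Bipolar’s actual deliverable spec.

```python
# Rough uncompressed frame-size math for multilayer 32-bit EXRs.
width, height = 3840, 2160      # one common "4K" raster (assumed)
channels, bytes_per = 4, 4      # RGBA at 32-bit float
layers = 8                      # assumed layer/AOV count per frame

frame_bytes = width * height * channels * bytes_per * layers
print(frame_bytes / 2**20, "MiB per frame")        # ~1012 MiB
print(frame_bytes * 24 / 2**30, "GiB per second")  # ~23.7 GiB at 24fps

# One layer is 3840 * 2160 * 16 bytes, about 127 MiB, so eight layers approach
# 1 GiB per frame before EXR compression (ZIP/PIZ typically shrinks this a lot).
```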

Review: Red Giant’s VFX Suite plugins

By Brady Betzel

If you have ever watched After Effects tutorials, you are bound to have seen the people who make up Red Giant. There is Aharon Rabinowitz, who you might mistake for a professional voiceover talent; Seth Worley, who can combine a pithy sense of humor and over-the-top creativity seamlessly; and my latest man-crush Daniel Hashimoto, better known as “Action Movie Dad” of Action Movie Kid.

In these videos, these talented pros show off some amazing things they created using Red Giant’s plugin offerings, such as the Trapcode Suite, the Magic Bullet Suite, Universe and others.

Now, Red Giant is trying to improve your visual effects workflow even further with the new VFX Suite for Adobe After Effects (although some of the plugins work in Adobe Premiere as well).

The new VFX Suite is a compositing-focused toolkit that will complement many aspects of your work, from greenscreen keying to motion graphics compositing with tools such as Video Copilot’s Element 3D. Whether you want to seamlessly composite light and atmospheric fog with fewer pre-composites, add a reflection to an object easily or even just have a better greenscreen keyer, the VFX Suite will help.

The VFX Suite includes Supercomp, Primatte Keyer 6, King Pin Tracker, Spot Clone Tracker, Optical Glow, Chromatic Displacement, Knoll Light Factory 3.1, Shadow and Reflection. The VFX Suite is priced at $999 unless you qualify for the academic discount, which brings it down to $499.

In this review, I will go over each of the plugins within the VFX Suite. Up first will be Primatte Keyer 6.

Overall, I love Red Giant’s GUIs. They seem to be a little more intuitive, allowing me to work more “creatively” as opposed to spending time figuring out technical issues.

I asked Red Giant what makes VFX Suite so powerful and Rabinowitz, head of marketing for Red Giant and general post production wizard, shared this: “Red Giant has been helping VFX artists solve compositing challenges for over 15 years. For VFX suite, we looked at those challenges with fresh eyes and built new tools to solve them with new technologies. Most of these tools are built entirely from scratch. In the case of Primatte Keyer, we further enhanced the UI and sped it up dramatically with GPU acceleration. Primatte Keyer 6 becomes even more powerful when you combine the keying results with Supercomp, which quickly turns your keyed footage into beautifully comped footage.”

Primatte Keyer 6
Primatte is a chromakey/single-color keying technology used in tons of movies and television shows. I got familiar with Primatte when BorisFX included it in its Continuum suite of plugins. Once I used Primatte and learned the intricacies of extracting detail from hair and even just using their auto-analyze function, I never looked back. On occasion, Primatte needs a little help from others, like Keylight, but I can usually pull easy and tough keys all within one or two instances of Primatte.

If you haven’t used Primatte before, you essentially pick your key color by drawing a line or rectangle around the color, adjust the detail and opacity of the matte, and — boom — you’re done. With Primatte 6 you now also get Core Matte, a new feature that draws an inside mask automatically while allowing you to refine the edges — this is a real time-saver when doing hundreds of interview greenscreen keys, especially when someone decides to wear a reflective necklace or piece of jewelry that usually requires an extra mask and tracking. Primatte 6 also adds GPU optimization, gaining even more preview and rendering speed than previous versions.
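Primatte’s polyhedral color analysis is proprietary, but a plain color-distance key illustrates the basic pick-a-color, get-a-matte mechanic described above. The thresholds and the stand-in frame are arbitrary.

```python
# Not Primatte: a minimal color-distance chroma key for illustration only.
import numpy as np

def simple_chroma_key(rgb, key_color, inner=0.15, outer=0.35):
    """rgb: float image (H, W, 3) in 0..1. Returns alpha (H, W), 0 = keyed out."""
    dist = np.linalg.norm(rgb - np.asarray(key_color), axis=-1)
    # Soft edge: fully transparent inside `inner`, fully opaque past `outer`.
    return np.clip((dist - inner) / (outer - inner), 0.0, 1.0)

frame = np.random.rand(1080, 1920, 3)                        # stand-in footage
matte = simple_chroma_key(frame, key_color=(0.1, 0.8, 0.2))  # greenscreen-ish
```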

Supercomp
If you are an editor like me — who knows enough to be dangerous when compositing and working within After Effects — sometimes you just want (or need) a simpler interface without having to figure out all the expressions, layer order, effects and compositing modes to get something to look right. And if you are an Avid Media Composer user, you might have encountered the Paint Effect Tool, which is one of those one-for-all plugins. You can paint, sharpen, blur and much more from inside one tool, much like Supercomp. Think of the Supercomp interface as a Colorista or Magic Bullet Looks-type interface, where you can work with composite effects such as fog, glow, lights, matte chokers, edge blend and more inside of one interface with much less pre-composing.

The effects are all GPU-accelerated and are context-aware. Supercomp is a great tool to use with your results from the Primatte Keyer, adding in atmosphere and light wraps quickly and easily inside one plugin instead of multiple.

King Pin Tracker and Spot Clone Tracker
As an online editor, I am often tasked with sign replacements, paint-outs of crew or cameras in shots, as well as other clean-ups. If I can’t accomplish what I want with BorisFX Continuum while using Mocha inside of Media Composer or Blackmagic’s DaVinci Resolve, I will jump over to After Effects and try my hand there. I don’t practice as much corner pinning as I would like, so I often forget the intricacies of tracking in Mocha and copying Corner Pin or Transform data to After Effects. This is where the new King Pin Tracker can ease any difficulties, especially when you’re corner pinning relatively simple objects but still need to keyframe positions or perform a planar track without using multiple plugins or applications.
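Under the hood, a corner pin is a homography: the four corners of the insert are mapped onto four tracked points. As a minimal illustration (not Red Giant’s code), the 3x3 transform can be recovered by solving a small linear system; the corner coordinates below are hypothetical tracking results.

```python
# Solve the 3x3 homography H that maps four source corners to four tracked
# destination corners (the math behind a corner pin). Coordinates are made up.
import numpy as np

def corner_pin_homography(src, dst):
    """src, dst: four (x, y) pairs. Returns H with dst ~ H @ [x, y, 1]."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

sign = [(0, 0), (1920, 0), (1920, 1080), (0, 1080)]         # insert corners
track = [(412, 303), (1485, 341), (1451, 905), (389, 862)]  # tracked corners
H = corner_pin_homography(sign, track)
```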

The Spot Clone Tracker is exactly what it says it is. Much like Resolve’s Patch Replace, Spot Clone Tracker allows you to track one area while replacing that same area with another area from the screen. In addition, Spot Clone Tracker has options to flip vertical, flip horizontal, add noise, and adjust brightness and color values. For such a seemingly simple tool, the Spot Clone Tracker is the dark horse in this race. You’d be surprised how many clone and paint tools don’t have adjustments like flipping and flopping or brightness changes. This is a great tool for quick dead-pixel fixes and painting out GoPros when you don’t need to mask anything out. (Although there is an option to “Respect Alpha.”)

Optical Glow and Knoll Light Factory 3.1
Have you ever been in an editing session that needed police lights amplified or a nice glow on some text but the stock plugins just couldn’t get it right? Optical Glow will solve this problem. In another amazing, simple-yet-powerful Red Giant plugin, Optical Glow can be applied and gamma-adjusted for video, log and linear levels right off the bat.

From there you can pick an inner tint, outer tint and overall glow color via the Colorize tool and set the vibrance. I really love the Falloff, Highlight Rolloff and Highlights Only functions, which allow you to fine-tune the glow and control exactly how much of the image it affects. It’s so simple that it is hard to mess up, but the results speak for themselves and render out quicker than the other glow plugins I use.
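Optical Glow’s internals aren’t public, but the classic threshold-blur-add recipe below shows roughly what a glow with highlight rolloff is doing; every parameter is an arbitrary placeholder.

```python
# Not Optical Glow: the textbook threshold-blur-add glow, for illustration.
import numpy as np
from scipy.ndimage import gaussian_filter

def simple_glow(rgb, threshold=0.8, radius=25.0, intensity=0.6):
    """rgb: float image (H, W, 3) in 0..1."""
    highlights = np.clip(rgb - threshold, 0.0, None)      # keep bright areas only
    halo = gaussian_filter(highlights, sigma=(radius, radius, 0))
    return np.clip(rgb + intensity * halo, 0.0, 1.0)      # additive glow

frame = np.random.rand(1080, 1920, 3)
glowed = simple_glow(frame)
```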

Knoll Light Factory has been newly GPU-accelerated in Version 3.1 to decrease render times when using its more than 200 presets or when customizing your own lens flares. Optical Glow and Knoll Light Factory really complement each other.

Chromatic Displacement
Since watching an Andrew Kramer tutorial covering displacement, I’ve always wanted to make a video that showed huge seismic blasts but didn’t really want to put the time into properly making chromatic displacement. Lucky for me, Red Giant has introduced Chromatic Displacement! Whether you want to make raindrops appear on the camera lens or add a seismic blast from a phaser, Chromatic Displacement will allow you to offset your background with a glass-, mirror- or even heatwave-like appearance quickly. Simply choose the layer you want to displace from, adjust parameters such as displacement amount, spread and spread chroma, and choose whether you want to render using the CPU or GPU.
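Conceptually, chromatic displacement warps each color channel through the same displacement map at slightly different strengths, so edges fringe the way they do through real glass. Here is a minimal sketch of that idea (not Red Giant’s implementation; parameters are invented).

```python
# Illustrative chromatic displacement: offset R, G and B through one map
# with per-channel scales so the channels separate slightly at the edges.
import numpy as np
from scipy.ndimage import map_coordinates

def chromatic_displace(rgb, disp, amount=12.0, spread_chroma=1.15):
    """rgb: (H, W, 3) float image; disp: (H, W) displacement map in -1..1."""
    h, w, _ = rgb.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    out = np.empty_like(rgb)
    for c in range(3):                       # displace each channel a bit more
        scale = amount * spread_chroma ** c
        coords = [yy + disp * scale, xx + disp * scale]
        out[..., c] = map_coordinates(rgb[..., c], coords, order=1, mode="nearest")
    return out
```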

Shadow and Reflection
Red Giant packs Shadow and Reflection plugins into the VFX Suite as well. The Shadow plugin not only makes it easy to create shadows in front of or behind an object based on alpha channel or brightness, but, best of all, it gives you an easy way to identify the point where the shadow should bend. The Shadow Bend option lets you identify where the bend exists, what color the bend axis should be, the type and size of the seam, and even allows for motion blur.

The Reflection plugin is very similar to the Shadow plugin and produces quick and awesome reflections without any After Effects wizardry. Just like Shadow, the Reflection plugin allows you to identify a bend. Plus, you can adjust the softness of the reflection quickly and easily.

Summing Up
In the end, Red Giant always delivers great and useful plugins. VFX Suite is no different, and the only downside some might point to is the cost. While $999 is expensive, if compositing is a large portion of your business, the efficiency you gain might outweigh the cost.

Much like Shooter Suite does for online editors, Trapcode Suite does for VFX masters and Universe does for jacks of all trades, VFX Suite will take all of your ideas and help them blend seamlessly into your work.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Quick Chat: Sinking Ship’s Matt Bishop on live-action/CG series

By Randi Altman

Toronto’s Sinking Ship Entertainment is a production, distribution and interactive company specializing in children’s live-action and CGI-blended programming. The company has 13 Daytime Emmys and a variety of other international awards on its proverbial mantel. Sinking Ship has over 175 employees across all its divisions, including its VFX and interactive studio.

Matt Bishop

Needless to say, the company has a lot going on. We decided to reach out to Matt Bishop, founding partner at Sinking Ship, to find out more.

Sinking Ship produces, creates visual effects and posts its own content, but are you also open to outside projects?
Yes, we do work in co-production with other companies or contract our post production services to shows that are looking for cutting-edge VFX.

Have you always created your own content?
Sinking Ship has developed a number of shows and feature films, as well as worked in co-production with production companies around the world.

What came first, your post or your production services? Or were they introduced in tandem?
Both sides of the company evolved together as a way to push our creative visions. We started acquiring equipment on our first series in 2004, and we always look for new ways to push the technology.

Can you mention some of your most recent projects?
Some of our current projects include Dino Dana (Season 4), Dino Dana: The Movie, Endlings and Odd Squad Mobile Unit.

What is your typical path getting content from set to post?
We have been working with Red cameras for years, and we were the first company in Canada to shoot in 4K over a decade ago. We shoot a lot of content, so we create backups in the field before the media is sent to the studio.

Dino Dana

You work with a lot of data. How do you manage and keep all of that secure?
Backups, lots of backups. We use a massive LTO-7 tape robot, and we have over 2PB of backup storage on top of that. We recently added Qumulo to our workflow to ensure the most secure method possible.

What do you use for your VFX work? What about your other post tools?
We use a wide range of software, but our main tools in our creature department are Pixologic ZBrush and Foundry Mari, with all animation happening inside Autodesk Maya.

We also have a large renderfarm to handle the volume of shots, and our render engine of choice is Arnold, which is now an Autodesk product. In post, we use an Adobe Creative Cloud pipeline with 4K HDR color grading happening in DaVinci Resolve. Qumulo is going to be a welcome addition as we continue to grow and our outputs become more complex.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Axis provides 1,000 VFX shots for the TV series Happy!

UK-based animation and visual effects house Axis Studios has delivered 1,000 shots across 10 episodes on the second series of the UCP-produced hit Syfy show Happy!.

Based on Grant Morrison and Darick Robertson’s graphic novel, Happy! follows alcoholic ex-cop turned hitman Nick Sax (Christopher Meloni), who teams up with imaginary unicorn Happy (voiced by Patton Oswalt). In the second season, the action moves from Christmastime to “the biggest holiday rebranding of all time” and a plot to “make Easter great again,” courtesy of last season’s malevolent child-kidnapper, Sonny Shine (Christopher Fitzgerald).

Axis Studios, working across its three creative sites in Glasgow, Bristol, and London, collaborated with executive producer and director Brian Taylor and showrunner Patrick Macmanus to raise the bar on the animation of the fully CG character. The studio also worked on a host of supporting characters, including a “chain-smoking man-baby,” a gimp-like Easter Bunny and even a Jeff Goldblum-shaped cloud. Alongside the extensive animation work, the team’s VFX workload greatly increased from the first season — including two additional episodes, creature work, matte painting, cloud simulations, asset building and extensive effects and clean-up work.

Building on the success of the first season, the 100-person team of artists further developed the animation of the lead character, Happy, improving the rig, giving him more nuanced emotions and continually working to integrate him more fully into the real-world environments.

RPS editors talk workflow, creativity and Michelob Ultra’s Robots

By Randi Altman

Rock Paper Scissors (RPS) is a veteran editing house specializing in commercials, music videos and feature films. Founded by Oscar-winning editor Angus Wall (The Social Network, The Girl With the Dragon Tattoo), RPS has a New York office as well as a main Santa Monica location that it shares with sister companies A52, Elastic and Jax.

We recently reached out to RPS editor Biff Butler and his assistant editor Alyssa Oh (both Adobe Premiere users) to find out about how they work, their editing philosophy and their collaboration on the Michelob Ultra Robots spot that premiered during this year’s Super Bowl.

Let’s find out more about their process…

Rock Paper Scissors, Santa Monica

What does your job entail?
Biff Butler: Simply put, someone hands us footage (and a script) and we make something out of it. The job is to act as cheerleader for those who have been carrying the weight of a project for weeks, maybe months, and have just emerged from a potentially arduous shoot.

Their job is to then sell the work that we do to their clients, so I must hold onto and protect their vision, maintaining that initial enthusiasm they had. If the agency has written the menu, and the client has ordered the meal, then a director is the farmer and the editor the cook.

I frequently must remind myself that although I might have been hired because of my taste, I am still responsible for feeding others. Being of service to someone else’s creative vision is the name of the game.

What’s your workflow like?
Alyssa Oh: At the start of the project, I receive the footage from production and organize it to Biff’s specs. Once it’s organized, I pass it off and he watches all the footage and assembles an edit. Once we get deeper into the project, he may seek my help in other aspects of the edit, including sound design, pulling music, creating graphics, temporary visual effects and creating animations. At the end of the project, I prep the edits for finishing: color, mix and conform.

What would surprise people about being an editor?
Oh: When I started, I associated editorial with “footage.” It surprised me that, aside from editing, we play a large part in decision-making for music and developing sound design.

Butler: I’ve heard the editor described as the final writer in the process. A script can be written and rewritten, but a lot happens in the edit room once shots are on a screen. The reality of seeing what actually fits within the allotted time that the format allows for can shape decisions as can the ever-evolving needs of the client in question. Another aspect we get involved with is the music — it’s often the final ingredient to be considered, despite how important a role it plays.

Robots

What do you enjoy the most about your job?
Oh: By far, my favorite part is the people that I work with. We spend so much time together; I think it’s important to not just get along, but to also develop close relationships. I’m so grateful to work with people who I look forward to spending the day with.

At RPS, I’ve gained so many great friendships over the years and learned a lot from everyone around me — not just in the aspect of editorial, but also from the people at companies that work alongside us — A52, Elastic and Jax.

Butler: At the risk of sounding corny, what turns me on most is collaboration and connection with other creative talents. It’s a stark contrast to the beginning of the job, which I also very much adore — when it’s just me and my laptop, watching footage and judging shots.

Usually we get a couple days to put something together on our own, which can be a peaceful time of exploration and discovery. This is when I get to formulate my own opinions and points of view on the material, which is good to establish but also is something I must be ready to let go of… or at least be flexible with. Once the team gets involved in the room — be it the agency or the director — the real work begins.

As I said before, being of service to those who have trusted me with their footage and ideas is truly an honorable endeavor. And it’s not just those who hire us, but also talents we get to join forces with on the audio/music side, effects, etc. On second thought, the free supply of sparkly water we have on tap is probably my favorite part. It’s all pretty great.

What’s the hardest part of the job?
Oh: For me, the hardest part of our job is the “peaks and valleys.” In other words, we don’t have a set schedule, and with each project our work hours will vary.

Robots

Butler: I could complain about the late nights or long weekends or unpredictable schedules, but those are just a result of being employed, so I count myself fortunate that I even get to moan about that stuff. Perhaps one of the trickiest parts is in dealing with egos, both theirs and mine.

Inevitably, I serve as mediator between a creative agency and the director they hired, and the client who is paying for the whole project. Throw into the mix my own sense of ownership that develops, and there’s a silly heap of egos to manage. It’s a joy, but not everyone can be fully satisfied all the time.

If you couldn’t edit for a living, what would you do?
Oh: I think I would definitely be working in a creative field or doing something that’s hands-on (I still hope to own a pottery studio someday). I’ve always had a fondness for teaching and working with kids, so perhaps I’d do something in the teaching field.

Butler: I would be pursuing a career in directing commercials and documentaries.

Did you know from a young age that you would be involved in this industry?
Oh: In all honesty, I didn’t know that this would be my path. Originally, I wanted to go into broadcast, specifically sports broadcasting. I had an interest in television production since high school and learned a bit about editing along the way.

However, I had applied to work at RPS as a production assistant shortly after graduating and quickly gained interest in editing and never looked back!

Butler: I vividly recall seeing the movie Se7en in the cinema and being shell-shocked by the opening title sequence. The feeling I was left with was so raw and unfiltered, I remember thinking, “That is what I want to do.” I wasn’t even 100 percent sure what that was. I knew I wanted to put things together! It wasn’t even so much a mission to tell stories, but to evoke emotion — although storytelling is most often the way to get there.

Robots

At the same time, I was a kid who grew up under the spell of some very effective marketing campaigns — from Nike, Jordan, Gatorade — and knew that advertising was a field I would be interested in exploring when it came time to find a real job.

As luck would have it, in 2005 I found myself living in Los Angeles after the rock band I was in broke up, and I walked over to a nearby office an old friend of mine had worked at, looking for a job. She’d told me it was a place where editors worked. Turns out, that place was where many of my favorite ads were edited, and it was founded by the guy who put together that Se7en title sequence. That place was Rock Paper Scissors, and it’s been my home ever since.

Can you guys talk about the Michelob Ultra Robots spot that first aired during the Super Bowl earlier this year? What was the process like?
Butler: The process involved a lot of trust, as we were all looking at frames that didn’t have any of the robots in them — they were still being created in CG — so when presenting edits, we would have words floating on screen reading “Robot Here” or “Robot Runs Faster Now.”

It says a lot about the agency that it could hold the client’s hand through our rough edit and have them buy off on what looked like a fairly empty edit. Working with director Dante Ariola at the start of the edit helped to establish the correct rhythm and intention of what would need to be conveyed in each shot. Holding on to those early decisions was paramount, although we clearly had enough human performances to rest our hats on, too.

Was there a particular cut that was more challenging than the others?
Butler: The final shot of the spot was a battle I lost. I’m happy with the work, especially the quality of human reactions shown throughout. I’m also keen on the spot’s simplicity. However, I had a different view of how the final shot would play out — a closer shot would have depicted more emotion and yearning in the robot’s face, whereas where we landed left the robot feeling more defeated — but you can’t win them all.

Robots

Did you feel extra stress knowing that the Michelob spot would air during the Super Bowl?
Butler: Not at all. I like knowing that people will see the work and having a firm airdate reduces the likelihood that a client can hem and haw until the wheels fall off. Thankfully there wasn’t enough time for much to go wrong!

You’ve already talked about doing more than just editing. What are you often asked to do in addition to just editing?
Butler: Editors are really also music supervisors. There can be a strategy to it — knowing when to present a track you really want to sell through. But really, it’s that level of trust between myself and the team that can lead to some good discoveries. As I mentioned before, we are often tasked with simply providing a safe and nurturing environment for people to create.

Truly, anybody can sit and hit copy and paste all day. I think it’s my job to hold on to that initial seed or idea or vision, and protect it through the final stages of post production. This includes ensuring the color correction, finishing and sound mix all reflect intentions established days or weeks ahead when we were still fresh enough in our thinking to be acting on instinct.

I believe that as creative professionals, we are who we are because of our instincts, but as a job drags on and on, we are forced to act more with our heads than our hearts. There is a stamina that is required, making sure that what ends up on the TV is representative of what was initially coming out of that instinctual artistic expression.

Does your editing hat change depending on the type of project you are cutting?
Butler: No, not really. An edit is an edit. All sessions should involve laughter and seriousness and focus and moments to unwind and goof off. Perhaps the format will determine the metaphorical hat, or to be more specific, the tempo.

Selecting shots for a 30- or 60-second commercial is very different than chasing moments for a documentary or long-form narrative. I’ll often remind myself to literally breathe slower when I know a shot needs to be long, and the efficiency with which I am telling a story is of less importance than the need to be absorbed in a moment.

Can you name some of your favorite technology?
Oh: My iPhone and all the apps that come with it; my Kindle, which allows me to be as indecisive as I want when it comes to picking a book and traveling; my laptop; and noise-cancelling headphones!

Butler: The carbonation of water, wireless earphones and tiny solid-state hard drives.

Zoic in growth mode, adds VFX supervisor Wanstreet, ups Overstrom

VFX house Zoic Studios has made changes to its creative team, adding VFX supervisor Chad Wanstreet to its Culver City studio and promoting Nate Overstrom to creative director in its New York studio.

Wanstreet has nearly 15 years of experience in visual effects, working across series, feature film, commercial and video game projects. He comes to Zoic from FuseFX, where he worked on television series including NBC’s Timeless, Amazon Prime’s The Tick, ABC’s Marvel’s Agents of S.H.I.E.L.D. and Starz’s Emmy-winning series Black Sails.

Overstrom has spent over 15 years of his career with Zoic, working across the Culver City and New York City studios, earning two Emmy nominations and working on top series including Banshee, Maniac and Iron Fist. He is currently the VFX supervisor on Cinemax’s Warrior.

The growth of the creative department is accompanied by the promotion of several Zoic lead artists to VFX supervisors, with Andrew Bardusk, Matt Bramante, Tim Hanson and Billy Spradlin stepping up to lead teams on a wide range of episodic work. Bardusk just wrapped Season 4 of DC’s Legends of Tomorrow, Bramante just wrapped Noah Hawley’s upcoming feature film Lucy in the Sky, Hanson just completed Season 2 of Marvel’s Cloak & Dagger, and Spradlin just wrapped Season 7 of CW’s Arrow.

This news comes on the heels of a busy start of the year for Zoic across all divisions, including the recent announcement of the company’s second development deal — optioning New York Times best-selling author Michael Johnston’s fantasy novel Soleri for feature film and television adaptation. Zoic also added Daniel Cohen as executive producer, episodic and series in New York City, and Lauren F. Ellis as executive producer, episodic and series in Culver City.

Main Image Caption: (L-R) Chad Wanstreet and Nate Overstrom

UK’s Jellyfish adds virtual animation studio and Kevin Spruce

London-based visual effects and animation studio Jellyfish Pictures is opening a new virtual animation facility in Sheffield. The new site is the company’s fifth studio in the UK, in addition to its established studios in Fitzrovia, Central London; Brixton, South London; and Oval, South London. This addition is no surprise considering Jellyfish created one of Europe’s first virtual VFX studios back in 2017.

With no hardware housed onsite, Jellyfish Pictures’ Sheffield studio — situated in the city center within the Cooper Project Complex — will operate in a completely PC-over-IP environment. With all technology and pipeline housed in a centrally based co-location, the studio is able to virtualize its distributed workstations through Teradici’s remote visualization solution, allowing for total flexibility and scalability.

The Sheffield site will sit on the same logical LAN as the other four studios, providing access to the company’s software-defined storage (SDS) from Pixit Media, enabling remote collaboration and support for flexible working practices. With the rest of Jellyfish Pictures’ studios all TPN-accredited, the Sheffield studio will follow in their footsteps, using Pixit Media’s container solution within PixStor 5.

The innovative studio will be headed up by Jellyfish Pictures’ newest appointment, animation director Kevin Spruce. With a career spanning over 30 years, Spruce joins Jellyfish from Framestore, where he oversaw a team of 120 as the company’s head of animation. During his time at Framestore, Spruce worked as animation supervisor on feature films such as Fantastic Beasts and Where to Find Them, The Legend of Tarzan and Guardians of the Galaxy. Prior to his 17-year stint at Framestore, Spruce held positions at Canadian animation company Bardel Entertainment and Spielberg-helmed feature animation studio Amblimation.

Jellyfish Pictures’ northern presence will start off with a small team of animators working on the company’s original animation projects, with a view to expanding the team and taking on a large feature animation project by the end of the year.

“We have multiple projects coming up that will demand crewing up with the very best talent very quickly,” reports Phil Dobree, CEO of Jellyfish Pictures. “Casting off the constraints of infrastructure, which traditionally has been the industry’s way of working, means we are not limited to the London talent pool and can easily scale up in a more efficient and economical way than ever before. We all know London, and more specifically Soho, is an expensive place to play, both for employees working here and for the companies operating here. Technology is enabling us to expand our horizon across the UK and beyond, as well as offer talent a way out of living in the big city.”

For Spruce, the move made perfect sense: “After 30 years working in and around Soho, it was time for me to move north and settle in Sheffield to achieve a better work life balance with family. After speaking with Phil, I was excited to discover he was interested in expanding his remote operation beyond London. With what technology can offer now, the next logical step is to bring the work to people rather than always expecting them to move south.

“As animation director for Jellyfish Pictures Sheffield, it’s my intention to recruit a creative team here to strengthen the company’s capacity to handle the expanding slate of work currently in-house and beyond. I am very excited to be part of this new venture north with Jellyfish. It’s a vision of how creative companies can grow in new ways and access talent pools farther afield.”

 

Amazon’s Good Omens: VFX supervisor Jean-Claude Deguara

By Randi Altman

Good versus evil. It’s a story that’s been told time and time again, but Amazon’s Good Omens turns that trope on its head a bit. With Armageddon approaching, two unlikely heroes and centuries-long frenemies — an angel (Michael Sheen) and a demon (David Tennant) — team up to try to fight off the end of the world. Think buddy movie, but with the fate of the world at stake.

In addition to Tennant and Sheen, the Good Omens cast is enviable — featuring Jon Hamm, Michael McKean, Benedict Cumberbatch and Nick Offerman, just to name a few. The series is based on the 1990 book by Terry Pratchett and Neil Gaiman.

Jean-Claude Deguara

As you can imagine, this six-part end-of-days story features a variety of visual effects, from creatures to environments to particle effects and fire. London’s Milk was called on to provide 650 visual effects shots, and its co-founder Jean-Claude Deguara supervised all.

He was also able to talk directly with Gaiman, which he says was a huge help. “Having access to Neil Gaiman as the author of Good Omens was just brilliant, as it meant we were able to ask detailed questions to get a more detailed brief when creating the VFX and receive such insightful creative feedback on our work. There was never a question that couldn’t be answered. You don’t often get that level of detail when you’re developing the VFX.”

Let’s find out more about Deguara’s process and the shots in the show as he walks us through his collaboration and creating some very distinctive characters.

Can you talk about how early you got involved on Good Omens?
We were involved right at the beginning, pre-script. It’s always the best scenario for VFX to be involved at the start, to maximize planning time. We spent time with director Douglas Mackinnon, breaking down all six scripts to plan the VFX methodology — working out and refining how to best use VFX to support the storytelling. In fact, we stuck to most of what we envisioned and we continued to work closely with him throughout the project.

How did getting involved when you did help the process?
With the sheer volume and variety of work — 650 shots, a five-month post production turnaround and a crew of 60 — the planning and development time in preproduction was essential. The incredibly wide range of work spanned multiple creatures, environments and effects.

Having constant access to Neil as author and showrunner was brilliant as we could ask for clarification and more details from him directly when creating the VFX and receive immediate creative feedback. And it was invaluable to have Douglas working with us to translate Neil’s vision in words onto the screen and plan out what was workable. It also meant I was able to show them concepts the team were developing back in the studio while we were on set in South Africa. It was a very collaborative process.

It was important to have a strong crew across all VFX disciplines, as they worked together on multiple sequences at the same time. So you’re starting in tracking on one, in effects on another, and compositing and finishing everything off on a third. It was a big logistical challenge, but certainly the kind that we relish and are well versed in at Milk.

Did you do previs? If so, how did that help and what did you use?
We only used previs to work out how to technically achieve certain shots or to sell an idea to Douglas and Neil. It was generally very simple, using gray scale animation with basic geometry. We used it to do a quick layout of how to rescale the dog to be a bigger hellhound, for example.

You were on set supervising… can you talk about how that helped?
It was a fast-moving production with multiple locations in the UK over about six months, followed by three months in South Africa. It was crucial for the volume and variety of VFX work required on Good Omens that I was across all the planning and execution of filming for our shots.

Being on set allowed me to help solve various problems as we went along. I could also show Neil and Douglas various concepts that were being developed back in the studio, so that we could move forward more quickly with creative development of the key sequences, particularly the challenging ones such as Satan and the Bentley.

What were the crucial things to ensure during the shoot?
Making sure all the preparation was done meticulously for each shot — given the large volume and variety of the environments and sets. I worked very closely with Douglas on the shoot so we could have discussions to problem-solve where needed and find creative solutions.

Can you point to an example?
We had multiple options for shots involving the Bentley, so our advance planning and discussions with Douglas involved pulling out all the car sequences in the series scripts and creating a “mini script” specifically for the Bentley. This enabled us to plan which assets (the real car, the art department’s interior car shell or the CG car) were required and when.

You provided 650 VFX shots. Can you describe the types of effects?
We created everything from creatures (Satan exploding up out of the ground, a kraken, the hellhound, a demon and a snake) to environments (heaven — a penthouse with views of major world landmarks — and a busy Soho street) to feathered wings for Michael Sheen’s angel Aziraphale and David Tennant’s demon Crowley, and a CG Bentley in which Tennant’s Crowley hurtles around London.

We also had a large effects team working on a whole range of effects over the six episodes — from setting the M25 and the Bentley on fire to a flaming sword to a call center filled with maggots to a sequence in which Crowley (Tennant) travels through the internet at high speed.

Despite the fantasy nature of the subject matter, it was important to Gaiman that the CG elements did not stand out too much. We needed to ensure the worlds and characters were always kept grounded in reality. A good example is how we approached heaven and hell. These key locations are essentially based around an office block. Nothing too fantastical, but they are, as you would expect, completely different and deliberately so.

Hell is the basement, which was shot in a disused abattoir in South Africa, whilst heaven is a full CG environment located in the penthouse with a panoramic view over a cityscape featuring landmarks such as the Eiffel Tower, The Shard and the Pyramids.

You created many CG creatures. Can you talk about the challenges of that and how you accomplished them?
Many of the main VFX features, such as Satan (voiced by Benedict Cumberbatch), appear only once in the six-part series as the story moves swiftly toward the apocalypse. So we had to strike a careful balance, delivering impact while ensuring they were immediately recognizable and grounded in reality. Given our fast five-month post turnaround, we had our key teams working concurrently on creatures such as a kraken; the hellhound; a small, portly demon called Usher, who meets his demise in a bath of holy water; and the infamous snake in the Garden of Eden.

We have incorporated Ziva VFX into our pipeline, which ensured our rigging and modeling teams maximized the development and build phases in the timeframe. For example, the muscle, fat and skin simulations are all solved on the renderfarm; the animators can publish a scene and then review the creature effects in dailies the next day.

We use our proprietary software CreatureTools for rigging all our creatures. It is a modular rigging package, which allows us to very quickly build animation rigs for previs or blocking and we build our deformation muscle and fat rigs in Ziva VFX. It means the animators can start work quickly and there is a lot of consistency between the rigs.
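CreatureTools itself is proprietary, but a minimal Maya Python sketch can suggest what a modular rig-builder buys you. Every name here is hypothetical, and it only covers the quick blocking-rig stage, not the Ziva muscle and fat layer:

```python
# Hypothetical sketch of a modular blocking-rig builder in Maya (not CreatureTools).
# It builds a joint chain plus simple circle controls so animators can start
# quickly; muscle/fat deformation would be layered on separately (e.g. Ziva VFX).
import maya.cmds as cmds

def build_limb_module(name, positions):
    """Create a joint chain with one constrained control per joint."""
    cmds.select(clear=True)
    joints = [cmds.joint(name=f"{name}_{i}_jnt", position=pos)
              for i, pos in enumerate(positions)]
    for jnt in joints:
        ctrl = cmds.circle(name=jnt.replace("_jnt", "_ctrl"), normal=(1, 0, 0))[0]
        cmds.delete(cmds.parentConstraint(jnt, ctrl))  # snap the control to the joint
        cmds.parentConstraint(ctrl, jnt)               # the joint now follows the control
    return joints

# e.g. one foreleg module for a quadruped blocking rig
build_limb_module("hellhound_foreleg_L", [(0, 10, 0), (0, 5, 1), (0, 0, 0)])
```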

Can you talk about the kraken?
The kraken pays homage to Ray Harryhausen and his work on Clash of the Titans. Our team worked to create the immense scale of the kraken and take water simulations to the next level. The top half of the kraken body comes up out of the water and we used a complex ocean/water simulation system that was originally developed for our ocean work on the feature film Adrift.

Can you dig in a bit more about Satan?
Near the climax of Good Omens, Aziraphale, Crowley and Adam witness the arrival of Satan. In the early development phase, we were briefed to highlight Satan’s enormous size (about 400 feet) without making him too comical. He needed to have instant impact given that he appears on screen for just this one long sequence and we don’t see him again.

Our first concept was pretty scary, but Neil wanted him simpler and more immediately recognizable. Our concept artist created a horned crown, which along with his large, muscled, red body delivered the look Neil had envisioned.

We built the basic model, and when Cumberbatch was cast, the modeling team introduced some of his facial characteristics into Satan’s FACS-based blend shape set. Video reference of the actor’s voice performance, captured on a camera phone, helped inform the final keyframe animation. The final Satan was a full Ziva VFX build, complete with skeleton, muscles, fat and skin. The team set up the muscle and fat scenes in a pipeline path to an Alembic cache of the skeleton, so they ended up with a blended mesh of Satan with all the muscle detail on it.
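For flavor, wiring sculpted FACS-style shapes into a head mesh in Maya looks roughly like the following sketch. The node and shape names are hypothetical, and Milk’s actual build sat on top of Ziva:

```python
# Hypothetical sketch: connect FACS-style target sculpts to a head mesh
# via a blendShape node. Assumes the sculpted variants exist in the scene.
import maya.cmds as cmds

targets = ["brow_raise", "jaw_open", "lip_corner_pull"]  # sculpted head variants
bs = cmds.blendShape(*targets, "satan_head_geo", name="satan_facs_bs")[0]

# Each target becomes a keyframeable weight attribute on the blendShape node.
cmds.setAttr(f"{bs}.jaw_open", 0.6)
```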

We then did another skin pass on the face to add extra wrinkles and loosen things up. A key challenge for our animation team — led by Joe Tarrant — lay in animating a creature of the immense scale of Satan. They needed to ensure the balance and timing of his movements felt absolutely realistic.

Our effects team — led by James Reid — layered multiple effects simulations to shatter the airfield tarmac and generate clouds of smoke and dust, optimizing setups so that only those particles visible on camera were simulated. The challenge was maintaining a focus on the enormous size and impact of Satan while still showing the explosion of the concrete, smoke and rubble as he emerges.

Extrapolating from live-action plates shot at an airbase, the VFX team built a CG environment and inserted live action of the performers into otherwise fully digital shots of the gigantic red-skinned devil bursting out of the ground.

And the hellhound?
Beelzebub (Anna Maxwell Martin) sends the antichrist (a boy named Adam) a giant hellhound. Once Adam gives the beast a scary name, Armageddon will be set in motion. In reality, Adam just wants a loveable pet, and he transforms the hellhound into a miniature hound called, simply, Dog.

A Great Dane performed as the hellhound, photographed in a forest location while a grip kept pace with a small square of bluescreen. The Milk team tracked the live action and performed a digital head and neck replacement. Sam Lucas modeled the head in Autodesk Maya, matching the real dog’s anatomy before stretching its features into grotesquery. A final round of sculpting followed in Pixologic ZBrush, with artists refining 40-odd blend shapes for facial expression.

Once our rigging team got the first iteration of the blend shapes, they passed the asset off to animation for feedback. They then added an extra level of tweaking around the lips. In the creature effects phase, they used Ziva VFX to add soft body jiggle around the bottom of the lips and jowls.

What about creating the demon Usher?
One of our favorite characters was the small, rotund, quirky demon creature called Usher. He is a fully rigged CG character. Our team took a fully concepted image and adapted it to the performance and physicality of the actor. To get the weight of Usher’s rotund body, the rigging team — led by Neil Roche — used Ziva VFX to run a soft-body simulation on the fatty parts of the creature, which gave him a realistic jiggle. They then added a skin simulation using Ziva’s cloth solver to give an extra layer of wrinkling across Usher’s skin. Finally, they used nCloth in Maya to simulate his sash and medals.

Was one more challenging/rewarding than the others?
Satan, because of his huge scale and the integrated effects.

Out of all of the effects, can you talk about your favorite?
The CG Bentley, without a doubt! The digital Bentley featured in scenes showing the car tearing around London and the countryside at 90 miles per hour. Ultimately, Crowley drives through hellfire on the M25; the car catches fire and burns continuously as he heads toward the site of Armageddon. The production located a real 1934 Bentley 3.5 Derby Coupe by Thrupp & Maberly, which we photo-scanned and modeled in intricate detail. We introduced subtle imperfections to the body panels, ensuring the CG Bentley had the same handcrafted appearance as the real thing and would hold up in full-screen shots, including continuous transitions from the street through a window to the actors in an interior replica car.

In order to get the high speed required, we shot plates on location from multiple cameras, including one on a motorbike for the high-speed bursts. Later, production filled the car with smoke, and our effects team added CG fire and burning textures to the exterior of our CG car, which intensified as Crowley continued his journey.

You’ve talked about the tight post turnaround. How did you show the client shots for approval?
Given the volume and wide range of work required, we were working on a range of sequences concurrently to maximize the short post window — and to align our teams when they were working on similar types of shots.

We had constant access to Neil and Douglas throughout the post period, which was crucial for approvals and feedback as we developed key assets and delivered key sequences. Neil and Douglas would visit Milk regularly for reviews toward delivery of the project.

What tools did you use for the VFX?
Amazon Web Services (AWS) for cloud rendering, Ziva VFX for creature rigging, Maya, Nuke and Houdini for effects and Arnold for rendering.

What haven’t I asked that is important to touch on?
Our work on Soho, where Michael Sheen’s angel Aziraphale has his bookshop. Production designer Michael Ralph created a set based on Soho’s Berwick Street, comprising a two-block street exterior constructed up to the top of the first story, with the complete bookshop — inside and out — standing on the corner.

Four 20-x-20-foot mobile greenscreens helped our environment team complete the upper levels of the buildings and extend the road into the far distance. We photo scanned both the set and the original Berwick Street location, combining the reference to build digital assets capturing the district’s unique flavor for scenes during both day and nighttime.


Before and After: Soho

Mackinnon wanted crowds of people moving around constantly, so on shooting days crowds of extras thronged the main section of the street, and a steady stream of vehicles turned in from a junction partway down. Areas outside this central zone remained empty, enabling us to drop in digital people and traffic without having to do takeovers from live-action performers and cars. Milk had a 1,000-frame cycle of cars and people that it dropped into every scene. We kept the real cars always pulling in round the corner and devised it so there was always a bit of gridlock going on at the back.
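In Nuke terms, dropping a fixed-length cycle into a shot looks roughly like this sketch. The paths are placeholders, and this is our illustration, not Milk’s actual comp script:

```python
# Hypothetical Nuke sketch: loop a 1,000-frame crowd render over a plate.
# A real comp would also handle holdouts, grading and position offsets.
import nuke

plate = nuke.nodes.Read(file="soho_plate.####.exr", first=1001, last=1240)
crowd = nuke.nodes.Read(file="crowd_cycle.####.exr", first=1, last=1000)
crowd["before"].setValue("loop")  # repeat the cycle outside its native range
crowd["after"].setValue("loop")

over = nuke.nodes.Merge2(operation="over")
over.setInput(0, plate)  # B input: the background plate
over.setInput(1, crowd)  # A input: the crowd element with alpha
```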

And finally, we relished the opportunity to bring to life Neil Gaiman and Douglas Mackinnon’s awesome apocalyptic vision for Good Omens. It’s not often you get to create VFX in a comedy context. For example, the stuff inside the antichrist’s head: whatever he thinks of becomes reality. However, for a 12-year-old child, this means reality is rather offbeat.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Behind the Title: Ntropic Flame artist Amanda Amalfi

NAME: Amanda Amalfi

COMPANY: Ntropic (@ntropic)

CAN YOU DESCRIBE YOUR COMPANY?
Ntropic is a content creator producing work for commercials, music videos and feature films, as well as crafting experiential and interactive VR and AR media. We have offices in San Francisco, Los Angeles, New York City and London. Some of the services we provide include design, VFX, animation, editing, color grading and finishing.

WHAT’S YOUR JOB TITLE?
Senior Flame Artist

WHAT DOES THAT ENTAIL?
Being a senior Flame artist involves a variety of tasks that span the duration of a project, from communicating with directors, agencies and production teams, to helping plan any visual effects in the project (often also acting as VFX supervisor on set), to the actual post process of the job.

Amanda worked on this lipstick branding video for the makeup brand Morphe.

It involves client and team management (as you are often also the 2D lead on a project) and calls for a thorough working knowledge of the Flame itself, both in timeline management and that little thing called compositing. The compositing could cross multiple disciplines — greenscreen keying, 3D compositing, set extension and beauty cleanup to name a few. And it helps greatly to have a good eye for color and to be extremely detail-oriented.

WHAT MIGHT SURPRISE PEOPLE ABOUT YOUR ROLE?
How much it entails. Since this is usually a position that exists in a commercial house, we don’t have as many specialties as there would be in the film world.

WHAT’S YOUR FAVORITE PART OF THE JOB?
First is the artwork. I like that we get to work intimately with the client in the room to set looks. It’s often a very challenging position to be in — having to create something immediately — but the challenge is something that can be very fun and rewarding. Second, I enjoy being the overarching VFX eye on the project; being involved from the outset and seeing the project through to delivery.

WHAT’S YOUR LEAST FAVORITE?
We’re often meeting tight deadlines, so the hours can be unpredictable. But the best work happens when the project team and clients are all in it together until the last minute.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
The evening. I’ve never been a morning person so I generally like the time right before we leave for the day, when most of the office is wrapping up and it gets a bit quieter.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Probably a tactile art form. Sometimes I have the urge to create something that is tangible, not viewed through an electronic device — a painting or a ceramic vase, something like that.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I loved films that were animated and/or used 3D elements growing up and wanted to know how they were made. So I decided to go to a college that had a computer art program with connections in the industry and was able to get my first job as a Flame assistant in between my junior and senior years of college.

ANA Airlines

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Most recently I worked on a campaign for ANA Airlines. It was a fun, creative challenge on set and in post production. Before that I worked on a very interesting project for Facebook’s F8 conference featuring its AR functionality and helped create a lipstick branding video for the makeup brand Morphe.

IS THERE A PROJECT THAT YOU ARE MOST PROUD OF?
I worked on a spot for Vaseline that was a “through the ages” concept, and we had to create looks that would read as from the 1880s, 1900, the 1940s, the 1970s and present day, in locations that varied from the Arctic to the building of the Brooklyn Bridge to a boxing ring. To start, we sent the digitally shot footage with our 3D and comps to a printing house and had it printed and re-digitized. This worked perfectly for the ’70s-era look. Then we did additional work to age it further for the other eras — though my favorite was the Arctic turn-of-the-century look.

NAME SOME TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Flame… first and foremost. It really is the most inclusive software — I can grade, track, comp, paint and deliver all in one program. My monitors — a 4K Eizo and a color-calibrated broadcast monitor — are also essential.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Mostly Instagram.

DO YOU LISTEN TO MUSIC WHILE YOU WORK? 
I generally have music on with clients, so I will put on some relaxing music. If I’m not with clients, I listen to podcasts. I love How Did This Get Made and Conan O’Brien Needs a Friend.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Hiking and cooking are two great de-stressors for me. I love being in nature and working out and then going home and making a delicious meal.

NYC’s The-Artery expands to larger space in Chelsea

The-Artery has expanded and moved into a new 7,500-square-foot space in Manhattan’s Chelsea neighborhood. Founded by chief creative officer Vico Sharabani, The-Artery will use this extra space while providing visual effects, post supervision, offline editorial, live action and experience design and development across multiple platforms.

According to Sharabani, the new space is not only a response to the studio’s growth, but allows The-Artery to foster better collaboration and reinforce its relationships with clients and creative partners. “As a creative studio, we recognize how important it is for our artists, producers and clients to be working in a space that is comfortable and supportive of our creative process,” he says. “The extraordinary layout of this new space, the size, the lighting and even our location, allows us to provide our clients with key capabilities and plays an important part in promoting our mission moving forward.”

Recent The-Artery projects include 2018’s VR-enabled production for Mercedes-Benz, work on Under Armour’s “Rush” campaign and Beyoncé’s Coachella documentary, Homecoming.

They have also worked on feature films like Netflix’s Beasts of No Nation, Wes Anderson’s Oscar-winning The Grand Budapest Hotel and the crime caper Ocean’s 8.

The-Artery’s new studio features a variety of software including Flame, Houdini, Cinema 4D, 3ds Max, Maya, the Adobe Creative Cloud suite of tools, Avid Media Composer, Shotgun for review and approval and more.

The-Artery features a veteran team of artists and creative collaborators, including a recent addition — editor and former Mad River Post owner Michael Elliot. “Whether they are agencies, commercial and film directors or studios, our clients always work directly with our creative directors and artists, collaborating closely throughout a project,” says Sharabani.

Main Image: Vico Sharabani (far right) and team in their new space.

Phosphene’s visual effects for Showtime’s Escape at Dannemora

By Randi Altman

The Showtime limited series Escape at Dannemora is based on the true story of two inmates (David Sweat and Richard Matt) who escape from an Upstate New York prison. They were aided by Tilly, a prison employee whose husband also worked at Clinton Correctional Facility. She helped run the tailor shop where both men worked and had an intimate relationship with each of them.

Matt Griffin

As we approach Emmy season, we thought it was a good time to reach out to the studio that provided visual effects for the Ben Stiller-directed miniseries, which was nominated for a Golden Globe for best television limited series or movie. Escape at Dannemora stars Patricia Arquette, Benicio Del Toro and Paul Dano.

New York City-based Phosphene was called on to create a variety of visual effects, including turning five different locations into the Clinton Correctional Facility, the maximum-security prison where the escape took place. The series was also nominated for an Emmy for Outstanding Visual Effects in a Supporting Role.

We recently spoke with VFX producer Matt Griffin and VFX supervisor Djuna Wahlrab to find out more.

How early did you guys get involved in the project? Were there already set plans for the types of VFX needed? How much input did Phosphene have?
Matt Griffin: There were key sequences that were discussed with us very early on. The most crucial among them were Sweat’s Run, which was a nine-minute “oner” that opened Episode 5; the gruesome death scene of Broome County Sheriff’s Deputy Kevin Tarsia and an ambitious crane shot that revealed the North Yard in the prison.

Djuna Wahlrab

What were the needs of the filmmakers, and how did your studio fill them? Were you on set supervising?
Griffin: Ben Stiller and the writers had a very clear vision for these challenging sequences, and therefore had a very realistic understanding of how ambitious the VFX would be. They got us involved right at the start so we could be as collaborative as possible with production in preparing the methodology for execution.

In that same spirit, they had us supervise the majority of the shoot, which positioned us to be involved as the natural shifts and adjustments of production arose day to day. It was amazing to be creative problem solvers with the whole team and not just reacting to what happened once in post.

I know that creating the prison was a big part — taking pieces of a few different prisons to make one?
Djuna Wahlrab: Clinton Correctional is a functioning prison, so we couldn’t shoot the whole series within its premises — instead we filmed in five different locations. We shot at a decommissioned prison in Pittsburgh, the prison’s tailor shop was staged in an old warehouse in Brooklyn, and the Honor Block (where our characters were housed) and parts of the prison bowels were built on a stage in Queens. The remaining pieces under the prison were shot in Yonkers, New York, in an active water treatment plant. Working closely with production designer Mark Ricker, we tackled the continuity across all these locations.

The upper courts overlook the town.

We knew the main guard tower visible from the outside of Clinton Correctional was crucial, so we always planned to carry that through to Pittsburgh. Scenes taking place just inside the prison wall were also shot in Pittsburgh, and that prison is not as long as Clinton, so we extended the depth of those shots.

While the surrounding mountainside terrain is on beautiful display from the North Yard, it’s also felt from the ground among the buildings within the prison. When looking down the length of the streets, you can see the sloping side of the mountain just over the wall. These scenes were filmed in Pittsburgh, so what you see beyond those walls is actually a bustling hilly city with water towers and electric lines and highways, so we had to adjust to match the real location.

Can you talk about the shot that had David Sweat crawling through pipes in the basement of the prison?
Wahlrab: For what we call Sweat’s Run — because we were creating a “oner” out of 17 discrete pieces — preproduction was crucial. The previs went far beyond a compositional guide. Using blueprints from three different locations and plans for the eventual stage set, orthographic views were created with extremely detailed planning for camera rigging and hand-off points. Drawing on this early presentation, Matt Pebler and the camera department custom-built many of the rigs required for our constricted spaces and meticulous overlapping sections.

The previs was a common language for all departments at the start, but as each piece of the run was filmed, the previs was updated with completed runs and the requirements would shift. Shooting one piece of the run would instantly lock in requirements for the other connecting pieces, and we’d have to determine a more precise plan moving forward from that point. It took a high level of collaboration and flexibility from all departments to constantly narrow the margin for what level of precision was required from everyone.

Sweat preparing for escape.

Can you talk about the scene where Sweat runs over the sheriff’s deputy Tarsia?
Wahlrab: Special effects had built a rig for a partial car that would be safe to “run over” a stunt man. A shell of a vehicle was suspended from an arm off a rigged tactical truck, so that they moved in parallel. Sweat’s stunt car floated a few feet off the ground. The shell had a roof, windows, a windshield, a hood and a driver’s seat. Below that the sides, grill and wheels of the car were constructed of a soft foam. The stunt man for Tarsia was rigged with wires so they could control his drag beneath the car.

In this way, we were able to get the broad strokes of the stunt in-camera. Though the car needed to be almost completely replaced with CG, its structure took the first steps to inform the appropriate environmental re-lighting needed for the scene. The impact moment was a particular challenge because, of course, the foam grill completely gave way to Tarsia’s body. We had to simulate the cracking of the bumper and the stamp of the blood from Tarsia’s wounds. We also had to reimagine how Tarsia’s body would have moved with this rigid impact.

Tarsia’s death: Replaced stunt car, added blood and re-animated the victim.

For Tarsia himself, in addition to augmenting the chosen take, we used alt takes from the shoot for various parts of the body to recreate a Tarsia with more appropriate physical reactions to the trauma we were simulating. There was also a considerable amount of hand-painting on this animation to help it all mesh together. We added blood on the wheels, smoke and animated pieces of the broken bumper, all of which helped to ground Tarsia in the space.

You also made the characters look younger. Can you talk about what tools you used for this particular effect?
Wahlrab: Our goal was to support this jump in time, but not distract by going too far. Early on, we did tests where we really studied the face of each actor. From this research, we determined targeted areas for augmentation, and the approach really ended up being quite tailored for each character.

We broke down the individual regions of the face. First, we targeted wrinkles with tailored defocusing. Second, we reshaped recessed portions of the face, mostly with selective grading. In some cases, we retextured the skin on top of this work. At the end of all of this, we had to reintegrate this into the grainy 16mm footage.
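As a rough Nuke-style sketch of that first step (defocusing only the wrinkle regions an artist has isolated), something like the following could work. The mask, node choices and values are our illustration, not Phosphene’s recipe:

```python
# Hypothetical Nuke sketch: soften wrinkles only where a roto mask allows,
# then re-grain so the fix sits back into the 16mm scan. Values illustrative.
import nuke

plate = nuke.nodes.Read(file="dannemora_16mm.####.dpx")
wrinkle_mask = nuke.nodes.Roto()       # artist rotos the targeted face regions

soften = nuke.nodes.Blur(size=3)       # a gentle, tailored defocus
soften.setInput(0, plate)
soften.setInput(1, wrinkle_mask)       # mask input limits the blur to the roto alpha

regrain = nuke.nodes.Grain()           # re-add grain over the softened areas
regrain.setInput(0, soften)
```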

Can you talk about all the tools you used?
Griffin: At Phosphene, we use Foundry Nuke Studio and Autodesk 3ds Max. For additional support, we rely on Mocha Pro, 3DEqualizer and PFTrack, among many others.


Added snow, cook fire smoke and inmates to upper tier.

Any other VFX sequences that you can talk about?
Wahlrab: As with any project, weather continuity was a challenge. Our prison was represented by five locations, but it took many more than that to fill out the lives of Tilly and Lyle beyond their workplace. Because we shot a few scenes early on with snow, we were locked into that reality in every single location going forward. The special FX team would give us practical snow in the areas with the most interaction, and we were charged with filling out much of the middle and background. For the most part, we relied on photography, building custom digital matte paintings for each shot. We spent a lot of time upstate in the winter, so I found myself pulling off the road in random places in search of different kinds of snow coverage. It became an obsession, figuring out the best way to shoot the same patch of snow from enough angles to cover my needs for different shots, at different times of day, not entirely knowing where we’d need to use it.

What was the most challenging shots?
Wahlrab: Probably the most challenging location to shoot was the North Yard within the prison. Clinton Correctional is a real prison in Dannemora, New York. It’s about 20 miles south of the Canadian border, set into the side of a hill in what is really a beautiful part of the country. This was the inmates’ outdoor space, divided into terraces overlooking the whole town of Dannemora and the valley beyond. Though the production value of shooting in an active prison was amazing, it also presented quite a few logistical challenges. For safety (ours as well as the prisoners’), the gear allowed in was quite restricted. Many of the tools I rely on had to be left behind. Then, load-in required a military-grade inspection by the COs, who examined every piece of our equipment before it could enter or exit. The crew was afforded no special privileges for entering the prison, and we were shuffled through the standard intake. It was time-consuming and very much limited how long we’d be able to shoot each day once inside.


Before and After: Cooking fires in the upper courts.

Production did the math and balanced the crew and cast load-in with the coverage required. We had 150 background extras for the yard, but in reality, the average number of inmates, even on the coldest of days, was 300. Also, we needed the yard to have snow on the ground for continuity. Unfortunately it was an unseasonably warm day, and after the first few hours, the special effects snow that was painstakingly created and placed during the night was completely melted. Special effects was also charged with creating cook fire for the stoves in each court, but they could only bring in so much fuel. Our challenge was clear — fill out the background inmate population, add snow and cook fire smoke… everywhere.

The biggest challenge in this location was the shot Ben conceived to reveal the enormity of the North Yard. It was this massive crane shot that began at the lowest part of the yard and panned to the upper courts. It slowly pulls out and cranes up to reveal the entire outdoor space. It’s really a beautiful way to introduce us to the North Yard, revealing one terraced level at a time until you have the whole space in view. It’s one of my favorite moments in the show.

Some shots outside the prison involved set extensions.

There’s this subtext about the North Yard and its influence on Sweat and Matt. Out in the yard, the inmates have a bit more autonomy. With good behavior, they have some ownership over the courts and are given the opportunity to curate these spaces. Some garden, many cook meals, and our characters draw and paint. For those lucky enough to be in the upper courts, they have this beautiful view beyond the walls of the prison, and you can almost forget you are locked up.

I think we’re meant to wonder, was it this autonomy or this daily reminder of the outside world beyond the prison walls that fueled their intense devotion to the escape? This location is a huge story piece, and I don’t think it would have been possible to truly render the scale of it all without the support of visual effects.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Sydney’s Fin creates CG robot for Netflix film I Am Mother

Fin Design + Effects, an Australian post production house with studios in Melbourne and Sydney, brings its VFX and visual storytelling expertise to the upcoming Netflix film I Am Mother. Directed by Grant Sputore, the post-apocalyptic film stars Hilary Swank, Rose Byrne and Clara Rugaard.

In I Am Mother, a teenage girl (Rugaard) is raised underground by the robot “Mother” (voiced by Byrne), designed to repopulate the earth following an extinction event. But their unique bond is threatened when an inexplicable stranger (Swank) arrives with alarming news.

Working closely with the director, Fin Design’s Sydney office built a CG version of the AI robot Mother to be used interchangeably with the practical robot suit built by New Zealand’s Weta Workshop. Fin was involved from the early stages of the process to help develop the look of Mother, completing extensive design work and testing, which then fed back into the practical suit.

In total, Fin produced over 220 VFX shots, including the creation of a menacing droid army as well as general enhancements to the environments and bunker where this post-apocalyptic story takes place.

According to Fin Australia’s managing director, Chris Spry, “Grant was keen on creating an homage of sorts to old-school science-fiction films and embracing practical filmmaking techniques, so we worked with him to formulate the best approach that would still achieve the wow factor — seamlessly combining CG and practical effects. We created an exact CG copy of the suit, visualizing high-action moments such as running, or big stunt scenes that the suit couldn’t perform in real life, which ultimately accounted for around 80 shots.”

Director Sputore on working with Fin: “They offer suggestions and bust expectations. In particular, they delivered visual effects magic with our CG Mother, one minute having her thunder down bunker corridors and in the next moment speed-folding intricate origami creations. For the most part, the robot at the center of our film was achieved practically. But in those handful of moments where a practical solution wasn’t possible, it was paramount that the audience not be bumped from the film by a sudden transition to a VFX version of one of our central characters. In the end, even I can’t tell which shots of Mother are CG and which are practical, and, crucially, neither can the audience.”

To create the CG replica, the Fin team paid meticulous attention to detail, ensuring the material, shaders and textures perfectly matched photographs and laser scans of the practical suit. The real challenge, however, was in interpreting the nuances of the movements.

“Precision was key,” explains VFX supervisor Jonathan Dearing. “There are many shots cutting rapidly between the real suit and CG suit, so any inconsistencies would be under a spotlight. It wasn’t just about creating a perfect CG replica but also interpreting the limitations of the suit. CG can actually depict a more seamless movement, but to make it truly identical, we needed to mimic the body language and nuances of the actor in the suit [Luke Hawker]. We did a character study of Luke and rigged it to build a CG version of the suit that could mimic him precisely.”

Fin finessed its robust automation pipeline for this project. Built to ensure greater efficiency, the system allows animators to push their work through lighting and comp at the click of a button. For example, if a shot didn’t have a specific light rig made for it, animators could automatically apply a generic light rig that suits the whole film. This tightly controlled system meant that Fin could have one lighter and one animator working on 200 shots without compromising on quality.
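A condensed sketch of that fallback logic might look like this. The function names, paths and farm call are hypothetical stand-ins for whatever Fin’s pipeline actually uses:

```python
# Hypothetical sketch: a publish step that falls back to a show-wide light rig
# whenever a shot has no bespoke rig. All names and paths are illustrative.
from pathlib import Path

SHOW_ROOT = Path("/jobs/i_am_mother")
GENERIC_RIG = SHOW_ROOT / "lighting/rigs/generic_bunker_v012.ma"

def submit_to_farm(shot, stage, **kwargs):
    print(f"queued {shot} / {stage} with {kwargs}")  # stand-in for a farm API

def light_rig_for(shot):
    """Prefer the shot-specific rig; otherwise fall back to the generic rig."""
    bespoke = SHOW_ROOT / f"shots/{shot}/lighting/rig.ma"
    return bespoke if bespoke.exists() else GENERIC_RIG

def publish_animation(shot):
    """Push approved animation straight through lighting and comp."""
    submit_to_farm(shot, stage="lighting", light_rig=light_rig_for(shot))
    submit_to_farm(shot, stage="comp", depends_on="lighting")

publish_animation("sc042_0100")
```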

The studio used Autodesk Maya, Side Effects Houdini, Foundry Nuke and Redshift on this project.

I Am Mother premiered at the 2019 Sundance Film Festival and is set to stream on Netflix on June 7.

Wacom updates its Intuos Pro Small tablet

Wacom has introduced a new Small model of its Intuos Pro pen and touch tablet to its advanced line of creative pen tablets. The new Intuos Pro Small joins the Intuos Pro Medium and Intuos Pro Large sizes already available.

Featuring Wacom’s precise Pro Pen 2 technology with 8,192 levels of pen pressure, tilt sensitivity, a natural pen-on-paper feel and battery-free performance, the line lets artists choose the size — small, medium or large — that best fits their way of working.

The new small size features the same tablet working area as the previous Intuos Pro Small model and targets on-the-go creatives whose Wacom tablet and PC or Mac laptop are always nearby. The space-saving tablet’s small footprint, wireless connectivity and battery-free pen technology that never needs charging make working anywhere easy.

Built for pros, all three sizes of Intuos Pro tablets feature a TouchRing and ExpressKeys (six on the Small, eight on the Medium and Large) for creating customized shortcuts that speed up the creative workflow. In addition, incorporating both pen and touch on the tablet lets users explore new ways to navigate and makes the whole creative experience more interactive. The slim tablets also feature a durable anodized aluminum back case and come with a desktop pen stand containing 10 replacement pen nibs.

The Wacom Pro Pen 2 features Wacom’s most advanced creative pen technology to date, with four times the pressure sensitivity of the original Pro Pen. Its 8,192 levels of pressure, tilt recognition and lag-free tracking effectively emulate working with traditional media by offering a natural drawing experience. Additionally, the pen’s customizable side switch gives easy access to commonly used shortcuts, greatly speeding production.

Wacom offers two helpful accessory pens (purchased separately). The Pro Pen 3D features a third button, which can be set to perform typical 3D tasks such as tumbling objects in commonly used 3D creative apps. The newly released Pro Pen slim supports some artists’ ergonomic preference for a slimmer pen with a more pencil-like feel. Both are compatible with the Intuos Pro family and can help customize and speed the creative experience.

Intuos Pro is Bluetooth-enabled and compatible with Macs and PCs. All three sizes come with the Wacom Pro Pen 2, pen stand and feature ExpressKeys, TouchRing and multi-touch gesture control. The Intuos Pro Small ($249.95), Intuos Pro Medium ($379.95) and Intuos Pro Large ($499.95) are available now.

2 Chainz’s 2 Dolla Bill gets VFX from Timber

Santa Monica’s Timber, known for its VMA-winning work on the Kendrick Lamar music video “Humble,” provided visual effects and post production for the latest music video from 2 Chainz, featuring E-40 and Lil Wayne — 2 Dolla Bill.

The video begins with a group of people in a living room with the artist singing, “I’m rare” while holding a steak. It transitions to a poker game where the song continues with “I’m rare, like a two dollar bill.” We then see a two dollar bill with Thomas Jefferson singing the phrase as well. The video takes us back to the living room, the poker game, an operating room, a kitchen and other random locations.

Artists at collaborating company Kevin provided 2D visual effects for the music video, including the scene with the third eye.

According to Timber creative director/partner Kevin Lau, “The main challenge for this project was the schedule. It was a quick turnaround initially, so it was great to be able to work in tandem with offline to get ahead of the schedule. This also allowed us to work closely with the director and implement some of his requests to enhance the video after it was shot.”

Timber got involved early on in the project and was on set while they shot the piece. The studio called on Autodesk Flame for clean-up, compositing and enhancement work, as well as the animation of the talking money.

Lau was happy Timber got the chance to be on set. “It was very useful to have a VFX supervisor on set for this project, given the schedule and scope of work. We were able to flag any concerns/issues right away so they didn’t become bigger problems in post.”

Arcade Edit’s Geoff Hounsell edited the piece. Daniel de Vue from A52 provided the color grade.

 

Marvel Studios’ Victoria Alonso to keynote SIGGRAPH 2019

Marvel Studios executive VP of production Victoria Alonso has been named keynote speaker for SIGGRAPH 2019, which will run from July 28 through August 1 in downtown Los Angeles. Registration is now open. The annual SIGGRAPH conference is a melting pot for researchers, artists and technologists, among other professionals.

“Victoria is the ultimate symbol of where the computer graphics industry is headed and a true visionary for inclusivity,” says SIGGRAPH 2019 conference chair Mikki Rose. “Her outlook reflects the future I envision for computer graphics and for SIGGRAPH. I am thrilled to have her keynote this summer’s conference and cannot wait to hear more of her story.”

One of the few women in Hollywood to hold such a prominent title, Alonso has long been admired for her dedication to the industry, which has earned her multiple awards and honors, including the 2015 New York Women in Film & Television Muse Award for Outstanding Vision and Achievement, the Advanced Imaging Society’s first Harold Lloyd Award given to a woman, and the 2017 VES Visionary Award (another female first). A native of Buenos Aires, she began her career in visual effects, including a four-year stint at Digital Domain.

Alonso’s film credits include productions such as Ridley Scott’s Kingdom of Heaven, Tim Burton’s Big Fish, Andrew Adamson’s Shrek, and numerous Marvel titles — Iron Man, Iron Man 2, Thor, Captain America: The First Avenger, Iron Man 3, Captain America: The Winter Soldier, Captain America: Civil War, Thor: The Dark World, Avengers: Age of Ultron, Ant-Man, Guardians of the Galaxy, Doctor Strange, Guardians of the Galaxy Vol. 2, Spider-Man: Homecoming, Thor: Ragnarok, Black Panther, Avengers: Infinity War, Ant-Man and the Wasp and, most recently, Captain Marvel.

“I’ve been attending SIGGRAPH since before there was a line at the ladies’ room,” says Alonso. “I’m very much looking forward to having a candid conversation about the state of visual effects, diversity and representation in our industry.”

She adds, “At Marvel Studios, we have always tried to push boundaries with both our storytelling and our visual effects. Bringing our work to SIGGRAPH each year offers us the opportunity to help shape the future of filmmaking.”

The 2019 keynote session will be presented as a fireside chat, allowing attendees the opportunity to hear Alonso discuss her life and career in an intimate setting.

Review: Maxon Cinema 4D Release 20

By Brady Betzel

Last August, Maxon made its Cinema 4D Release 20 available. From the new node-based Material Editor to the all-new console for debugging and developing scripts, Maxon has really upped the ante.

At the recent NAB show, Maxon announced that it has acquired Redshift Rendering Technologies, makers of the Redshift rendering engine. The acquisition will hopefully bring an industry-standard GPU-based rendering engine into Cinema 4D R20’s workflow and speed up rendering. For now, the same licensing fees apply to Redshift as before the acquisition: a node-locked license is $500 and a floating license is $600.

Digging In
The first update to Cinema 4D R20 that I want to touch on is the new node-based Material Editor. If you are familiar with applications like Blackmagic’s DaVinci Resolve or Foundry’s Nuke, then you have seen how nodes work. I love nodes: they let you not only layer up effects but also, in Cinema 4D R20’s case, drive a material with anything from diffusion to camera distance. There are over 150 nodes inside the Material Editor for building textures.

One small change I noticed inside the updated Material Editor is the new gradient settings. When you are working with gradient knots, you can now select multiple knots at once, then right-click to double the selected knots, invert them, choose different knot interpolations (including stepped, smooth, cubic, linear and blend) and even distribute the knots to clean up your pattern. A really nice, convenient update to gradient workflows.

In Cinema 4D R20, not only can you add new nodes from the search menu, but you can also click the node dots in the Basic properties window and route nodes through there. When you are happy with your materials made in the node editor, you can save them as assets in the scene file or even compress them in a .zip file to share with others.

In a related update, Cinema 4D Release 20 has introduced the Uber Material. In simple terms (and I mean really simple), the Uber Material is a node-based material that differs from standard or physical materials in that it can be edited inside the Attribute Manager or Material Editor while retaining the properties available in the Node Editor.

The Camera Tracking and 2D Camera View have been updated. While the Camera Tracking mode has been improved, the new 2D Camera View mode combines the Film Move mode with the Film Zoom mode, adding the ability to use standard shortcuts to move around a scene instead of messing with the Film Offset or Focal Length in the Camera Object Properties dialogue. For someone like me who isn’t a certified pro in Cinema 4D, these little shortcuts really make me feel at home, much more like the apps I’m used to, such as Mocha Pro or After Effects. Maxon has also improved the 2D tracking algorithm for much tighter tracks and added virtual keyframes, which are an extreme help when you don’t have time for minute adjustments.

Volume Modeling
What seems to be one of the largest updates in Cinema 4D R20 is the addition of Volume Modeling with the OpenVDB-based Volume Builder. According to www.openvdb.org, “OpenVDB is an Academy Award-winning C++ library comprising a hierarchical data structure and a suite of tools for the efficient manipulation of sparse, time-varying, volumetric data discretized on three-dimensional grids,” developed by Ken Museth at DreamWorks Animation. It uses 3D pixels called voxels instead of polygons. When using the Volume Builder you can combine multiple polygon and primitive objects using Boolean operations: Union, Subtract or Intersect. Furthermore, you can smooth your volume using multiple techniques, including one that made me do some extra Google work: Laplacian Flow.
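
For the curious, OpenVDB itself is scriptable outside of Cinema 4D. Here is a minimal sketch using the library’s pyopenvdb bindings, purely to illustrate the sparse-voxel idea the Volume Builder sits on (Cinema 4D’s own tools live entirely in its UI):

```python
# A minimal pyopenvdb sketch of the sparse-voxel idea (assumes the
# pyopenvdb bindings are installed; this is not Cinema 4D's API).
import pyopenvdb as vdb

# A narrow-band level-set sphere: voxels store signed distance to the
# surface, and only voxels near the surface are actually allocated.
sphere = vdb.createLevelSetSphere(radius=20.0, center=(0, 0, 0), voxelSize=0.5)

print(sphere.activeVoxelCount())            # sparse: near-surface voxels only
print(sphere.evalActiveVoxelBoundingBox())  # index-space bounds of the data

# Write to disk; Houdini and Cinema 4D R20+ read .vdb files directly.
vdb.write("sphere.vdb", grids=[sphere])
```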

Fields
When going down the voxel rabbit hole in Cinema 4D R20, you will run into another new update: Fields. Prior to Cinema 4D R20, we would use Effectors to affect strength values of an object. You would stack and animate multiple effectors to achieve different results. In Cinema 4D R20, under the Falloff tab you will now see a Fields list along with the types of Field Objects to choose from.

Imagine you have a MoGraph object whose opacity you want controlled by a box object moving through it, while a capsule poking through physically modifies it. You can combine these different field objects by using compositing functions in the Fields list. In addition, you can animate or alter these new fields straight away in the Objects window.
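
As a conceptual sketch only (this is plain numpy, not the Cinema 4D API): think of each field as a function mapping points to 0-1 weights, with compositing functions blending the layers, much like the Fields list does.

```python
# Plain numpy, purely conceptual: each "field" maps points to 0..1 weights,
# and compositing functions blend the layers, like the Fields list does.
import numpy as np

def box_field(points, lo, hi):
    """1.0 inside an axis-aligned box, 0.0 outside (hard falloff)."""
    return np.all((points >= lo) & (points <= hi), axis=1).astype(float)

def sphere_field(points, center, radius):
    """Linear falloff from 1.0 at the center to 0.0 at the radius."""
    d = np.linalg.norm(points - center, axis=1)
    return np.clip(1.0 - d / radius, 0.0, 1.0)

points = np.random.default_rng(1).random((1000, 3)) * 10.0  # MoGraph clones
w_box = box_field(points, lo=np.array([2, 2, 2]), hi=np.array([6, 6, 6]))
w_cap = sphere_field(points, center=np.array([5, 5, 5]), radius=3.0)

opacity = w_box * w_cap              # "multiply": needs both fields to overlap
strength = np.maximum(w_box, w_cap)  # "max": either field alone is enough
```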

Summing Up
Cinema 4D Release 20 has some amazing updates that will greatly improve the efficiency and quality of your work. From tracking updates to field updates, there are plenty of exciting tools to dive into. And if you are reading this as an After Effects user who isn’t sure about Cinema 4D, now is the time to dive in. Once you learn the basics, whether from YouTube tutorials or classes at www.cineversity.com, you will immediately see an increase in the quality of your work.

Combining Adobe After Effects, Element 3D and Cinema 4D R20 is the ultimate in 3D motion graphics and 2D compositing — accessible to almost everyone. And I didn’t even touch on the dozens of other updates to Cinema 4D R20, like the multitude of ProRender updates, FBX import/export options, new node materials and CAD import support for Catia, IGES, JT, SolidWorks and STEP formats. Check out Cinema 4D Release 20’s newest features on YouTube and on Maxon’s website.

And, finally, I think it’s safe to assume that Maxon’s acquisition of the Redshift renderer promises a bright future for Cinema 4D users.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Quantum offers new F-Series NVMe storage arrays

During the NAB show, Quantum introduced its new F-Series NVMe storage arrays designed for performance, availability and reliability. Using non-volatile memory express (NVMe) Flash drives for ultra-fast reads and writes, the series supports massive parallel processing and is intended for studio editing, rendering and other performance-intensive workloads using large unstructured datasets.

Incorporating the latest Remote Direct Memory Access (RDMA) networking technology, the F-Series provides direct access between workstations and the NVMe storage devices, resulting in predictable and fast network performance. By combining these hardware features with the new Quantum Cloud Storage Platform and the StorNext file system, the F-Series offers end-to-end storage capabilities for post houses, broadcasters and others working in rich media environments, such as visual effects rendering.

The first product in the F-Series is the Quantum F2000, a 2U dual-node server with two hot-swappable compute canisters and up to 24 dual-ported NVMe drives. Each compute canister can access all 24 NVMe drives and includes processing power, memory and connectivity specifically designed for high performance and availability.

The F-Series is based on the Quantum Cloud Storage Platform, a software-defined block storage stack tuned specifically for video and video-like data. The platform eliminates data services unrelated to video while enhancing data protection, offering networking flexibility and providing block interfaces.

According to Quantum, the F-Series is as much as five times faster than traditional Flash storage/networking, delivering extremely low latency and hundreds of thousands of IOPS per chassis. The series allows users to reduce infrastructure costs by moving from Fibre Channel to Ethernet IP-based infrastructures. Additionally, users who have been aggregating large numbers of HDDs or SSDs to meet their performance requirements can gain back racks of data center space.

The F-Series is the first product line based on the Quantum Cloud Storage Platform.

NAB 2019: Maxon acquires Redshift Rendering Technologies

Maxon, maker of Cinema 4D, has purchased Redshift Rendering Technologies, developers of the Redshift rendering engine. Redshift is a flexible GPU-accelerated renderer built for high-end production, offering an extensive suite of features that make rendering complicated 3D projects faster. It is available as a plugin for Maxon’s Cinema 4D and other industry-standard 3D applications.

“Rendering can be the most time-consuming and demanding aspect of 3D content creation,” said David McGavran, CEO of Maxon. “Redshift’s speed and efficiency combined with Cinema 4D’s responsive workflow make it a perfect match for our portfolio.”

“We’ve always admired Maxon and the Cinema 4D community, and are thrilled to be a part of it,” said Nicolas Burtnyk, co-founder/CEO, Redshift. “We are looking forward to working closely with Maxon, collaborating on seamless integration of Redshift into Cinema 4D and continuing to push the boundaries of what’s possible with production-ready GPU rendering.”

Redshift is used by post companies, including Technicolor, Digital Domain, Encore Hollywood and Blizzard. Redshift has been used for VFX and motion graphics on projects such as Black Panther, Aquaman, Captain Marvel, Rampage, American Gods, Gotham, The Expanse and more.

Autodesk’s Flame 2020 features machine learning tools

Autodesk’s new Flame 2020 offers a new machine-learning-powered feature set with a host of new capabilities for Flame artists working in VFX, color grading, look development or finishing. This latest update will be showcased at the upcoming NAB Show.

Advancements in computer vision, photogrammetry and machine learning have made it possible to extract motion vectors, Z depth and 3D normals based on software analysis of digital stills or image sequences. The Flame 2020 release adds built-in machine learning analysis algorithms to isolate and modify common objects in moving footage, dramatically accelerating VFX and compositing workflows.
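
Flame’s analysis algorithms are proprietary and built in, but as a rough conceptual analogue, monocular depth extraction with the open-source MiDaS model looks like this (a sketch only; “frame.png” is a hypothetical input):

```python
# A conceptual analogue only: monocular depth with the open-source MiDaS
# model via PyTorch Hub. "frame.png" is a hypothetical input frame.
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    pred = midas(transform(img))                  # inverse-depth prediction
    depth = torch.nn.functional.interpolate(      # back to frame resolution
        pred.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze().cpu().numpy()

# Normalize to 0..1 so it can be viewed or used like a grading matte.
depth = (depth - depth.min()) / (depth.max() - depth.min())
cv2.imwrite("frame_depth.png", (depth * 255).astype("uint8"))
```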

New creative tools include:
· Z-Depth Map Generator— Enables Z-depth map extraction analysis using machine learning for live-action scene depth reclamation. This allows artists doing color grading or look development to quickly analyze a shot and apply effects accurately based on distance from camera.
· Human Face Normal Map Generator— Since all human faces have common recognizable features (relative distance between eyes, nose, location of mouth) machine learning algorithms can be trained to find these patterns. This tool can be used to simplify accurate color adjustment, relighting and digital cosmetic/beauty retouching.
· Refraction— With this feature, a 3D object can now refract, distorting background objects based on its surface material characteristics. To achieve convincing transparency through glass, ice, windshields and more, the index of refraction can be set to an accurate approximation of real-world material light refraction.

Productivity updates include:
· Automatic Background Reactor— Immediately after modifying a shot, this mode is triggered, sending jobs to process. Accelerated, automated background rendering allows Flame artists to keep projects moving using GPU and system capacity to its fullest. This feature is available on Linux only, and can function on a single GPU.
· Simpler UX in Core Areas— A new expanded full-width UX layout for MasterGrade, Image Surface and several Map User interfaces is now available, making key tools easier to discover and access.
· Manager for Action, Image, Gmask—A simplified list schematic view, Manager makes it easier to add, organize and adjust video layers and objects in the 3D environment.
· Open FX Support—Flame, Flare and Flame Assist version 2020 now include comprehensive support for industry-standard Open FX creative plugins, which can be used as Batch/BFX nodes or on the Flame timeline.
· Cryptomatte Support—Available in Flame and Flare, support for the Cryptomatte open source advanced rendering technique offers a new way to pack alpha channels for every object in a 3D rendered scene.

Linux customers can now opt for monthly, yearly and three-year single-user licensing options. Customers with an existing Mac-only single-user license can transfer their license to run Flame on Linux.
Flame, Flare, Flame Assist and Lustre 2020 will be available on April 16, 2019 at no additional cost to customers with a current Flame Family 2019 subscription. Pricing details can be found at the Autodesk website.

Adobe’s new Content-Aware fill in AE is magic, plus other CC updates

By Brady Betzel

NAB is just under a week away, and we are here to share some of Adobe’s latest Creative Cloud offerings. There are a few updates worth mentioning, such as a freeform Project panel in Premiere Pro, AI-driven Auto Ducking for ambience in Audition and the addition of a Twitch extension for Character Animator. But, in my opinion, the After Effects updates are what this year’s release will be remembered for.


Content Aware: Here is the before and after. Our main image is the mask.

There is a new expression editor in After Effects, so those of us who are old pseudo-website designers can feel at home with highlighting, line numbers and more. There are also performance improvements, such as faster project loading times and new deBayering support for Metal on macOS. But the first-prize ribbon goes to Content-Aware Fill for video, powered by Adobe Sensei, the company’s AI technology. It’s one of those voodoo features that will blow you away when you use it. If you have ever used Mocha Pro by BorisFX, then you have had access to a similar tool, known as Object Removal. Essentially, you draw around the object you want to remove, such as a camera shadow or boom mic, hit the magic button, and your object is removed with a new background in its place. This will save users hours of manual work.
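
Adobe’s Sensei-based fill is proprietary and temporal (it borrows detail from neighboring frames), but a single-frame conceptual analogue is classic inpainting. A minimal sketch with OpenCV, where the file names are hypothetical and the mask is white over the object to remove:

```python
# Not Adobe's Sensei algorithm: a single-frame analogue using OpenCV
# inpainting. File names are hypothetical; the mask is white where the
# unwanted object (boom mic, camera shadow) sits.
import cv2

frame = cv2.imread("frame.png")
mask = cv2.imread("object_mask.png", cv2.IMREAD_GRAYSCALE)

# Telea inpainting fills the masked region from surrounding pixels; AE's
# Content-Aware Fill additionally borrows detail from neighboring frames.
clean = cv2.inpaint(frame, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("frame_clean.png", clean)
```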

Freeform Project panel in Premiere.

Here are some details on other new features:

● Freeform Project panel in Premiere Pro— Arrange assets visually and save layouts for shot selects, production tasks, brainstorming story ideas, and assembly edits.
● Rulers and Guides—Work with familiar Adobe design tools inside Premiere Pro, making it easier to align titling, animate effects, and ensure consistency across deliverables.
● Punch and Roll in Audition—The new feature provides efficient production workflows in both Waveform and Multitrack for longform recording, including voiceover and audiobook creators.
● Twitch Live-Streaming Triggers with Character Animator Extension—Surprise viewers: livestreamed performances are enhanced as audiences engage with characters in real time through on-the-fly costume changes, impromptu dance moves, and signature gestures and poses—a new way to interact and even monetize, using Bits to trigger actions.
● Auto Ducking for ambient sound in Audition and Premiere Pro — Also powered by Adobe Sensei, Auto Ducking now allows for dynamic adjustments to ambient sounds against spoken dialog. Keyframed adjustments can be manually fine-tuned to retain creative control over a mix.
● Adobe Stock now offers 10 million professional-quality, curated, royalty-free HD and 4K video clips and Motion Graphics templates from leading agencies and independent editors to use for editorial content, establishing shots or filling gaps in a project.
● Premiere Rush, introduced late last year, offers a mobile-to-desktop workflow integrated with Premiere Pro for on-the-go editing and video assembly. Built-in camera functionality in Premiere Rush helps you take pro-quality video on your mobile devices.

The new features for Adobe Creative Cloud are now available with the latest version of Creative Cloud.

Wonder Park’s whimsical sound

By Jennifer Walden

The imagination of a young girl comes to life in the animated feature Wonder Park. A Paramount Animation and Nickelodeon Movies film, the story follows June (Brianna Denski) and her mother (Jennifer Garner) as they build a pretend amusement park in June’s bedroom. There are rides that defy the laws of physics — like a merry-go-round with flying fish that can leave the carousel and travel all over the park; a Zero-G-Land where there’s no gravity; a waterfall made of firework sparks; a super tube slide made from bendy straws; and other wild creations.

But when her mom gets sick and leaves for treatment, June’s creative spark fizzles out. She disassembles the park and packs it away. Then one day as June heads home through the woods, she stumbles onto a real-life Wonderland that mirrors her make-believe one. Only this Wonderland is falling apart and being consumed by the mysterious Darkness. June and the park’s mascots work together to restore Wonderland by stopping the Darkness.

Even in its more tense moments — like June and her friend Banky (Oev Michael Urbas) riding a homemade rollercoaster cart down their suburban street and narrowly missing an oncoming truck — the sound isn’t intense. The cart doesn’t feel rickety or squeaky, like it’s about to fly apart (even though the brake handle breaks off). There’s a sense of danger that could result in non-serious injury, but never death. And that’s perfect for the target audience of this film — young children. Wonder Park is meant to be sweet and fun, and supervising sound editor John Marquis captures that masterfully.

Marquis and his core team — sound effects editor Diego Perez, sound assistant Emma Present, dialogue/ADR editor Michele Perrone and Foley supervisor Jonathan Klein — handled sound design, sound editorial and pre-mixing at E² Sound on the Warner Bros. lot in Burbank.

Marquis was first introduced to Wonder Park back in 2013, but the team’s real work began in January 2017. The animated sequences steadily poured in for 17 months. “We had a really long time to work the track, to get some of the conceptual sounds nailed down before going into the first preview. We had two previews with temp score and then two more with mockups of composer Steven Price’s score. It was a real luxury to spend that much time massaging and nitpicking the track before getting to the dub stage. This made the final mix fun; we were having fun mixing and not making editorial choices at that point.”

The final mix was done at Technicolor’s Stage 1, with re-recording mixers Anna Behlmer (effects) and Terry Porter (dialogue/music).

Here, Marquis shares insight on how he created the whimsical sound of Wonder Park, from the adorable yet naughty chimpanzombies to the tonally pleasing, rhythmic and resonant bendy-straw slide.

The film’s sound never felt intense even in tense situations. That approach felt perfectly in tune with the sensibilities of the intended audience. Was that the initial overall goal for this soundtrack?
When something was intense, we didn’t want it to be painful. We were always in search of having a nice round sound that had the power to communicate the energy and intensity we wanted without having the pointy, sharp edges that hurt. This film is geared toward a younger audience and we were supersensitive about that right out of the gate, even without having that direction from anyone outside of ourselves.

I have two kids — one 10 and one five. Often, they will pop by the studio and listen to what we’re doing. I can get a pretty good gauge right off the bat if we’re doing something that is not resonating with them. Then, we can redirect more toward the intended audience. I pretty much previewed every scene for my kids, and they were having a blast. I bounced ideas off of them so the soundtrack evolved easily toward their demographic. They were at the forefront of our thoughts when designing these sequences.

John Marquis recording the bendy straw sound.

There were numerous opportunities to create fun, unique palettes of sound for this park and these rides that stem from this little girl’s imagination. If I’m a little kid and I’m playing with a toy fish and I’m zipping it around the room, what kind of sound am I making? What kind of sounds am I imagining it making?

This film reminded me of being a kid and playing with toys. So, for the merry-go-round sequence with the flying fish, I asked my kids, “What do you think that would sound like?” And they’d make some sound with their mouths and start playing, and I’d just riff off of that.

I loved the sound of the bendy-straw slide — from the sound of it being built, to the characters traveling through it, and even the reverb on their voices while inside of it. How did you create those sounds?
Before that scene came to us, before we talked about it or saw it, I had the perfect sound for it. We had been having a lot of rain, so I needed to get an expandable gutter for my house. It starts at about one foot long but can be pulled out to three feet long if needed. It works exactly like a bendy straw, but it’s huge. So when I saw the scene in the film, I knew I had the exact, perfect sound for it.

We mic’d it with a Sanken CO-100k, inside and out. We pulled the tube apart and closed it, and got this great, ribbed, rippling, zuzzy sound. We also captured impulse responses inside the tube so we could create custom reverbs. It was one of those magical things that I didn’t even have to think about or go hunting for. This one just fell in my lap. It’s a really fun and tonal sound. It’s musical and has a rhythm to it. You can really play with the Doppler effect to create interesting pass-bys for the building sequences.
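
(A technical aside: applying a captured impulse response to dialogue is just convolution. A minimal sketch with SciPy, assuming mono WAV files with hypothetical names:)

```python
# Applying a captured impulse response is convolution. A sketch with
# hypothetical mono WAV file names.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("dry_dialogue.wav")
ir, sr_ir = sf.read("tube_ir.wav")
assert sr == sr_ir, "resample the IR to match the dialogue first"

wet = fftconvolve(dry, ir)                       # the IR stamps the tube's space
mix = 0.6 * np.pad(dry, (0, len(ir) - 1)) + 0.4 * wet
mix /= np.abs(mix).max()                         # avoid clipping
sf.write("dialogue_in_tube.wav", mix, sr)
```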

Another fun sequence for sound was inside Zero-G-Land. How did you come up with those sounds?
That’s a huge, open space. Our first instinct was to go with a very reverberant sound to showcase the size of the space and the fact that June is in there alone. But as we discussed it further, we came to the conclusion that since this is a zero-gravity environment, there would be no air for the sound waves to travel through. So, we decided to treat it like space. That approach really worked out because in the scene preceding Zero-G-Land, June is walking through a chasm and there are huge echoes. So the contrast between that and the airless Zero-G-Land worked out perfectly.

Inside Zero-G-Land’s tight, quiet environment we have the sound of these giant balls that June is bouncing off of. They look like balloons so we had balloon bounce sounds, but it wasn’t whimsical enough. It was too predictable. This is a land of imagination, so we were looking for another sound to use.

John Marquis with the Wind Wand.

My friend has an instrument called a Wind Wand, which combines the sound of a didgeridoo with a bullroarer. The Wind Wand is about three feet long and has a gigantic rubber band that goes around it. When you swing the instrument around in the air, the rubber band vibrates. It almost sounds like an organic lightsaber-like sound. I had been playing around with that for another film and thought the rubbery, resonant quality of its vibration could work for these gigantic ball bounces. So we recorded it and applied mild processing to get some shape and movement. It was just a bit of pitching and Doppler effect; we didn’t have to do much to it because the actual sound itself was so expressive and rich and it just fell into place. Once we heard it in the cut, we knew it was the right sound.

How did you approach the sound of the chimpanzombies? Again, this could have been an intense sound, but it was cute! How did you create their sounds?
The key was to make them sound exciting and mischievous instead of scary. It can’t ever feel like June is going to die. There is danger. There is confusion. But there is never a fear of death.

The chimpanzombies are actually these Wonder Chimp dolls gone crazy, so they were all supposed to have the same voice — this pre-recorded voice that is in every Wonder Chimp doll. You see this horde of chimpanzombies coming toward you and you think something really threatening is happening, but then you start to hear them, and all they are saying is, “Welcome to Wonderland!” or something sweet like that. It’s all one big cacophony of high-pitched voices, and they have these little squeaky dog-toy feet. So there’s this contrast between the scary thing you anticipate and the super-cute thing you actually get.

The big challenge was that they were all supposed to sound the same, just this one pre-recorded voice that’s in each one of these dolls. I was afraid it was going to sound like a wall of noise that was indecipherable, and a big, looping mess. There’s a software program that I ended up using a lot on this film. It’s called Sound Particles. It’s really cool, and I’ve been finding a reason to use it on every movie now. So, I loaded this pre-recorded snippet from the Wonder Chimp doll into Sound Particles and then changed different parameters — I wanted a crowd of 20 dolls that could vary in pitch by 10%, and they’re going to walk by at a medium pace.

Changing the parameters will change the results, and I was able to make a mass of different voices based off of this one, individual audio file. It worked perfectly once I came up with a recipe for it. What would have taken me a day or more — to individually pitch a copy of a file numerous times to create a crowd of unique voices — only took me a few minutes. I just did a bunch of varieties of that, with smaller groups and bigger groups, and I did that with their feet as well. The key was that the chimpanzombies were all one thing, but in the context of music and dialogue, you had to be able to discern the individuality of each little one.
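
(Sound Particles is a GUI tool, but the recipe Marquis describes, many detuned and time-staggered copies of one file, is easy to sketch conceptually. A Python version using librosa, with “welcome.wav” as a hypothetical source clip:)

```python
# Conceptual only: Sound Particles is a GUI tool. This librosa sketch just
# layers many detuned, time-staggered copies of one hypothetical clip.
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("welcome.wav", sr=None, mono=True)
rng = np.random.default_rng(7)
n_dolls = 20
out = np.zeros(len(y) + sr * 4)          # room for staggered entrances

for _ in range(n_dolls):
    steps = rng.uniform(-1.7, 1.7)       # roughly +/-10% pitch, in semitones
    voice = librosa.effects.pitch_shift(y, sr=sr, n_steps=steps)
    start = int(rng.integers(0, sr * 4)) # each doll enters at its own time
    out[start:start + len(voice)] += voice

out /= np.abs(out).max()                 # normalize the summed crowd
sf.write("chimp_crowd.wav", out, sr)
```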

There’s a fun scene where the chimpanzombies are using little pickaxes and hitting the underside of the glass walkway that June and the Wonderland mascots are traversing. How did you make that?
That was for Fireworks Falls; one of the big scenes that we had waited a long time for. We weren’t really sure how that was going to look — if the waterfall would be more fiery or more sparkly.

The little pickaxes were a blacksmith’s hammer beating an iron bar on an anvil. Those “tink” sounds were pitched up and resonated just a little bit to give it a glass feel. The key with that, again, was to try to make it cute. You have these mischievous chimpanzombies all pecking away at the glass. It had to sound like they were being naughty, not malicious.

When the glass shatters and they all fall down, we had these little pinball bell sounds that would pop in from time to time. It kept the scene feeling mildly whimsical as the debris is falling and hitting the patio umbrellas and tables in the background.

Here again, it could have sounded intense as June makes her escape using the patio umbrella, but it didn’t. It sounded fun!
I grew up in the Midwest and every July 4th we would shoot off fireworks on the front lawn and on the sidewalk. I was thinking about the fun fireworks that I remembered, like sparklers, and these whistling spinning fireworks that had a fun acceleration sound. Then there were bottle rockets. When I hear those sounds now I remember the fun time of being a kid on July 4th.

So, for the Fireworks Falls, I wanted to use those sounds as the fun details, the top notes that poke through. There are rocket crackles and whistles that support the low-end, powerful portion of the rapids. As June is escaping, she’s saying, “This is so amazing! This is so cool!” She’s a kid exploring something really amazing and realizing that this is all of the stuff that she was imagining and is now experiencing for real. We didn’t want her to feel scared, but rather to be overtaken by the joy and awesomeness of what she’s experiencing.

The most ominous element in the park is the Darkness. What was your approach to the sound in there?
It needed to be something that was more mysterious than ominous. It’s only scary because of the unknown factor. At first, we played around with storm elements, but that wasn’t right. So I played around with a recording of my son as a baby; he’s cooing. I pitched that sound down a ton, so it has this natural, organic, undulating, human spine to it. I mixed in some dissonant windchimes. I have a nice set of windchimes at home and I arranged them so they wouldn’t hit in a pleasing way. I pitched those way down, and it added a magical/mystical feel to the sound. It’s almost enticing June to come and check it out.

The Darkness is the thing that is eating up June’s creativity and imagination. It’s eating up all of the joy. It’s never entirely clear what it is, though. When June gets inside the Darkness, everything is silent. The things in there get picked up and rearranged and dropped. As with the Zero-G-Land moment, we bring everything to a head. We go from a full-spectrum sound, with the score and June yelling and the sound design, to a quiet moment where we only hear her breathing. From there, it opens up and blossoms with the pulse of her creativity returning and her memories returning. It’s a very subjective moment that’s hard to put into words.

When June whispers into Peanut’s ear, his marker comes alive again. How did you make the sound of Peanut’s marker? And how did you give it movement?
The sound was primarily this ceramic, water-based bird whistle, which gave it a whimsical element. It reminded me of a show I watched when I was little where the host would draw with his marker and it would make a little whistling, musical sound. So anytime the marker was moving, it would make this really fun sound. This marker needed to feel like something you would pick up and wave around. It had to feel like something that would inspire you to draw and create with it.

To get the movement, it was partially performance based and partially done by adding in a Doppler effect. I used variations in the Waves Doppler plug-in. This was another sound that I also used Sound Particles for, but I didn’t use it to generate particles. I used it to generate varied movement for a single source, to give it shape and speed.
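
(Another aside: a Doppler pass-by can be approximated by resampling a signal along a time-varying propagation delay. A rough numpy sketch, not the Waves plugin’s algorithm:)

```python
# A rough numpy approximation of a Doppler pass-by: resample the signal
# along a time-varying propagation delay (not the Waves plugin's algorithm).
import numpy as np

def doppler_passby(dry, sr, speed=20.0, closest=2.0, c=343.0):
    """Mono source driving past a listener at `closest` meters."""
    t = np.arange(len(dry)) / sr
    x = speed * (t - t[-1] / 2.0)        # source position along its path
    dist = np.sqrt(x**2 + closest**2)    # distance to the listener
    src_time = t - dist / c              # when each heard sample was emitted
    wet = np.interp(src_time, t, dry, left=0.0, right=0.0)
    return wet / (1.0 + dist)            # simple distance attenuation

sr = 48000
tone = np.sin(2 * np.pi * 440 * np.arange(sr * 4) / sr)  # 4-second test tone
swoosh = doppler_passby(tone, sr)
```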

Did you use Sound Particles on the paper flying sound too? That one also had a lot of movement, with lots of twists and turns.
No, that one was an old-fashioned fader move. What gave that sound its interesting quality — this soft, almost ethereal and inviting feel — was the practical element we used to create the sound. It was a piece of paper bag that was super-crumpled up, so it felt fluttery and soft. Then, every time it moved, it had a vocal whoosh element that gave it personality. So once we got that practical element nailed down, the key was to accentuate it with a little wispy whoosh to make it feel like the paper was whispering to June, saying, “Come follow me!”

Wonder Park is in theaters now. Go see it!


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Behind the Title: Nice Shoes animator Yandong Dino Qiu

This artist/designer has taken to sketching people on the subway to keep his skills fresh and mind relaxed.

NAME: Yandong Dino Qiu

COMPANY: New York’s Nice Shoes

CAN YOU DESCRIBE YOUR COMPANY?
Nice Shoes is a full-service creative studio. We offer design, animation, VFX, editing, color grading, VR/AR, working with agencies, brands and filmmakers to help realize their creative vision.

WHAT’S YOUR JOB TITLE?
Designer/Animator

WHAT DOES THAT ENTAIL?
Helping our clients to explore different looks in the pre-production stage, while aiding them in getting as close as possible to the final look of the spot. There’s a lot of exploration and trial and error as we try to deliver beautiful still frames that inform the look of the moving piece.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Not so much for the title, but for myself, design and animation can be quite broad. People may assume you’re only 2D, but the job also involves a lot of other skill sets, such as 3D lighting and rendering. It’s pretty close to a generalist role that requires you to know nearly every piece of software and to turn things around very quickly.

WHAT TOOLS DO YOU USE?
Photoshop, After Effects, Illustrator, InDesign — the full Adobe Creative Suite — and Maxon Cinema 4D.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Pitch and exploration. At that stage, all possibilities are open. The job is alive… like a baby. You’re seeing it form and helping to make new life. Before this, you have no idea what it’s going to look like. After this phase, everyone has an idea. It’s very challenging, exciting and rewarding.

WHAT’S YOUR LEAST FAVORITE?
Revisions. Especially toward the end of a project. Everything is set up. One little change will affect everything else.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
2:15pm. It’s right after lunch. You know you have the whole afternoon. The sun is bright. The mood is light. It’s not too late for anything.

Sketching on the subway.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I would be a Manga artist.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
La Mer. Frontline. Friskies. I’ve also been drawing during my commute every day, sketching the people I see on the subway. I’m trying to post every week on Instagram. I think it’s important for artists to keep to a routine. I started this at the beginning of 2019, and there’ve been about 50 drawings already. Artists need to keep their pens sharp all the time. By doing these sketches, I’m not only benefiting my drawing skills, but I’m improving my observation of shapes and compositions, which is extremely valuable for work. Being able to break down shapes and components is a key principle of design, and honing that skill helps me in responding to client briefs.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
TED-Ed What Is Time? We had a lot of freedom in figuring out how to animate Einstein’s theories in a fun and engaging way. I worked with our creative director Harry Dorrington to establish the look and then with our CG team to ensure that the feel we established in the style frames was implemented throughout the piece.

TED-Ed What Is Time?

The film was extremely well received. There was a lot of excitement at Nice Shoes when it premiered, and TED-Ed’s audience seemed to respond really warmly as well. It’s rare to see so much positivity in the YouTube comments.

NAME SOME TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My Wacom tablet for drawing and my iPad for reading.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I take time and draw for myself. I love that drawing and creating is such a huge part of my job, but it can get stressful and tiring only creating for others. I’m proud of that work, but when I can draw something that makes me personally happy, any stress or exhaustion from the work day just melts away.

Quick Chat: Lord Danger takes on VFX-heavy Devil May Cry 5 spot

By Randi Altman

Visual effects for spots have become more and more sophisticated, and the recent Capcom trailer promoting the availability of its game Devil May Cry 5 is a perfect example.

 The Mike Diva-directed Something Greater starts off like it might be a commercial for an anti-depressant with images of a woman cooking dinner for some guests, people working at a construction site, a bored guy trimming hedges… but suddenly each of our “Everyday Joes” turns into a warrior fighting baddies in a video game.

Josh Shadid

The hedge trimmer’s right arm turns into a futuristic weapon, the construction worker evokes a panther to fight a monster, and the lady cooking is seen with guns a blazin’ in both hands. When she runs out of ammo, and to the dismay of her dinner guests, her arms turn into giant saws. 

Lord Danger’s team worked closely with Capcom USA to create this over-the-top experience, and they provided everything from production to VFX to post, including sound and music.

We reached out to Lord Danger founder/EP Josh Shadid to learn more about their collaboration with Capcom, as well as their workflow.

How much direction did you get from Capcom? What was their brief to you?
Capcom’s director of brand marketing for fight games, Charlene Ingram, came to us with a simple request — make a memorable TV commercial that did not use gameplay footage but still illustrated the intensity and epic-ness of the DMC series.

What was it shot on and why?
We shot on both the Arri Alexa Mini and the Phantom Flex 4K using Zeiss Super Speed MKII prime lenses, thanks to our friends at Antagonist Camera, and a Technodolly motion control crane arm. We used the Phantom on the Technodolly to capture the high-speed shots. We used that setup to speed-ramp through character actions while maintaining 4K resolution for post in both the garden and kitchen transformations.

We used the Alexa Mini on the rest of the spot. It’s our preferred camera for most of our shoots because we love the combination of its size and image quality. The Technodolly allowed us to create frame-accurate, repeatable camera movements around the characters so we could seamlessly stitch together multiple shots as one. We also needed to cue the fight choreography to sync up with our camera positions.

You had a VFX supervisor on set. Can you give an example of how that was beneficial?
We did have a VFX supervisor on set for this production. Our usual VFX supervisor is one of our lead animators — having him on set means we’re often starting elements of our post production workflow while we’re still shooting.

Assuming some of it was greenscreen?
We shot elements of the construction site and gardening scene on greenscreen. We used pop-ups to film these elements on set so we could mimic camera moves and lighting perfectly. We also took photogrammetry scans of our characters to help rebuild parts of their bodies during transition moments, and to emulate flying without requiring wire work — which would have been difficult to control outside during windy and rainy weather.

Can you talk about some of the more challenging VFX?
The shot of the gardener jumping into the air while the camera spins around him twice was particularly difficult. The camera starts on a 45-degree frontal, swings behind him and then returns to a 45-degree frontal once he’s in the air.

We had to digitally recreate the entire street, so we used the technocrane at the highest position possible to capture data from a slow pan across the neighborhood in order to rebuild the world. We also had to shoot this scene in several pieces and stitch it together. Since we didn’t use wire work to suspend the character, we also had to recreate the lower half of his body in 3D to achieve a natural-looking jump position. That, combined with the CG weapon elements, made for a challenging composite — but in the end, it turned out really dramatic (and pretty cool).

Were any of the assets provided by Capcom? All created from scratch?
We were provided with the character and weapons models from Capcom — but these were in-game assets, and if you’ve played the game you’ll see that the environments are often dark and moody, so the textures and shaders really didn’t apply to a real-world scenario.

Our character modeling team had to recreate and re-interpret what these characters and weapons would look like in the real world — and they had to nail it — because game culture wouldn’t forgive a poor interpretation of these iconic elements. So far the feedback has been pretty darn good.

In what ways did being the production company and the VFX house on the project help?
The separation of creative from production and post production is an outdated model. The time it takes to bring each team up to speed, to manage the communication of ideas between creatives and to ensure there is a cohesive vision from start to finish, increases both the costs and the time it takes to deliver a final project.

We shot and delivered all of Devil May Cry’s Something Greater in four weeks total, all in-house. We find that working as the production company and VFX house reduces the ratio of managers per creative significantly, putting more of the money into the final product.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

VFX supervisor Christoph Schröer joins NYC’s Artjail

New York City-based VFX house Artjail has added Christoph Schröer as VFX supervisor. Previously a VFX supervisor/senior compositor at The Mill, Schröer brings over a decade of experience to his new role at Artjail. His work has been featured in spots for Mercedes-Benz, Visa, Volkswagen, Samsung, BMW, Hennessy and Cartier.

Combining his computer technology expertise and a passion for graffiti design, Schröer applied his degree in Computer and Media Sciences to begin his career in VFX. He started off working at visual effects studios in Germany and Switzerland where he collaborated with a variety of European auto clients. His credits from his tenure in the European market include lead compositor for multiple Mercedes-Benz spots, two global Volkswagen campaign launches and BMW’s “Rev Up Your Family.”

In 2016, Schröer made the move to New York to take on a role as senior compositor and VFX supervisor at The Mill. There, he teamed with directors such as Tarsem Singh and Derek Cianfrance, and worked on campaigns for Hennessy, Nissan Altima, Samsung, Cartier and Visa.

Autodesk Arnold 5.3 with Arnold GPU in public beta

Autodesk has made its Arnold 5.3 with Arnold GPU available as a public beta. The release provides artists with GPU rendering for a set number of features, and the flexibility to choose between rendering on the CPU or GPU without changing renderers.

From look development to lighting, support for GPU acceleration brings greater interactivity and speed to artist workflows, helping reduce iteration and review cycles. Arnold 5.3 also adds new functionality to help maximize performance and give artists more control over their rendering processes, including updates to adaptive sampling, a new version of the Randomwalk SSS mode and improved Operator UX.

Arnold GPU rendering makes it easier for artists and small studios to iterate quickly in a fast working environment and scale rendering capacity to accommodate project demands. From within the standard Arnold interface, users can switch between rendering on the CPU and GPU with a single click. Arnold GPU currently supports features such as arbitrary shading networks, SSS, hair, atmospherics, instancing, and procedurals. Arnold GPU is based on the Nvidia OptiX framework and is optimized to leverage Nvidia RTX technology.
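
In script form, that switch is a single parameter on the global options node. A minimal sketch using the Arnold SDK’s Python bindings; note that the “render_device” parameter name is an assumption based on Arnold GPU’s public documentation, so verify it against your SDK version:

```python
# A minimal sketch with the Arnold SDK's Python bindings. The
# "render_device" parameter name is an assumption from Arnold GPU's public
# docs; verify against your SDK version.
from arnold import *

AiBegin()
AiASSLoad("scene.ass", AI_NODE_ALL)            # load an exported scene
options = AiUniverseGetOptions()               # global render options node
AiNodeSetStr(options, "render_device", "GPU")  # flip CPU -> GPU, same renderer
AiRender(AI_RENDER_MODE_CAMERA)
AiEnd()
```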

New feature summary:
— Major improvements to quality and performance for adaptive sampling, helping to reduce render times without jeopardizing final image quality
— Improved version of Randomwalk SSS mode for more realistic shading
— Enhanced usability for Standard Surface, giving users more control
— Improvements to the Operator framework
— Better sampling of Skydome lights, reducing direct illumination noise
— Updates to support for MaterialX, allowing users to save a shading network as a MaterialX look

Arnold 5.3 with Arnold GPU in public beta will be available March 20 as a standalone subscription or with a collection of end-to-end creative tools within the Autodesk Media & Entertainment Collection. You can also try Arnold GPU with a free 30-day trial of Arnold. Arnold GPU is available in all supported plug-ins for Autodesk Maya, Autodesk 3ds Max, Houdini, Cinema 4D and Katana.

Sandbox VR partners with Vicon on Amber Sky 2088 experience

VR gaming company Sandbox VR has been partnering with Vicon, using its motion capture tools to create next-generation immersive experiences. By using Vicon’s motion capture cameras and its location-based VR (LBVR) software Evoke, the Hong Kong-based Sandbox VR is working to transport up to six people at a time into the Amber Sky 2088 experience, which takes place in a future where the fate of humanity lies in the balance.

Sandbox VR’s adventures resemble movies where the players become the characters. With two proprietary AAA-quality games already in operation across Sandbox VR’s seven locations, a new motion capture solution was needed for its third title, Amber Sky 2088. In the futuristic game, users step into the role of androids, which grants players abilities far beyond the average human while still scaling the game to their actual movements. To accurately convey that for multiple users in a free-roam environment, precision tracking and flexible scalability were vital. For that, Sandbox VR turned to Vicon.

Set in the twilight of the 21st century, Amber Sky 2088 takes players to a futuristic version of Hong Kong, then through the clouds to the edge of space to fight off an alien invasion. Android abilities allow players to react with incredible strength and move at speeds fast enough to dodge bullets. And while the in-game action is furious, participants in the real-world — equipped with VR headsets —  freely roam an open environment as Vicon LBVR motion capture cameras track their movement.

Vicon’s motion capture cameras record every player movement, then send the data to its Evoke software, a solution introduced last year as part of its LBVR platform, Origin. Vicon’s solution offers  precise tracking, while also animating player motion in realtime, creating a seamless in-game experience. Automatic re-calibration also makes the experience’s operation easier than ever despite its complex nature, and the system’s scalability means fewer cameras can be used to capture more movement, making it cost-effective for large scale expansion.

Since its founding in 2016, Sandbox VR has been creating interactive experiences by combining motion capture technology with virtual reality. After opening its first location in Hong Kong in 2017, the company has since expanded to seven locations across Asia and North America, with six new sites on the way. Each 30- to 60-minute experience is created in-house by Sandbox VR, and each can accommodate up to six players at a time.

The recent partnership with Vicon is the first step in Sandbox VR’s expansion plans that will see it open over 40 experience rooms across 12 new locations around the world by the end of the year. In considering its plans to build and operate new locations, the VR makers chose to start with five systems from Vicon, in part because of the company’s collaborative nature.

Review: Red Giant’s Trapcode Suite 15

By Brady Betzel

We are now comfortably into 2019 and enjoying the Chinese Year of the Pig — or at least I am! So readers, you might remember that with each new year comes a Red Giant Trapcode Suite update. And Red Giant didn’t disappoint with Trapcode Suite 15.

Every year Red Giant adds more amazing features to its already amazing particle generator and emitter toolset, Trapcode Suite, and this year is no different. Trapcode Suite 15 is keeping tools like 3D Stroke, Shine, Starglow, Sound Keys, Lux, Tao, Echospace and Horizon while significantly updating Particular, Form and Mir.

I won’t be covering each plugin in this review, but you can check out what each individual plugin does on Red Giant’s website.

Particular 4
The bread and butter of the Trapcode Suite has always been Particular, and Version 4 continues to be a powerhouse. The biggest differences between a true 3D app like Maxon’s Cinema 4D or Autodesk Maya and Adobe After Effects (which is pseudo-3D) are features like true raytraced rendering and particle systems that interact through fluid dynamics. As I alluded to, After Effects isn’t technically a 3D app, but with plugins like Particular you can create pseudo-3D particle systems that can affect and be affected by different particle emitters in your scenes. Trapcode Suite 15 and, in particular (pun fully intended), Particular 4 have evolved to another level with the latest update, which includes Dynamic Fluids. Dynamic Fluids essentially allows particle systems that have the fluid-physics engine enabled to interact with one another, as well as create mind-blowing liquid-like simulations inside of After Effects.
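
Red Giant’s solver is far more sophisticated, but the essence of why two fluid-enabled systems can interact is easy to sketch: both emitters push velocity into a shared field and read it back. A toy numpy illustration (not Particular’s algorithm):

```python
# A toy of the coupling idea only (nothing like Red Giant's actual solver):
# particles from both emitters splat velocity into a shared coarse grid and
# read it back, so each stream bends the other.
import numpy as np

GSIZE, DT = 32, 0.05
grid = np.zeros((GSIZE, GSIZE, 2))              # shared 2D velocity field

def make_emitter(x, vx, n=200, seed=0):
    rng = np.random.default_rng(seed)
    pos = np.column_stack([np.full(n, x), rng.random(n)])
    vel = np.column_stack([np.full(n, vx), rng.normal(0.0, 0.02, n)])
    return pos, vel

def step(pos, vel):
    cells = np.clip((pos * GSIZE).astype(int), 0, GSIZE - 1)
    np.add.at(grid, (cells[:, 1], cells[:, 0]), 0.1 * vel)  # splat into grid
    vel += DT * grid[cells[:, 1], cells[:, 0]]              # gather back out
    pos += DT * vel
    return np.clip(pos, 0.0, 1.0), vel

a_pos, a_vel = make_emitter(0.0, +0.5, seed=1)  # stream moving right
b_pos, b_vel = make_emitter(1.0, -0.5, seed=2)  # stream moving left

for _ in range(100):
    grid *= 0.9                                 # crude viscosity/decay
    a_pos, a_vel = step(a_pos, a_vel)
    b_pos, b_vel = step(b_pos, b_vel)
```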

What’s even more impressive is that with the Particular Designer and over 335 presets, you don’t  need a master’s degree to make impressive motion graphics. While I love to work in After Effects, I don’t always have eight hours to make a fluidly dynamic particle system bounce off 3D text, or have two systems interact with each other for a text reveal. This is where Particular 4 really pays for itself. With a little research and tutorial watching, you will be up and rendering within 30 minutes.

When I was using Particular 4, I simply wanted to recreate the Dynamic Fluid interaction I had seen in one of Red Giant’s promos: basically, two emitters crashing into each other in a viscous fluid, then interacting. While it isn’t necessarily easy, if you have a slightly-above-beginner amount of After Effects knowledge, you will be able to do this. Apply the Particular plugin to a new solid object and open the Particular Designer in Effect Controls. From there you can designate emitter type, motion, particle type, particle shadowing, particle color and dispersion types, as well as add multiple instances of emitters, adjust physics and much more.

The presets for all of these options can be accessed by clicking the “>” symbol in the upper left of the Designer interface. You can access all of the detailed settings and building “Blocks” of each of these categories by clicking the “<” in the same area. With a few hours spent watching tutorials on YouTube, you can be up and running with particle emitters and fluid dynamics. The preset emitters are pretty amazing, including my favorite, the two-emitter fluid dynamic systems that interact with one another.

Form 4
The second plugin in the Trapcode Suite 15 that has been updated is Trapcode Form 4. Form is a plugin that literally creates forms using particles that live forever in a unified 3D space, allowing for interaction. Form 4 adds the updated Designer, which makes particle grids a little more accessible and easier to construct for non-experts. Form 4 also includes the latest Fluid Dynamics update that Particular gained. The Fluid Dynamics engine really adds another level of beauty to Form projects, allowing you to create fluid-like particle grids from the 150 included presets or even your own .obj files.

My favorite settings to tinker with are Swirl and Viscosity. Using both settings in tandem can help create an ooey-gooey liquid particle grid that can interact with other Form systems to build pretty incredible scenes. To test how .obj models work within Form, I clicked over to www.sketchfab.com and downloaded an .obj 3D model. If you search for free downloadable models, you can use them in your projects under Creative Commons licensing, as long as you credit the creator. When in doubt, always read the license; in any case, these make great practice models.

Anyway, Form 4 allows us to import .obj files, including animated .obj sequences, as well as their textures. I found a Day of the Dead-type skull created by JMUHIST, pointed Form to the .obj as well as its included texture, added a couple of After Effects lights and a camera, and I was in business. Form also has a great replicator feature (much like Element 3D). There are a ton of options, including fog distance under Visibility, animation properties and even the ability to quickly add a null object linked to your model for quick alignment of other elements in the scene.
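
For the curious, .obj is a plain-text format, which is part of why these downloads make such good practice material. Below is a minimal, hedged sketch of what any .obj importer has to pull out of a file before particles can be scattered over a model; real files (and Form) also carry normals, UVs and .mtl textures, which this skips, and “skull.obj” is a hypothetical file name standing in for a Sketchfab download.

```python
# Minimal .obj reader: "v" lines are vertex positions, "f" lines are faces
# that index into the vertex list (1-based, possibly as v/vt/vn triplets).
def load_obj(path):
    verts, faces = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":        # vertex: v x y z
                verts.append(tuple(float(p) for p in parts[1:4]))
            elif parts[0] == "f":      # face: f v1 v2 v3 ...
                faces.append([int(p.split("/")[0]) - 1 for p in parts[1:]])
    return verts, faces

verts, faces = load_obj("skull.obj")   # hypothetical Sketchfab download
print(f"{len(verts)} vertices, {len(faces)} faces")
```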

Mir 3
Up last is Trapcode Mir 3. Mir is used to create 3D terrains, objects and wireframes in After Effects, and this latest update adds the ability to import .obj models and textures. Using fractal displacement mapping, you can quickly create some amazing terrains, from mountain-like peaks to alien landscapes. Mir is a great supplement to plugins like Video Copilot’s Element 3D, adding endless tunnels or terrains to your 3D scenes quickly and easily.
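
If you have never looked under the hood of fractal displacement, the trick is easy to sketch: sum several octaves of noise, each at double the frequency and half the amplitude of the last, then use the result to displace a flat grid. The toy version below uses upsampled random noise to stay dependency-free, where a plugin like Mir would use smooth gradient noise, but the octave stacking is the same idea.

```python
# Fractal (fBm-style) heightmap: stacked noise octaves displace a flat grid.
import numpy as np

def fbm_heightmap(size=256, octaves=5, seed=1):
    rng = np.random.default_rng(seed)
    height = np.zeros((size, size))
    amplitude = 1.0
    for octave in range(octaves):
        freq = 2 ** octave                             # each octave: 2x frequency...
        coarse = rng.standard_normal((freq + 1, freq + 1))
        idx = np.linspace(0, freq, size).astype(int)   # crude upsample to full size
        height += amplitude * coarse[np.ix_(idx, idx)]
        amplitude *= 0.5                               # ...at half the amplitude
    return height

terrain = fbm_heightmap()                              # displace your grid with this
print(terrain.shape, float(terrain.min()), float(terrain.max()))
```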

And if you don’t own Element 3D, you will really enjoy the particle replication system: use one 3D object, duplicate it, then twist, distort and animate multiple instances of it quickly. The best part about all of these Trapcode tools is that they interact with the cameras and lighting native to After Effects, making for a unified animating experience (instead of animating separate camera and lighting rigs like in the old days). Two of my favorite features from the last update are quad- and triangle-based polygon texturing, which can quickly give surfaces an 8-bit or low-poly feel, and a second-pass wireframe that adds a grid-like surface to your terrain.

Summing Up
Red Giant’s Trapcode Suite 15 is amazing. If you have a previous version of the Trapcode Suite, you’re in luck: the upgrade is “only” $199. If you need to purchase the full suite, it will cost you $999. Students get a bit of a break at $499.

If you are on the fence about it, go watch Daniel Hashimoto’s Cheap Tricks: Aquaman Underwater Effects tutorial (Part 1 and Part 2). He explains how you can use all of the Red Giant Trapcode Suite effects with other plugins like Video CoPilot’s Element 3D and Red Giant’s Universe and offers up some pro tips when using www.sketchfab.com to find 3D models.

I think I even saw him using Video CoPilot’s FX Console, a free plugin that makes accessing plugins much faster inside After Effects. You may have seen his work as @ActionMovieKid on Twitter or @TheActionMovieKid on Instagram. He does some amazing VFX with his kids; he’s a must follow. Red Giant made a power move getting him to make tutorials for them! Anyway, his Aquaman Underwater Effects tutorial takes you step by step through each part of Trapcode Suite 15 in an amazing way. He makes it look a little too easy, but I guess that is a combination of his VFX skills and the Trapcode toolset.

If you are excited about 3D objects, particle systems and fluid dynamics you must check out Trapcode Suite 15 and its latest updates to Particular, Mir and Form.

After I finished the Trapcode Suite 15 review, Red Giant released the Trapcode Suite 15.1 update. It includes text and mask emitters for Form and Particular 4.1, an updated Designer, Shadowlet particle-type matching, Shadowlet softness and 21 additional presets.

This is a free update that can be downloaded from the Red Giant website.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

 

Behind the Title: Gentleman Scholar MD/EP Jo Arghiris

LA-based Jo Arghiris embraces the creativity of the job and enjoys “pulling treatments together with our directors. It’s always such a fun, collaborative process.” Find out more…

Name: Jo Arghiris

Company: Gentleman Scholar (@gentscholar)

Can You Describe Your Company?
Gentleman Scholar is a creative production studio, drawn together by a love of design and an eagerness to push boundaries. Since launching in Los Angeles in 2010, and expanding to New York in 2016, we have evolved within the disciplines of live-action production, digital exploration, print and VR. At our very core, we are a band of passionate artists and fearless makers.

The biggest thing that struck me when I joined Scholar was everyone’s willingness to roll up their sleeves and give it a go. There are so many creative people working across both our studios, it’s quite amazing what we can achieve when we put our collective minds to it. In fact, it’s really hard to put us in a category or to define what we do on a day-to-day basis. But if I had to sum it up in just one word, our company feels like “home”; there’s no place quite like it.

What’s Your Job Title?
Managing Director/EP Los Angeles

What Does That Entail?
Truth be told, it’s evolving all the time. In its purest form, my job entails having top-line involvement on everything going on in the LA studio, both from operational and new business POVs. I face inwards and outwards. I mentor and I project. I lead and I follow. But the main thing I want to mention is that I couldn’t do my job without all these incredible people by my side. It really does take a village, every single day.

What Would Surprise People the Most About What Falls Under That Title?
Not so much “surprising” but certainly different from other roles, is that my job is never done (or at least it shouldn’t be). I never go home with all my to-do’s ticked off. The deck is constantly shuffled and re-dealt. This fluidity can be off-putting to some people who like to have a clear idea of what they need to achieve on any given day. But I really like to work that way, as it keeps my mind nimble and fresh.

What’s Your Favorite Part of the Job?
Learning new things and expanding my mind. I like to see our teams push themselves in this way, too. It’s incredibly satisfying watching folks overcome challenges and grow into their roles. Also, I obviously love winning work, especially if it’s an intense pitch process. I’m a creative person and I really enjoy pulling treatments together with our directors. It’s always such a fun, collaborative process.

What’s Your Least Favorite?
Well, I guess the 24/7 availability thing that we’ve all become accustomed to and are all guilty of. It’s so, so important for us to have boundaries. If I’m emailing the team late at night or on the weekend, I will write in the subject line, “For the Morning” or “For Monday.” I sometimes need to get stuff set up in advance, but I absolutely do not expect a response at 10pm on a Sunday night. To do your best work, it’s essential that you have a healthy work/life balance.

What is Your Favorite Time of the Day?
As clichéd as it may sound, I love to get up before anyone else and sit, in silence, with a cup of coffee. I’m a one-a-day kind of girl, so it’s pretty sacred to me. Weekdays or weekends, I have so much going on, I need to set my day up in these few solitary moments. I am not a night person at all and can usually be found fast asleep on the sofa sometime around 9pm each night. Equally favorite is when my kids get up and we do “huggle” time together, before the day takes us away on our separate journeys.

Bleacher Report

Can you Name Some Recent Projects?
Gentleman Scholar worked on a big Acura TLX campaign, which is probably one of my all-time favorites. Other fun projects include Legends Club for Timberland, the Upwork “Hey World!” campaign from Duncan Channon, the Sponsor Reel for the 2018 AICP Show and Bleacher Report’s Sports Alphabet.

If You Didn’t Have This Job, What Would You be Doing Instead?
I love photography, writing and traveling. So if I could do it all again, I’d be some kind of travel writer/photographer combo or a journalist or something. My brother actually does just that, and I’m super-proud of his choices. To stand behind your own creative point of view takes skill and dedication.

How Did You Know This Would Be Your Path?
The road has been long, and it has carried me from London to New York to Los Angeles. I originally started in post production and VFX, where I got a taste for creative problem-solving. The jump from this world to a creative production studio like Scholar was perfectly timed and I relished the learning curve that came with it. I think it’s quite hard to have a defined “path” these days.

My advice to anyone getting into our industry right now would be to understand that knowledge and education are powerful tools, so go out of your way to harness them. And never stand still; always keep pushing yourself.

Name Three Pieces of Technology You Can’t Live Without.
My AirPods (so happy to not have that charging/listening conflict with my iPhone anymore); all the apps that allow me to streamline my life and get shit done any time of day, no matter where; and I think my electric toothbrush is pretty high up there too. Can I have one more? Not “tech” per se, but my super-cute mini hair straightener, which makes my bangs look on point, even after working out!

What Social Media Channels Do You Follow?
Well, I like Instagram mostly. Do you count Pinterest? I love a Pinterest board. I have many of those. And I read Twitter, but I don’t Tweet too much. To be honest, I’m pretty lame on social media, and all my accounts are private. But I realize they are such important tools in our industry so I use them on an as-needed basis. Also, it’s something I need to consider soon for my kids, who are obsessed with watching random, “how-to” videos online and periodically ask me, “Are you going to put that on YouTube?” So I need to keep on top of it, not just for work, but also for them. It will be their world very soon.

Do You Listen to Music While You Work? Care to Share Your Favorite Music to Work to?
Yes, I have a Sonos setup in my office. I listen to a lot of playlists — found ones and the random ones your streaming services build for you. Earlier this morning I had Smino’s album blkswn playing. Right now I’m listening to a band called Pronoun. They were on a playlist Nylon Studios released called “All the Brooklyn Bands You Should Be Listening To.”

My drive home is all about podcasts. I’m trying to educate myself more on American history at the moment. I’m also tempted to get into Babbel and learn French. With all the hours I spend in the car, I’m pretty sure I would be fluent in no time!

What Do You Do to De-stress From it All?
So many things! I literally never stop. Hot yoga, spinning, hiking, mountain biking, cooking and thinking of new projects for my house. Road tripping, camping and exploring new places with my family and friends. Taking photographs and doing art projects with my kids. My all-time favorite thing to do is hit the beach for the day, winter and summer. I find it one of the most restorative places on Earth. I’m so happy to call LA my home. It suits me down to the ground!

Autodesk cloud-enabled tools now work with BeBop post platform

Autodesk has enabled use of its software in the cloud — including 3DS Max, Arnold, Flame and Maya — and BeBop Technology will deploy the tools on its cloud-based post platform. The BeBop platform enables processing-heavy post projects, such as visual effects and editing, in the cloud on powerful and highly secure virtualized desktops. Creatives can process, render, manage and deliver media files from anywhere on BeBop using any computer and an Internet connection as slow as 20Mbps.

The ongoing deployment of Autodesk software on the BeBop platform mirrors the ways BeBop and Adobe work closely together to optimize the experience of Adobe Creative Cloud subscribers. Adobe applications have been available natively on BeBop since April 2018.

Autodesk software users will now also gain access to BeBop Rocket Uploader, which enables ingestion of large media files at incredibly high speeds for a predictable monthly fee with no volume limits. Additionally, BeBop Over the Shoulder (OTS) enables secure and affordable remote collaboration, review and approval sessions in real-time. BeBop runs on all of the major public clouds, including Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.

Cinesite recreates Nottingham for Lionsgate’s Robin Hood

The city of Nottingham perpetually exists in two states: the metropolitan center that it is today and the fictional home of one of the world’s most famous outlaws. So when the filmmakers behind Robin Hood, which is now streaming and on DVD, looked to recreate the fictional Nottingham, they needed to build it from scratch with help from London’s Cinesite Studio. The film stars Taron Egerton, Jamie Foxx, Ben Mendelsohn, Eve Hewson and Jamie Dornan.

Working closely with Robin Hood’s VFX supervisor Simon Stanley-Clamp and director Otto Bathurst, Cinesite created a handful of settings and backgrounds for the film, starting with a digital model of Nottingham built to scale. Given its modern look and feel, the Nottingham of today wouldn’t do, so the team used Dubrovnik, Croatia, as its template. The Croatian city — best known to TV fans around the world as the model for Game of Thrones’ King’s Landing — has become a popular spot for filming historical fiction, thanks to its famed stone walls and medieval structures. That made it an ideal starting point for a film set around the time of the Crusades.

“Robin’s Nottingham is a teeming industrial city dominated by global influences, politics and religion. It’s also full of posh grandeur but populated by soot-choked mines and sprawling slums reflecting the gap between haves and have-nots, and we needed to establish that at a glance for audiences,” says Cinesite’s head of assets, Tim Potter. “With so many buildings making up the city, the Substance Suite allowed us to achieve the many variations and looks that were required for the large city of Nottingham in a very quick and easy manner.”

Using Autodesk Maya for the builds and Pixologic ZBrush for sculpting and displacement, the VFX team then relied on Allegorithmic Substance Designer (recently acquired by Adobe) to customize the city, creating detailed materials that would give life and personality to the stone and wood structures. From the slums inspired by Brazilian favelas to the gentry and nobility’s grandiose environments, the texturing and materials helped to provide audiences with unspoken clues about the outlaw archer’s world.

Creating these swings from the oppressors to the oppressed was often a matter of dirt, dust and grime, which were added to the RGB channels over the textures to add wear and tear to the city. Once the models and layouts were finalized, Cinesite added even more intricate details using Substance Painter, giving an already realistic recreation additional touches to reflect the sometimes messy lives of the people who would inhabit a city like Nottingham.
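
The layering itself is straightforward compositing. Here is a rough sketch of the idea (my illustration, not Allegorithmic’s code): a grayscale grime mask drives a blend toward a dirt color, and a single wear parameter swings the same wall texture between the haves and the have-nots.

```python
# Sketch of mask-driven wear: darken a base texture wherever the grime
# mask says dirt has collected; `wear` controls how run-down it reads.
import numpy as np

def apply_grime(base_rgb, grime_mask, grime_color=(0.15, 0.12, 0.10), wear=0.6):
    """base_rgb: HxWx3 floats in [0,1]; grime_mask: HxW floats in [0,1]."""
    k = (grime_mask * wear)[..., None]             # per-pixel blend weight
    return base_rgb * (1.0 - k) + np.array(grime_color) * k

# e.g. grime pooling at the bottom of a wall, clean up top:
wall = np.full((256, 256, 3), 0.7)
mask = np.linspace(1.0, 0.0, 256)[:, None].repeat(256, axis=1)
slum_wall = apply_grime(wall, mask, wear=0.9)      # the have-nots
manor_wall = apply_grime(wall, mask, wear=0.2)     # the haves
```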

At its peak, Cinesite had around 145 artists working on the project, including around 10 artists focusing on texturing and look development. The team spent six months alone creating the reimagined Nottingham, with another three months spent on additional scenes. Although the city of Dubrovnik informed many of the design choices, one of the pieces that had to be created from scratch was a massive cathedral, a focal point of the story. To fit with the film’s themes, Cinesite took inspiration from several real churches around the world to create something original, with a brutalist feel.

Using models and digital texturing, the team also created Robin’s childhood home of Loxley Manor, which was loosely based on a real structure in Završje, Croatia. There were two versions of the manor: one meant to convey the Loxley family in better times, and another seen after years of neglect and damage. Cinesite also helped to create one of the film’s most integral and complex moments, which saw Robin engage in a wagon chase through Nottingham. The scene was far too dangerous to use real animals in most shots, requiring Cinesite to dip back into its toolbox to create the texturing and look of the horse and its groom, along with the rigging and CFX.

“To create the world that the filmmakers wanted, we started by going through the process of understanding the story. From there we saw what the production had filmed and where the action needed to take place within the city, then we went about creating something unique,” Potter says. “The scale was massive, but the end result is a realistic world that will feel somewhat familiar, and yet still offer plenty of surprises.”

Robin Hood was released on home media on February 19.

Behind the Title: Carousel’s Head of VFX/CD Jeff Spangler

This creative has been an artist for as long as he could remember. “I’ve always loved the process of creation and can’t imagine any career where I’m not making something,” he says.

Name: Jeff Spangler

Company: NYC’s Carousel

Can you describe your company?
Carousel is a “creative collective” that was a response to this rapidly changing industry we all know and love. Our offerings range from agency creative services to editorial, design, animation (including both motion design and CGI), retouching, color correction, compositing, music licensing, content creation, and pretty much everything that falls between.

We have created a flexible workflow that covers everything from concept to execution (and delivery), while also allowing for clients whose needs are less all-encompassing to step on or off at any point in the process. That’s just one of the reasons we called ourselves Carousel — our clients have the freedom to climb on board for as much of the ride as they desire. And with the different disciplines all living under the same roof, we find that a lot of the inefficiencies and miscommunications that can get in the way of achieving the best possible result are eliminated.

What’s your job title?
Head of VFX/Creative Director

What does that entail?
That’s a really good question. There is the industry-standard definition of that title as it applies to most companies. But it’s quite different if you are talking about a collective that combines creative with post production, animation and design. So for me, the dual role of CD and head of VFX works in a couple of ways. Where we have the opportunity to work with agencies, I am able to bring my experience and talents as a VFX lead to bear, communicating with the agency creatives and ensuring that the different Carousel artists involved are all able to collaborate and communicate effectively to get the work done.

Alternatively, when we work direct-to-client, I get involved much earlier in the process, collaborating with the Carousel creative directors to conceptualize and pitch new ideas, design brand elements, visualize concept art, storyboard and write copy, or even work with strategists to help hone the direction and target of a campaign.

That’s the true strength of Carousel — getting creatives from different backgrounds involved early on in the process where their experience and talent can make a much bigger impact in the long run. Most importantly, my role is not about dictating direction as much as it is about guiding and allowing for people’s talents to shine. You have to give artists the room to flourish if you really want to serve your clients and are serious about getting them something more than what they expected.

What would surprise people the most about what falls under that title?
I think that there is this misconception that it’s one creative sitting in a room that comes up with the “Big Idea” and he or she just dictates that idea to everyone. My experience is that any good idea started out as a lot of different ideas that were merged, pruned, refined and polished until they began to resemble something truly great.

Then after 24 hours, you look at that idea again and tear it apart because all of the flaws have started to show and you realize it still needs to be pummeled into shape. That process is generally a collaboration within a group of talented people who all look at the world very differently.

What tools do you use?
Anything that I can get my hands on (and my brain wrapped around). My foundation is as a traditional artist and animator, and I find that those core skills are really the strength behind what I do every day. I started out after college as a broadcast designer and later transitioned into being a Flame artist, spending many years working as a beauty retouch artist and motion designer.

These days, I primarily use Adobe Creative Suite, as my role has become more creative in nature. I use Photoshop for digital painting and concept art, Illustrator for design and InDesign for layouts and decks. I also have a lot of experience in After Effects and Autodesk Maya and will use those tools for any animation or CGI that requires me to be hands-on, even if just to communicate the initial concept or design.

What’s your favorite part of the job?
Coming up with new ideas at the very start. At that point, the gloves are off and everything is possible.

What’s your least favorite?
Navigating politics within the industry that can sometimes get in the way of people doing their best work.

What is your favorite time of the day?
I’m definitely more of a night person. But if I had to choose a favorite time of day, it would be early morning — before everything has really started and there’s still a ton of anticipation and potential.

If you didn’t have this job, what would you be doing instead?
Working as a full-time concept artist. Or a logo designer. While I frequently have the opportunity to do both of those things in my role at Carousel, they are, for me, the most rewarding expression of being creative.

A&E’s Scraps

How early on did you know this would be your path?
I’ve been an artist for as long as I can remember and never really had any desire (or ability) to set it aside. I’ve always loved the process of creation and can’t imagine any career where I’m not “making” something.

Can you name some recent projects you have worked on?
We are wrapping up Season 2 of an A&E food show titled Scraps that has allowed us to flex our animation muscles. We’ve also been doing some in-store work with Victoria’s Secret for some of their flagship stores that has been amazing in terms of collaboration and results.

What is the project that you are most proud of?
It’s always hard to pick a favorite and my answer would probably change if you asked me more than once. But I recently had the opportunity to work with an up-and-coming eSports company to develop their logo. Collaborating with their CD, we landed on a design and aesthetic that makes me smile every time I see it out there. The client has taken that initial work and continues to surprise me with the way they use it across print, social media, swag, etc. Seeing their ability to be creative and flexible with what I designed is just validation that I did a good job. That makes me proud.

Name pieces of technology you can’t live without.
My iPad Pro. It’s my portable sketch tablet and presentation device that also makes for a damn good movie player during long commutes.

What do you do to de-stress from it all?
Muay Thai. Don’t get me wrong. I’m no serious martial artist and have never had the time to dedicate myself properly. But working out by punching and kicking a heavy bag can be very cathartic.

Review: Boris FX’s Continuum and Mocha Pro 2019

By Brady Betzel

I realize I might sound like a broken record, but if you are looking for the best plugin to help with object removals or masking, you should seriously consider the Mocha Pro plugin. And if you work inside of Avid Media Composer, you should also seriously consider Boris Continuum and/or Sapphire, which can use the power of Mocha.

As an online editor, I consistently use Continuum along with Mocha for tight blur and mask tracking. If you use After Effects, there is even a whittled-down version of Mocha built in for free. For those pros who don’t want to deal with Mocha inside of an app, it also comes as a standalone software solution where you can copy and paste tracking data between apps or even export the masks, object removals or insertions as self-contained files.

The latest releases of Continuum and Mocha Pro 2019 continue the evolution of Boris FX’s role in post production image restoration, keying and general VFX plugins, at least inside of NLEs like Media Composer and Adobe Premiere.

Mocha Pro

As an online editor, I am always calling on Continuum for its great Chroma Key Studio, Flicker Fixer and blurring. Because Mocha is built into Continuum, I am able to quickly track (backwards and forwards) difficult shapes and even erase shapes that the built-in Media Composer tools simply can’t handle. But if you are lucky enough to own Mocha Pro, you also get access to some amazing tools that go beyond planar tracking — such as automated object removal, object insertion, stabilizing and much more.

Boris FX’s latest updates to Continuum and Mocha Pro go even further than what I’ve already mentioned and have resulted in a new version-naming scheme; this round we are at 2019 (think of it as Version 12). Boris FX has also created the new Application Manager, available on its website, which makes it a little easier to find the latest downloads. This really helps when jumping between machines and you need to quickly activate and deactivate licenses.

Boris Continuum 2019
I often get offline edits with effects from a variety of plugins — lens flares, random edits, light flashes, whip transitions and many more — so I need Continuum to be compatible with what the offline editors used. I also need it for image repair and compositing.

In this latest version of Continuum, Boris FX has not only kept plugins like Primatte Studio, it has brought back Particle Illusion and updated Mocha and Title Studio. Overall, Continuum and Mocha Pro 2019 feel a lot snappier when applying and rendering effects, probably because of the overall GPU-acceleration improvements.

Particle Illusion has been brought back from the brink of death in Continuum 2019 as a 64-bit, keyframe-able particle emitter system that can even be tracked and masked with Mocha. The revamp includes an updated interface, realtime GPU-based particle generation, an expanded and improved emitter library (complete with motion-blur-enabled particle systems) and even a standalone app for designing systems to be used in the host app — you cannot render systems inside the standalone app itself.

While Particle Illusion is part of the entire Continuum toolset, which works in OFX apps like Blackmagic’s DaVinci Resolve as well as Media Composer, After Effects and Premiere, it seems to work best in applications like After Effects, which handle composites simply and naturally. Inside the Particle Illusion interface you can find all of the pre-built emitters. If you only see a handful, download the additional emitters from the Boris FX App Manager.

Particle Illusion: Before and After

I had a hard time seeing my footage in a Media Composer timeline inside of Particle Illusion, but I could still pick my emitter, change specs like life and opacity, exit out and apply it to my footage. I used Mocha to pin some Particle Illusion fire to a dumpster I had filmed. Once I dialed in the emitter, I launched Mocha and tracked the dumpster.

The first time I went into Mocha, I didn’t see the preset tracks for the emitter or the world in which the emitter lives; the second time I launched it, the track points were there. From there you can track where you want your emitter placed. Once you are happy with your track, jump back to your timeline, where it should be reflected. In Media Composer I noticed that I had to go into the Mocha options and change the setting from Mocha Shape to no shape; essentially, the Mocha shape acts like a matte and cuts off anything outside of it.
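
If the cropping surprises you, the math explains it: a shape used as a matte is just a multiply, so anything outside the mask goes to zero before the composite. A quick sketch, with made-up arrays standing in for the renders:

```python
# Why the emitter vanished outside the Mocha shape: matte = multiply.
import numpy as np

particles = np.random.rand(8, 8, 3)        # stand-in for the emitter render
bg = np.zeros((8, 8, 3))                   # stand-in for the plate
mask = np.zeros((8, 8, 1))
mask[2:6, 2:6] = 1.0                       # the tracked shape

out = particles * mask + bg * (1.0 - mask) # outside the shape: gone
```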

If you are inside of After Effects, most parameters can now be keyframed and parented (aka pick-whipped) natively in the timeline. The Particle Illusion plugin is a quick, easy and good-looking tool to add sparks, Milky Way-like star trails or even fireworks to any scene. Check out @SurfacedStudio’s tutorial on Particle Illusion to get a good sense of how it works in Adobe Premiere Pro.

Continuum Title Studio
When inside of Media Composer (prior to the latest release, 2018.12), there were very few ways to create titles at a higher resolution than HD (1920×1080) — New Blue Titler was the only real option if you wanted to stay within Media Composer.

Title Studio within Media Composer

At first, the Continuum Title Studio interface appeared to be a mildly updated Boris Red interface — and I am allergic to the Boris Red interface. Some of the icons for keyframing and the way properties are adjusted look similar, and that threw me off. I tried really hard to jump into Title Studio and love it, but I never got comfortable with it.

On the flip side, there are hundreds of presets that can help build quick titles, and they render a lot faster than New Blue Titler did. In some of the presets I noticed the text was placed outside of 16×9 title safety, which is odd since that is a long-standing rule in television. In the authors’ defense, the text is within action safety, but still.

If you need a quick way to make 4K titles, Title Studio might be what you want. The updated Title Studio includes realtime playback using the GPU instead of the CPU, new materials, new shaders and external monitoring support using Blackmagic hardware (AJA support is coming at some point). There are some great presets, including pre-built slates, lower thirds, kinetic text and even progress bars.

If you don’t have Mocha Pro, Continuum can still access and use Mocha to track shapes and masks. Almost every plugin can access Mocha and can track objects quickly and easily.

That brings me to the newly updated Mocha, which has some extremely helpful new features, including a Magnetic Spline tool, prebuilt geometric shapes and more.

Mocha Pro 2019
If you loved the previous version of Mocha, you are really going to love Mocha Pro 2019. Not only do you get the Magnetic Lasso, pre-built geometric shapes, the Essentials interface and high-resolution display support, but Boris FX has rewritten the Remove Module code to use GPU video hardware, increasing render speeds about four to five times. In addition, there is no longer a separate Mocha VR software suite; all of the VR tools are included inside Mocha Pro 2019.

If you are unfamiliar with what Mocha is, then I have a treat for you. Mocha is a standalone planar tracking app as well as a native plugin that works with Media Composer, Premiere and After Effects, or through OFX in Blackmagic’s Fusion, Foundry’s Nuke, Vegas Pro and Hitfilm.

Mocha tracking

In addition (and unofficially), it will work with Blackmagic DaVinci Resolve by way of importing Mocha masks through Fusion. While I prefer to use After Effects for my work, importing Mocha masks is relatively painless. You can find colorist Dan Harvey’s walkthrough of the Resolve-through-Fusion process online.

But really, Mocha is a planar tracker, which means it tracks a defined, textured region as a plane rather than following a single point. It works best on flat surfaces, or at least segmented ones, like the side of a face, with ear, nose, mouth and forehead tracked separately instead of all at once. From blurs to mattes, Mocha sticks to objects like glue and can be a great asset for an online editor or colorist.
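
For anyone who wants to see the technique rather than the product, here is a hedged sketch of planar tracking in general (not Mocha’s implementation) using OpenCV: track many features inside the region from one frame to the next, then solve for the single plane transform (a homography) that best explains how they all moved.

```python
# Generic planar-tracking sketch with OpenCV; not Mocha's algorithm.
import cv2

def track_plane(prev_gray, next_gray, region_mask):
    # Many points inside the user-drawn region, not one point like a point tracker.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01,
                                  minDistance=7, mask=region_mask)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    good_old = pts[status.flatten() == 1]
    good_new = nxt[status.flatten() == 1]
    # One 3x3 homography describes the whole plane's motion; RANSAC throws
    # out points that left the plane or got occluded.
    H, _ = cv2.findHomography(good_old, good_new, cv2.RANSAC, 3.0)
    return H   # apply H to the roto spline, blur or insert for this frame
```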

If you have read any of my plugin reviews you probably are sick of me spouting off about Mocha, saying how it is probably the best plugin ever made. But really, it is amazing — especially when incorporated with plugins like Continuum and Sapphire. Also, thanks to the latest Media Composer with Symphony option you can incorporate the new Color Correction shapes with Mocha Pro to increase the effectiveness of your secondary color corrections.

Mocha Pro Remove module

So how fast is Mocha Pro 2019’s Remove Module these days? It used to be a very slow process, taking lots of time to calculate an object’s removal. With the latest release and its improved GPU support, the render time has been cut down tremendously; I would conservatively say three to four times faster. Removal jobs that now take under 30 seconds would have taken four to five minutes in previous versions. It’s quite a big improvement in render times.

There are a few changes in the new Mocha Pro, including interface changes and some amazing tool additions. There is a new drop-down tab that offers different workflow views once you are inside of Mocha: Essentials, Classic, Big Picture and Roto. I really wish the Essentials view was out when I first started using Mocha, because it gives you the basic tools you need to get a roto job done and nothing more.

For instance, just giving access to the track motion objects (Translation, Scale, Rotate, Skew and Perspective) with big shiny buttons eliminates my need to watch YouTube videos on how to navigate the Mocha interface. However, if, like me, you are more than just a beginner, the Classic interface is still available (it’s literally the old interface), and it’s the one I reach for most often. Big Picture hides the tools and gives you the most screen real estate for your roto work. My favorite after Classic is Roto, which shows just the project window and the classic top toolbar. It’s the best of both worlds.

Mocha Pro 2019 Essentials Interface

Beyond the interface changes are some additional tools that will speed up any roto work. Basic shapes, such as rectangles and circles, have long been among the most requested Mocha features. In my work, I am often drawing rectangles around license plates or circles around faces with X-splines, so why not eliminate a few clicks? Answering that need, Mocha now has elliptical and rectangular shapes ready to go, in both X-splines and B-splines, with one click.

I use Continuum and Mocha hand in hand. Inside of Media Composer I will use tools like Gaussian Blur or Remover, which typically need tracking and roto shapes created. Once I apply the Continuum effect, I launch Mocha from the Effect Editor and bam, I am inside Mocha. From here I track the objects I want to affect, as well as any objects I don’t want to affect (think of it like an erase track).

Summing Up
I can save tons of time and improve the effectiveness of my work exponentially when working in Continuum 2019 and Mocha Pro 2019. It’s amazing how much more intuitive Mocha is to track with than the built-in Media Composer and Symphony trackers.

In the end, I can’t say enough great things about Continuum and especially Mocha Pro. Mocha saves me tons of time in my VFX and image restoration work. From removing camera people behind the main cast in the wilderness to blurring faces and license plates, using Mocha in tandem with Continuum is a match made in post production heaven.

Rendering in Continuum and Mocha Pro 2019 is a lot faster than in previous versions, really giving me a leg up on efficiency. Time is money, right? On top of that, using Mocha Pro’s Remove Module takes my image restoration work to the next level, separating me from other online editors who use standard paint and tracking tools.

In Continuum, Primatte Studio gives me a leg up on greenscreen keys with its exceptional ability to auto-analyze a scene and perform 80% of the keying work before I dial in the details. Whenever anyone asks me what tools I couldn’t live without, I always say Mocha.

If you want a real Mocha Pro education, you need to watch all of Mary Poplin’s tutorials. You can find them on YouTube; check out the one on how to track and replace a logo using Mocha Pro 2019 in Adobe After Effects. You can also find great videos at Borisfx.com.

Mocha point parameter tracking

I always feel like there are tons of tools inside of the Mocha Pro toolset that go unused simply because I don’t know about them. One I recently learned about in a Surfaced Studio tutorial is the Quick Stabilize function. It essentially stabilizes the video around the object you are tracking, allowing you to rotoscope your object while it sits still instead of moving all over the screen. It’s an amazing feature that I just didn’t know about.

As I was finishing up this review I saw that Boris FX came out with a training series, which I will be checking out. One thing I always wanted was a top-down set of tutorials like the ones on Mocha’s YouTube page but organized and sent along with practical footage to practice with.

You can check out Curious Turtle’s “More Than The Essentials: Mocha in After Effects” on their website where I found more Mocha training. There is even a great search parameter called Getting Started on BorisFX.com. Definitely check them out. You can never learn enough Mocha!


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Behind the Title: Left Field Labs ECD Yann Caloghiris

NAME: Yann Caloghiris

COMPANY: Left Field Labs (@LeftFieldLabs)

CAN YOU DESCRIBE YOUR COMPANY?
Left Field Labs is a Venice, California-based creative agency dedicated to applying creativity to emerging technologies. We create experiences at the intersection of strategy, design and code for our clients, who include Google, Uber, Discovery and Estée Lauder.

But it’s how we go about our business that has shaped who we have become. Over the past 10 years, we have consciously moved away from the traditional agency model and have grown by deepening our expertise, sourcing exceptional talent and, most importantly, fostering a “lab-like” creative culture of collaboration and experimentation.

WHAT’S YOUR JOB TITLE?
Executive Creative Director

WHAT DOES THAT ENTAIL?
My role is to drive the creative vision across our client accounts, as well as our own ventures. In practice, that can mean anything from providing insights for ongoing work to proposing creative strategies to running ideation workshops. Ultimately, it’s whatever it takes to help the team flourish and push the envelope of our creative work.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Probably that I learn more now than I did at the beginning of my career. When I started, I imagined that the executive CD roles were occupied by seasoned industry veterans, who had seen and done it all, and would provide tried and tested direction.

Today, I think that cliché is out of touch with what’s required from agency culture and where the industry is going. Sure, some aspects of the role remain unchanged — such as being a supportive team lead or appreciating the value of great copy — but the pace of change is such that the role often requires both the ability to leverage past experience and accept that sometimes a new paradigm is emerging and assumptions need to be adjusted.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Working with the team, and the excitement that comes from workshopping the big ideas that will anchor the experiences we create.

WHAT’S YOUR LEAST FAVORITE?
The administrative parts of a creative business are not always the most fulfilling. Thankfully, tasks like timesheeting, expense reporting and invoicing are becoming less exhausting thanks to better predictive tools and machine learning.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
The early hours of the morning, usually when inspiration strikes — when we haven’t had to deal with the unexpected day-to-day challenges that come with managing a busy design studio.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I’d probably be somewhere at the cross-section between an artist, like my mum was, and an engineer like my dad. There is nothing more satisfying than to apply art to an engineering challenge or vice versa.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I went to school in France, and there wasn’t much room for anything other than school and homework. When I got my Baccalaureate, I decided that from that point onward, whatever I did would be fun, deeply engaging and at a place where being creative was an asset.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We recently partnered with ad agency RK Venture to craft a VR experience for the New Mexico Department of Transportation’s ongoing ENDWI campaign, which immerses viewers into a real-life drunk-driving scenario.

ENDWI

To best communicate and tell the human side of this story, we turned to rapid breakthroughs within volumetric capture and 3D scanning. Working with Microsoft’s Mixed Reality Capture Studio, we were able to bring every detail of an actor’s performance to life with volumetric performance capture in a way that previous techniques could not.

Bringing a real actor’s performance into a virtual experience is a game changer because of the emotional connection it creates. For ENDWI, the combination of rich immersion with compelling non-linear storytelling proved to affect the participants at a visceral level — with the goal of changing behavior further down the road.

Throughout this past year, we partnered with the VMware Cloud Marketing Team to create a one-of-a-kind immersive booth experience for VMworld Las Vegas 2018 and Barcelona 2018 called Cloud City. VMware’s cloud offering needed a distinct presence to foster a deeper understanding and greater connectivity between brand, product and customers stepping into the cloud.

Cloud City

Our solution was Cloud City, a destination merging future-forward architecture, light, texture, sound and interactions with VMware Cloud experts to give consumers a window into how the cloud, and more specifically VMware Cloud, can be an essential solution for them. VMworld is the brand’s premier engagement, where hands-on learning helped showcase its cloud offerings. Cloud City garnered 4,000-plus demos, which led to a 20% lead conversion in 10 days.

Finally, for Google, we designed and built a platform for hosting online events anywhere in the world: Google Gather. For its first release, teams across Google, including Android, Cloud and Education, used Google Gather to reach and convert potential customers across the globe. With hundreds of events to date, the platform now reaches enterprise decision-makers at massive scale, spanning far beyond what has been possible with traditional event marketing, management and hosting.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Recently, a friend and I shot and edited a fun video homage to the original technology boom-town: Detroit, Michigan. It features two cultural icons from the region, an original big block ‘60s muscle car and some gritty electro beats. My four-year-old son thinks it’s the coolest thing he’s ever seen. It’s going to be hard for me to top that.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Human flight, the Internet and our baby monitor!

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Instagram, Twitter, Medium and LinkedIn.

CARE TO SHARE YOUR FAVORITE MUSIC TO WORK TO?
Where to start?! Music has always played an important part in my creative process and in the joy I derive from what we do. I have day-long playlists curated around what I’m trying to achieve during that time. Being able to influence how I feel when working on a brief is essential — it helps set me in the right mindset.

Sometimes it might be film scores when working on visuals, jazz to design a workshop schedule or techno to dial up productivity when doing expenses.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Spend time with my kids. They remind me that there is a simple and unpretentious way to look at life.

Adobe acquires Allegorithmic, makers of Substance

Adobe has acquired Allegorithmic, makers of Substance, the industry standard for 3D texture and material creation in games and post production. By combining Allegorithmic’s Substance 3D design tools with Creative Cloud’s imaging, video and motion graphics tools, Adobe will empower video game creators, VFX artists working in film and television, designers and marketers to deliver the next generation of immersive experiences.

As brands look to compete and differentiate themselves, compelling, interactive experiences enabled by 3D content, VR and AR will become more critical to their future success. 3D content is already transforming traditional workflows into fully immersive and digital ones that save time, reduce cost and open new creative horizons. With the acquisition of Allegorithmic, Adobe has added expanded 3D and immersive workflows to Creative Cloud and provides Adobe’s users a new set of tools for 3D projects.

“We are seeing an increasing appetite from customers to leverage 3D technology across media, entertainment, retail and marketing to design and deliver fully immersive experiences,” says Scott Belsky, chief product officer/executive VP, Creative Cloud, Adobe. “Substance products are a natural complement to existing Creative Cloud apps that are used in the creation of immersive content, including Photoshop, Dimension, After Effects and Project Aero.”

Allegorithmic has users working across the gaming, film and television, automotive, design and advertising industries, including brands like Electronic Arts, Ubisoft, BMW, Ikea, Louis Vuitton and Foster + Partners. Allegorithmic is used on AAA gaming franchises, including Call of Duty, Assassin’s Creed and Forza, and was used in the making of movies, including Blade Runner 2049, Pacific Rim Uprising and Tomb Raider.

Allegorithmic tools are already offered as a subscription service to individuals and enterprise customers, and in the future Adobe will focus on expanding the availability of the Allegorithmic tools via subscription. Later this year, Adobe will announce an update on new offerings that will bring the full power of Allegorithmic technology and Adobe Creative Cloud together.

Efilm’s Natasha Leonnet: Grading Spider-Man: Into the Spider-Verse

By Randi Altman

Sony Pictures’ Spider-Man: Into the Spider-Verse is not your typical Spider-Man film… in so many ways. The most obvious is the movie’s look, which was designed to make the viewer feel they are walking inside a comic book. This tale, which blends CGI with 2D hand-drawn animation and comic book textures, focuses on a Brooklyn teen who is bitten by a radioactive spider on the subway and soon develops special powers.

Natasha Leonnet

When he meets Peter Parker, he realizes he’s not alone in the Spider-Verse. It was co-directed by Peter Ramsey, Robert Persichetti Jr. and Rodney Rothman and produced by Phil Lord and Chris Miller, the pair behind 21 Jump Street and The Lego Movie.

Efilm senior colorist Natasha Leonnet provided the color finish for the film, which was nominated for an Oscar in the Best Animated Feature category. We reached out to find out more.

How early were you brought on the film?
I had worked on Angry Birds with visual effects supervisor Danny Dimian, which is how I was brought onto the film a few months before we started color correction. Also, there was no LUT for the film; they used the ACES workflow, developed by the Academy and Efilm’s VP of technology, Joachim “JZ” Zell.
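
For readers unfamiliar with what “no LUT” means in practice: under ACES, the display transform comes from the color management pipeline itself rather than from a baked lookup table. As a hedged illustration (not Efilm’s actual setup), OpenColorIO’s Python bindings can apply an ACES output transform directly; the config path and color space names below are assumptions that depend entirely on which ACES config you load.

```python
# Hedged sketch: an ACES output transform via OpenColorIO instead of a LUT.
import PyOpenColorIO as OCIO

config = OCIO.Config.CreateFromFile("config.ocio")   # hypothetical ACES config
proc = config.getProcessor("ACES - ACEScg",          # working space (assumed name)
                           "Output - sRGB")          # display transform (assumed name)
cpu = proc.getDefaultCPUProcessor()

pixel = [0.18, 0.18, 0.18]        # scene-linear 18% gray
print(cpu.applyRGB(pixel))        # display-referred values under ACES
```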

Can you talk about the kind of look they were after and what it took to achieve that look?
They wanted to achieve a comic book look. If you look at the edges of characters or objects in comic books, you actually see artifacts of the color printing from the early days of comic book printing: the CMYK inks wouldn’t all register on the same line, which creates a layered look, along with the comic book dots and expression lines on faces, as if you’re drawing a comic book.

For example, if someone gets hurt, you put actual slashes on their face. For me it was a huge education in the comic book art form. Justin Thompson, the art director, is particularly knowledgeable about the history of comic books. I was so inspired, I just bought my first comic book. Also, with the overall look, the light is painting color everywhere, the way it does in life.
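
That misregistered-ink fringe is easy to play with in code. The toy sketch below (an illustration of the printing idea, not the film’s pipeline) converts a frame to CMYK, nudges each ink plate a few pixels in its own direction and recombines; “frame.png” is a hypothetical still.

```python
# Faking print misregistration: offset each CMYK plate separately.
from PIL import Image, ImageChops

def misregister(img, shift=3):
    c, m, y, k = img.convert("CMYK").split()
    c = ImageChops.offset(c, -shift, 0)        # each plate drifts its own way,
    m = ImageChops.offset(m, shift, shift)     # like inks that don't line up
    y = ImageChops.offset(y, 0, -shift)
    return Image.merge("CMYK", (c, m, y, k)).convert("RGB")

frame = Image.open("frame.png")                # hypothetical still
misregister(frame, shift=4).save("frame_comic.png")
```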

You worked closely with Justin, VFX supervisor Danny Dimian and art director Dean Gordon. What was that process like?
They were incredible. It was usually a group of us working together during the color sessions — a real exercise in collaboration. They were all so open to each other’s opinions and constantly discussed every change to make certain it best served the film. No idea was more important than another; everyone listened.

Had you worked on an animated film previously? What are the challenges and benefits of working with animation?
I’ve been lucky enough to do all of Blue Sky Studios’ color finishes so far, except for the first Ice Age. One of the special aspects of working on animated films is that you’re often working with people who are fine-art painters. As a result, they bring in a different background and way of analyzing the images. That’s really special. They often focus on the interplay of different hues.

In the case of Spider-Man: Into the Spider-Verse, they also wanted to bring a certain naturalism to the color experience. With this particular film, they made very bold choices in the color finishing. They used a feature of color correctors that shifts all of the hues, something usually reserved for music videos, and they completely embraced it. They were basically using color finishing to augment the story and refine their hues, especially time of day and the progression of day or night. They used it as their extra lighting step.

Can you talk about your typical process? Did that differ because of the animated content?
My process actually does not differ when I’m color finishing animated content. Continuity is always at the forefront, even in animation. I use the color corrector as a creative tool on every project.

How would you describe the look of the film?
The film embodies the vivid and magical colors that I always observed in childhood but never saw reflected on the screen. The film is very color intense. It’s as if you’re stepping inside a comic book illustrator’s mind. It’s a mind-meld with how they’re imagining things.

What system did you use for color and why?
I used Resolve on this project, as it was the system that the clients were most familiar with.

Any favorite parts of the process?
My favorite part is from start to finish. It was all magical on this film.

What was your path to being a colorist?
My parents loved going to the cinema. They didn’t believe in babysitters, so they took me to everything. They were big fans of the French new wave movement and films that offered unconventional ways of depicting the human experience. As a result, I got to see some pretty unusual films. I got to see how passionate my parents were about these films and their stories and unusual way of telling them, and it sparked something in me. I think I can give my parents full credit for my career.

I studied non-narrative experimental filmmaking in college even though ultimately my real passion was narrative film. I started as a runner in the Czech Republic, which is where I’d made my thesis film for my BA degree. From there I worked my way up and met a colorist (Biggi Klier) who really inspired me. I was hooked and lucky enough to study with her and another mentor of mine in Munich, Germany.

How do you prefer a director and DP describe a look?
Every single person I’ve worked with works differently, and that’s what makes it so fun and exciting, but also challenging. Everyone communicates about color differently, and our vocabulary for color is so limited; therein lies the challenge.

Where do you find inspiration?
From both the natural world and the world of films. I live in a place that faces east, and I get up every morning to watch the sunrise and the color palette is always different. It’s beautiful and inspiring. The winter palettes in particular are gorgeous, with reds and oranges that don’t exist in summer sunrises.

Autodesk launches Maya 2019 for animation, rendering, more

Autodesk has released the latest version of Maya, its 3D animation, modeling, simulation and rendering software. Maya 2019 brings significant updates for speed and interactivity and addresses challenges artists face throughout production: faster animation playback that reduces the need for playblasts, higher-quality 3D previews with Autodesk Arnold updates in Viewport 2.0, improved pipeline integration through more flexible development environment support, and performance improvements that most Maya artists will notice in their daily work.

Key new Maya 2019 features include:
• Faster Animation: New cached playback increases animation playback speeds in Viewport 2.0, giving animators a more interactive and responsive environment for producing better-quality animations. It reduces the need for time-consuming playblasts to evaluate animation work, so animators can work faster.
• Higher-Quality Previews Closer to Final Renders: Arnold upgrades improve realtime previews in Viewport 2.0, allowing artists to preview results that are closer to the final Arnold render for better creativity and less wasted time.
• Faster Maya: New performance and stability upgrades improve productivity in a range of areas that most artists will notice in their daily work.
• Refining Animation Data: New filters within the graph editor make it easier to work with motion capture data, including the Butterworth filter and the key reducer to help refine animation curves (see the sketch after this list).
• Rigging Improvements: New updates help make the work of riggers and character TDs easier, including the ability to hide sets from the outliner to streamline scenes, improvements to the bake deformer tool and new methods for saving deformer weights to more easily script rig creation.
• Pipeline Integration Improvements: Development environment updates make it easier for pipeline and tool developers to create, customize and integrate into production pipelines.
• Help for Animators in Training: Sample rigged and animated characters, as well as motion capture samples, make it easier for students to learn and quickly get started animating.
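
As promised above, here is a sketch of what a Butterworth filter does to a mocap-style curve. This is the general DSP technique the Maya filter is named for (not Autodesk’s code), shown with SciPy: low-pass the sampled keys so jitter disappears while the underlying motion survives, with filtfilt running forward and backward so nothing shifts in time.

```python
# Butterworth low-pass over a noisy animation curve (generic DSP sketch).
import numpy as np
from scipy.signal import butter, filtfilt

fps = 24.0
t = np.arange(0, 4, 1 / fps)                                # 4 seconds of keys
curve = np.sin(t * 2.0) + 0.15 * np.random.randn(t.size)   # motion + jitter

b, a = butter(N=2, Wn=3.0, fs=fps)    # 2nd-order low-pass with a 3 Hz cutoff
smoothed = filtfilt(b, a, curve)      # zero-phase: smoothed keys don't shift
```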

Maya 2019 is available now as a standalone subscription or with a collection of end-to-end creative tools within the Autodesk Media & Entertainment Collection.

Avengers: Infinity War leads VES Awards with six noms

The Visual Effects Society (VES) has announced the nominees for the 17th Annual VES Awards, which recognize outstanding visual effects artistry and innovation in film, animation, television, commercials and video games as well as the VFX supervisors, VFX producers and hands-on artists who bring this work to life.

Avengers: Infinity War garners the most feature film nominations with six. Incredibles 2 is the top animated film contender with five nominations, and Lost in Space leads the broadcast field with six.

Nominees in 24 categories were selected by VES members via events hosted by 11 of the organization’s Sections, including Australia, the Bay Area, Germany, London, Los Angeles, Montreal, New York, New Zealand, Toronto, Vancouver and Washington.

The VES Awards will be held on February 5th at the Beverly Hilton Hotel. As previously announced, the VES Visionary Award will be presented to writer/director/producer and co-creator of Westworld Jonathan Nolan. The VES Award for Creative Excellence will be given to award-winning creators/executive producers/writers/directors David Benioff and D.B. Weiss of Game of Thrones fame. Actor-comedian-author Patton Oswalt will once again host the VES Awards.

Here are the nominees:

Outstanding Visual Effects in a Photoreal Feature

Avengers: Infinity War
Daniel DeLeeuw, Jen Underdahl, Kelly Port, Matt Aitken, Daniel Sudick

Christopher Robin
Chris Lawrence, Steve Gaub, Michael Eames, Glenn Melenhorst, Chris Corbould

Ready Player One
Roger Guyett, Jennifer Meislohn, David Shirk, Matthew Butler, Neil Corbould

Solo: A Star Wars Story
Rob Bredow, Erin Dusseault, Matt Shumway, Patrick Tubach, Dominic Tuohy

Welcome to Marwen
Kevin Baillie, Sandra Scott, Seth Hill, Marc Chu, James Paradis

Outstanding Supporting Visual Effects in a Photoreal Feature

12 Strong
Roger Nall, Robert Weaver, Mike Meinardus

Bird Box
Marcus Taormina, David Robinson, Mark Bakowski, Sophie Dawes, Mike Meinardus

Bohemian Rhapsody
Paul Norris, Tim Field, May Leung, Andrew Simmonds

First Man
Paul Lambert, Kevin Elam, Tristan Myles, Ian Hunter, JD Schwalm

Outlaw King
Alex Bicknell, Dan Bethell, Greg O’Connor, Stefano Pepin

Outstanding Visual Effects in an Animated Feature

Dr. Seuss’ The Grinch
Pierre Leduc, Janet Healy, Bruno Chauffard, Milo Riccarand

Incredibles 2
Brad Bird, John Walker, Rick Sayre, Bill Watral

Isle of Dogs
Mark Waring, Jeremy Dawson, Tim Ledbury, Lev Kolobov

Ralph Breaks the Internet
Scott Kersavage, Bradford Simonsen, Ernest J. Petti, Cory Loftis

Spider-Man: Into the Spider-Verse
Joshua Beveridge, Christian Hejnal, Danny Dimian, Bret St. Clair

Outstanding Visual Effects in a Photoreal Episode

Altered Carbon; Out of the Past
Everett Burrell, Tony Meagher, Steve Moncur, Christine Lemon, Joel Whist

Krypton; The Phantom Zone
Ian Markiewicz, Jennifer Wessner, Niklas Jacobson, Martin Pelletier

Lost in Space; Danger, Will Robinson
Jabbar Raisani, Terron Pratt, Niklas Jacobson, Joao Sita

The Terror; Go For Broke
Frank Petzold, Lenka Líkařová, Viktor Muller, Pedro Sabrosa

Westworld; The Passenger
Jay Worth, Elizabeth Castro, Bruce Branit, Joe Wehmeyer, Michael Lantieri

Outstanding Supporting Visual Effects in a Photoreal Episode

Tom Clancy’s Jack Ryan; Pilot
Erik Henry, Matt Robken, Bobo Skipper, Deak Ferrand, Pau Costa

The Alienist; The Boy on the Bridge
Kent Houston, Wendy Garfinkle, Steve Murgatroyd, Drew Jones, Paul Stephenson

The Deuce; We’re All Beasts
Jim Rider, Steven Weigle, John Bair, Aaron Raff

The First; Near and Far
Karen Goulekas, Eddie Bonin, Roland Langschwert, Bryan Godwin, Matthew James Kutcher

The Handmaid’s Tale; June
Brendan Taylor, Stephen Lebed, Winston Lee, Leo Bovell

Outstanding Visual Effects in a Realtime Project

Age of Sail
John Kahrs, Kevin Dart, Cassidy Curtis, Theresa Latzko

Cycles
Jeff Gipson, Nicholas Russell, Lauren Nicole Brown, Jorge E. Ruiz Cano

Dr. Grordbort’s Invaders
Greg Broadmore, Mhairead Connor, Steve Lambert, Simon Baker

God of War
Maximilian Vaughn Ancar, Corey Teblum, Kevin Huynh, Paolo Surricchio

Marvel’s Spider-Man
Grant Hollis, Daniel Wang, Seth Faske, Abdul Bezrati

Outstanding Visual Effects in a Commercial

Beyond Good & Evil 2
Maxime Luere, Leon Berelle, Remi Kozyra, Dominique Boidin

John Lewis; The Boy and the Piano
Kamen Markov, Philip Whalley, Anthony Bloor, Andy Steele

McDonald’s; #ReindeerReady
Ben Cronin, Josh King, Gez Wright, Suzanne Jandu

U.S. Marine Corps; A Nation’s Call
Steve Drew, Nick Fraser, Murray Butler, Greg White, Dave Peterson

Volkswagen; Born Confident
Carsten Keller, Anandi Peiris, Dan Sanders, Fabian Frank

Outstanding Visual Effects in a Special Venue Project

Beautiful Hunan; Flight of the Phoenix
R. Rajeev, Suhit Saha, Arish Fyzee, Unmesh Nimbalkar

Childish Gambino’s Pharos
Keith Miller, Alejandro Crawford, Thelvin Cabezas, Jeremy Thompson

DreamWorks Theatre Presents Kung Fu Panda
Marc Scott, Doug Cooper, Michael Losure, Alex Timchenko

Osheaga Music and Arts Festival
Andre Montambeault, Marie-Josee Paradis, Alyson Lamontagne, David Bishop Noriega

Pearl Quest
Eugénie von Tunzelmann, Liz Oliver, Ian Spendloff, Ross Burgess

Outstanding Animated Character in a Photoreal Feature

Avengers: Infinity War; Thanos
Jan Philip Cramer, Darren Hendler, Paul Story, Sidney Kombo-Kintombo

Christopher Robin; Tigger
Arslan Elver, Kayn Garcia, Laurent Laban, Mariano Mendiburu

Jurassic World: Fallen Kingdom; Indoraptor
Jance Rubinchik, Ted Lister, Yannick Gillain, Keith Ribbons

Ready Player One; Art3mis
David Shirk, Brian Cantwell, Jung-Seung Hong, Kim Ooi

Outstanding Animated Character in an Animated Feature

Dr. Seuss’ The Grinch; The Grinch
David Galante, Francois Boudaille, Olivier Luffin, Yarrow Cheney

Incredibles 2; Helen Parr
Michal Makarewicz, Ben Porter, Edgar Rodriguez, Kevin Singleton

Ralph Breaks the Internet; Ralphzilla
Dong Joo Byun, Dave K. Komorowski, Justin Sklar, Le Joyce Tong

Spider-Man: Into the Spider-Verse; Miles Morales
Marcos Kang, Chad Belteau, Humberto Rosa, Julie Bernier Gosselin

Outstanding Animated Character in an Episode or Realtime Project

Cycles; Rae
Jose Luis Gomez Diaz, Edward Everett Robbins III, Jorge E. Ruiz Cano, Jose Luis -Weecho- Velasquez

Lost in Space; Humanoid
Chad Shattuck, Paul Zeke, Julia Flanagan, Andrew McCartney

Nightflyers; All That We Have Found; Eris
Peter Giliberti, James Chretien, Ryan Cromie, Cesar Dacol Jr.

Spider-Man; Doc Ock
Brian Wyser, Henrique Naspolini, Sophie Brennan, William Salyers

Outstanding Animated Character in a Commercial

McDonald’s; Bobbi the Reindeer
Gabriela Ruch Salmeron, Joe Henson, Andrew Butler, Joel Best

Overkill’s The Walking Dead; Maya
Jonas Ekman, Goran Milic, Jonas Skoog, Henrik Eklundh

Peta; Best Friend; Lucky
Bernd Nalbach, Emanuel Fuchs, Sebastian Plank, Christian Leitner

Volkswagen; Born Confident; Bam
David Bryan, Chris Welsby, Fabian Frank, Chloe Dawe

Outstanding Created Environment in a Photoreal Feature

Ant-Man and the Wasp; Journey to the Quantum Realm
Florian Witzel, Harsh Mistri, Yuri Serizawa, Can Yuksel

Aquaman; Atlantis
Quentin Marmier, Aaron Barr, Jeffrey De Guzman, Ziad Shureih

Ready Player One; The Shining, Overlook Hotel
Mert Yamak, Stanley Wong, Joana Garrido, Daniel Gagiu

Solo: A Star Wars Story; Vandor Planet
Julian Foddy, Christoph Ammann, Clement Gerard, Pontus Albrecht

Outstanding Created Environment in an Animated Feature

Dr. Seuss’ The Grinch; Whoville
Loic Rastout, Ludovic Ramiere, Henri Deruer, Nicolas Brack

Incredibles 2; Parr House
Christopher M. Burrows, Philip Metschan, Michael Rutter, Joshua West

Ralph Breaks the Internet; Social Media District
Benjamin Min Huang, Jon Kim Krummel II, Gina Warr Lawes, Matthias Lechner

Spider-Man: Into the Spider-Verse; Graphic New York City
Terry Park, Bret St. Clair, Kimberly Liptrap, Dave Morehead

Outstanding Created Environment in an Episode, Commercial, or Realtime Project

Cycles; The House
Michael R.W. Anderson, Jeff Gipson, Jose Luis Gomez Diaz, Edward Everett Robbins III

Lost in Space; Pilot; Impact Area
Philip Engström, Kenny Vähäkari, Jason Martin, Martin Bergquist

The Deuce; 42nd St
John Bair, Vance Miller, Jose Marin, Steve Sullivan

The Handmaid’s Tale; June; Fenway Park
Patrick Zentis, Kevin McGeagh, Leo Bovell, Zachary Dembinski

The Man in the High Castle; Reichsmarschall Ceremony
Casi Blume, Michael Eng, Ben McDougal, Sean Myers

Outstanding Virtual Cinematography in a Photoreal Project

Aquaman; Third Act Battle
Claus Pedersen, Mohammad Rastkar, Cedric Lo, Ryan McCoy

Echo; Time Displacement
Victor Perez, Tomas Tjernberg, Tomas Wall, Marcus Dineen

Jurassic World: Fallen Kingdom; Gyrosphere Escape
Pawl Fulker, Matt Perrin, Oscar Faura, David Vickery

Ready Player One; New York Race
Daniele Bigi, Edmund Kolloen, Mathieu Vig, Jean-Baptiste Noyau

Welcome to Marwen; Town of Marwen
Kim Miles, Matthew Ward, Ryan Beagan, Marc Chu

Outstanding Model in a Photoreal or Animated Project

Avengers: Infinity War; Nidavellir Forge Megastructure
Chad Roen, Ryan Rogers, Jeff Tetzlaff, Ming Pan

Incredibles 2; Underminer Vehicle
Neil Blevins, Philip Metschan, Kevin Singleton

Mortal Engines; London
Matthew Sandoval, James Ogle, Nick Keller, Sam Tack

Ready Player One; DeLorean DMC-12
Giuseppe Laterza, Kim Lindqvist, Mauro Giacomazzo, William Gallyot

Solo: A Star Wars Story; Millennium Falcon
Masa Narita, Steve Walton, David Meny, James Clyne

Outstanding Effects Simulations in a Photoreal Feature

Avengers: Infinity War; Titan
Gerardo Aguilera, Ashraf Ghoniem, Vasilis Pazionis, Hartwell Durfor

Avengers: Infinity War; Wakanda
Florian Witzel, Adam Lee, Miguel Perez Senent, Francisco Rodriguez

Fantastic Beasts: The Crimes of Grindelwald
Dominik Kirouac, Chloe Ostiguy, Christian Gaumond

Venom
Aharon Bourland, Jordan Walsh, Aleksandar Chalyovski, Federico Frassinelli

Outstanding Effects Simulations in an Animated Feature

Dr. Seuss’ The Grinch; Snow, Clouds and Smoke
Eric Carme, Nicolas Brice, Milo Riccarand

Incredibles 2
Paul Kanyuk, Tiffany Erickson Klohn, Vincent Serritella, Matthew Kiyoshi Wong

Ralph Breaks the Internet; Virus Infection & Destruction
Paul Carman, Henrik Fält, Christopher Hendryx, David Hutchins

Smallfoot
Henrik Karlsson, Theo Vandernoot, Martin Furness, Dmitriy Kolesnik

Spider-Man: Into the Spider-Verse
Ian Farnsworth, Pav Grochola, Simon Corbaux, Brian D. Casper

Outstanding Effects Simulations in an Episode, Commercial, or Realtime Project

Altered Carbon
Philipp Kratzer, Daniel Fernandez, Xavier Lestourneaud, Andrea Rosa

Lost in Space; Jupiter is Falling
Denys Shchukin, Heribert Raab, Michael Billette, Jaclyn Stauber

Lost in Space; The Get Away
Juri Bryan, Will Elsdale, Hugo Medda, Maxime Marline

The Man in the High Castle; Statue of Liberty Destruction
Saber Jlassi, Igor Zanic, Nick Chamberlain, Chris Parks

Outstanding Compositing in a Photoreal Feature

Avengers: Infinity War; Titan
Sabine Laimer, Tim Walker, Tobias Wiesner, Massimo Pasquetti

First Man
Joel Delle-Vergin, Peter Farkas, Miles Lauridsen, Francesco Dell’Anna

Jurassic World: Fallen Kingdom
John Galloway, Enrik Pavdeja, David Nolan, Juan Espigares Enriquez

Welcome to Marwen
Woei Lee, Saul Galbiati, Max Besner, Thai-Son Doan

Outstanding Compositing in a Photoreal Episode

Altered Carbon
Jean-François Leroux, Reece Sanders, Stephen Bennett, Laraib Atta

The Handmaid’s Tale; June
Winston Lee, Gwen Zhang, Xi Luo, Kevin Quatman

Lost in Space; Impact; Crash Site Rescue
David Wahlberg, Douglas Roshamn, Sofie Ljunggren, Fredrik Lönn

Silicon Valley; Artificial Emotional Intelligence; Fiona
Tim Carras, Michael Eng, Shiying Li, Bill Parker

Outstanding Compositing in a Photoreal Commercial

Apple; Unlock
Morten Vinther, Michael Gregory, Gustavo Bellon, Rodrigo Jimenez

Apple; Welcome Home
Michael Ralla, Steve Drew, Alejandro Villabon, Peter Timberlake

Genesis; G90 Facelift
Neil Alford, Jose Caballero, Joseph Dymond, Greg Spencer

John Lewis; The Boy and the Piano
Kamen Markov, Pratyush Paruchuri, Kalle Kohlstrom, Daniel Benjamin

Outstanding Visual Effects in a Student Project

Chocolate Man
David Bellenbaum, Aleksandra Todorovic, Jörg Schmidt, Martin Boué

Proxima-b
Denis Krez, Tina Vest, Elias Kremer, Lukas Löffler

Ratatoskr
Meike Müller, Lena-Carolin Lohfink, Anno Schachner, Lisa Schachner

Terra Nova
Thomas Battistetti, Mélanie Geley, Mickael Le Mezo, Guillaume Hoarau

VFX studio Electric Theatre Collective adds three to London team

London visual effects studio Electric Theatre Collective has added three to its production team: Elle Lockhart, Polly Durrance and Antonia Vlasto.

Lockhart brings extensive CG experience, joining from Touch Surgery, where she ran the Johnson & Johnson account. Prior to that, she was a VFX producer at Analog, where she delivered three global campaigns for Nike. At Electric, she will serve as producer on Martini and Toyota.

Vlasto joins Electric to work with clients such as Mercedes, Tourism Ireland and Tui. She arrives from 750MPH, where, over a four-year period, she served as producer on work for Nike, Great Western Railway, VW and Amazon, to name a few.

At Electric, Durrance will serve as producer on H&M, TK Maxx and Carphone Warehouse. She joins from Unit, where she helped launch its in-house Design Collective and worked with clients such as Lush, Pepsi and Thatchers Cider. Prior to Unit, she was at Big Buoy, where she produced work for Jaguar Land Rover, giffgaff and Red Bull.

Recent projects at the studio, which also has an office in Santa Monica, California, include Tourism Ireland’s Capture Your Heart and Honda’s Palindrome.

Main Image: (L-R) Elle Lockhart, Antonia Vlasto and Polly Durrance.

Behind the Title: FuseFX VFX supervisor Marshall Krasser

Over the years, this visual effects veteran has worked with both George Lucas and Steven Spielberg, whose films helped inspire his career path.

NAME: Marshall Krasser

COMPANY: FuseFX 

CAN YOU DESCRIBE YOUR COMPANY?
FuseFX offers visual effects services for episodic television, feature films, commercials and VR productions. Founded in 2006, the company employs over 300 people across three studio locations in LA, NYC and Vancouver.

WHAT’S YOUR JOB TITLE?
Visual Effects Supervisor

WHAT DOES THAT ENTAIL?
In general, a VFX supervisor is responsible for leading the creative team that brings the director’s vision to life. The role does vary from show to show depending on whether or not there is an on-set or studio-side VFX supervisor.

Here is a list of responsibilities across the board:
– Read and flag the required VFX shots in the script.
– Work with the producer and team to bid the VFX work.
– Attend the creative meetings and location scouts.
– Work with the studio creative team to determine what they want and what we need to achieve it.
– Be the on-set presence for VFX work, making sure the data and information we need is shot, gathered and catalogued.
– Work with our in-house team to start developing assets and any pre-production concept art that will be needed.
– Once the VFX work is in post production, the VFX supervisor guides the team of in-house artists and technicians through the shot creation and completion phase, while working with the producer to keep the show within its budget constraints.
– Keep the client happy!

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
That the job is much more than pointing at a computer screen and making pretty images. Team management is critical; since you are working with very talented and creative people, it takes a special skill set and understanding. Having worked my way up through the VFX ranks helps me understand that mindset, because I have been in their shoes.

HOW LONG HAVE YOU BEEN WORKING IN VFX?
My first job was creating computer graphic images for speaker support presentations on a Genigraphics workstation in 1984. I then transitioned into feature film in 1994.

HOW HAS THE VFX INDUSTRY CHANGED IN THE TIME YOU’VE BEEN WORKING? WHAT’S BEEN GOOD, WHAT’S BEEN BAD?
It’s changed a lot. In the early days at ILM, we were breaking ground, being asked to create imagery that had never been seen before. That meant inventing new tools and approaches that hadn’t existed previously.

Today, VFX has less of the “man behind the curtain” mystique and has become more mainstream and familiar to most. The tools and computer power have evolved so there is less of the “heavy lifting” that was required in the past. This is all good, but the “bad” part is the fact that “tricking” people’s eyes is more difficult these days.

DID A PARTICULAR FILM INSPIRE YOU ALONG THIS PATH IN ENTERTAINMENT?
A couple of films really focused my attention on VFX. There is a whole generation that was enthralled with the first Star Wars movie. I will never forget the feeling I had upon first viewing it — it was magical.

The other was E.T., since it was more grounded on Earth and more plausible. I was blessed to be able to work directly with both George Lucas and Steven Spielberg [and the artisans who created the VFX for these films] during the course of my career.

DID YOU GO TO FILM SCHOOL?
I did not. At the time, there was virtually no opportunity to attend a film school, or any school, that taught VFX. I took the route that made the most sense for me at the time — art major. I am a classically trained artist who focused on graphic design and illustration, but I also took computer programming.

On a typical Saturday, I would spend the morning in the computer lab programming and the afternoon on the potter’s wheel throwing pots. Always found that ironic – primitive to modern in the same day!

WHAT’S YOUR FAVORITE PART OF THE JOB?
Working with the team and bringing the creative to life.

WHAT’S YOUR LEAST FAVORITE?
Numbers. No one told me there would be math! (Bidding, specifically.)

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Maybe a fishing or outdoor adventure guide. Something far away from computers and an office.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
– the Vice movie
– the Waco miniseries
– the Life Sentence TV series
– the Needle in a Timestack film
– The 100 TV series

WHAT IS THE PROJECT/S THAT YOU ARE MOST PROUD OF?
A few stand out, in no particular order: Pearl Harbor, Harry Potter, Galaxy Quest, Titanic, War of the Worlds and the last Indiana Jones movie.

WHAT TOOLS DO YOU USE DAY TO DAY?
I would have to say Nuke. I use it for shot and concept work when needed.

WHERE DO YOU FIND INSPIRATION NOW?
Everything around me. I am heavily into photography these days and am always looking to put a new spin on ordinary things and capture the unique.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Head into the great British Columbian outdoors for camping and other outdoor activities.

Asahi beer spot gets the VFX treatment

A collaboration between The Monkeys Melbourne, In The Thicket and Alt, a newly released Asahi campaign takes viewers on a journey through landscapes built around surreal Japanese iconography. Watch Asahi Super Dry — Enter Asahi here.

From script to shoot — a huge operation that took place at Sydney’s Fox Studios — director Marco Prestini and his executive producer Genevieve Triquet (from production house In The Thicket) brought on the VFX team at Alt to help realize the creative vision.

The VFX team at Alt (which has offices in Sydney, Melbourne and Los Angeles) worked with Prestini to help design and build the complex “one shot” look, with everything from robotic geishas to a gigantic CG squid in the mix, alongside a seamless blend of CG set extensions and beautifully shot live-action plates.

“VFX supervisor Dave Edwards and the team at Alt, together with my EP Genevieve, have been there since the very beginning, and their creative input and expertise were key at every step of the way,” explains Prestini. “Everything we did on set was the result of weeks of endless back and forth on technical previz, a process that required pretty much everyone’s input on a daily basis and that was incredibly inspiring for me to be part of.”

Dave Edwards, VFX supervisor at Alt, shares: “Production designer Michael Iacono designed the sets in 3D, with five huge sets built for the shoot. The team then worked out camera speeds and timings based on those five sets and seven plates. DP Stefan Duscio would suggest rigs and mounts, which our team could then test in previs to see if they would work with the set. During previs, we worked out that we couldn’t get the resolution and the required frame rate to shoot the high-frame-rate samurais, so we had to use the Alexa LF. Of course, that also helped Marco, who wanted minimal lens distortion, as it allowed a wide field of view without the distortion of normal anamorphic lenses.”

One complex scene involves a character battling a gigantic underwater squid, which was done via a process known as “dry for wet” — a film technique in which smoke, colored filters and/or lighting effects are used to simulate a character being underwater while filming on a dry stage. The team at Alt did a rough animation of the squid to help drive the actions of the talent and the stunt team on the day, before spending the final weeks perfecting the look of the photoreal monster.

In terms of tools, Alt used Adobe Photoshop for concept design and matte painting, while previs, modeling, texturing and animation were done in Autodesk Maya. Effects, lighting and look development went through Side Effects Houdini; the compositing pipeline was built around Foundry Nuke; the final online was completed in Autodesk Flame; and for graphics, the team used Adobe After Effects. The final edit was done by The Butchery.
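
To give a sense of what a compositing pipeline “built around Foundry Nuke” can automate, here is a minimal, hypothetical sketch using Nuke’s Python API. The Read, Merge2 and Write node classes and nuke.execute are standard Nuke scripting; the shot name, file paths and frame range are invented for illustration.

    # Hypothetical sketch: a scripted plate-over-CG comp in Nuke.
    # Node classes are standard Nuke; all paths and frames are invented.
    import nuke

    # Read the live-action plate and the CG render (hypothetical paths).
    plate = nuke.nodes.Read(file='plates/sh010.####.exr', first=1001, last=1100)
    cg = nuke.nodes.Read(file='cg/squid_sh010.####.exr', first=1001, last=1100)

    # Comp the CG over the plate (input 0 = background, input 1 = foreground).
    comp = nuke.nodes.Merge2(operation='over', inputs=[plate, cg])

    # Write the result and render the frame range.
    out = nuke.nodes.Write(file='comps/sh010_comp.####.exr', inputs=[comp])
    nuke.execute(out, 1001, 1100)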

Here is the VFX breakdown:

Enter Asahi – VFX Breakdown from altvfx on Vimeo.