
Review: Red Giant’s VFX Suite plugins

By Brady Betzel

If you have ever watched After Effects tutorials, you are bound to have seen the people who make up Red Giant. There is Aharon Rabinowitz, who you might mistake for a professional voiceover talent; Seth Worley, who can combine a pithy sense of humor and over-the-top creativity seamlessly; and my latest man-crush Daniel Hashimoto, better known as “Action Movie Dad” of Action Movie Kid.

In these videos, these talented pros show off some amazing things they created using Red Giant’s plugin offerings, such as the Trapcode Suite, the Magic Bullet Suite, Universe and others.

Now, Red Giant is trying to improve your visual effects workflow even further with the new VFX Suite for Adobe After Effects (although some work in Adobe Premiere as well).

The new VFX Suite is a compositing-focused toolkit that will complement many aspects of your work, from greenscreen keying to motion graphics compositing with tools such as Video CoPilot’s Element 3D. Whether you want to seamlessly composite light and atmospheric fog with fewer pre-composites, easily add a reflection to an object or even just have a better greenscreen keyer, the VFX Suite will help.

The VFX Suite includes Supercomp, Primatte Keyer 6, King Pin Tracker, Spot Clone Tracker, Optical Glow, Chromatic Displacement, Knoll Light Factory 3.1, Shadow and Reflection. The VFX Suite is priced at $999, unless you qualify for the academic discount, which means you can get it for $499.

In this review, I will go over each of the plugins within the VFX Suite, starting with Primatte Keyer 6.

Overall, I love Red Giant’s interfaces and GUIs; they seem a little more intuitive to me, allowing me to work more “creatively” rather than spending time figuring out technical issues.

I asked Red Giant what makes VFX Suite so powerful, and Rabinowitz, head of marketing for Red Giant and general post production wizard, shared this: “Red Giant has been helping VFX artists solve compositing challenges for over 15 years. For VFX Suite, we looked at those challenges with fresh eyes and built new tools to solve them with new technologies. Most of these tools are built entirely from scratch. In the case of Primatte Keyer, we further enhanced the UI and sped it up dramatically with GPU acceleration. Primatte Keyer 6 becomes even more powerful when you combine the keying results with Supercomp, which quickly turns your keyed footage into beautifully comped footage.”

Primatte Keyer 6
Primatte is a chromakey/single-color keying technology used in tons of movies and television shows. I got familiar with Primatte once BorisFX included it in its Continuum suite of plugins. Once I used Primatte and learned the intricacies of extracting detail from hair, and even just using the auto-analyze function, I never looked back. On occasion, Primatte needs a little help from others, like Keylight, but typically I can pull both easy and tough keys within one or two instances of Primatte.

If you haven’t used Primatte before, you essentially pick your key color by drawing a line or rectangle around the color, adjust the detail and opacity of the matte and, boom, you’re done. With Primatte 6 you now also get Core Matte. Core Matte essentially draws an inside mask automatically while allowing you to refine the edges — this is a real time saver when doing hundreds of interview greenscreen keys, especially when someone decides to wear a reflective necklace or piece of jewelry that usually requires an extra mask and tracking. Primatte 6 also adds GPU optimization, gaining even more preview and rendering speed than previous versions.
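To make the underlying idea concrete, here is a minimal, generic color-distance key written in Python/NumPy. This is only an illustrative sketch of how a chroma matte can be pulled; it is not Primatte’s actual algorithm, and the function and parameter names are hypothetical.

```python
# Illustrative color-distance chroma key (not Primatte's algorithm).
# Assumes a float RGB image with values in [0, 1].
import numpy as np

def simple_chroma_key(image, key_color, tolerance=0.25, softness=0.15):
    """Return an alpha matte: 0 where a pixel matches key_color, ramping to 1 elsewhere."""
    distance = np.linalg.norm(image - np.asarray(key_color, dtype=image.dtype), axis=-1)
    # Pixels closer to the key color than `tolerance` go fully transparent;
    # a soft ramp of width `softness` preserves edge detail such as hair.
    return np.clip((distance - tolerance) / softness, 0.0, 1.0)

# Example: pull a matte from a typical greenscreen color.
frame = np.zeros((1080, 1920, 3), dtype=np.float32)
matte = simple_chroma_key(frame, key_color=(0.1, 0.8, 0.2))
```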

Supercomp
If you are an editor like me — who knows enough to be dangerous when compositing and working within After Effects — sometimes you just want (or need) a simpler interface without having to figure out all the expressions, layer order, effects and compositing modes to get something to look right. If you are an Avid Media Composer user, you may have encountered the Paint Tool, which is one of those all-in-one plugins: you can paint, sharpen, blur and much more from inside one tool, much like Supercomp. Think of the Supercomp interface as a Colorista or Magic Bullet Looks-type interface, where you can work with composite effects such as fog, glow, lights, matte chokers, edge blend and more inside of one interface with much less pre-composing.

The effects are all GPU-accelerated and are context-aware. You can think of Supercomp as a great tool to use with your results from the Primatte Keyer, adding in atmosphere and light wraps quickly and easily inside one plugin and not multiple.
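For readers unfamiliar with the term, a light wrap spills a softened copy of the new background back over the edges of the keyed foreground so the subject settles into the plate. The sketch below is a generic NumPy/SciPy illustration of that idea under assumed float inputs; it is not Supercomp’s implementation, and the parameter names are made up for the example.

```python
# Generic light-wrap sketch (not Supercomp's implementation).
# comp/background: float RGB frames in [0, 1]; fg_alpha: float matte in [0, 1].
import numpy as np
from scipy.ndimage import gaussian_filter

def light_wrap(comp, background, fg_alpha, width=6.0, strength=0.4):
    # The wrap zone is the inner edge of the foreground matte.
    edge = np.clip(fg_alpha - gaussian_filter(fg_alpha, sigma=width), 0.0, 1.0)[..., None]
    bg_soft = gaussian_filter(background, sigma=(width, width, 0))
    # Screen the softened background over the comp, but only in the edge zone.
    screened = 1.0 - (1.0 - comp) * (1.0 - bg_soft * strength)
    return comp * (1.0 - edge) + screened * edge
```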

King Pin Tracker and Spot Clone Tracker
As an online editor, I am often tasked with sign replacements, paint-out of crew or cameras in shots, as well as other clean-ups. If I can’t accomplish what I want inside of BorisFX Continuum while using Mocha inside of Media Composer or Blackmagic’s DaVinci Resolve, I will jump over to After Effects and try my hand there. I don’t practice as much corner pinning as I would like, so I often forget the intricacies when tracking in Mocha and copying Corner Pin or Transform Data to After Effects. This is where the new King Pin Tracker can ease any difficulties, especially when performing corner pinning on relatively simple objects but still needing the ability to keyframe positions or perform a planar track without using multiple plugins or applications.

The Spot Clone Tracker is exactly what it says it is. Much like Resolve’s Patch Replace, Spot Clone Tracker allows you to track one area while replacing that same area with another area from the screen. In addition, Spot Clone Tracker has options to flip vertical, flip horizontal, add noise, and adjust brightness and color values. For such a seemingly simple tool, the Spot Clone Tracker is the dark horse in this race. You’d be surprised how many clone and paint tools don’t have adjustments like flipping and flopping or brightness changes. This is a great tool for quick dead-pixel fixes and painting out GoPros where you don’t need to mask anything out, although there is an option to “Respect Alpha.”

Optical Glow and Knoll Light Factory 3.1
Have you ever been in an editing session that needed police lights amplified or a nice glow on some text that the stock plugins just couldn’t get right? Optical Glow will solve this. Another amazingly simple yet powerful Red Giant plugin, Optical Glow can be applied and gamma-adjusted for video, log and linear levels right off the bat.

From there you can pick an inner tint, outer tint and overall glow color (aka Colorize) and set the vibrance. I really love the Falloff, Highlight Rolloff and Highlights-Only functions, which allow you to fine-tune the glow and just how much it shows up and affects the image. It’s so simple that it is hard to mess up, but the results speak for themselves and render out quicker than the other glow plugins I use.

Knoll Light Factory has been newly GPU-accelerated in Version 3.1, decreasing render times when using the over 200 presets or customizing your own lens flares. Optical Glow and Knoll Light Factory really complement each other.

Chromatic Displacement
Since watching an Andrew Kramer tutorial covering displacement, I have always wanted to make a video that showed huge seismic blasts but didn’t really want to put the time into properly building chromatic displacement. Lucky for me, Red Giant has introduced Chromatic Displacement! Whether you want to quickly make raindrops appear on the camera lens or add a seismic blast from a phaser, Chromatic Displacement will allow you to offset your background with a glass- or mirror-like appearance, or even add a heatwave look quickly. Simply choose the layer you want to displace from, adjust parameters such as displacement amount, spread and spread chroma, and choose whether to render using the CPU or GPU.
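As a rough illustration of what a per-channel displacement like this does, here is a small Python/SciPy sketch that offsets each color channel by a slightly different amount along the gradient of a driving layer. The parameter names echo the ones mentioned above but are assumptions for the example only; the plugin’s internals are certainly more sophisticated.

```python
# Rough per-channel ("chromatic") displacement sketch, driven by another layer.
# Assumes float RGB arrays in [0, 1]; not Red Giant's actual implementation.
import numpy as np
from scipy.ndimage import map_coordinates

def chromatic_displace(background, displace_layer, amount=8.0, spread_chroma=1.5):
    h, w, _ = background.shape
    # Use the driving layer's luminance gradient as the displacement direction.
    gy, gx = np.gradient(displace_layer.mean(axis=-1))
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    out = np.empty_like(background)
    # Shift red slightly less and blue slightly more than green to get color fringing.
    for c, scale in enumerate((1.0 - 0.1 * spread_chroma, 1.0, 1.0 + 0.1 * spread_chroma)):
        coords = [yy + gy * amount * scale, xx + gx * amount * scale]
        out[..., c] = map_coordinates(background[..., c], coords, order=1, mode='nearest')
    return out
```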

Shadow and Reflection
Red Giant packs Shadow and Reflection plugins into the VFX Suite as well. The Shadow plugin not only makes it easy to create shadows in front of or behind an object based on alpha channel or brightness, but best of all gives you an easy way to identify the point where the shadow should bend. The Shadow Bend option lets you identify where the bend exists and what color the Bend Axis should be, as well as the type and size of the seam, and it even allows for motion blur.

The Reflection plugin is very similar to the Shadow plugin and produces quick and awesome reflections without any After Effects wizardry. Just like Shadow, the Reflection plugin allows for a bend to be identified. Plus, you can adjust the softness of the reflection quickly and easily.

Summing Up
In the end, Red Giant always delivers great and useful plugins. VFX Suite is no different, and the only downside some might point to is the cost. While $999 is expensive, if compositing is a large portion of your business, the efficiency you gain may outweigh the costs.

Much like Shooter Suite does for online editors, Trapcode Suite does for VFX masters and Universe does for jacks of all trades, VFX Suite will take all of your ideas and help them blend seamlessly into your work.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Quick Chat: Sinking Ship’s Matt Bishop on live-action/CG series

By Randi Altman

Toronto’s Sinking Ship Entertainment is a production, distribution and interactive company specializing in children’s live-action and CGI-blended programming. The company has 13 Daytime Emmys and a variety of other international awards on its proverbial mantel. Sinking Ship has over 175 employees across all its divisions, including its VFX and interactive studio.

Matt Bishop

Needless to say, the company has a lot going on. We decided to reach out to Matt Bishop, founding partner at Sinking Ship, to find out more.

Sinking Ship produces, creates visual effects and posts its own content, but are you also open to outside projects?
Yes, we do work in co-production with other companies or contract our post production services to shows that are looking for cutting-edge VFX.

Have you always created your own content?
Sinking Ship has developed a number of shows and feature films, as well as worked in co-production with production companies around the world.

What came first, your post or your production services? Or were they introduced in tandem?
Both sides of the company evolved together as a way to push our creative visions. We started acquiring equipment on our first series in 2004, and we always look for new ways to push the technology.

Can you mention some of your most recent projects?
Some of our current projects include Dino Dana (Season 4), Dino Dana: The Movie, Endlings and Odd Squad Mobile Unit.

What is your typical path getting content from set to post?
We have been working with Red cameras for years, and we were the first company in Canada to shoot in 4K over a decade ago. We shoot a lot of content, so we create backups in the field before the media is sent to the studio.

Dino Dana

You work with a lot of data. How do you manage and keep all of that secure?
Backups, lots of backups. We use a massive LTO-7 tape robot, and we have over 2PB of backup storage on top of that. We recently added Qumulo to our workflow to ensure the most secure method possible.

What do you use for your VFX work? What about your other post tools?
We use a wide range of software, but our main tools in our creature department are Pixologic ZBrush and Foundry Mari, with all animation happening inside Autodesk Maya.

We also have a large renderfarm to handle the number of shots, and our render engine of choice is Arnold, which is now an Autodesk product. In post we use an Adobe Creative Cloud pipeline, with 4K HDR color grading happening in DaVinci Resolve. Qumulo is going to be a welcome addition as we continue to grow and our outputs become more complex.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 


Axis provides 1,000 VFX shots for the TV series Happy!

UK-based animation and visual effects house Axis Studios has delivered 1,000 shots across 10 episodes on the second series of the UCP-produced hit Syfy show Happy!.

Based on Grant Morrison and Darick Robertson’s graphic novel, Happy! follows alcoholic ex-cop turned hitman Nick Sax (Christopher Meloni), who teams up with imaginary unicorn Happy (voiced by Patton Oswalt). In the second season, the action moves from Christmastime to “the biggest holiday rebranding of all time” and a plot to “make Easter great again,” courtesy of last season’s malevolent child-kidnapper, Sonny Shine (Christopher Fitzgerald).

Axis Studios, working across its three creative sites in Glasgow, Bristol, and London, collaborated with executive producer and director Brian Taylor and showrunner Patrick Macmanus to raise the bar on the animation of the fully CG character. The studio also worked on a host of supporting characters, including a “chain-smoking man-baby,” a gimp-like Easter Bunny and even a Jeff Goldblum-shaped cloud. Alongside the extensive animation work, the team’s VFX workload greatly increased from the first season — including two additional episodes, creature work, matte painting, cloud simulations, asset building and extensive effects and clean-up work.

Building on the success of the first season, the 100-person team of artists further developed the animation of the lead character, Happy, improving the rig, giving him more nuanced emotions and continually working to integrate him more into the real-world environments.


RPS editors talk workflow, creativity and Michelob Ultra’s Robots

By Randi Altman

Rock Paper Scissors (RPS) is a veteran editing house specializing in commercials, music videos and feature films. Founded by Oscar-winning editor Angus Wall (The Social Network, The Girl With the Dragon Tattoo), RPS has a New York office as well as a main Santa Monica location that it shares with sister companies A52, Elastic and Jax.

We recently reached out to RPS editor Biff Butler and his assistant editor Alyssa Oh (both Adobe Premiere users) to find out about how they work, their editing philosophy and their collaboration on the Michelob Ultra Robots spot that premiered during this year’s Super Bowl.

Let’s find out more about their process…

Rock Paper Scissors, Santa Monica

What does your job entail?
Biff Butler: Simply put, someone hands us footage (and a script) and we make something out of it. The job is to act as cheerleader for those who have been carrying the weight of a project for weeks, maybe months, and have just emerged from a potentially arduous shoot.

Their job is to then sell the work that we do to their clients, so I must hold onto and protect their vision, maintaining that initial enthusiasm they had. If the agency has written the menu, and the client has ordered the meal, then a director is the farmer and the editor the cook.

I frequently must remind myself that although I might have been hired because of my taste, I am still responsible for feeding others. Being of service to someone else’s creative vision is the name of the game.

What’s your workflow like?
Alyssa Oh: At the start of the project, I receive the footage from production and organize it to Biff’s specs. Once it’s organized, I pass it off and he watches all the footage and assembles an edit. Once we get deeper into the project, he may seek my help in other aspects of the edit, including sound design, pulling music, creating graphics, temporary visual effects and creating animations. At the end of the project, I prep the edits for finishing color, mix, and conform.

What would surprise people about being an editor?
Oh: When I started, I associated editorial with “footage.” It surprised me that, aside from editing, we play a large part in decision-making for music and developing sound design.

Butler: I’ve heard the editor described as the final writer in the process. A script can be written and rewritten, but a lot happens in the edit room once shots are on a screen. The reality of seeing what actually fits within the allotted time that the format allows for can shape decisions as can the ever-evolving needs of the client in question. Another aspect we get involved with is the music — it’s often the final ingredient to be considered, despite how important a role it plays.

Robots

What do you enjoy the most about your job?
Oh: By far, my favorite part is the people that I work with. We spend so much time together; I think it’s important to not just get along, but to also develop close relationships. I’m so grateful to work with people who I look forward to spending the day with.

At RPS, I’ve gained so many great friendships over the years and learn a lot from everyone around me — not just in the aspect of editorial, but also from the people at companies that work alongside us — A52, Elastic and Jax.

Butler: At the risk of sounding corny, what turns me on most is collaboration and connection with other creative talents. It’s a stark contrast to the beginning of the job, which I also very much adore — when it’s just me and my laptop, watching footage and judging shots.

Usually we get a couple days to put something together on our own, which can be a peaceful time of exploration and discovery. This is when I get to formulate my own opinions and points of view on the material, which is good to establish but also is something I must be ready to let go of… or at least be flexible with. Once the team gets involved in the room — be it the agency or the director — the real work begins.

As I said before, being of service to those who have trusted me with their footage and ideas is truly an honorable endeavor. And it’s not just those who hire us, but also talents we get to join forces with on the audio/music side, effects, etc. On second thought, the free supply of sparkly water we have on tap is probably my favorite part. It’s all pretty great.

What’s the hardest part of the job?
Oh: For me, the hardest part of our job is the “peaks and valleys.” In other words, we don’t have a set schedule, and with each project, our work hours will vary.

Robots

Butler: I could complain about the late nights or long weekends or unpredictable schedules, but those are just a result of being employed, so I count myself fortunate that I even get to moan about that stuff. Perhaps one of the trickiest parts is in dealing with egos, both theirs and mine.

Inevitably, I serve as mediator between a creative agency and the director they hired, and the client who is paying for this whole project. Throw into the mix my own sense of ownership that develops, and there’s a silly heap of egos to manage. It’s a joy, but not everyone can be fully satisfied all the time.

If you couldn’t edit for a living, what would you do?
Oh: I think I would definitely be working in a creative field or doing something that’s hands-on (I still hope to own a pottery studio someday). I’ve always had a fondness for teaching and working with kids, so perhaps I’d do something in the teaching field.

Butler: I would be pursuing a career in directing commercials and documentaries.

Did you know from a young age that you would be involved in this industry?
Oh: In all honesty, I didn’t know that this would be my path. Originally, I wanted to go into broadcast, specifically sports broadcasting. I had an interest in television production since high school and learned a bit about editing along the way.

However, I had applied to work at RPS as a production assistant shortly after graduating and quickly gained interest in editing and never looked back!

Butler: I vividly recall seeing the movie Se7en in the cinema and being shell-shocked by the opening title sequence. The feeling I was left with was so raw and unfiltered, I remember thinking, “That is what I want to do.” I wasn’t even 100 percent sure what that was. I knew I wanted to put things together! It wasn’t even so much a mission to tell stories, but to evoke emotion — although storytelling is most often the way to get there.

Robots

At the same time, I was a kid who grew up under the spell of some very effective marketing campaigns — from Nike, Jordan, Gatorade — and knew that advertising was a field I would be interested in exploring when it came time to find a real job.

As luck would have it, in 2005 I found myself living in Los Angeles after the rock band I was in broke up, and I walked over to a nearby office an old friend of mine had worked at, looking for a job. She’d told me it was a place where editors worked. Turns out, that place was where many of my favorite ads were edited, and it was founded by the guy who put together that Se7en title sequence. That place was Rock Paper Scissors, and it’s been my home ever since.

Can you guys talk about the Michelob Ultra Robots spot that first aired during the Super Bowl earlier this year? What was the process like?
Butler: The process involved a lot of trust, as we were all looking at frames that didn’t have any of the robots in them — they were still being created in CG — so when presenting edits, we would have words floating on screen reading “Robot Here” or “Robot Runs Faster Now.”

It says a lot about the agency in that it could hold the client’s hand through our rough edit and have them buy off on what looked like a fairly empty edit. Working with director Dante Ariola at the start of the edit helped to establish the correct rhythm and intention of what would need to be conveyed in each shot. Holding on to those early decisions was paramount, although we clearly had enough human performances to rest our hats on too.

Was there a particular cut that was more challenging than the others?
Butler: The final shot of the spot was a battle I lost. I’m happy with the work, especially the quality of human reactions shown throughout. I’m also keen on the spot’s simplicity. However, I had a different view of how the final shot would play out — a closer shot would have depicted more emotion and yearning in the robot’s face, whereas where we landed left the robot feeling more defeated — but you can’t win them all.

Robots

Did you feel extra stress knowing that the Michelob spot would air during the Super Bowl?
Butler: Not at all. I like knowing that people will see the work and having a firm airdate reduces the likelihood that a client can hem and haw until the wheels fall off. Thankfully there wasn’t enough time for much to go wrong!

You’ve already talked about doing more than just editing. What are you often asked to do in addition to just editing?
Butler: Editors are really also music supervisors. There can be a strategy to it, also knowing when to present a track you really want to sell through. But really, it’s that level of trust between myself and the team that can lead to some good discoveries. As I mentioned before, we are often tasked with just providing a safe and nurturing environment for people to create.

Truly, anybody can sit and hit copy and paste all day. I think it’s my job to hold on to that initial seed or idea or vision, and protect it through the final stages of post production. This includes ensuring the color correction, finishing and sound mix all reflect intentions established days or weeks ahead when we were still fresh enough in our thinking to be acting on instinct.

I believe that as creative professionals, we are who we are because of our instincts, but as a job drags on and on, we are forced to act more with our heads than our hearts. There is a stamina that is required, making sure that what ends up on the TV is representative of what was initially coming out of that instinctual artistic expression.

Does your editing hat change depending on the type of project you are cutting?
Butler: No, not really. An edit is an edit. All sessions should involve laughter and seriousness and focus and moments to unwind and goof off. Perhaps the format will determine the metaphorical hat, or to be more specific, the tempo.

Selecting shots for a 30- or 60-second commercial is very different than chasing moments for a documentary or long-form narrative. I’ll often remind myself to literally breathe slower when I know a shot needs to be long, and the efficiency with which I am telling a story is of less importance than the need to be absorbed in a moment.

Can you name some of your favorite technology?
Oh: My iPhone and all the apps that come with it; my Kindle, which allows me to be as indecisive as I want when it comes to picking a book and traveling; my laptop; and noise-cancelling headphones!

Butler: The carbonation of water, wireless earphones and tiny solid-state hard drives.


Zoic in growth mode, adds VFX supervisor Wanstreet, ups Overstrom

VFX house Zoic Studios has made changes to its creative team, adding VFX supervisor Chad Wanstreet to its Culver City studio and promoting Nate Overstrom to creative director in its New York studio.

Wanstreet has nearly 15 years of experience in visual effects, working across series, feature film, commercial and video game projects. He comes to Zoic from FuseFX, where he worked on television series including NBC’s Timeless, Amazon Prime’s The Tick, Marvel’s Agents of S.H.I.E.L.D. for ABC and Starz’s Emmy-winning series Black Sails.

Overstrom has spent over 15 years of his career with Zoic, working across the Culver City and New York City studios, earning two Emmy nominations and working on top series including Banshee, Maniac and Iron Fist. He is currently the VFX supervisor on Cinemax’s Warrior.

The growth of the creative department is accompanied by the promotion of several Zoic lead artists to VFX supervisors, with Andrew Bardusk, Matt Bramante, Tim Hanson and Billy Spradlin stepping up to lead teams on a wide range of episodic work. Bardusk just wrapped Season 4 of DC’s Legends of Tomorrow, Bramante just wrapped Noah Hawley’s upcoming feature film Lucy in the Sky, Hanson just completed Season 2 of Marvel’s Cloak & Dagger, and Spradlin just wrapped Season 7 of CW’s Arrow.

This news comes on the heels of a busy start of the year for Zoic across all divisions, including the recent announcement of the company’s second development deal — optioning New York Times best-selling author Michael Johnston’s fantasy novel Soleri for feature film and television adaptation. Zoic also added Daniel Cohen as executive producer, episodic and series in New York City, and Lauren F. Ellis as executive producer, episodic and series in Culver City.

Main Image Caption: (L-R) Chad Wanstreet and Nate Overstrom


UK’s Jellyfish adds virtual animation studio and Kevin Spruce

London-based visual effects and animation studio Jellyfish Pictures is opening a new virtual animation facility in Sheffield. The new site is the company’s fifth studio in the UK, in addition to its established studios in Fitzrovia, Central London; Brixton, South London; and Oval, South London. This addition is no surprise considering Jellyfish created one of Europe’s first virtual VFX studios back in 2017.

With no hardware housed onsite, Jellyfish Pictures’ Sheffield studio — situated in the city center within the Cooper Project Complex — will operate in a completely PC-over-IP environment. With all technology and pipeline housed in a centrally-based co-location, the studio is able to virtualize its distributed workstations through Teradici’s remote visualization solution, allowing for total flexibility and scalability.

The Sheffield site will sit on the same logical LAN as the other four studios, providing access to the company’s software-defined storage (SDS) from Pixit Media, enabling remote collaboration and support for flexible working practices. With the rest of Jellyfish Pictures’ studios all TPN-accredited, the Sheffield studio will follow in their footsteps, using Pixit Media’s container solution within PixStor 5.

The innovative studio will be headed up by Jellyfish Pictures’ newest appointment, animation director Kevin Spruce. With a career spanning over 30 years, Spruce joins Jellyfish from Framestore, where he oversaw a team of 120 as the company’s head of animation. During his time at Framestore, Spruce worked as animation supervisor on feature films such as Fantastic Beasts and Where to Find Them, The Legend of Tarzan and Guardians of the Galaxy. Prior to his 17-year stint at Framestore, Spruce held positions at the Canadian animation company Bardel Entertainment and the Spielberg-helmed feature animation studio Amblimation.

Jellyfish Pictures’ northern presence will start off with a small team of animators working on the company’s original animation projects, with a view to expanding the team and taking on a large feature animation project by the end of the year.

“We have multiple projects coming up that will demand crewing up with the very best talent very quickly,” reports Phil Dobree, CEO of Jellyfish Pictures. “Casting off the constraints of infrastructure, which traditionally has been the industry’s way of working, means we are not limited to the London talent pool and can easily scale up in a more efficient and economical way than ever before. We all know London, and more specifically Soho, is an expensive place to play, both for employees working here and for the companies operating here. Technology is enabling us to expand our horizon across the UK and beyond, as well as offer talent a way out of living in the big city.”

For Spruce, the move made perfect sense: “After 30 years working in and around Soho, it was time for me to move north and settle in Sheffield to achieve a better work life balance with family. After speaking with Phil, I was excited to discover he was interested in expanding his remote operation beyond London. With what technology can offer now, the next logical step is to bring the work to people rather than always expecting them to move south.

“As animation director for Jellyfish Pictures Sheffield, it’s my intention to recruit a creative team here to strengthen the company’s capacity to handle the expanding slate of work currently in-house and beyond. I am very excited to be part of this new venture north with Jellyfish. It’s a vision of how creative companies can grow in new ways and access talent pools farther afield.”



Amazon’s Good Omens: VFX supervisor Jean-Claude Deguara

By Randi Altman

Good versus evil. It’s a story that’s been told time and time again, but Amazon’s Good Omens turns that trope on its head a bit. With Armageddon approaching, two unlikely heroes and centuries-long frenemies — an angel (Michael Sheen) and a demon (David Tennant) — team up to try to fight off the end of the world. Think buddy movie, but with the fate of the world at stake.

In addition to Tennant and Sheen, the Good Omens cast is enviable — featuring Jon Hamm, Michael McKean, Benedict Cumberbatch and Nick Offerman, just to name a few. The series is based on the 1990 book by Terry Pratchett and Neil Gaiman.

Jean-Claude Deguara

As you can imagine, this six-part end-of-days story features a variety of visual effects, from creatures to environments to particle effects and fire. London’s Milk was called on to provide 650 visual effects shots, and its co-founder Jean-Claude Deguara supervised all.

He was also able to talk directly with Gaiman, which he says was a huge help. “Having access to Neil Gaiman as the author of Good Omens was just brilliant, as it meant we were able to ask detailed questions to get a more detailed brief when creating the VFX and receive such insightful creative feedback on our work. There was never a question that couldn’t be answered. You don’t often get that level of detail when you’re developing the VFX.”

Let’s find out more about Deguara’s process and the shots in the show as he walks us through his collaboration and creating some very distinctive characters.

Can you talk about how early you got involved on Good Omens?
We were involved right at the beginning, pre-script. It’s always the best scenario for VFX to be involved at the start, to maximize planning time. We spent time with director Douglas Mackinnon, breaking down all six scripts to plan the VFX methodology — working out and refining how to best use VFX to support the storytelling. In fact, we stuck to most of what we envisioned and we continued to work closely with him throughout the project.

How did getting involved when you did help the process?
With the sheer volume and variety of work — 650 shots, a five-month post production turnaround and a crew of 60 — the planning and development time in preproduction was essential. The incredibly wide range of work spanned multiple creatures, environments and effects work.

Having constant access to Neil as author and showrunner was brilliant as we could ask for clarification and more details from him directly when creating the VFX and receive immediate creative feedback. And it was invaluable to have Douglas working with us to translate Neil’s vision in words onto the screen and plan out what was workable. It also meant I was able to show them concepts the team were developing back in the studio while we were on set in South Africa. It was a very collaborative process.

It was important to have a strong crew across all VFX disciplines, as they worked together on multiple sequences at the same time. So you’re starting in tracking on one, in effects on another, and compositing and finishing everything off on another. It was a big logistical challenge, but certainly the kind that we relish and are well versed in at Milk.

Did you do previs? If so, how did that help and what did you use?
We only used previs to work out how to technically achieve certain shots or to sell an idea to Douglas and Neil. It was generally very simple, using gray scale animation with basic geometry. We used it to do a quick layout of how to rescale the dog to be a bigger hellhound, for example.

You were on set supervising… can you talk about how that helped?
It was a fast-moving production with multiple locations in the UK over about six months, followed by three months in South Africa. It was crucial for the volume and variety of VFX work required on Good Omens that I was across all the planning and execution of filming for our shots.

Being on set allowed me to help solve various problems as we went along. I could also show Neil and Douglas various concepts that were being developed back in the studio, so that we could move forward more quickly with creative development of the key sequences, particularly the challenging ones such as Satan and the Bentley.

What were the crucial things to ensure during the shoot?
Making sure all the preparation was done meticulously for each shot — given the large volume and variety of the environments and sets. I worked very closely with Douglas on the shoot so we could have discussions to problem-solve where needed and find creative solutions.

Can you point to an example?
We had multiple options for shots involving the Bentley, so our advance planning and discussions with Douglas involved pulling out all the car sequences in the series scripts and creating a “mini script” specifically for the Bentley. This enabled us to plan which assets (the real car, the art department’s interior car shell or the CG car) were required and when.

You provided 650 VFX shots. Can you describe the types of effects?
We created everything from creatures (Satan exploding up out of the ground, a kraken, the hellhound, a demon and a snake) to environments (heaven, imagined as a penthouse with views of major world landmarks, and a busy Soho street), as well as feathered wings for Michael Sheen’s angel Aziraphale and David Tennant’s demon Crowley, and a CG Bentley in which Tennant’s Crowley hurtles around London.

We also had a large effects team working on a whole range of effects over the six episodes — from setting the M25 and the Bentley on fire to a flaming sword to a call center filled with maggots to a sequence in which Crowley (Tennant) travels through the internet at high speed.

Despite the fantasy nature of the subject matter, it was important to Gaiman that the CG elements did not stand out too much. We needed to ensure the worlds and characters were always kept grounded in reality. A good example is how we approached heaven and hell. These key locations are essentially based around an office block. Nothing too fantastical, but they are, as you would expect, completely different and deliberately so.

Hell is the basement, which was shot in a disused abattoir in South Africa, whilst heaven is a full CG environment located in the penthouse with a panoramic view over a cityscape featuring landmarks such as the Eiffel Tower, The Shard and the Pyramids.

You created many CG creatures. Can you talk about the challenges of that and how you accomplished them?
Many of the main VFX features, such as Satan (voiced by Benedict Cumberbatch), appear only once in the six-part series as the story moves swiftly toward the apocalypse. So we had to strike a careful balance between delivering impact and ensuring they were immediately recognizable and grounded in reality. Given our fast five-month post turnaround, we had our key teams working concurrently on creatures such as a kraken; the hellhound; a small, portly demon called Usher who meets his demise in a bath of holy water; and the infamous snake in the Garden of Eden.

We have incorporated Ziva VFX into our pipeline, which ensured our rigging and modeling teams maximized the development and build phases in the timeframe. For example, the muscle, fat and skin simulations are all solved on the renderfarm; the animators can publish a scene and then review the creature effects in dailies the next day.

We use our proprietary software CreatureTools for rigging all our creatures. It is a modular rigging package, which allows us to very quickly build animation rigs for previs or blocking and we build our deformation muscle and fat rigs in Ziva VFX. It means the animators can start work quickly and there is a lot of consistency between the rigs.

Can you talk about the kraken?
The kraken pays homage to Ray Harryhausen and his work on Clash of the Titans. Our team worked to create the immense scale of the kraken and take water simulations to the next level. The top half of the kraken body comes up out of the water and we used a complex ocean/water simulation system that was originally developed for our ocean work on the feature film Adrift.

Can you dig in a bit more about Satan?
Near the climax of Good Omens, Aziraphale, Crowley and Adam witness the arrival of Satan. In the early development phase, we were briefed to highlight Satan’s enormous size (about 400 feet) without making him too comical. He needed to have instant impact given that he appears on screen for just this one long sequence and we don’t see him again.

Our first concept was pretty scary, but Neil wanted him simpler and more immediately recognizable. Our concept artist created a horned crown, which along with his large, muscled, red body delivered the look Neil had envisioned.

We built the basic model, and when Cumberbatch was cast, the modeling team introduced some of his facial characteristics into Satan’s FACS-based blend shape set. Video reference of the actor’s voice performance, captured on a camera phone, helped inform the final keyframe animation. The final Satan was a full Ziva VFX build, complete with skeleton, muscles, fat and skin. The team set up the muscle scene and fat scene with a path to an Alembic cache of the skeleton, so that they ended up with a blended mesh of Satan with all the muscle detail on it.

We then did another skin pass on the face to add extra wrinkles and loosen things up. A key challenge for our animation team — led by Joe Tarrant — lay in animating a creature of the immense scale of Satan. They needed to ensure the balance and timing of his movements felt absolutely realistic.

Our effects team — led by James Reid — layered multiple effects simulations to shatter the airfield tarmac and generate clouds of smoke and dust, optimizing setups so that only those particles visible on camera were simulated. The challenge was maintaining a focus on the enormous size and impact of Satan while still showing the explosion of the concrete, smoke and rubble as he emerges.

Extrapolating from live-action plates shot at an airbase, the VFX team built a CG environment and inserted live action of the performers into otherwise fully digital shots of the gigantic red-skinned devil bursting out of the ground.

And the hellhound?
Beelzebub (Anna Maxwell Martin) sends the antichrist (a boy named Adam) a giant hellhound. By giving the giant beast a scary name, Adam will set Armageddon in motion. In reality, Adam really just wants a loveable pet and transforms the hellhound into a miniature hound called, simply, Dog.

A Great Dane performed as the hellhound, photographed in a forest location while a grip kept pace with a small square of bluescreen. The Milk team tracked the live action and performed a digital head and neck replacement. Sam Lucas modeled the head in Autodesk Maya, matching the real dog’s anatomy before stretching its features into grotesquery. A final round of sculpting followed in Pixologic ZBrush, with artists refining 40-odd blend shapes for facial expression.

Once our rigging team got the first iteration of the blend shapes, they passed the asset off to animation for feedback. They then added an extra level of tweaking around the lips. In the creature effects phase, they used Ziva VFX to add soft body jiggle around the bottom of the lips and jowls.

What about creating the demon Usher?
One of our favorite characters was the small, rotund, quirky demon creature called Usher. He is a fully rigged CG character. Our team took a fully concepted image and adapted it to the performance and physicality of the actor. To get the weight of Usher’s rotund body, the rigging team — led by Neil Roche — used Ziva VFX to run a soft body simulation on the fatty parts of the creature, which gave him a realistic jiggle. They then added a skin simulation using Ziva’s cloth solver to give an extra layer of wrinkling across Usher’s skin. Finally, they used nCloth in Maya to simulate his sash and medals.

Was one more challenging/rewarding than the others?
Satan, because of his huge scale and the integrated effects.

Out of all of the effects, can you talk about your favorite?
The CG Bentley, without a doubt! The digital Bentley featured in scenes showing the car tearing around London and the countryside at 90 miles per hour. Ultimately, Crowley drives through hellfire on the M25; the car catches fire and burns continuously as he heads toward the site of Armageddon. The production located a real 1934 Bentley 3.5 Derby Coupe by Thrupp & Maberly, which we photo-scanned and modeled in intricate detail. We introduced subtle imperfections to the body panels, ensuring the CG Bentley had the same handcrafted appearance as the real thing and would hold up in full-screen shots, including continuous transitions from the street through a window to the actors in an interior replica car.

In order to get the high speed required, we shot plates on location from multiple cameras, including one on a motorbike to achieve the high-speed bursts. Later, production filled the car with smoke, and our effects team added CG fire and burning textures to the exterior of our CG car, which intensified as he continued his journey.

You’ve talked about the tight post turnaround. How did you show the client shots for approval?
Given the volume and wide range of work required, we were working on a range of sequences concurrently to maximize the short post window — and to align our teams when they were working on similar types of shots.

We had constant access to Neil and Douglas throughout the post period, which was crucial for approvals and feedback as we developed key assets and delivered key sequences. Neil and Douglas would visit Milk regularly for reviews toward delivery of the project.

What tools did you use for the VFX?
Amazon Web Services (AWS) for cloud rendering, Ziva VFX for creature rigging, Maya, Nuke and Houdini for effects, and Arnold for rendering.

What haven’t I asked that is important to touch on?
Our work on Soho, in which Michael Sheen’s Aziraphale bookshop is situated. Production designer Michael Ralph created a set based on Soho’s Berwick Street, comprising a two-block street exterior constructed up to the top of the first story, with the complete bookshop — inside and out — standing on the corner.

Four 20-x-20-foot mobile greenscreens helped our environment team complete the upper levels of the buildings and extend the road into the far distance. We photo scanned both the set and the original Berwick Street location, combining the reference to build digital assets capturing the district’s unique flavor for scenes during both day and nighttime.


Before and After: Soho

Mackinnon wanted crowds of people moving around constantly, so on shooting days crowds of extras thronged the main section of street and a steady stream of vehicles turned in from a junction part way down. Areas outside this central zone remained empty, enabling us to drop in digital people and traffic without having to do takeovers from live-action performers and cars. Milk had a 1,000-frame cycle of cars and people that it dropped into every scene. We kept the real cars always pulling in round the corner and devised it so there was always a bit of gridlock going on at the back.

And finally, we relished the opportunity to bring to life Neil Gaiman and Douglas Mackinnon’s awesome apocalyptic vision for Good Omens. It’s not often you get to create VFX in a comedy context. For example, the stuff inside the antichrist’s head: whatever he thinks of becomes reality. However, for a 12-year-old child, this means reality is rather offbeat.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 


Behind the Title: Ntropic Flame artist Amanda Amalfi

NAME: Amanda Amalfi

COMPANY: Ntropic (@ntropic)

CAN YOU DESCRIBE YOUR COMPANY?
Ntropic is a content creator producing work for commercials, music videos and feature films, as well as crafting experiential and interactive VR and AR media. We have offices in San Francisco, Los Angeles, New York City and London. Some of the services we provide include design, VFX, animation, editing, color grading and finishing.

WHAT’S YOUR JOB TITLE?
Senior Flame Artist

WHAT DOES THAT ENTAIL?
Being a senior Flame artist involves a variety of tasks that really span the duration of a project. From communicating with directors, agencies and production teams to helping plan out any visual effects that might be in a project (also being a VFX supervisor on set) to the actual post process of the job.

Amanda worked on this lipstick branding video for the makeup brand Morphe.

It involves client and team management (as you are often also the 2D lead on a project) and calls for a thorough working knowledge of the Flame itself, both in timeline management and that little thing called compositing. The compositing could cross multiple disciplines — greenscreen keying, 3D compositing, set extension and beauty cleanup to name a few. And it helps greatly to have a good eye for color and to be extremely detail-oriented.

WHAT MIGHT SURPRISE PEOPLE ABOUT YOUR ROLE?
How much it entails. Since this is usually a position that exists in a commercial house, we don’t have as many specialties as there would be in the film world.

WHAT’S YOUR FAVORITE PART OF THE JOB?
First is the artwork. I like that we get to work intimately with the client in the room to set looks. It’s often a very challenging position to be in — having to create something immediately — but the challenge is something that can be very fun and rewarding. Second, I enjoy being the overarching VFX eye on the project; being involved from the outset and seeing the project through to delivery.

WHAT’S YOUR LEAST FAVORITE?
We’re often meeting tight deadlines, so the hours can be unpredictable. But the best work happens when the project team and clients are all in it together until the last minute.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
The evening. I’ve never been a morning person so I generally like the time right before we leave for the day, when most of the office is wrapping up and it gets a bit quieter.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Probably a tactile art form. Sometimes I have the urge to create something that is tangible, not viewed through an electronic device — a painting or a ceramic vase, something like that.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I loved films that were animated and/or used 3D elements growing up and wanted to know how they were made. So I decided to go to a college that had a computer art program with connections in the industry and was able to get my first job as a Flame assistant in between my junior and senior years of college.

ANA Airlines

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Most recently I worked on a campaign for ANA Airlines. It was a fun, creative challenge on set and in post production. Before that I worked on a very interesting project for Facebook’s F8 conference featuring its AR functionality and helped create a lipstick branding video for the makeup brand Morphe.

IS THERE A PROJECT THAT YOU ARE MOST PROUD OF?
I worked on a spot for Vaseline that was a “through the ages” concept, and we had to create looks that would read as being from the 1880s, 1900, the 1940s, the 1970s and present day, in locations that varied from the Arctic to the building of the Brooklyn Bridge to a boxing ring. To start we sent the digitally shot footage with our 3D and comps to a printing house and had it printed and re-digitized. This worked perfectly for the ’70s-era look. Then we did additional work to age it further to the other eras — though my favorite was the Arctic turn-of-the-century look.

NAME SOME TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Flame… first and foremost. It really is the most inclusive software — I can grade, track, comp, paint and deliver all in one program. My monitors — the 4K Eizo and a color-calibrated broadcast monitor — are also essential.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Mostly Instagram.

DO YOU LISTEN TO MUSIC WHILE YOU WORK? 
I generally have music on with clients, so I will put on some relaxing music. If I’m not with clients, I listen to podcasts. I love How Did This Get Made and Conan O’Brien Needs a Friend.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Hiking and cooking are two great de-stressors for me. I love being in nature and working out and then going home and making a delicious meal.


NYC’s The-Artery expands to larger space in Chelsea

The-Artery has expanded and moved into a new 7,500-square-foot space in Manhattan’s Chelsea neighborhood. Founded by chief creative officer Vico Sharabani, The-Artery will use this extra space while providing visual effects, post supervision, offline editorial, live action and experience design and development across multiple platforms.

According to Sharabani, the new space is not only a response to the studio’s growth, but allows The-Artery to foster better collaboration and reinforce its relationships with clients and creative partners. “As a creative studio, we recognize how important it is for our artists, producers and clients to be working in a space that is comfortable and supportive of our creative process,” he says. “The extraordinary layout of this new space, the size, the lighting and even our location, allows us to provide our clients with key capabilities and plays an important part in promoting our mission moving forward.”

Recent The-Artery projects include 2018’s VR-enabled production for Mercedes-Benz, work on Under Armour’s “Rush” campaign and Beyoncé’s Coachella documentary, Homecoming.

They have also worked on feature films like Netflix’s Beasts of No Nation, Wes Anderson’s Oscar-winning The Grand Budapest Hotel and the crime caper Ocean’s 8.

The-Artery’s new studio features a variety of software including Flame, Houdini, Cinema 4D, 3ds Max, Maya, the Adobe Creative Cloud suite of tools, Avid Media Composer, Shotgun for review and approval and more.

The-Artery features a veteran team of artists and creative collaborators, including a recent addition — editor and former Mad River Post owner Michael Elliot. “Whether they are agencies, commercial and film directors or studios, our clients always work directly with our creative directors and artists, collaborating closely throughout a project,” says Sharabani.

Main Image: Vico Sharabani (far right) and team in their new space.

Phosphene’s visual effects for Showtime’s Escape at Dannemora

By Randi Altman

The Showtime limited series Escape at Dannemora is based on the true story of two inmates (David Sweat and Richard Matt) who escape from an upstate New York prison. They were aided by Tilly, a female prison employee whose husband also worked at Clinton Correctional Facility. She helped run the tailor shop where both men worked and had an intimate relationship with each of them.

Matt Griffin

As we approach Emmy season, we thought it was a good time to reach out to the studio that provided visual effects for the Ben Stiller-directed miniseries, which was nominated for a Golden Globe for best television limited series or movie. Escape at Dannemora stars Patricia Arquette, Benicio Del Toro and Paul Dano.

New York City-based Phosphene was called on to create a variety of visual effects, including turning five different locations into the Clinton Correctional Facility, the maximum security prison where the escape took place. We spoke with VFX producer Matt Griffin and VFX supervisor Djuna Wahlrab to find out more.

How early did you guys get involved in the project? Were there already set plans for the types of VFX needed? How much input did Phosphene have?
Matt Griffin: There were key sequences that were discussed with us very early on. The most crucial among them were Sweat’s Run, a nine-minute “oner” that opened Episode 5; the gruesome death scene of Broome County Sheriff’s Deputy Kevin Tarsia; and an ambitious crane shot that revealed the North Yard in the prison.

Djuna Wahlrab

What were the needs of the filmmakers and how did your studio fill that need?
Were you on set supervising?
Griffin: Ben Stiller and the writers had a very clear vision for these challenging sequences, and therefore had a very realistic understanding of how ambitious the VFX would be. They got us involved right at the start so we could be as collaborative as possible with production in preparing the methodology for execution.

In that same spirit, they had us supervise the majority of the shoot, which positioned us to be involved as the natural shifts and adjustments of production arose day to day. It was amazing to be creative problem solvers with the whole team rather than just reacting to what had happened once we got to post.

I know that creating the prison was a big part — taking pieces of a few different prisons to make one?
Djuna Wahlrab: Clinton Correctional is a functioning prison, so we couldn’t shoot the whole series within its premises — instead we filmed in five different locations. We shot at a decommissioned prison in Pittsburgh, the prison’s tailor shop was staged in an old warehouse in Brooklyn, and the Honor Block (where our characters were housed) and parts of the prison bowels were built on a stage in Queens. Remaining pieces under the prison were shot in Yonkers, New York, in an active water treatment plant. Working closely with production designer Mark Ricker, we tackled the continuity across all these locations.

The upper courts overlook the town.

We knew the main guard tower visible from the outside of Clinton Correctional was crucial, so we always planned to carry that through to Pittsburgh. Scenes taking place just inside the prison wall were also shot in Pittsburgh, and that prison was not as long as Clinton, so we extended the depth of those shots.

While the surrounding mountainside terrain is on beautiful display from the North Yard, it’s also felt from the ground among the buildings within the prison. When looking down the length of the streets, you can see the sloping side of the mountain just over the wall. These scenes were filmed in Pittsburgh, where what you see beyond those walls is actually a bustling, hilly city with water towers, electric lines and highways, so we had to adjust those views to match the real location.

Can you talk about the shot that had David Sweat crawling through pipes in the basement of the prison?
Wahlrab: For what we call Sweat’s Run — because we were creating a “oner” out of 17 discrete pieces — preproduction was crucial. The previs went far beyond a compositional guide. Using blueprints from three different locations and plans for the eventual stage set, orthographic views were created with extremely detailed planning for camera rigging and hand-off points. Drawing on this early presentation, Matt Pebler and the camera department custom-built many of the rigs required for our constricted spaces and meticulous overlapping sections.

The previs was a common language for all departments at the start, but as each piece of the run was filmed, the previs was updated with completed runs and the requirements would shift. Shooting one piece of the run would instantly lock in requirements for the other connecting pieces, and we’d have to determine a more precise plan moving forward from that point. It took a high level of collaboration and flexibility from all departments to keep tightening the level of precision required from everyone.

Sweat preparing for escape.

Can you talk about the scene where Sweat runs over the sheriff’s deputy Tarsia?
Wahlrab: Special effects had built a rig for a partial car that would be safe to “run over” a stunt man. A shell of a vehicle was suspended from an arm off a rigged tactical truck, so that they moved in parallel. Sweat’s stunt car floated a few feet off the ground. The shell had a roof, windows, a windshield, a hood and a driver’s seat. Below that the sides, grill and wheels of the car were constructed of a soft foam. The stunt man for Tarsia was rigged with wires so they could control his drag beneath the car.

In this way, we were able to get the broad strokes of the stunt in-camera. Though the car needed to be almost completely replaced with CG, its structure gave us a starting point for the environmental re-lighting the scene needed. The impact moment was a particular challenge because, of course, the foam grill completely gave way to Tarsia’s body. We had to simulate the cracking of the bumper and the stamp of the blood from Tarsia’s wounds. We also had to reimagine how Tarsia’s body would have moved with this rigid impact.

Tarsia’s death

For Tarsia himself, in addition to augmenting the chosen take, we used alt takes from the shoot for various parts of the body to recreate a Tarsia with more appropriate physical reactions to the trauma we were simulating. There was also a considerable amount of hand-painted animation to help it all mesh together. We added blood on the wheels, smoke and animated pieces of the broken bumper, all of which helped to ground Tarsia in the space.

You also made the characters look younger. Can you talk about what tools you used for this particular effect?
Wahlrab: Our goal was to support this jump in time, but not distract by going too far. Early on, we did tests where we really studied the face of each actor. From this research, we determined targeted areas for augmentation, and the approach really ended up being quite tailored for each character.

We broke down the individual regions of the face. First, we targeted wrinkles with tailored defocusing. Second, we reshaped recessed portions of the face, mostly with selective grading. In some cases, we retextured the skin on top of this work. At the end of all of this, we had to reintegrate the work into the grainy 16mm footage.

Can you talk about all the tools you used?
Griffin: At Phosphene, we use Foundry Nuke Studio and Autodesk 3ds Max. For additional support, we rely on Mocha Pro, 3DEqualizer and PFTrack, among many others.


Before and After: Digital snow.

Any other VFX sequences that you can talk about?
Wahlrab: As with any project, weather continuity was a challenge. Our prison was represented by five locations, but it took many more than that to fill out the lives of Tilly and Lyle beyond their workplace. Because we shot a few scenes early on with snow, we were locked into that reality in every single location going forward. The special FX team would give us practical snow in the areas with the most interaction, and we were charged with filling out much of the middle and background. For the most part, we relied on photography, building custom digital matte paintings for each shot. We spent a lot of time upstate in the winter, so I found myself pulling off the road in random places in search of different kinds of snow coverage. It became an obsession, figuring out the best way to shoot the same patch of snow from enough angles to cover my needs for different shots, at different times of day, not entirely knowing where we’d need to use it.

What were the most challenging shots?
Wahlrab: Probably the most challenging location to shoot was the North Yard within the prison. Clinton Correctional is a real prison in Dannemora, New York. It’s about 20 miles south of the Canadian border, set into the side of this hill in what is really a beautiful part of the country. This was the inmates’ outdoor space, divided into terraces overlooking the whole town of Dannemora and the valley beyond. Though the production value of shooting in an active prison was amazing, it also presented quite a few logistical challenges. For safety (ours as well as the prisoners’), the gear allowed in was quite restricted. Many of the tools I rely on had to be left behind. Then, load-in required a military-grade inspection by the COs, who examined every piece of our equipment before it could enter or exit. The crew was afforded no special privileges for entering the prison, and we were shuffled through the standard intake. It was time-consuming and very much limited how long we’d be able to shoot that day once inside.


Before and After: Cooking fires in the upper courts.

Production did the math and balanced the crew and cast load-in with the coverage required. We had 150 background extras for the yard, but in reality, the average number of inmates, even on the coldest of days, was 300. Also, we needed the yard to have snow on the ground for continuity. Unfortunately, it was an unseasonably warm day, and after the first few hours, the special effects snow that was painstakingly created and placed during the night had completely melted. Special effects was also charged with creating cook fires for the stoves in each court, but they could only bring in so much fuel. Our challenge was clear: fill out the background inmate population, add snow and add cook-fire smoke… everywhere.

The biggest challenge in this location was the shot Ben conceived that would reveal the enormity of the North Yard. It was this massive crane shot that began at the lowest part of the yard and panned to the upper courts, slowly pulling out and craning up to reveal the entire outdoor space. It’s really a beautiful way to introduce us to the North Yard, revealing one terraced level at a time until you have the whole space in view. It’s one of my favorite moments in the show.

Some shots outside the prison involved set extensions.

There’s this subtext about the North Yard and its influence on Sweat and Matt. Out in the yard, the inmates have a bit more autonomy. With good behavior, they have some ownership over the courts and are given the opportunity to curate these spaces. Some garden, many cook meals, and our characters draw and paint. Those lucky enough to be in the upper courts have this beautiful view beyond the walls of the prison, and you can almost forget you are locked up.

I think we’re meant to wonder, was it this autonomy or this daily reminder of the outside world beyond the prison walls that fueled their intense devotion to the escape? This location is a huge story piece, and I don’t think it would have been possible to truly render the scale of it all without the support of visual effects.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years.