Category Archives: VFX

VFX house Jamm adds Flame artist Mark Holden

Santa Monica-based visual effects boutique Jamm has added veteran Flame artist Mark Holden to its roster. Holden comes to Jamm with over 20 years of experience in post production, including stints in London and Los Angeles.

It didn’t take long for Holden to dive right in at Jamm; he worked on Space 150’s Buffalo Wild Wings Super Bowl campaign directed by the Snorri Bros. and starring Brett Favre. The Super Bowl teaser kicked off the pre-game.

Holden is known not only for his visual effects talent, but also for turning projects around under tight deadlines and offering his clients as many solutions as possible within the post process. This has earned him work with leading agencies such as Fallon, Mother, Saatchi & Saatchi, Leo Burnett, 180, TBWA/Chiat/Day, Goodby Silverstein & Partners, Deutsch, David & Goliath, and Team One. He has worked with brands including Lexus, Activision, Adidas, Chevy, Geico, Grammys, Kia, Lyft, Pepsi, Southwest Airlines, StubHub, McDonald’s, Kellogg’s, Stella Artois, Silk, Heineken and Olay.

 

A Conversation: Jungle Book’s Oscar-nominated Rob Legato

By Randi Altman

Rob Legato’s resume includes some titles that might be considered among the best visual effects films of all time: Titanic, Avatar, Hugo, Harry Potter and the Sorcerer’s Stone, Apollo 13 and, most recently, The Jungle Book. He has two Oscars to his credit (Titanic, Hugo) along with two other nominations (Apollo 13 and this year for The Jungle Book). And while Martin Scorsese’s The Wolf of Wall Street and The Aviator don’t scream effects, he worked on those as well.

While Legato might be one of the most prodigious visual effects supervisors of all time, he never intended for this to be his path. “The magic of movies, in general, was my fascination more than anything else,” he says, and that led to him studying cinematography and directing at Santa Barbara’s Brooks Institute, which provided intensive courses on the intricacies of working with cameras and film.

Rob Legato worked closely with Technicolor and MPC to realize Jon Favreau’s vision for The Jungle Book, which is nominated for a VFX Oscar this year.

It was this technical knowledge that came in handy at his first job, working as a producer at a commercials house. “I knew that bizarre, esoteric end of the business, and that became known among my colleagues.” So when a spot came in that had a visual effect in it, Legato stepped up. “No one knew how to do it, and this was before on-set visual effects supervisors worked on commercials. I grabbed the camera and I figured out a way of doing it.”

After working on commercials, Legato transitioned to longer-form work, specifically television. He started on the second season of The Twilight Zone series, where he got the opportunity to shoot some footage. He was hoping to direct an episode, but the show got cancelled before he had a chance.

Legato then took his experience to Star Trek at a time when they were switching from opticals to a digital post workflow. “There were very few people then who had any kind of visual effects and live-action experience in television. I became second-unit director and ultimately directed a few shows. It was while working on Next Generation and Deep Space Nine that I learned how to mass produce visual effects on as big a scale as television allows, and that led me to Digital Domain.”

It was at Digital Domain where Legato transitioned to films, starting with Interview With the Vampire, on which he served as visual effects supervisor. “Director Neil Jordan asked me to do the second unit. I got along really well with DP Philippe Rousselot and was able to direct live-action scenes and personally direct and photograph anything that was not live-action related — including the Tom Cruise puppet that looked like he’s bleeding to death.” This led to Apollo 13, on which he was VFX supervisor.

On set for Hugo (L-R): Martin Scorsese, DP Bob Richardson and Rob Legato.

“I thought as a director did, and I thought as a cameraman, so I was able to answer my own questions. This made it easy to communicate with directors and cameramen, and that was my interest. I attacked everything from the perspective of, ‘If I were directing this scene, what would I do?’ It then became easy for me to work with directors who weren’t very fluent in the visual effects side. And because I shot second unit too, especially on Marty Scorsese’s movies, I could determine what the best way of getting that image was. I actually became quite a decent cameraman with all this practice emulating Bob Richardson’s extraordinary work, and I studied the masters (Marty and Bob) and learned how to emulate their work to blend into their sequences seamlessly. I was also able to maximize the smaller dollar amount I was given by designing both second unit direction and cinematography together to maximize my day.”

OK, let’s dig in a bit deeper with Legato, a card-carrying member of the ASC, and find out how he works with directors, what his workflow looks like and why he loves trying out and helping to create new technology in order to help tell the story.

Over the years you started to embrace virtual production. How has that technology evolved?
When I was working on Harry Potter, I had to previs a sequence for time purposes, and we used a computer. I would tell the CG animators where to put the camera and lights, but there was something missing — a lot of times you get inspired by what’s literally in front of you, which is ever-changing in realtime. We were able to click the mouse and move it where we needed, but it was still missing this other sense of life.

For example, when I did Aviator, I had to shoot the plane crash; something I’d never done before, and I was nervous. It was a Scorsese film, so it was a given that it was to be beautifully designed and photographed. I didn’t have a lot of money, and I didn’t want to blow my opportunity. On Harry Potter and Titanic we had a lot of resources, so we could fix a mistake pretty easily. Here, I had one crack at it, and it had to be a home run.

So I prevised it, but added realtime live-action pan and tilt wheels so we could operate and react in realtime — so instead of using a mouse, I was basically using what we use on a stage. It was a great way of working. I was doing the entire scene from one vantage point. I then re-staged it, put a different lens on it and shot the same exact scene from another angle. Then I could edit it as you would a real sequence, just as if I had all the same angles I would have if I had photographed it conventionally and produced a full set of multi-angle live-action dailies.

You edit as well?
I love editing. I would operate the shot and then cut it in the Avid, instantly. All of a sudden I was able to build a sequence that had a certain photographic and editorial personality to it — it felt like there was someone quite specific shooting it.

Is that what you did for Avatar?
Yes. Cameron loves to shoot, operate and edit. He has no fear of technology. I told him what I did on Aviator and that I couldn’t afford to add the more expensive, but extremely flexible, motion capture to it. So on Avatar instead of only the camera having live pan and tilt wheels, it could also be hand-held — you could do Steadicam shots, you could do running shots, you could do hand-held things, anything you wanted, including adding a motion capture live performance by an actor. You could easily stage them, or a representation of that character, in any place or scale in the scene, because in Avatar the characters were nine feet tall. You could preview the entire movie in a very free form and analog way. Jim loved the fact he could impart his personality — the way he moves the camera, the way he frames, the way he cuts — and that the CG-created film would bear the unmistakable stamp of his distinctive live-action movies.

You used the “Avatar-way” on Jungle Book, yes?
Yes. It wasn’t until Jungle Book that I could afford the Avatar-way — a full-on stage with a lot of people to man it. I was able to take what I gave to Jim on Avatar and do it myself with the bells and whistles and some improvements that gave it a life-like sensibility of what could have been an animated film. Instead it became a live film because we used a live-action analog methodology of acquiring images and choosing which one was the right, exact moment per the cut.

The idea behind virtual cinematography is that you shoot it like you would a regular movie. All the editors, cameramen or directors who’ve never done this before are now sort of operating the way they would have if it were real. This very flavor and personality starts to rub off on the patina of the film, and it begins to feel like a real movie, not an animated or computer-generated one.

Our philosophy on Jungle Book was we would not make the computer camera do anything that a real camera could not do, so we limited the way we could move it and how fast we could move it, so it wouldn’t defy any kind of gravity. That went part and parcel with the animation and movement of the animals and the actor performing stunts that only a human can accomplish.

So you are in a sense limiting what you can do with the technology?
There was an operator behind the camera and behind the wheels, massaging and creating the various compositional choices that generally are not made in a computer. They’re not just setting keyframes, and because somebody’s behind the camera, this sense of live-action-derived movement is consistent from shot to shot to shot. It’s one person doing it, whereas normally on a CG film, there are as many as 50 people who are placing cameras on different characters within the same scene.

You have to come up with these analog methodologies that are all tied together without even really knowing it. Your choices at the end of the day end up being strictly artistic choices. We’d sort of tap into that for Jungle Book and it’s what Jim tapped into when he did Avatar. The only difference between Avatar and our film is that we set our film in an instantly recognizable place so everybody can judge whether it’s photorealistic or not.

When you start a film, do you create your own system or use something off the shelf?
With every film there is a technology advance. I typically take whatever is off-the-shelf and glue it together with something not necessarily designed to work in unison. Each year you perfect it. The only way to really keep on top of technology is by being on the forefront of it, as opposed to waiting for it to come out. Usually, we’re doing things that haven’t been done before, and invariably it causes something new and innovative.

We’re totally revamping what we did on Jungle Book to achieve the same end on my next film for Disney, but we hope to make it that much better, faster and more intuitive. We are also taking advantage of VR tools to make our job easier, more creative and faster. The faster you can create options, the more iterations you get. More iterations get you a better product sooner and help you elevate the art form by taking it to the next level.

Technology is always driven by the story. We ask ourselves what we want to achieve. What kind of shot do we want to create that creates a mood and a tone? Then once we decide what that is, we figure out what technology we need to invent, or coerce into being, to actually produce it. It’s always driven that way. For example, on Titanic, the only way I could tell that story and make these magic transitions from the Titanic to the wreck and from the wreck back to the Titanic, was by controlling the water, which was impossible. We needed to make computer-generated water that looked realistic, so we did.

THE JUNGLE BOOK (Pictured) BAGHEERA and MOWGLI. ©2016 Disney Enterprises, Inc. All Rights Reserved.

CG water was a big problem back then.
But now that’s very commonplace. The water work in Jungle Book is extraordinary compared to the crudeness of what we did on Titanic, but we started on that path, and then over the years other people took over and developed it further.

Getting back to Marty Scorsese, and how you work with him. How does having his complete trust make you better at what you do?
Marty is not as interested in the technical side as Jim is. Jim loves all this stuff, and he likes to tinker and invent. Marty’s not like that. Marty likes to tinker with emotions and explore a performance editorially. His relationship with me is, “I’m not going to micro-manage you. I’m going to tell you what feeling I want to get.” It’s very much like how he would talk to an actor about what a particular scene is about. You then start using your own creativity to come up with the idea he wants, and you call on your own experience and interpretation to realize it. You are totally engaged, and the more engaged you are, the more creative you become in terms of what the director wants to tell his story. Tell me what you want, or even don’t want, and then I’ll fill in the blanks for you.

Marty is an incredible cinema master — it’s not just the performance, it’s not just the camera, it’s not just the edit, it’s all those things working in concert to create something new. His encouragement for somebody like me is to do the same and then only show him something that’s working. He can then put his own creative stamp on it as well once he sees the possibilities properly presented. If it’s good, he’s going to use it. If it’s not good, he’ll tell you why, but he won’t tell you how to fix it. He’ll tell you why it doesn’t feel right for the scene or what would make it more eloquent. It’s a very soft, artistic push in his direction of the film. I love working with him for this very reason.

You too surround yourself with people you can trust. Can you talk about this for just a second?
I learned early on to surround myself with geniuses. You can’t be afraid of hiring people that are smarter than you are because they bring more to the party. I want to be the lowest common denominator, not the highest. I’ll start with my idea, but if someone else can do it better, I want it to be better. I can show them what I did and tell them to make it better, and they’ll go off and come up with something that maybe I wouldn’t have thought of, or the collaboration between you and them creates a new gem.

When I was doing Titanic someone asked me how I did what I did. My answer was that I hired geniuses and told them what I wanted to accomplish creatively. I hire the best I can find, the smartest, and I listen. Sometimes I use it, sometimes I don’t. Sometimes the mistake of somebody literally misunderstanding what you meant delivers something that you never thought of. It’s like, “Wow, you completely misunderstood what I said, but I like that better, so we’re going to do that.”

Part and parcel of doing this is that you’re a little fearless. It’s like, “Well, that sounds good. There’s no proof to it, but we’re going to go for it,” as opposed to saying, “Well, no one has done it before so we better not try it.” That’s what I learned from Cameron and Marty and Bob Zemeckis. They’re fearless.

Can you mention what you’re working on now, or no?
I’m working on Lion King.


Craig Zerouni joins Deluxe VFX as head of technology

Deluxe has named Craig Zerouni as head of technology for Deluxe Visual Effects. In this role, he will focus on continuing to unify software development and systems architecture across Deluxe’s Method studios in Los Angeles, Vancouver, New York and India, and its Iloura studios in Sydney and Melbourne, as well as LA’s Deluxe VR.

Based in LA and reporting to president/GM of Deluxe VFX and VR Ed Ulbrich, Zerouni will lead VFX and VR R&D and software development teams and systems worldwide, working closely with technology teams across Deluxe’s Creative division.

Zerouni has been working in media technology and production for nearly three decades, joining Deluxe most recently from DreamWorks, where he was director of technology at its Bangalore, India-based facility, overseeing all technology. Prior to that he spent nine years at Digital Domain, where he was first head of R&D, responsible for software strategy and teams in five locations across three countries, then senior director of technology, overseeing software, systems, production technology, technical directors and media systems. He has also directed engineering, products and teams at software/tech companies Silicon Grail, Side Effects Software and Critical Path. In addition, he was co-founder of London-based computer animation company CFX.

Zerouni’s work has contributed to features including Tron: Legacy, Iron Man 3, Maleficent, X-Men: Days of Future Past, Ender’s Game and more than 400 commercials and TV IDs and titles. He is a member of BAFTA, ACM/SIGGRAPH, IEEE and the VES. He has served on the AMPAS Digital Imaging Technology Subcommittee and is the author of the technical reference book “Houdini on the Spot.”

Says Ulbrich on the new hire: “Our VFX work serves both the features world, which is increasingly global, and the advertising community, which is increasingly local. Behind the curtain at Method, Iloura, and Deluxe, in general, we have been working to integrate our studios to give clients the ability to tap into integrated global capacity, technology and talent anywhere in the world, while offering a high-quality local experience. Craig’s experience leading global technology organizations and distributed development teams, and building and integrating pipelines is right in line with our focus.”


The A-List: Lego Batman Movie director Chris McKay

By Iain Blair

Three years ago, The Lego Movie became an “everything is awesome” monster hit that cleverly avoided the pitfalls of feeling like a corporate branding exercise thanks to the deft touch and tonal dexterity of the director/writer/animator/producer team of Phil Lord and Christopher Miller.

Now busy working on a Han Solo spinoff movie, they handed over the directing reins on the follow-up, The Lego Batman Movie, to Chris McKay, who served as animation director and editor on the first one. And he hit the ground running on this one, which seriously — and hilariously — tweaks Batman’s image.

Chris McKay

This time out, Batman stars in his own big-screen adventure, but there are big changes brewing in Gotham City. If he wants to save the city from The Joker’s hostile takeover, Batman may have to drop the lone vigilante thing, try to work with others and maybe, just maybe, learn to lighten up (somber introspection only goes so far when you’re a handsome billionaire with great cars and gadgets, who gets to punch people in the face with no repercussions).

Will Arnett voices Batman, Zach Galifianakis is The Joker, Michael Cera is orphan Dick Grayson, Rosario Dawson is Barbara Gordon, and Ralph Fiennes voices Alfred.

Behind the scenes, production designer Grant Freckelton and editor David Burrows also return from The Lego Movie, joined by editors Matt Villa and John Venzon. Lorne Balfe was composer, and feature animation was, again, by Animal Logic. The Warner Bros. film was released in 3D, 2D and IMAX.

I recently talked to McKay about making the film and how the whole process was basically all about the post.

The Lego Movie made nearly half a billion dollars and was a huge critical success as well. Any pressure there?
(Laughs) A lot, because of all that success, and asking, “How do we top it?” Then it’s Batman, with all his fans, and DC is very particular as he’s one of their crown jewels. But at the same time, the studio and DC were great partners and understood all the challenges.

So how did you pitch the whole idea?
As Jerry Maguire, directed by Michael Mann, with a ton of jokes in it. They got on board with that and saw what I was doing with the animatic, as well as the love I have for Batman and this world.

Once you were green-lit, you began on post, right?
Exactly right, because post is everything in animation. The whole thing is post. You start in post and end in post. When we pitched this, we didn’t even have a script, just a three- to four-page treatment. They liked the idea and said, “OK, let’s do it.” So we needed to write a script, and get the storyboard and editorial teams to work immediately, because there was no way we could get it finished in time if we didn’t.

It was originally scheduled to come out in May — almost three years from the time we pitched it, but then they moved the release date up to February, so it got even crazier. So we began getting all the key people involved, like [editor/writer] Dave Burrows at Animal Logic, who cut the first one with me, and developing the opening set piece.

You got an amazing cast, including Will Arnett as Batman again, and such unlikely participants as Mariah Carey, Michael Cera, Ralph Fiennes and Apple’s Siri. How tough was that?
We were very lucky because everyone was a fan, and when they saw that the first one wasn’t just a 90-minute toy commercial, they really wanted to be in it. Mariah was so charming and funny, and apart from her great singing voice, she has a really great speaking voice — and she was great at improv and very playful. Ralph has done some comedy, but I wasn’t sure he’d want to do something like this, but he got it immediately, and his voice was perfect. Michael Cera doesn’t do this kind of thing at all. Like Ralph, he’s an artist who usually does smaller movies and more personal stuff, and people told us, “You’re not going to get Ralph or Cera,” but Will reached out to Cera (they worked together on Arrested Development) and he came on.

As for Siri, it was a joke we tried to make work in the first movie but couldn’t, so we went back to it, and it turned into a great partnership with Apple. So that was a lot of fun for me, playing around with pop culture in that way, as the whole computer thing is part of Batman’s world anyway.

Phil Lord and Chris Miller have been very busy directing the upcoming, untitled Han Solo Star Wars movie, but as co-producers on this weren’t they still quite involved?
Very. I’d ask them for advice all the time and they would give notes since I was running a lot of stuff past them. They ended up writing several of my favorite lines in this; they gave me so much of their time, pitched jokes and let me do stuff with the animation I wanted to do. They’re very generous.

Sydney-based Animal Logic, the digital design, animation and effects company whose credits include Moulin Rouge!, Happy Feet and Walking With Dinosaurs, did all the animation again. What was involved?
As I wanted to use Burrows, that would require us having an editorial team down there, and the studio wasn’t crazy about that. But he’s a fantastic editor and storyteller, and I also wanted to work with Grant Freckelton, who was the production designer on the first one, as well as lighting supervisor Craig Welch — all these team members at Animal Logic who were so good. In the end, we had over 400 people working on this for two and a half years — six months faster than the first one.

So Animal Logic began on it on day one, and I didn’t wait for a script. It was just me, Dave and the storyboard teams in LA and Sydney, and Grant’s design team. I showed them the treatment and said, “Here’s the scenes I want to do,” and we began with paintings and storyboards. The first act in animatic form and the script both landed at the same time in November 2014, and then we pitched where the rest of the movie would go and what changes we would make. So it kept going in tandem like that. There was no traditional screenwriting process. We’d just bring writers in and adjust as we went. So we literally built the screenplay in post — and we could do that because animation is like filmmaking in slow motion, and we had great storytellers in post, like Burrows.

You also used two other editors — Matt Villa and John Venzon. How did that work?
Matt’s very accomplished. He’s cut three of Baz Luhrmann’s films — The Great Gatsby, Moulin Rouge! and Australia — and he cut Russell Crowe’s The Water Diviner as well as the animated features Happy Feet Two and Legend of the Guardians: The Owls of Ga’Hoole, so he came in to help. We also brought in other writers, and we would all be doing the voices. I was Batman and Matt would do the side characters. We literally built it as we went, with some storyboard artists from the first film, plus others we gathered along the way. The edit was crucial because of the crazy deadline.

Last summer we added John, who has also cut animated features, including Storks, Flushed Away, Shark Tale and South Park: Bigger, Longer and Uncut, because we needed to move some editorial to LA last July for five months, and he helped out with all the finishing. It was a 24/7 effort by that time, a labor of love.

Let’s talk about the VFX. Fair to say the whole film’s one big VFX sequence?
You’re right. Every single frame is a VFX shot. It’s mind blowing! You’re constantly working on it at the same time you’re writing and editing and so on, and it takes a big team of very focused animators and producers to do it.

What about the sound and music? Composer Lorne Balfe did the scores for Michael Bay’s 13 Hours: The Secret Soldiers of Benghazi, the animated features Penguins of Madagascar and Home, as well as Terminator Genisys. How important was the score?
It was crucial. He actually worked on the Dark Knight movies, so I knew he could do all the operatic, serious stuff as well as boy’s adventure stuff for Robin, and he was a big part of making it sound like a real Batman movie. We recorded the score in Sydney and Vienna, and did the mix on the lot at Warners with a great team that included effects mixer Gregg Landaker and sound designer Wayne Pashley from Big Bang Sound in Sydney.

Did the film turn out the way you hoped?
I wish we had those extra two months, but it’s the movie I wanted to make — it’s good for kids and adults, and it’s a big, fun Batman movie that looks at him in a way that the other Batman movies can’t.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.


Ingenuity Studios helps VFX-heavy spot get NASCAR-ready

Hollywood-based VFX house Ingenuity Studios recently worked on a 60-second Super Bowl spot for agency Pereira & O’Dell promoting Fox Sports’ coverage of the Daytona 500, which takes place on February 26. The ad, directed by Joseph Kahn, features people from all over the country gearing up to watch the Daytona 500, including footage from NASCAR races, drivers and, for some reason, actor James Van Der Beek.

The Ingenuity team had only two weeks to turn around this VFX-heavy spot, called Daytona Day. Some CG elements include a giant robot, race cars and crowds. While they were working on the effects, Fox was shooting footage in Charlotte, North Carolina and Los Angeles.

“When we were initially approached about this project we knew the turnaround would be a challenge,” explains creative director/VFX supervisor Grant Miller. “Editorial wasn’t fully locked until Thursday before the big game! With such a tight deadline, preparing as much as we could in advance was key.”

Portions of the shoot took place at the Daytona Speedway, and since it was an off day the stadium and infield were empty. “In preparation, our CG team built the entire Daytona stadium while we were still shooting, complete with cheering CG crowds, RVs filling the interior, pit crews, etc.,” says Miller. “This meant that once shots were locked we simply needed to track the camera, adjust the lighting and render all the stadium passes for each shot.”

Additional shooting took place at the Charlotte Motor Speedway, Downtown Los Angeles and Pasadena, California.

In addition to prepping CG for set extensions, Ingenuity also got a head start on the giant robot that shows up halfway through the commercial.  “Once the storyboards were approved and we were clear on the level of detail required, we took our ‘concept bot’ out of ZBrush, retopologized and unwrapped it, then proceeded to do surfacing and materials in Substance Painter. While we had some additional detailing to do, we were able to get the textures 80 percent completed by applying a variety of procedural materials to the mesh, saving a ton of manual painting.”

Other effects work included over 40 CG NASCAR vehicles to fill the track, additional cars for the traffic jam and lots of greenscreen and roto work to get the scenes shot in Charlotte into Daytona. There was also a fair bit of invisible work that included cleaning up sets, removing rain, painting out logos, etc.

Other tools used include Autodesk’s Maya, The Foundry’s Nuke and Boris FX’s Mocha.


Review: Nvidia’s new Pascal-based Quadro cards

By Mike McCarthy

Nvidia has announced a number of new professional graphics cards, filling out its entire Quadro line-up with models based on its newest Pascal architecture. At the absolute top end is the new Quadro GP100, a PCIe card implementation of the company’s supercomputer chip. It has similar 32-bit (graphics) processing power to the existing Quadro P6000, but adds 16-bit (AI) and 64-bit (simulation) compute capability. It is intended to combine compute and visualization capabilities into a single solution. It has 16GB of new HBM2 (High Bandwidth Memory), and two cards can be paired together with NVLink at 80GB/sec to share a total of 32GB between them.

This powerhouse is followed by the existing P6000 and P5000 announced last July. The next addition to the line-up is the single-slot VR-ready Quadro P4000. With 1,792 CUDA cores running at 1200MHz, it should outperform a previous-generation M5000 for less than half the price. It is similar to its predecessor the M4000 in having 8GB RAM, four DisplayPort connectors, and running on a single six-pin power connector. The new P2000 follows next with 1024 cores at 1076MHz and 5GB of RAM, giving it similar performance to the K5000, which is nothing to scoff at. The P1000, P600 and P400 are all low-profile cards with Mini-DisplayPort connectors.

All of these cards run on PCIe Gen3 x16 and use DisplayPort 1.4, which adds support for HDR and DSC. They all support 4Kp60 output, with the higher-end cards allowing 5K and 4Kp120 displays. On the high-resolution front, Nvidia continues to push forward, allowing up to 32 synchronized displays to be connected to a single system, provided you have enough slots for eight Quadro P4000 cards and two Quadro Sync II boards.

Nvidia also announced a number of Pascal-based mobile Quadro GPUs last month, with the mobile P4000 having roughly comparable specifications to the desktop version. But you can read the paper specs for the new cards elsewhere on the Internet. More importantly, I have had the opportunity to test out some of these new cards over the last few weeks, to get a feel for how they operate in the real world.

DisplayPorts

Testing
I was able to run tests and benchmarks with the P6000, P4000 and P2000 against my current M6000 for comparison. All of these tests were done on a top-end Dell 7910 workstation, with a variety of display outputs, primarily using Adobe Premiere Pro, since I am a video editor after all.

I ran a full battery of benchmark tests on each of the cards using Premiere Pro 2017. I measured both playback performance and encoding speed, monitoring CPU and GPU use, as well as power usage throughout the tests. I had HD, 4K, and 6K source assets to pull from, and tested monitoring with an HD projector, a 4K LCD and a 6K array of TVs. I had assets that were RAW R3D files, compressed MOVs and DPX sequences. I wanted to see how each of the cards would perform at various levels of production quality and measure the differences between them to help editors and visual artists determine which option would best meet the needs of their individual workflow.
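As an aside, a minimal sketch of this kind of monitoring might look like the following. It is not the exact setup used for these tests; it simply polls Nvidia’s nvidia-smi command-line tool, and the sampling interval, duration and output filename are arbitrary choices for illustration.

```python
# Minimal GPU monitoring sketch (illustrative only, not the reviewer's setup).
# Polls nvidia-smi once per second and writes utilization and power draw to a
# CSV file so the samples can be lined up against a playback or encode test.
import csv
import subprocess
import time

def sample_gpu():
    """Return (utilization %, power draw W) for the first GPU via nvidia-smi."""
    out = subprocess.check_output(
        [
            "nvidia-smi",
            "--query-gpu=utilization.gpu,power.draw",
            "--format=csv,noheader,nounits",
        ],
        text=True,
    )
    util, power = out.strip().splitlines()[0].split(",")
    return float(util), float(power)

def log_samples(path="gpu_log.csv", duration_s=300, interval_s=1.0):
    """Log samples for duration_s seconds while the benchmark runs elsewhere."""
    start = time.time()
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["elapsed_s", "gpu_util_pct", "power_w"])
        while time.time() - start < duration_s:
            util, power = sample_gpu()
            writer.writerow([round(time.time() - start, 1), util, power])
            time.sleep(interval_s)

if __name__ == "__main__":
    log_samples()
```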

I started with the intuitive expectation that the P2000 would be sufficient for most HD work, but that a P4000 would be required to effectively handle 4K. I also assumed that a top-end card would be required to playback 6K files and split the image between my three Barco Escape formatted displays. And I was totally wrong.

Except when using the higher-end options within Premiere’s Lumetri-based color corrector, all of the cards were fully capable of every editing task I threw at them. To be fair, the P6000 usually renders out files about 30 percent faster than the P2000, but that is a minimal difference compared to the difference in cost. Even the P2000 was able to play back my uncompressed 6K assets onto my array of Barco Escape displays without issue. It was only when I started making heavy color changes in Lumetri that I began to observe any performance differences at all.

Lumetri

Color correction is an inherently parallel, graphics-related computing task, so this is where GPU processing really shines. Premiere’s Lumetri color tools are based on SpeedGrade’s original CUDA processing engine, and they can really harness the power of the higher-end cards. The P2000 can make basic corrections to 6K footage, but it is possible to max out the P6000 with HD footage if I adjust enough different parameters. Fortunately, most people aren’t looking for footage more stylized than 300 had, so in this case my original assumptions seem to be accurate. The P2000 can handle reasonable corrections to HD footage, the P4000 is probably a good choice for VR and 4K footage, while the P6000 is the right tool for the job if you plan to do a lot of heavy color tweaking or are working at massive frame sizes.

The other way I expected to be able to measure a difference between the cards was in playback while rendering in Adobe Media Encoder. By default, Media Encoder pauses exports during timeline playback, but this behavior can be disabled by reopening Premiere after queuing your encode. Even with careful planning to avoid reading from the same disks the encoder was accessing, I was unable to get significantly better playback performance from the P6000 compared to the P2000. This says more about the software than it does about the cards.

P6000

The largest difference I was able to consistently measure across the board was power usage, with each card averaging about 30 watts more as I stepped up from the P2000 to the P4000 to the P6000. But they are all far more efficient than the previous M6000, which frequently sucked up an extra 100 watts in the same tests. While “watts” may not be a benchmark most editors worry too much about, among other things it does equate to money for electricity. Lower wattage also means less cooling is needed, which results in quieter systems that can be kept closer to the editor without distracting from the creative process or interfering with audio editing. It also allows these new cards to be installed in smaller systems with smaller power supplies, using fewer power connectors. My HP Z420 workstation only has one six-pin PCIe power plug, so the P4000 is the ideal GPU solution for that system.
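To put that wattage difference in rough dollar terms, here is a back-of-the-envelope calculation; the usage hours and electricity rate are illustrative assumptions, not figures from these tests.

```python
# Back-of-the-envelope electricity cost of extra GPU power draw.
# The hours and rate below are illustrative assumptions, not measured values.
extra_watts = 100          # roughly the extra draw of the older M6000 in these tests
hours_per_year = 40 * 52   # a single editor working 40-hour weeks
rate_per_kwh = 0.15        # assumed electricity rate in $/kWh

extra_kwh = extra_watts / 1000 * hours_per_year
print(f"Extra energy: {extra_kwh:.0f} kWh/year")              # -> 208 kWh/year
print(f"Extra cost:   ${extra_kwh * rate_per_kwh:.0f}/year")  # -> $31/year
```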

Summing Up
It appears that we have once again reached a point where hardware processing capabilities have surpassed the software’s capacity to use them, at least within Premiere Pro. This leads to the cards performing relatively similarly to one another in most of my tests, but true 3D applications might reveal much greater differences in their performance. Further optimization of the CUDA implementation in Premiere Pro might also lead to better use of these higher-end GPUs in the future.


Mike McCarthy is an online editor and workflow consultant with 10 years of experience on feature films and commercials. He has been on the forefront of pioneering new solutions for tapeless workflows, DSLR filmmaking and now multiscreen and surround video experiences. If you want to see more specific details about performance numbers and benchmark tests for these Nvidia cards, check out techwithmikefirst.com.


New cineSync offerings include iOS app, updated security features

Cospective has released the latest version of its remote review and approval solution — cineSync 4.0 introduces a new iOS app, an overhauled video playback system and deeper production tracking integrations with Shotgun and ftrack. Enhanced security features are being offered via its new cineSync Pro Studio product, including on-demand watermarking via a new integration with MediaSilo’s SafeStream.

Shotgun viewer

cineSync 4.0 has been developed in conjunction with the security departments of several major studios. This has resulted in the creation of cineSync Pro Studio, in addition to cineSync and cineSync Pro. cineSync Pro Studio’s integration with MediaSilo’s SafeStream watermarking technology allows for automated, on-demand, individual watermarking of all review files. All guests in the review will receive customizable files watermarked with their name, cineSync session key, IP address and the review time/date. The process is fast and efficient, thanks to SafeStream’s scalable architecture.

For additional review security, guests are authenticated in advance — only guests who have been approved will have review access. This allows maximum control over who has access to review material, while imposing as few technical hurdles as possible. All reviews are also tracked in the management portal, allowing admins to see when reviews occurred and who was involved.

“We worked closely with the world’s biggest production studios on cineSync Pro Studio. Security was a key concern,” explains Cospective CEO Rory McGregor. “By working in close collaboration with MediaSilo and their SafeStream technology, we’ve met these concerns and more, delivering a tool that’s not only better positioned to deliver efficient, streamlined reviews, but also to do so with the highest possible level of security and in a completely dependable environment.”

iPhone playlist interface

Also new from Cospective is a cineSync app for iOS. It allows guests to join cineSync reviews from mobile devices. The app integrates seamlessly with Shotgun and ftrack, meaning review information and media can be pushed securely to mobile devices. Files are automatically deleted at the end of the review, but all drawings and saved frames can be saved back to Shotgun or ftrack by the session host.

Let’s dig into cineSync 4.0 itself, which brings what the company is calling an overhaul of its video playback system. QuickTime has been retired and replaced by a new, adaptable video architecture. This means that cineSync 4.0 can support a wide array of video formats, resolutions and frame rates across all platforms.

cineSync 4.0 features a deeper integration with production tracking tools Shotgun, ftrack and NIM. Users can browse and load media playlists directly from these applications, and access seamless transfer and recording of review information, saved frames and other feedback. cineSync 4.0 is available now.

Cospective will continue to roll out new features for cineSync (which, by the way, won an Oscar), cineSync Pro and cineSync Pro Studio over the coming months. Users with a valid subscription to an existing cineSync package will be eligible to upgrade to the latest version.

Pricing increases with the number of features each tool includes. All cineSync products have security measures, but Pro Studio is especially thorough as the only tool with watermarking and guest authentication.

Here are the baseline costs for each product: cineSync, 12 months for 10 users is $1,599; cineSync Pro, 12 months for 10 users is $4,999; and cineSync Pro Studio, 12 months for 10 users is $8,000.


Quick Chat: Freefolk US executive producer Celia Williams

By Randi Altman

A few months back, UK-based post house Finish purchased VFX studio Realise and renamed the company Freefolk. They also expanded into the US with a New York City-based studio. Industry vet Celia Williams, who was most recently head of production at agency Arnold NY, is heading up Freefolk US. To find out more about the recently rebranded entity, we reached out to Williams.

Can you describe Freefolk? What kind of services do you offer?
Freefolk is a team of creative artists, technicians and problem solvers who use post production as their tool box. We offer services including high-end FilmLight Baselight color grading, remote grading, 2D and 3D visual effects, final conform, shoot supervision, animation, data management and direction of special projects. We work across the mediums of advertising, film, TV and digital content.

L-R: Celia Williams, Paul Harrison and Jason Watts.

What spurred on Freefolk’s expansion to the US?
Having carved out a reputation in London over the last 13 years as a commercials post house, the expansion to the US seemed like a natural progression for the founders, allowing them to export a boutique service and high-quality work rather than becoming another large machine in London.

Will you be offering the same services in both locations?
The services we offer in London will all be represented in New York. Color grading plays such an important role in the process these days, so we are spearheading with a Baselight suite driven by Paul Harrison and a 2D VFX department being set up by Jason Watts.

Will you share staff between New York and the UK?
Yes, there will be a sharing of resources and, obviously, experience across the offices. A great thing about opening in New York is being able to offer our staff the experience of working in a foreign city. It also gives clients who are increasingly working across multiple markets a seamless global service.

Why the rebrand from Finish to Freefolk?
The rebrand from Finish to Freefolk came about as part of the expansion into the US and the acquisition of Realise. It was also a timely opportunity to express one of the core values of the company, and the way it values its staff and clients — Freefolk is about the people involved in the process.

What does the acquisition of Realise mean to the company?
Realise has brought a wealth of experience and talent to the table. They combine creative skill and technical understanding in equal measure. They are known in both commercials and now film and TV for offering very specialized capabilities with Side Effects Houdini and customized software.

We have just completed VFX work on 400 shots over 10 episodes of NBC’s Emerald City TV series (due to be released early 2017) and have just embarked on our next long-form project. It’s really exciting to be expanding into other mediums such as TV, film, installation work, projection mapping and other experimental and experiential arenas.

You have an ad agency background. From your own experience how important is that to clients?
It’s extremely important and comforting, actually. Understanding what the producers and creatives are challenged with on a daily basis gives me the ability to offer workable solutions to their problems in a very collaborative way. They don’t have to wonder if I “get” where they’re coming from. Frankly, I do.

I think that it’s emotionally helpful as well. To know someone can be an understanding shoulder to lean on and is taking their concerns seriously is beyond important. Everyone is working at breakneck speed in our industry, which can lead to a lack of humanity in our interactions. One of the main reasons I was attracted to working with Freefolk is that they are deeply dedicated to keeping that humanity and personal touch in the way they do business.

The way that post companies service agencies has changed due to the way that products are now being marketed — online ads, social media, VR. Can you talk about that?
To be well informed and prepped as early on in the process as you can be is key. And to truly partner with the producers and creatives, as much as they need or want, is critical. What might work in one medium may be less impactful in another, so from the get-go, how do we plan to ensure all deliverables are strong, and to offer insights into new technology that might impact the outcome? It’s all about sharing and collaboration.

I may be one of the few people who’ve never really panicked about the different ways we deliver final work — our industry has always been about change, which is what keeps it interesting. At the end of the day, it’s always been about delivering content, in one form or another. So you need to know your final deliverables list and plan accordingly.


Fantastic Beasts VFX workflow employs previs and postvis

By Daniel Restuccio

Warner Bros’ Fantastic Beasts and Where to Find Them is considered by some a Harry Potter prequel and by others an entirely new J.K. Rowling franchise. Filled with nearly 1,500 VFX shots, this live-action, CG and character-driven movie put a huge emphasis on pre-pro and established the increased role of postvis in the film’s visual effects post pipeline.

For the film’s overall visual effects supervisors, Tim Burke and Christian Manz, it was a family reunion of sorts, reteaming with many of the companies and individuals that worked on the Harry Potter movies, including director David Yates and producers David Heyman, Steve Kloves, J.K. Rowling and Lionel Wigram.

According to Manz, one of the most significant aspects of this film was how visual effects were integrated into the story from the very beginning. The direction from Yates was very clear: “Make things fantastic, but not fantasy.” For every creature design presented, Yates would ask, “What would be the story behind that creature? What would that character do if the audience saw it from one moment to the next?” Says Manz, “It all had to work to support the story, but not be the story.”

Manz feels that this movie speaks to a new way of storytelling with VFX. “Visual effects is now a part of that filmmaking and storytelling team rather than being the guys who stick in everything afterwards.”

Starting in January 2015, while Burke was busy as VFX supervisor on The Legend of Tarzan, Manz worked with Framestore animation director Pablo Grillo, a Framestore art and animation team and a group of freelance concept and previs artists doing creature development and scene design. Over eight months of design sessions they created 18 main animal types based on hundreds of variations, and worked with the Framestore art department to conceive the turn-of-the-century New York City sets and set extensions.

“Of course, there were creatures we tested that didn’t make the film,” says Framestore animator Andras Ormos, “but it was about the process of whittling it down, and that was the real joy of working on this project. The creative input stretched beyond the post production stage, deciding what worked and what wouldn’t in the overall film.”

“J.K. Rowling’s wonderful script was filled with characters and creatures,” explains Grillo. “Having seen how animation is such a big part of the process in a film like this, we decided that it was important to be involved from the concept stage onwards.” The character development and scene work sessions were so impressive they actually influenced subsequent drafts of the script.

Burke came on full-time in June 2015, and they split the movie in half. Manz took the lead developing the world inside Newt’s magical case, and Burke did the “Obscurus” and the third act. Principal photography took place from August 2015 to January 2016, and they took turns on set supervising their own and each other’s VFX sequences.

With Framestore and Double Negative taking the lead, the shots were spread out among nine main VFX and three previs/postvis companies, including Cinesite, Image Engine, Method Studios, Milk Visual Effects, Moving Picture Company, Nvizible, Proof, Rodeo FX, Secret Lab, The Third Floor and others. Burke says they divided the work by “the strengths of the companies and without overlapping them too much.”

Framestore
Framestore took on the majority of the complex character animation: the Niffler, Gnarlack, the Erumpent and Picket Bowtruckle, as well as many goblins and elves. Grillo first tackled Niffler, described by Rowling as “a long-snouted, burrowing creature native to Britain with a penchant for anything shiny.” The creature design was a mash-up of a spiny anteater, platypus and mole and went through hundreds of iterations and many animated prototypes. Framestore used the skin and muscle rigging toolkit Flesh and Flex developed for Tarzan on Niffler’s magic “loot stuffing” pouch.

Framestore

The reason the audience is so delighted when this character first appears, explains Ormos, is that “this scene is driven by the relationship between Newt and the Niffler. There was a history we had to get across — the fact that the Niffler was notorious for escaping and pick-pocketing, and that Newt was going through the motions in trying to catch him. They understood each other and there were little looks, a language in their movement.”

Gnarlack, an American, cigar-chewing, snarky goblin, voiced and facial-mocapped by actor Ron Perlman, “is one of the best digital humanoids yet,” reports Grillo. Perlman donned a Vicon Cara 3D facial motion capture headset, surrounded by four high-resolution, high-speed witness cameras. According to Framestore VFX supervisor Andy Kind, Perlman also sat in front of 98 cameras for a facial action coding system (FACS) session so the team could sculpt the face directly in 3D.

“We created CG characters for the giants, elves and band ensemble,” says Kind. “Then we gave them crazy instruments, including a sousaphone/trumpet concoction.”

A 17-foot carbon fiber puppet, built by Handspring Puppet, substituted for the amorous rhinoceros Erumpent during the Central Park chase scene. It was switched out with the CG version later and dynamic simulations of shattering ice, explosive snow and water effects were added to the concluding shots. There’s this liquid, light-filled sack on the Erumpent’s forehead that Manz says, “made her slightly more unusual than a normal creature.”

“There was an awful lot of digital environment as well as the beast itself,” continues Manz. “David Yates fell in love with the postvis for this scene. It was great to be able to play with shots and offer up suggestions for the edit. It was a very organic way of filmmaking.”

Newt’s pocket-hiding creature sidekick, Picket Bowtruckle, took two months and 200 versions to get right. “We were told that Picket moved too slowly at first and that he appeared too old. We played with the speed but kept his movements graceful,” explains Manz. “He didn’t really have any facial animation, but he does blow a raspberry at one point. In the end, we added more shots to get Pickett’s story to go through, as everyone just loved him.”

MPC
The Moving Picture Company (MPC) completed more than 220 shots and created the Demiguise, Occamy and Billiwig, as well as 3D set extensions of period Manhattan.

For Demiguise’s long, flowing hair and invisibility effect, MPC used their Furtility groom technology. According to MPC VFX supervisor Ferran Domenech, using Furtility “allows for the hair to move naturally and interact with the creature’s arms, legs and the environment around it.” Demiguise was animated using enhanced mocap with keyframed facial expressions.

MPC

MPC built the large feathered dragon-snake Occamy in sections to fill the real and CG-extended attic. They used Furtility once more, this time to add feathers, and they augmented the code so that in the climactic fight scene they could scale the giant version of the creature down to mouse-size. MPC’s effects team then used its in-house Kali destruction technology to wreck the attic.

Finally, MPC worked on the Billiwig, a magical bug that can change its flight mode from dragonfly to propeller plane. “This little creature has lots of character and was great fun to bring to life,” reports Domenech.

Previs and Postvis
A major technical advance for Fantastic Beasts can be found in the workflow. It’s been 15 years since the first Harry Potter movie and five years since Deathly Hallows. Over that time Burke had designed a very efficient, streamlined, mostly film-based VFX workflow.

“In the past, we were always stuck at the point where when we shot the film, it was put into editorial, they cut it and then gave it back to us — quite often with big holes where creatures would exist or environments needed to be placed,” describes Burke. “Then we would have to involve the facilities to use their real power to push things through and start blocking out all of the characters. This took quite a bit of time and would always slow the process down, and time is really the key difference with everything we do these days.”

In the past, says Burke, he might wait two months to see an early block of an animated character, “which always then restricts what you can do at the back end or restricts the director’s ability to make creative changes.”

Thankfully this wasn’t the case with Fantastic Beasts. “In the middle of the shoot, Christian and I started supervising the postvis of the scenes we’d already shot,” he explains. They assembled a 50-artist in-house postvis team composed of members of The Third Floor, Proof and Framestore. While some of the movie was prevised, all of the movie was postvised.

“The path from previs to postvis varied from sequence to sequence,” explains Peter McDonald, previs/postvis supervisor for The Third Floor, London. “At one end of the scale, we had sequences that never really survived through shooting, while at the other end we had sequences that were followed shot-for-shot during the shoot and subsequent editorial process.”

Third Floor postvis

“As an example,” he continues, “the Demiguise and Occamy scene in the department store attic was heavily prevised. The final previs was a pretty polished and spectacular piece in its own right with some relatively sophisticated animation and a highly refined edit. This previs edit was taken onto the stage, with printouts of the shots being referenced as the shoot day progressed. What later came back our way for postvis was very similar to what had been designed in the previs, which was very satisfying from our point of view. It’s nice to know that previs can help drive a production at this level of fidelity!”

One of the benefits of this process was having a movie during editorial that had no “holes” where VFX shots were to later appear. The “postvis” was so good that it was used during audience screenings before the VFX shots were actually built and rendered.

“There were a couple of factors that elevated the postvis,” says McDonald. “Probably the primary one was integration between Framestore’s team with our team at The Third Floor London. Having them present and being supervised by Pablo Grillo guaranteed that the work we were putting together was being judged from almost a “finals” point of view, as Pablo and his artists would also be the ones finishing the job in post. It meant that our postvis wasn’t a throw away — it was the first step in the post production pipeline. This philosophy was present beyond the personnel involved. We also had creature rigs that could be translated with their animation down the line.”

Third Floor’s previs of the subway rampage.

One example of a scene that followed through from previs to postvis was part of the Obscurus rampage in the subway. “Pablo and I worked very closely with artists at both Framestore and The Third Floor on this ‘sequence within a sequence,’” says McDonald. “We started with Bifrost fluid simulations created in Maya by our own senior asset builder Chris Dawson. We then had our animators distort these simulations into the virtual subway set. Through iteration, we developed the choreography of the piece and further refined the overall rhythm and shape of the piece with our previs editor. This previs then became the template for what was shot on the actual set with Eddie Redmayne and Colin Farrell in the roles of Newt and Graves. When the plate edit returned to us for postvis, we were pretty much able to drop the same distorted simulations onto the plates. The camera angles and cutting pattern in the previs edit had been followed very closely by the live-action unit. We then animated a layer of environment destruction and comped it into the shots to help tie it all together.”

During postvis, says Manz, “A character could go into a shot within a day or two. You would track the shot plate, put the character in, light it and stick it back into editorial. That sort of turnaround, that in-house work that we did, was the big, big difference in how the film worked. It allowed us to feed all that stuff to David Yates.”

Yates showed his director’s cut to the studio with every single shot of the film blocked out. There were no empty spaces. “We even got the environments in so he had a representation of every street,” says Manz. They completed a three-hour assembly of the film in about five months.

Creatively, it was very liberating, which enabled them to do additional shoots just to enhance the movie. Burke says they were designing and changing shots right up to the end. The final reveal of Johnny Depp as dark wizard Gellert Grindelwald came together at the very last minute.

Fantastic Beasts is like a Harry Potter movie because it exists in the J.K. Rowling story universe and is part of the Harry Potter lore. “Where it’s similar to Potter in terms of the filmmaking,” says Manz, “is in making it feel very real and not fantasy. What I always enjoyed about the Potter films was they really felt like they were set in a real version of the UK; you could almost believe that magical sort of school existed.”

Third Floor previs

How Fantastic Beasts is different, says Burke, is that it is set in turn-of-the-century New York City, a real city, and not the magical school environment of Hogwarts. “We were dealing with adults,” continues Burke. “We’re not talking about a child growing and running everything through a child’s life. We’re talking about a series of adults. In that sense, it felt when we were making it like we were making a film for adults, which obviously has great appeal to children as well. But I do feel it’s more of a film for grown-ups in terms of its storyline, and the issues it’s challenging and discussing.”

Someone told Manz that it “felt like Fantastic Beasts was made for the audience that read the books and watched those films and has now grown up.”

IMDB lists four new Fantastic Beasts movies in development. Burke and Manz are already in pre-production on the second, penciled in for a November 16, 2018 release date. “I think it’s fair to say,” says Burke, “that we’ll obviously be trying to expand on the universe that we’ve started to create. Newt will be there with his animals and various other characters, which are going to bring a lot more interest as the story evolves.”

Manz predicts, “It’s just trying to push the believability of the interactions and that world even further. The first film was all about creating that new world, and now it’s out there. It will be a new city (Paris), so we’ll have that challenge again, but we’ll build on what we’ve learned. You don’t often get an opportunity to work with the same team of people, so that’s going to be great for the second one.”

 

 


Blue Sky Studios’ Mikki Rose named SIGGRAPH 2019 conference chair

Mikki Rose has been named conference chair of SIGGRAPH 2019. A fur technical director at Greenwich, Connecticut-based Blue Sky Studios, Rose chaired the Production Sessions during SIGGRAPH 2016 this past July in Anaheim and has been a longtime volunteer and active member of SIGGRAPH for the last 15 years.

Rose has worked on such film as The Peanuts Movie and Hotel Transylvania. She refers to herself a “CG hairstylist” due to her specialization in fur at Blue Sky Studios — everything from hair to cloth to feathers and even vegetation. She studied general CG production at college and holds BS degrees in Computer Science and Digital Animation from Middle Tennessee State University as well as an MFA in Digital Production Arts from Clemson University. Prior to Blue Sky, she lived in California and held positions with Rhythm & Hues Studios and Sony Pictures Imageworks.

“I have grown to rely on each SIGGRAPH as an opportunity for renewal of inspiration in both my professional and personal creative work. In taking on the role of chair, my goal is to provide an environment for those exact activities to others,” said Rose. “Our industries are changing and developing at an astounding rate. It is my task to incorporate new techniques while continuing to enrich our long-standing traditions.”

SIGGRAPH 2019 will take place in Los Angeles from July 29 to August 2, 2019.


Main Image: SIGGRAPH 2016 — Jim Hagarty Photography