Category Archives: VFX

Lost in Time game show embraces ‘Interactive Mixed Reality’

By Daniel Restuccio

The Future Group — which has partnered with Fremantle Media, Ross Video and Epic Games — has created a new super-agile entertainment platform that blends linear television and game technology into a hybrid format called “Interactive Mixed Reality.”

The brainchild of Bård Anders Kasin, this innovative content deployment medium generated a storm of industry buzz at NAB 2016, and its first production, Lost in Time — a weekly primetime game show — is scheduled to air this month on Norwegian television.

The Idea
The idea originated more than 13 years ago in Los Angeles. In 2003, at age 22, Kasin, a self-taught multimedia artist from Notodden, Norway, sent his CV and a bunch of media projects to Warner Bros. in Burbank, California, in hopes of working on The Matrix. They liked it. His interview was on a Wednesday and by Friday he had a job as a technical director.

Kasin immersed himself in the cutting-edge movie revolution that was The Matrix franchise. The Wachowskis’ visionary production was a masterful inspiration, featuring a compelling sci-fi action story, Oscar-winning editing, breakthrough visual effects (“bullet-time”) and an expanded media universe that included video games and an anime-style short, The Animatrix. The Matrix Reloaded and The Matrix Revolutions were shot at the same time, along with more than an hour of footage designed specifically for the video game. The Matrix Online, an Internet gaming platform, was a direct sequel to The Matrix Revolutions.

L-R: Bård Anders Kasin and Jens Petter Høili.

Fast forward to 2013, and Kasin has connected with software engineer and serial entrepreneur Jens Petter Høili, founder of EasyPark and Fairchance. “There was this producer I knew in Norway,” explains Kasin, “who runs this thing called the Artists’ Gala charity. He called and said, ‘There’s this guy you should meet. I think you’ll really hit it off.’” Kasin met Høili for lunch, and they discussed the projects each was working on. “We both immediately felt there was a connection,” recalls Kasin. No persuading was necessary. “We thought that if we combined forces we were going to get something that’s truly amazing.”

That meeting of the minds led to the merging of their companies and the formation of The Future Group. The mandate of Oslo-based The Future Group is to revolutionize the television medium by combining linear TV production with cutting-edge visual effects, interactive gameplay, home viewer participation and e-commerce. Their IMR concept ditches the limiting, individual virtual reality (VR) headset but keeps the idea of creating content that is a multi-level, intricate and immersive experience.

Lost in Time
Fast forward again, this time to 2014. Through another mutual friend, The Future Group formed an alliance with Fremantle Media. Fremantle, a global media company, has produced some of the highest-rated and longest-running shows in the world, and is responsible for top international entertainment brands such as Got Talent, Idol and The X Factor.

Kasin started developing the first IMR prototype. At this point, the Lost in Time production had expanded to include Ross Video and Epic Games. Ross Video is a broadcast technology innovator and Epic Games is a video game producer and the inventor of the Unreal game engine. The Future Group, in collaboration with Ross Video, engineered the production technology and developed a broadcast-compatible version of the Unreal game engine called Frontier, shown at NAB 2016, to generate high-resolution, realtime graphics used in the production.

On January 15, 2015 the first prototype was shown. When Fremantle saw the prototype, they were amazed and went directly to stage two, moving to the larger stages at Dagslys Studios. “Lost in Time has been the driver for the technology,” explains Kasin. “We’re a very content-driven company. We’ve used that content to drive the development of the platform and the technology, because there’s nothing better than having actual content to set the requirements for the technology rather than building technology for general purposes.”

In Lost in Time, three studio contestants are set loose on a greenscreen stage and perform timed, physical game challenges. The audience, which could be watching at home or on a mobile device, sees the contestants seamlessly blended into virtual environments built out of realtime computer graphics. The environments are themed as western, ice age, medieval and Jurassic-period sets (among others), with interactive real props.

The audience can watch the contestants play the game or participate in the contest as players on their mobile device at home, riding the train or literally anywhere. They can play along or against contestants, performing customized versions of the scripted challenges in the TV show. The mobile content uses graphics generated from the same Unreal engine that created the television version.

“It’s a platform,” reports partner Høili, referring to the technology behind Lost in Time. A business model is a way you make money, notes tech blogger Jonathan Clarks, and a platform is something that generates business models. So while Lost in Time is a specific game show with specific rules, built on television technology, it’s really a business technology framework where multiple kinds of interactive content could be generated. Lost in Time is like the Unreal engine itself, software that can be used to create games, VR experiences and more, limited only by the imagination of the content creator. What The Future Group has done is create a high-tech kitchen from which any kind of cuisine can be cooked up.

Soundstages and Gear
Lost in Time is produced on two greenscreen soundstages at Dagslys Studios in Oslo. The main “gameplay set” takes up all of Studio 1 (5,393 square feet) and the “base station set” is on Studio 3 (1,345 square feet). Over 150 liters (40 gallons) of ProCyc greenscreen paint was used to cover both studios.

Ross Video, in collaboration with The Future Group, devised an integrated technology of hardware and software that supports the Lost in Time production platform. This platform consists of custom cameras, lenses, tracking, control, delay, chroma key, rendering, greenscreen, lighting and switcher technology. This system includes the new Frontier hardware, introduced at NAB 2016, which runs the Unreal game engine 3D graphics software.

Eight Sony HDC-2500 cameras running HZC-UG444 software are used for the production. Five are deployed on the “gameplay set.” One camera rides on a technocrane, two are on manual pedestal dollies and one is on Steadicam. For fast-action tracking shots, another camera sits on the Furio RC dolly that rides on a straight track that runs the 90-foot length of the studio. The Furio RC pedestal, controlled by SmartShell, guarantees smooth movement in virtual environments and uses absolute encoders on all axes to send complete 3D tracking data into the Unreal engine.

There is also one Sony HDC-P1 camera that is used as a static, center stage, ceiling cam flying 30 feet above the gameplay set. There are three cameras in the home base set, two on Furio Robo dollies and one on a technocrane. In the gameplay set, all cameras (except the ceiling cam) are tracked with the SolidTrack IR markerless tracking system.

All filming is done at 1080p25 and output as RGB 444 via SDI. They use a custom LUT on the cameras to avoid clipping and to preserve an expanded dynamic range for post work. All nine camera ISOs, the separate camera “clean feeds,” are recorded with a “flat” LUT in RGB 444. For all other video streams, including keying and compositing, they use LUT boxes to invert the signal back to Rec 709.
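
As an illustration of what that inversion step does, here is a minimal sketch of applying a 1D LUT in software: each channel value is remapped through a lookup curve. The curve below is a placeholder, not the show’s actual camera LUT, and on set this happens in realtime in the hardware LUT boxes rather than in Python.

```python
import numpy as np

# Placeholder 1D LUT: remaps a "flat"/log-style signal back toward Rec 709.
# The curve here is illustrative only, not the production camera LUT.
lut_in = np.linspace(0.0, 1.0, 33)          # 33 evenly spaced input levels
lut_out = np.clip(lut_in ** 1.8, 0.0, 1.0)  # stand-in transfer curve

def apply_1d_lut(frame, lut_in, lut_out):
    """Apply a 1D LUT to each channel of a float frame normalized to [0, 1]."""
    return np.interp(frame, lut_in, lut_out)

frame = np.random.rand(1080, 1920, 3).astype(np.float32)  # stand-in 1080p RGB frame
rec709 = apply_1d_lut(frame, lut_in, lut_out)
```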

Barnfind provided the fiber optic network infrastructure that links all the systems. Ross Video Dashboard controls the BarnOne frames as well as the router, Carbonite switchers, Frontier graphics system and robotic cameras.

A genlock signal distributed via OpenGear syncs all the gear to a master clock. The Future Group added proprietary code to Unreal so the render engine can genlock, receive and record linear timecode (LTC), and output video via SDI in all industry-standard formats. They also extended the Unreal engine to control lights via DMX, send and receive GPI signals, communicate with the custom sensors, buttons, switches and wheels used for interaction with the games, and control motion simulation equipment.
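
Because the show runs at 25fps, the timecode math behind LTC is straightforward (25fps has no drop-frame variant). The helper below is an illustrative sketch of the frame-count/timecode conversion any render engine needs once it can receive and record LTC; it is not The Future Group’s code.

```python
FPS = 25  # the production records 1080p25, so LTC counts 25 frames per second

def frames_to_timecode(frame_count, fps=FPS):
    """Absolute frame count -> HH:MM:SS:FF (no drop-frame exists at 25fps)."""
    ff = frame_count % fps
    total_seconds = frame_count // fps
    return "{:02d}:{:02d}:{:02d}:{:02d}".format(
        total_seconds // 3600, (total_seconds // 60) % 60, total_seconds % 60, ff)

def timecode_to_frames(tc, fps=FPS):
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return (hh * 3600 + mm * 60 + ss) * fps + ff

assert frames_to_timecode(timecode_to_frames("10:00:00:24")) == "10:00:00:24"
```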

In order for the “virtual cameras” in the graphics systems and the real cameras viewing the real elements to have exactly the same perspectives, an “encoded” camera lens is required that provides the lens focal length (zoom) and focus data. In addition, the virtual lens field of view (FOV) must be properly calibrated to match the FOV of the real lens. Full servo digital lenses with 16-bit encoders are needed for virtual productions. Lost in Time uses three Canon lenses with these specifications: Canon Hj14ex4.3B-IASE, Canon Hj22ex7.6B-IASE-A and Canon Kj17ex7.7B-IASE-A.
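
The FOV match itself is simple trigonometry: horizontal FOV = 2 × atan(sensor width / (2 × focal length)). The sketch below assumes a roughly 9.6mm-wide 2/3-inch broadcast sensor; a real calibration also has to account for lens distortion and the exact sensor geometry.

```python
import math

def horizontal_fov_deg(focal_length_mm, sensor_width_mm=9.6):
    """FOV = 2 * atan(sensor_width / (2 * focal_length)), in degrees."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# Wide end of a 4.3mm broadcast zoom on an (assumed) ~9.6mm-wide 2/3-inch sensor:
print(round(horizontal_fov_deg(4.3), 1))    # roughly 96 degrees
# The same sensor behind an 86mm focal length at the long end of a zoom:
print(round(horizontal_fov_deg(86.0), 1))   # roughly 6 degrees
```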

The Lost in Time camera feeds are routed to the Carbonite family hardware: Ultrachrome HR, Carbonite production frame and Carbonite production switcher. Carbonite Ultrachrome HR is a stand-alone multichannel chroma key processor based on the Carbonite Black processing engine. On Lost in Time, the Ultrachrome switcher accepts the Sony camera RGB 444 signal and uses high-resolution chroma keyers, each with full control of delay management, fill color temperature for scene matching, foreground key and fill, and internal storage for animated graphics.
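
For readers unfamiliar with what a chroma keyer actually computes, here is a deliberately simplified sketch: derive an alpha matte from each pixel’s distance to the key color, then composite the foreground over the background. Ultrachrome HR’s keyers are far more sophisticated than this, so treat it purely as an illustration of the principle.

```python
import numpy as np

def simple_green_key(frame, key_color=(0.1, 0.9, 0.2), tolerance=0.3, softness=0.2):
    """Toy chroma key: alpha grows with each pixel's RGB distance from the key color.
    0 = pure greenscreen (transparent), 1 = foreground (opaque)."""
    dist = np.linalg.norm(frame - np.asarray(key_color), axis=-1)
    return np.clip((dist - tolerance) / softness, 0.0, 1.0)

fg = np.random.rand(1080, 1920, 3).astype(np.float32)   # stand-in camera frame
bg = np.random.rand(1080, 1920, 3).astype(np.float32)   # stand-in Unreal render
alpha = simple_green_key(fg)[..., None]
comp = alpha * fg + (1.0 - alpha) * bg                   # foreground over the virtual set
```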

Isolated feeds of all nine cameras are recorded, plus two quad-splits with the composited material and the program feed. Metus Ingest, a proprietary hardware solution from The Future Group, was used for all video recording. Metus Ingest can simultaneously capture and record up to six HD channels of video and audio from multiple devices on a single platform.

Post Production
While the system is capable of being broadcast live, they decided not to go live for the debut. Instead they are only doing a modest amount of post to retain the live feel. That said, the potential of the post workflow on Lost in Time arguably sets a whole new post paradigm. “Post allows us to continue to develop the virtual worlds for a longer amount of time,” says Kasin. “This gives us more flexibility in terms of storytelling. We’re always trying to push the boundaries with the creative content. How we tell the story of the different challenges.”

All camera metadata, including position, rotation and lens data, along with all game interaction, was recorded in the Unreal engine with a proprietary system. This allowed the graphics to be played back later as a recorded session. It also let the editors change any part of the graphics non-destructively: they could replace 3D models or textures, change the tracking or point-of-view of any of the virtual cameras in post, or add cameras for more virtual “coverage.”
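
The recording system itself is proprietary, but conceptually it amounts to logging a per-frame camera sample, keyed to timecode, that can later drive a virtual camera again or be modified before re-rendering. A hypothetical data structure for one tracked take might look like this:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class CameraSample:
    timecode: str                          # LTC for this frame
    position: Tuple[float, float, float]   # x, y, z in world units
    rotation: Tuple[float, float, float]   # pan, tilt, roll
    focal_length_mm: float                 # from the encoded lens
    focus_distance_m: float

@dataclass
class TrackedTake:
    camera_id: str
    samples: List[CameraSample] = field(default_factory=list)

    def record(self, sample: CameraSample) -> None:
        self.samples.append(sample)

    def replay(self):
        # In post the same transforms can drive a virtual camera again, or be
        # edited (new lens, new path) before re-rendering the graphics.
        yield from self.samples
```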

Lost in Time episodes were edited as a multicam project, based on the program feed, in Adobe Premiere CC. They have a multi-terabyte storage solution from Pixit Media running Tiger Technology’s workflow manager. “The EDL from the final edit is fed through a custom system, which then builds a timeline in Unreal to output EXR sequences for a final composite.”
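
The EDL-to-Unreal bridge is a custom in-house tool, but the first step of any such system is reading the edit events. Purely as an illustration, a minimal parser for CMX3600-style event lines (event number, reel, source in/out, record in/out) could look like this:

```python
import re

# Event lines in a CMX3600-style EDL: number, reel, track, transition,
# source in/out, record in/out. Video cuts only, for simplicity.
EVENT_RE = re.compile(
    r"^(\d+)\s+(\S+)\s+V\s+C\s+"
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})\s+"
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})"
)

def parse_edl(text):
    events = []
    for line in text.splitlines():
        m = EVENT_RE.match(line.strip())
        if m:
            num, reel, src_in, src_out, rec_in, rec_out = m.groups()
            events.append({"event": int(num), "reel": reel,
                           "src_in": src_in, "src_out": src_out,
                           "rec_in": rec_in, "rec_out": rec_out})
    return events

print(parse_edl("001  CAM1  V  C  10:00:00:00 10:00:05:00 01:00:00:00 01:00:05:00"))
```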

That’s it for now, but be sure to visit this space again to see part two of our coverage on The Future Group’s Lost in Time. Our next story will cover the real and virtual lighting systems, the SolidTrack IR tracking system and the backend component, and will include interviews with Epic Games’ Kim Libreri about Unreal engine development/integration and with a Lost in Time episode editor.


Daniel Restuccio, who traveled to Oslo for this piece, is a writer, producer and teacher. He is currently multimedia department chairperson at California Lutheran University in Thousand Oaks.

Alkemy X adds creative director Geoff Bailey

Alkemy X, which offers live-action production, design, high-end VFX and post services, has added creative director Geoff Bailey to its New York office, which has now almost doubled in staff. The expansion comes after Alkemy X served as the exclusive visual effects company on M. Night Shyamalan’s Split.

Alkemy X and Bailey started collaborating in 2016 when the two worked together on a 360 experiential film project for EY (formerly Ernst & Young) and brand consultancy BrandPie. Bailey was creative director on the project, which was commissioned for EY’s Strategic Growth Forum held in Palm Desert, California, last November. The project featured Alkemy X’s live-action, VFX, animation, design and editorial work.

“I enjoy creating at the convergence of many disciplines and look forward to leveraging my branding knowledge to support Alkemy X’s hybrid creation pipeline — from ideation and strategy, to live-action production, design and VFX,” says Bailey.

Most recently, Bailey was a creative director at Loyalkaspar, where he creatively led the launch campaign for A&E’s Bates Motel. He also served as creative director/designer on the title sequence for the American launch of A&E’s The Returned, and as CD/director on a series of launch spots for the debut of Vice Media’s TV channel Viceland.

Prior to that, Bailey freelanced for several New York design firms as a director, designer and animator. His freelance résumé includes work for HBO, Showtime, Hulu, ABC, Cinemax, HP, Jay-Z, U2, Travel Channel, Comedy Central, CourtTV, Fuse, AMC Networks, Kiehl’s and many more. Bailey holds an MFA in film production from Columbia University.


Swedish post/VFX company Chimney opens in LA

Swedish post company Chimney has opened a Los Angeles facility, its first in the US and one of its 12 offices in eight countries. Founded in Stockholm in 1995, Chimney produces over 6,000 pieces of content for more than 60 countries each year, averaging 1,000 projects and 10,000 VFX shots. The company, which is privately held by 50 of its artists, is able to offer 24-hour service thanks to its many locations around the world.

When asked why Chimney decided to open an office in LA, founder Henric Larsson said, “It was not the palm trees and beaches that made us open up in LA. We’re film nerds and we want to work with the best talent in the world, and where do we find the top directors, DPs, ADs, CDs and producers if not in the US?”

The Chimney LA crew.

The Chimney LA team was busy from the start, working with Team One to produce two Lexus campaigns, including one that debuted during the Super Bowl. For the Lexus Man & Machine Super Bowl Spot, they took advantage of the talent at sister facilities in Poland and Sweden.

Chimney also reports that it has signed with Shortlist Mgmt, joining other companies like RSA, Caviar, Tool and No6 Editorial. Charlie McBrearty, founding partner of Shortlist Mgmt, says that Chimney has “been on our radar for quite some time, and we are very excited to be part of their US expansion. Shortlist is no stranger to managing director Jesper Palsson, and we are thrilled to be reunited with him after our past collaboration through Stopp USA.”

Tools used for VFX include Autodesk’s Flame and Maya, The Foundry’s Nuke and Adobe After Effects. Audio is via Avid Pro Tools. Color is done in Digital Vision’s Nucoda. For editing, they call on Avid Media Composer, Apple Final Cut and Adobe Premiere.


Quick Chat: Brent Bonacorso on his Narrow World

Filmmaker Brent Bonacorso has written, directed and created visual effects for The Narrow World, which examines the sudden appearance of a giant alien creature in Los Angeles and the conflicting theories on why it’s there, what its motivations are, and why it seems to ignore all attempts at human interaction. It’s told through the eyes of three people with differing ideas of its true significance. Bonacorso shot on a Red camera with Panavision Primo lenses, along with a bit of Blackmagic Pocket Cinema Camera for random B-roll.

Let’s find out more…

Where did the idea for The Narrow World come from?
I was intrigued by the idea of subverting the traditional alien invasion story and using that as a way to explore how we interpret the world around us, and how our subconscious mind invisibly directs our behavior. The creature in this film becomes a blank canvas onto which the human characters project their innate desires and beliefs — its mysterious nature revealing more about the characters than the actual creature itself.

As with most ideas, it came to me in a flash, a single image that defined the concept. I was riding my bike along the beach in Venice, and suddenly in my head I saw a giant Kaiju as big as a skyscraper sitting on the sand, gazing out at the sea. Not directly threatening, not exactly friendly either, with a mutual understanding with all the tiny humans around it — we don’t really understand each other at all, and probably never will. Suddenly, I knew why he was here, and what it all meant. I quickly sketched the image and the story followed.

What was the process like bringing the film to life as an independent project?
After I wrote the script, I shot principal photography with producer Thom Fennessey in two stages – first with the actor who plays Raymond Davis (Karim Saleh) and then with the actress playing Emily Field (Julia Cavanaugh).

I called in a lot of favors from my friends and connections here in LA and abroad — the highlight was getting some amazing Primo lenses and equipment from Panavision to use because they love Magdalena Górka’s (the cinematographer) work. Altogether it was about four days of principal photography, a good bit of it guerrilla style, and then shooting lots of B-roll all over the city.

Kacper Sawicki, head of Papaya Films which represents me for commercial work in Europe, got on board during post production to help bring The Narrow World to completion. Friends of mine in Paris and Luxembourg designed and textured the creature, and I did the lighting and animation in Maxon Cinema 4D and compositing in Adobe After Effects.

Our editor was the genius Jack Pyland (who cut on Adobe Premiere), based in Dallas. Sound design and color grading (via Digital Vision’s Nucoda) were completed by Polish companies Głośno and Lunapark, respectively. Our composer was Cedie Janson from Australia. So even though this was an indie project, it became an amazing global collaborative effort.

Of course, with any no-budget project like this, patience is key — lack of funds is offset by lots of time, which is free, if sometimes frustrating. Stick with it — directing is generally a war of attrition, and it’s won by the tenacious.

As a director, how did you pull off so much of the VFX work yourself, and what lessons do you have for other directors?
I realized early on in my career as a director that the more you understand about post, and the more you can do yourself, the more you can control the scope of the project from start to finish. If you truly understand the technology and what is possible with what kind of budget and what kind of manpower, it removes a lot of barriers.

I taught myself After Effects and Cinema 4D in graphic design school, and later I figured out how to make those tools work for me in visual effects and to stretch the boundaries of the short films I was making. It has proved invaluable in my career — in the early stages I did most of the visual effects in my work myself. Later on, when I began having VFX companies do the work, my knowledge and understanding of the process enabled me to communicate very efficiently with the artists on my projects.

What other projects do you have on the horizon?
In addition to my usual commercial work, I’m very excited about my first feature project coming up this year through Awesomeness Films and DreamWorks — You Get Me, starring Bella Thorne and Halston Sage.


VFX house Jamm adds Flame artist Mark Holden

Santa Monica-based visual effects boutique Jamm has added veteran Flame artist Mark Holden to its roster. Holden comes to Jamm with over 20 years of experience in post production, including stints in London and Los Angeles.

It didn’t take long for Holden to dive right in at Jamm; he worked on Space 150’s Buffalo Wild Wings Super Bowl campaign directed by the Snorri Bros. and starring Brett Favre. The Super Bowl teaser kicked off the pre-game.

Holden is known not only for his visual effects talent, but also for turning projects around under tight deadlines and offering his clients as many possible solutions within the post process. This has earned him work with leading agencies such as Fallon, Mother, Saatchi & Saatchi, Leo Burnett, 180, TBWA/Chiat/Day, Goodby Silverstein & Partners, Deutsch, David & Goliath, and Team One. He has worked with brands including Lexus, Activision, Adidas, Chevy, Geico, Grammys, Kia, Lyft, Pepsi, Southwest Airlines, StubHub, McDonald’s, Kellogg’s, Stella Artois, Silk, Heineken and Olay.



A Conversation: Jungle Book’s Oscar-Winner Rob Legato

By Randi Altman

Rob Legato’s resume includes some titles that might be considered among the best visual effects films of all time: Titanic, Avatar, Hugo, Harry Potter and the Sorcerer’s Stone, Apollo 13 and, most recently, The Jungle Book. He has three Oscars to his credit (Titanic, Hugo, The Jungle Book) along with one other nomination (Apollo 13). And while Martin Scorsese’s The Wolf of Wall Street and The Aviator don’t scream effects, he worked on those as well.

While Legato might be one of the most prodigious visual effects supervisors of all time, he never intended for this to be his path. “The magic of movies, in general, was my fascination more than anything else,” he says, and that led him to study cinematography and directing at Santa Barbara’s Brooks Institute, which provided intensive courses on the intricacies of working with cameras and film.

Rob Legato worked closely with Technicolor and MPC to realize Jon Favreau’s vision for The Jungle Book, which is nominated for a VFX Oscar this year.

It was this technical knowledge that came in handy at his first job, working as a producer at a commercials house. “I knew that bizarre, esoteric end of the business, and that became known among my colleagues.” So when a spot came in that had a visual effect in it, Legato stepped up. “No one knew how to do it, and this was before on-set visual effects supervisors worked on commercials. I grabbed the camera and I figured out a way of doing it.”

After working on commercials, Legato transitioned to longer-form work, specifically television. He started on the second season of The Twilight Zone series, where he got the opportunity to shoot some footage. He was hoping to direct an episode, but the show got cancelled before he had a chance.

Legato then took his experience to Star Trek at a time when they were switching from opticals to a digital post workflow. “There were very few people then who had any kind of visual effects and live-action experience in television. I became second-unit director and ultimately directed a few shows. It was while working on Next Generation and Deep Space Nine that I learned how to mass produce visual effects on as big a scale as television allows, and that led me to Digital Domain.”

It was at Digital Domain where Legato transitioned to films, starting with Interview With the Vampire, on which he served as visual effects supervisor. “Director Neil Jordan asked me to do the second unit. I got along really well with DP Philippe Rousselot and was able to direct live-action scenes and personally direct and photograph anything that was not live-action related — including the Tom Cruise puppet that looked like he’s bleeding to death.” This led to Apollo 13, on which he was VFX supervisor.

On set for Hugo (L-R): Martin Scorsese, DP Bob Richardson and Rob Legato.

“I thought as a director did, and I thought as a cameraman, so I was able to answer my own questions. This made it easy to communicate with directors and cameramen, and that was my interest. I attacked everything from the perspective of, ‘If I were directing this scene, what would I do?’ It then became easy for me to work with directors who weren’t very fluent in the visual effects side. And because I shot second unit too, especially on Marty Scorsese’s movies, I could determine what the best way of getting that image was. I actually became quite a decent cameraman with all this practice emulating Bob Richardson’s extraordinary work, and I studied the masters (Marty and Bob) and learned how to emulate their work to blend into their sequences seamlessly. I was also able to maximize the smaller dollar amount I was given by designing both second unit direction and cinematography together to maximize my day.”

OK, let’s dig in a bit deeper with Legato, a card-carrying member of the ASC, and find out how he works with directors, his workflow and his love for trying and helping to create new technology in order to help tell the story.

Over the years you started to embrace virtual production. How has that technology evolved over the years?
When I was working on Harry Potter, I had to previs a sequence for time purposes, and we used a computer. I would tell the CG animators where to put the camera and lights, but there was something missing — a lot of times you get inspired by what’s literally in front of you, which is ever-changing in realtime. We were able to click the mouse and move it where we needed, but it was still missing this other sense of life.

For example, when I did Aviator, I had to shoot the plane crash; something I’d never done before, and I was nervous. It was a Scorsese film, so it was a given that it was to be beautifully designed and photographed. I didn’t have a lot of money, and I didn’t want to blow my opportunity. On Harry Potter and Titanic we had a lot of resources, so we could fix a mistake pretty easily. Here, I had one crack at it, and it had to be a home run.

So I prevised it, but added a realtime live-action pan and tilt wheels so we could operate and react in realtime — so instead of using a mouse, I was basically using what we use on a stage. It was a great way of working. I was doing the entire scene from one vantage point. I then re-staged it, put a different lens on it and shot the same exact scene from another angle. Then I could edit it as you would a real sequence, just as if I had all the same angles I would have if I had photographed it conventionally and produced a full set of multi-angle live-action dailies.

You edit as well?
I love editing. I would operate the shot and then cut it in the Avid, instantly. All of a sudden I was able to build a sequence that had a certain photographic and editorial personality to it — it felt like there was someone quite specific shooting it.

Is that what you did for Avatar?
Yes. Cameron loves to shoot, operate and edit. He has no fear of technology. I told him what I did on Aviator and that I couldn’t afford to add the more expensive, but extremely flexible, motion capture to it. So on Avatar instead of only the camera having live pan and tilt wheels, it could also be hand-held — you could do Steadicam shots, you could do running shots, you could do hand-held things, anything you wanted, including adding a motion capture live performance by an actor. You could easily stage them, or a representation of that character, in any place or scale in the scene, because in Avatar the characters were nine feet tall. You could preview the entire movie in a very free form and analog way. Jim loved the fact he could impart his personality — the way he moves the camera, the way he frames, the way he cuts — and that the CG-created film would bear the unmistakable stamp of his distinctive live-action movies.

You used the “Avatar-way” on Jungle Book, yes?
Yes. It wasn’t until Jungle Book that I could afford the Avatar-way — a full-on stage with a lot of people to man it. I was able to take what I gave to Jim on Avatar and do it myself with the bells and whistles and some improvements that gave it a life-like sensibility of what could have been an animated film. Instead it became a live film because we used a live-action analog methodology of acquiring images and choosing which one was the right, exact moment per the cut.

The idea behind virtual cinematography is that you shoot it like you would a regular movie. All the editors, cameramen or directors who’ve never done this before are now operating the way they would have if it were real. This very flavor and personality starts to rub off on the patina of the film, and it begins to feel like a real movie, not an animated or computer-generated one.

Our philosophy on Jungle Book was we would not make the computer camera do anything that a real camera could not do, so we limited the way we could move it and how fast we could move it, so it wouldn’t defy any kind of gravity. That went part and parcel with the animation and movement of the animals and the actor performing stunts that only a human can accomplish.

So you are in a sense limiting what you can do with the technology?
There was an operator behind the camera and behind the wheels, massaging and creating the various compositional choices that generally are not made in a computer. They’re not just setting keyframes, and because somebody’s behind the camera, this sense of live-action-derived movement is consistent from shot to shot to shot. It’s one person doing it, whereas normally on a CG film, there are as many as 50 people who are placing cameras on different characters within the same scene.

You have to come up with these analog methodologies that are all tied together without even really knowing it. Your choices at the end of the day end up being strictly artistic choices. We sort of tapped into that for Jungle Book, and it’s what Jim tapped into when he did Avatar. The only difference between Avatar and our film is that we set our film in an instantly recognizable place, so everybody can judge whether it’s photorealistic or not.

When you start a film, do you create your own system or use something off the shelf?
With every film there is a technology advance. I typically take whatever is off-the-shelf and glue it together with something not necessarily designed to work in unison. Each year you perfect it. The only way to really keep on top of technology is by being on the forefront of it, as opposed to waiting for it to come out. Usually, we’re doing things that haven’t been done before, and invariably it causes something new and innovative.

We’re totally revamping what we did on Jungle Book to achieve the same end on my next film for Disney, but we hope to make it that much better, faster and more intuitive. We are also taking advantage of VR tools to make our job easier, more creative and faster. The faster you can create options, the more iterations you get. More iterations get you a better product sooner and help you elevate the art form by taking it to the next level.

Technology is always driven by the story. We ask ourselves what we want to achieve. What kind of shot do we want to create that creates a mood and a tone? Then once we decide what that is, we figure out what technology we need to invent, or coerce into being, to actually produce it. It’s always driven that way. For example, on Titanic, the only way I could tell that story and make these magic transitions from the Titanic to the wreck and from the wreck back to the Titanic, was by controlling the water, which was impossible. We needed to make computer-generated water that looked realistic, so we did.

THE JUNGLE BOOK (Pictured) BAGHEERA and MOWGLI. ©2016 Disney Enterprises, Inc. All Rights Reserved.

CG water was a big problem back then.
But now that’s very commonplace. The water work in Jungle Book is extraordinary compared to the crudeness of what we did on Titanic, but we started on that path, and then over the years other people took over and developed it further.

Getting back to Marty Scorsese, and how you work with him. How does having his complete trust make you better at what you do?
Marty is not as interested in the technical side as Jim is. Jim loves all this stuff, and he likes to tinker and invent. Marty’s not like that. Marty likes to tinker with emotions and explore a performance editorially. His relationship with me is, “I’m not going to micro-manage you. I’m going to tell you what feeling I want to get.” It’s very much like how he would talk to an actor about what a particular scene is about. You then start using your own creativity to come up with the idea he wants, and you call on your own experience and interpretation to realize it. You are totally engaged, and the more engaged you are, the more creative you become in terms of what the director wants to tell his story. Tell me what you want, or even don’t want, and then I’ll fill in the blanks for you.

Marty is an incredible cinema master — it’s not just the performance, it’s not just the camera, it’s not just the edit, it’s all those things working in concert to create something new. His encouragement for somebody like me is to do the same and then only show him something that’s working. He can then put his own creative stamp on it as well once he sees the possibilities properly presented. If it’s good, he’s going to use it. If it’s not good, he’ll tell you why, but he won’t tell you how to fix it. He’ll tell you why it doesn’t feel right for the scene or what would make it more eloquent. It’s a very soft, artistic push in his direction of the film. I love working with him for this very reason.

You too surround yourself with people you can trust. Can you talk about this for just a second?
I learned early on to surround myself with geniuses. You can’t be afraid of hiring people that are smarter than you are because they bring more to the party. I want to be the lowest common denominator, not the highest. I’ll start with my idea, but if someone else can do it better, I want it to be better. I can show them what I did and tell them to make it better, and they’ll go off and come up with something that maybe I wouldn’t have thought of, or the collusion between you and them creates a new gem.

When I was doing Titanic someone asked me how I did what I did. My answer was that I hired geniuses and told them what I wanted to accomplish creatively. I hire the best I can find, the smartest, and I listen. Sometimes I use it, sometimes I don’t. Sometimes the mistake of somebody literally misunderstanding what you meant delivers something that you never thought of. It’s like, “Wow, you completely misunderstood what I said, but I like that better, so we’re going to do that.”

Part and parcel of doing this is that you’re a little fearless. It’s like, “Well, that sounds good. There’s no proof to it, but we’re going to go for it,” as opposed to saying, “Well, no one has done it before so we better not try it.” That’s what I learned from Cameron and Marty and Bob Zemeckis. They’re fearless.

Can you mention what you’re working on now, or no?
I’m working on Lion King.



Craig Zerouni joins Deluxe VFX as head of technology

Deluxe has named Craig Zerouni as head of technology for Deluxe Visual Effects. In this role, he will focus on continuing to unify software development and systems architecture across Deluxe’s Method studios in Los Angeles, Vancouver, New York and India, and its Iloura studios in Sydney and Melbourne, as well as LA’s Deluxe VR.

Based in LA and reporting to president/GM of Deluxe VFX and VR Ed Ulbrich, Zerouni will lead VFX and VR R&D and software development teams and systems worldwide, working closely with technology teams across Deluxe’s Creative division.

Zerouni has been working in media technology and production for nearly three decades, joining Deluxe most recently from DreamWorks, where he was director of technology at its Bangalore, India-based facility, overseeing all technology. Prior to that he spent nine years at Digital Domain, where he was first head of R&D, responsible for software strategy and teams in five locations across three countries, then senior director of technology, overseeing software, systems, production technology, technical directors and media systems. He has also directed engineering, products and teams at software/tech companies Silicon Grail, Side Effects Software and Critical Path. In addition, he was co-founder of London-based computer animation company CFX.

Zerouni’s work has contributed to features including Tron: Legacy, Iron Man 3, Maleficent, X-Men: Days of Future Past, Ender’s Game and more than 400 commercials and TV IDs and titles. He is a member of BAFTA, ACM/SIGGRAPH, IEEE and the VES. He has served on the AMPAS Digital Imaging Technology Subcommittee and is the author of the technical reference book “Houdini on the Spot.”

Says Ulbrich on the new hire: “Our VFX work serves both the features world, which is increasingly global, and the advertising community, which is increasingly local. Behind the curtain at Method, Iloura, and Deluxe, in general, we have been working to integrate our studios to give clients the ability to tap into integrated global capacity, technology and talent anywhere in the world, while offering a high-quality local experience. Craig’s experience leading global technology organizations and distributed development teams, and building and integrating pipelines is right in line with our focus.”


The A-List: Lego Batman Movie director Chris McKay

By Iain Blair

Three years ago, The Lego Movie became an “everything is awesome” monster hit that cleverly avoided the pitfalls of feeling like a corporate branding exercise thanks to the deft touch and tonal dexterity of the director/writer/animator/producer team of Phil Lord and Christopher Miller.

Now busy working on a Han Solo spinoff movie, they handed over the directing reins on the follow-up, The Lego Batman Movie, to Chris McKay, who served as animation director and editor on the first one. He hit the ground running on this one, which seriously — and hilariously — tweaks Batman’s image.

Chris McKay

This time out, Batman stars in his own big-screen adventure, but there are big changes brewing in Gotham City. If he wants to save the city from The Joker’s hostile takeover, Batman may have to drop the lone vigilante thing, try to work with others and maybe, just maybe, learn to lighten up (somber introspection only goes so far when you’re a handsome billionaire with great cars and gadgets, who gets to punch people in the face with no repercussions).

Will Arnett voices Batman, Zach Galifianakis is The Joker, Michael Cera is orphan Dick Grayson, Rosario Dawson is Barbara Gordon, and Ralph Fiennes voices Alfred.

Behind the scenes, production designer Grant Freckelton and editor David Burrows also return from The Lego Movie, joined by editors Matt Villa and John Venzon. Lorne Balfe was composer, and feature animation was, again, by Animal Logic. The Warner Bros. film was released in 3D, 2D and IMAX.

I recently talked to McKay about making the film and how the whole process was basically all about the post.

The Lego Movie made nearly half a billion dollars and was a huge critical success as well. Any pressure there?
(Laughs) A lot, because of all that success, and asking, “How do we top it?” Then it’s Batman, with all his fans, and DC is very particular as he’s one of their crown jewels. But at the same time, the studio and DC were great partners and understood all the challenges.

So how did you pitch the whole idea?
As Jerry Maguire, directed by Michael Mann, with a ton of jokes in it. They got on board with that and saw what I was doing with the animatic, as well as the love I have for Batman and this world.

Once you were green-lit, you began on post, right?
Exactly right, because post is everything in animation. The whole thing is post. You start in post and end in post. When we pitched this, we didn’t even have a script, just a three- to four-page treatment. They liked the idea and said, “OK, let’s do it.” So we needed to write a script, and get the storyboard and editorial teams to work immediately, because there was no way we could get it finished in time if we didn’t.

It was originally scheduled to come out in May — almost three years from the time we pitched it, but then they moved the release date up to February, so it got even crazier. So we began getting all the key people involved, like [editor/writer] Dave Burrows at Animal Logic, who cut the first one with me, and developing the opening set piece.

You got an amazing cast, including Will Arnett as Batman again, and such unlikely participants as Mariah Carey, Michael Cera, Ralph Fiennes and Apple’s Siri. How tough was that?
We were very lucky because everyone was a fan, and when they saw that the first one wasn’t just a 90-minute toy commercial, they really wanted to be in it. Mariah was so charming and funny, and apart from her great singing voice, she has a really great speaking voice — and she was great at improv and very playful. Ralph has done some comedy, but I wasn’t sure he’d want to do something like this, but he got it immediately, and his voice was perfect. Michael Cera doesn’t do this kind of thing at all. Like Ralph, he’s an artist who usually does smaller movies and more personal stuff, and people told us, “You’re not going to get Ralph or Cera,” but Will reached out to Cera (they worked together on Arrested Development) and he came on.

As for Siri, it was a joke we tried to make work in the first movie but couldn’t, so we went back to it, and it turned into a great partnership with Apple. So that was a lot of fun for me, playing around with pop culture in that way, as the whole computer thing is part of Batman’s world anyway.

Phil Lord and Chris Miller have been very busy directing the upcoming, untitled Han Solo Star Wars movie, but as co-producers on this weren’t they still quite involved?
Very. I’d ask them for advice all the time and they would give notes since I was running a lot of stuff past them. They ended up writing several of my favorite lines in this; they gave me so much of their time, pitched jokes and let me do stuff with the animation I wanted to do. They’re very generous.

Sydney-based Animal Logic, the digital design, animation and effects company whose credits include Moulin Rouge!, Happy Feet and Walking With Dinosaurs, did all the animation again. What was involved?
As I wanted to use Burrows, that would require us having an editorial team down there, and the studio wasn’t crazy about that. But he’s a fantastic editor and storyteller, and I also wanted to work with Grant Freckelton, who was the production designer on the first one, as well as lighting supervisor Craig Welch — all these team members at Animal Logic who were so good. In the end, we had over 400 people working on this for two and a half years — six months faster than the first one.

So Animal Logic began on it on day one, and I didn’t wait for a script. It was just me, Dave and the storyboard teams in LA and Sydney, and Grant’s design team. I showed them the treatment and said, “Here’s the scenes I want to do,” and we began with paintings and storyboards. The first act in animatic form and the script both landed at the same time in November 2014, and then we pitched where the rest of the movie would go and what changes we would make. So it kept going in tandem like that. There was no traditional screenwriting process. We’d just bring writers in and adjust as we went. So we literally built the screenplay in post — and we could do that because animation is like filmmaking in slow motion, and we had great storytellers in post, like Burrows.

You also used two other editors — Matt Villa and John Venzon. How did that work?
Matt’s very accomplished. He’s cut three of Baz Luhrmann’s films — The Great Gatsby, Moulin Rouge! and Australia — and he cut Russell Crowe’s The Water Diviner as well as the animated features Happy Feet Two and Legend of the Guardians: The Owls of Ga’Hoole, so he came in to help. We also brought in other writers, and we would all be doing the voices. I was Batman and Matt would do the side characters. We literally built it as we went, with some storyboard artists from the first film, plus others we gathered along the way. The edit was crucial because of the crazy deadline.

Last summer we added John, who has also cut animated features, including Storks, Flushed Away, Shark Tale and South Park: Bigger, Longer and Uncut, because we needed to move some editorial to LA last July for five months, and he helped out with all the finishing. It was a 24/7 effort by that time, a labor of love.

Let’s talk about the VFX. Fair to say the whole film’s one big VFX sequence?
You’re right. Every single frame is a VFX shot. It’s mind blowing! You’re constantly working on it at the same time you’re writing and editing and so on, and it takes a big team of very focused animators and producers to do it.

What about the sound and music? Composer Lorne Balfe did the scores for Michael Bay’s 13 Hours: The Secret Soldiers of Benghazi, the animated features Penguins of Madagascar and Home, as well as Terminator Genisys. How important was the score?
It was crucial. He actually worked on the Dark Knight movies, so I knew he could do all the operatic, serious stuff as well as boy’s adventure stuff for Robin, and he was a big part of making it sound like a real Batman movie. We recorded the score in Sydney and Vienna, and did the mix on the lot at Warners with a great team that included effects mixer Gregg Landaker and sound designer Wayne Pashley from Big Bang Sound in Sydney.

Did the film turn out the way you hoped?
I wish we had those extra two months, but it’s the movie I wanted to make — it’s good for kids and adults, and it’s a big, fun Batman movie that looks at him in a way that the other Batman movies can’t.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.


Ingenuity Studios helps VFX-heavy spot get NASCAR-ready

Hollywood-based VFX house Ingenuity Studios recently worked on a 60-second Super Bowl spot for agency Pereira & O’Dell promoting Fox Sports’ coverage of the Daytona 500, which takes place on February 26. The ad, directed by Joseph Kahn, features people from all over the country gearing up to watch the Daytona 500, including footage from NASCAR races, drivers and, for some reason, actor James Van Der Beek.

The Ingenuity team had only two weeks to turn around this VFX-heavy spot, called Daytona Day. Some CG elements include a giant robot, race cars and crowds. While they were working on the effects, Fox was shooting footage in Charlotte, North Carolina and Los Angeles.

“When we were initially approached about this project, we knew the turnaround would be a challenge,” explains creative director/VFX supervisor Grant Miller. “Editorial wasn’t fully locked until the Thursday before the big game! With such a tight deadline, preparing as much as we could in advance was key.”

Portions of the shoot took place at the Daytona Speedway, and since it was an off day the stadium and infield were empty. “In preparation, our CG team built the entire Daytona stadium while we were still shooting, complete with cheering CG crowds, RVs filling the interior, pit crews, etc.,” says Miller. “This meant that once shots were locked we simply needed to track the camera, adjust the lighting and render all the stadium passes for each shot.”

Additional shooting took place at the Charlotte Motor Speedway, Downtown Los Angeles and Pasadena, California.

In addition to prepping CG for set extensions, Ingenuity also got a head start on the giant robot that shows up halfway through the commercial.  “Once the storyboards were approved and we were clear on the level of detail required, we took our ‘concept bot’ out of ZBrush, retopologized and unwrapped it, then proceeded to do surfacing and materials in Substance Painter. While we had some additional detailing to do, we were able to get the textures 80 percent completed by applying a variety of procedural materials to the mesh, saving a ton of manual painting.”

Other effects work included over 40 CG NASCAR vehicles to fill the track, additional cars for the traffic jam and lots of greenscreen and roto work to get the scenes shot in Charlotte into Daytona. There was also a fair bit of invisible work that included cleaning up sets, removing rain, painting out logos, etc.

Other tools used include Autodesk’s Maya, The Foundry’s Nuke and BorisFX’s Mocha.

Review: Nvidia’s new Pascal-based Quadro cards

By Mike McCarthy

Nvidia has announced a number of new professional graphic cards, filling out their entire Quadro line-up with models based on their newest Pascal architecture. At the absolute top end, there is the new Quadro GP100, which is a PCIe card implementation of their supercomputer chip. It has similar 32-bit (graphics) processing power to the existing Quadro P6000, but adds 16-bit (AI) and 64-bit (simulation). It is intended to combine compute and visualization capabilities into a single solution. It has 16GB of new HBM2 (High Bandwidth Memory) and two cards can be paired together with NVLink at 80GB/sec to share a total of 32GB between them.

This powerhouse is followed by the existing P6000 and P5000, announced last July. The next addition to the line-up is the single-slot, VR-ready Quadro P4000. With 1,792 CUDA cores running at 1200MHz, it should outperform a previous-generation M5000 for less than half the price. It is similar to its predecessor, the M4000, in having 8GB RAM, four DisplayPort connectors and running on a single six-pin power connector. The P2000 follows with 1,024 cores at 1076MHz and 5GB of RAM, giving it similar performance to the K5000, which is nothing to scoff at. The P1000, P600 and P400 are all low-profile cards with Mini-DisplayPort connectors.

All of these cards run on PCIe Gen3 x16 and use DisplayPort 1.4, which adds support for HDR and DSC. They all support 4Kp60 output, with the higher-end cards allowing 5K and 4Kp120 displays. On the high-resolution display front, Nvidia continues to push forward, allowing up to 32 synchronized displays to be connected to a single system, provided you have enough slots for eight Quadro P4000 cards and two Quadro Sync II boards.

Nvidia also announced a number of Pascal-based mobile Quadro GPUs last month, with the mobile P4000 having roughly comparable specifications to the desktop version. But you can read the paper specs for the new cards elsewhere on the Internet. More importantly, I have had the opportunity to test out some of these new cards over the last few weeks, to get a feel for how they operate in the real world.

DisplayPorts

Testing
I was able to run tests and benchmarks with the P6000, P4000 and P2000 against my current M6000 for comparison. All of these tests were done on a top-end Dell 7910 workstation, with a variety of display outputs, primarily using Adobe Premiere Pro, since I am a video editor after all.

I ran a full battery of benchmark tests on each of the cards using Premiere Pro 2017. I measured both playback performance and encoding speed, monitoring CPU and GPU use, as well as power usage throughout the tests. I had HD, 4K, and 6K source assets to pull from, and tested monitoring with an HD projector, a 4K LCD and a 6K array of TVs. I had assets that were RAW R3D files, compressed MOVs and DPX sequences. I wanted to see how each of the cards would perform at various levels of production quality and measure the differences between them to help editors and visual artists determine which option would best meet the needs of their individual workflow.
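
For readers who want to reproduce this kind of logging, one simple approach is to poll nvidia-smi while a playback or export test runs. The sketch below assumes nvidia-smi is on the PATH and writes a CSV of power draw, GPU utilization and memory use; it is a generic example, not the exact harness I used.

```python
import csv, subprocess, time

def log_gpu(duration_s=60, interval_s=1.0, path="gpu_log.csv"):
    """Poll nvidia-smi while a playback or export test runs; requires nvidia-smi on PATH."""
    query = "power.draw,utilization.gpu,memory.used"
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["elapsed_s", "power_w", "gpu_util_pct", "mem_used_mib"])
        start = time.time()
        while time.time() - start < duration_s:
            out = subprocess.check_output(
                ["nvidia-smi", "--query-gpu=" + query, "--format=csv,noheader,nounits"],
                text=True)
            power, util, mem = [v.strip() for v in out.splitlines()[0].split(",")]
            writer.writerow([round(time.time() - start, 1), power, util, mem])
            time.sleep(interval_s)

# log_gpu(duration_s=300)  # start this, then kick off the playback or encode test
```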

I started with the intuitive expectation that the P2000 would be sufficient for most HD work, but that a P4000 would be required to effectively handle 4K. I also assumed that a top-end card would be required to play back 6K files and split the image between my three Barco Escape formatted displays. And I was totally wrong.

Aside from when using the higher-end options within Premiere’s Lumetri-based color corrector, all of the cards were fully capable of every editing task I threw at them. To be fair, the P6000 usually renders out files about 30 percent faster than the P2000, but that is a minimal difference compared to the difference in cost. Even the P2000 was able to play back my uncompressed 6K assets onto my array of Barco Escape displays without issue. It was only when I started making heavy color changes in Lumetri that I began to observe any performance differences at all.

Lumetri

Color correction is an inherently parallel, graphics-related computing task, so this is where GPU processing really shines. Premiere’s Lumetri color tools are based on SpeedGrade’s original CUDA processing engine, and they can really harness the power of the higher-end cards. The P2000 can make basic corrections to 6K footage, but it is possible to max out the P6000 with HD footage if I adjust enough different parameters. Fortunately, most people aren’t looking for footage more stylized than 300 was, so in this case my original assumptions seem to be accurate. The P2000 can handle reasonable corrections to HD footage, the P4000 is probably a good choice for VR and 4K footage, while the P6000 is the right tool for the job if you plan to do a lot of heavy color tweaking or are working on massive frame sizes.
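
To illustrate why color correction parallelizes so well: a primary correction touches every pixel independently, so the work maps naturally onto thousands of GPU cores. The lift/gamma/gain function below is a simplified CPU-side sketch of that idea, not Lumetri’s actual math.

```python
import numpy as np

def lift_gamma_gain(frame, lift=0.0, gamma=1.0, gain=1.0):
    """Primary color correction applied to every pixel independently; this
    independence is what lets a GPU spread the work across thousands of cores."""
    corrected = np.clip(frame * gain + lift, 0.0, 1.0)
    return corrected ** (1.0 / gamma)

frame = np.random.rand(2160, 3840, 3).astype(np.float32)   # stand-in UHD frame
graded = lift_gamma_gain(frame, lift=0.02, gamma=1.1, gain=1.05)
```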

The other way I expected to be able to measure a difference between the cards would be in playback while rendering in Adobe Media Encoder. By default, Media Encoder pauses exports during timeline playback, but this behavior can be disabled by reopening Premiere after queuing your encode. Even with careful planning to avoid reading from the same disks as the encoder was accessing from, I was unable to get significantly better playback performance from the P6000 compared to the P2000. This says more about the software than it says about the cards.

P6000

The largest difference I was able to consistently measure across the board was power usage, with each card averaging about 30 watts more as I stepped up from the P2000 to the P4000 to the P6000. But they all are far more efficient than the previous M6000, which frequently sucked up an extra 100 watts in the same tests. While “watts” may not be a benchmark most editors worry too much about, among other things it does equate to money for electricity. Lower wattage also means less cooling is needed, which results in quieter systems that can be kept closer to the editor without being distracting from the creative process or interfering with audio editing. It also allows these new cards to be installed in smaller systems with smaller power supplies, using up fewer power connectors. My HP Z420 workstation only has one 6-pin PCIe power plug, so the P4000 is the ideal GPU solution for that system.
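
As a rough, illustrative calculation (the usage hours and electricity rate below are assumptions, not measurements), the wattage differences translate into money roughly like this:

```python
# Back-of-the-envelope electricity cost of the measured wattage differences.
hours_per_year = 2000   # assumed full-time edit bay use
rate_per_kwh = 0.15     # assumed price in USD per kWh

for label, watts_delta in [("each ~30W step between Pascal cards", 30),
                           ("M6000 vs. the Pascal cards", 100)]:
    extra_kwh = watts_delta * hours_per_year / 1000.0
    print(f"{label}: {extra_kwh:.0f} kWh, about ${extra_kwh * rate_per_kwh:.2f} per year")
```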

Summing Up
It appears that we have once again reached a point where hardware processing capabilities have surpassed the software capacity to use them, at least within Premiere Pro. This leads to the cards performing relatively similar to one another in most of my tests, but true 3D applications might reveal much greater differences in their performance. Further optimization of CUDA implementation in Premiere Pro might also lead to better use of these higher-end GPUs in the future.


Mike McCarthy is an online editor and workflow consultant with 10 years of experience on feature films and commercials. He has been on the forefront of pioneering new solutions for tapeless workflows, DSLR filmmaking and now multiscreen and surround video experiences. If you want to see more specific details about performance numbers and benchmark tests for these Nvidia cards, check out techwithmikefirst.com.