Category Archives: VFX

An image scientist weighs in about this year’s SciTech winners

While this year’s Oscar broadcast was unforgettable due to the mix-up in naming the Best Picture, many in the industry also remember actors Leslie Mann and John Cho joking about how no one understands what the SciTech Awards are about. Well, Shed’s SVP of imaging science, Matthew Tomlinson, was kind enough to answer some questions about the newest round of winners and what the technology means to the industry.

As an image scientist, what was the most exciting thing about this year’s Oscars’ Scientific and Technical Awards?
As an imaging scientist, I was excited about the five digital cameras — Viper, Genesis, Sony F65, Red Epic and Arri Alexa — that received accolades. I’ve been working with each of these cameras for years, and each of them has had a major impact on the industry. They’ve pioneered the digital revolution and have set a very high standard for future cameras that appear on the market.

The winners of the 2017 SciTech Awards. Credit: Todd Wawrychuk/A.M.P.A.S.

Another exciting aspect is that you actually have access to your “negative” with digital cameras and, if need be, you can make adjustments to that negative after you’ve exposed it. It’s an incredibly powerful option that we haven’t even realized the full potential of yet.

From an audience perspective, even though they’ll never know it, the facial performance capture solving system developed by ILM, as well as the facial performance-based software from Digital Domain and Sony Pictures Imageworks, is incredibly exciting. The industry is continuously pushing the boundaries of the scope of the visual image. As stories become more expansive, this technology helps the audience to engage with aliens or creatures that are created by a computer but based on the actions, movements and emotions of an actor. This is helping blur the lines between reality and fantasy. The best part is that these tools help tell stories without calling attention to themselves.

Which category or discipline saw the biggest advances from last year to this year? 
The advancements in each technology that received an award this year are based on years of work behind the scenes leading up to this moment. I will say that from an audience perspective, the facial animation advancements were significant this past year. We’re reaching a point where audiences are unaware that major characters are synthetic or modified. It’s really mind-blowing when you think about it.

Sony’s Toshihiko Ohnishi.

Which of the advancements will have the biggest impact on the work that you do, specifically?
The integration of digital cameras and intermixing various cameras into one project. It’s pretty common nowadays to see Sony, Alexa and Red cameras all used on the same project. Each one of these cameras comes with its own inherent colorspace and particular attributes, but part of my job is to make sure they can all work together — that we can interweave the various files they create — without the colorist having to do a lot of technical heavy lifting. Part of my job as an imaging scientist is handling the technicalities so that when creatives, such as the director, cinematographer and colorist, come together, they can concentrate on the art and don’t have to worry much about the technical aspects at all.
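
To make that concrete, here is a minimal, hedged sketch of the idea: each camera’s log-encoded footage is linearized and mapped through a per-camera matrix into one shared working space before grading. The matrices and curve parameters below are placeholders for illustration only; a real pipeline would use the manufacturers’ published transforms or ACES input transforms, not these values.

```python
import numpy as np

# Illustrative sketch only: the 3x3 matrices and log curve below are
# placeholders, not actual vendor transforms. A real pipeline would use
# ACES IDTs or manufacturer-published math for each camera.

# Hypothetical per-camera gamut-to-working-space matrices (placeholder values).
CAMERA_TO_WORKING = {
    "alexa": np.array([[ 1.02, -0.01, -0.01],
                       [-0.02,  1.03, -0.01],
                       [ 0.00, -0.04,  1.04]]),
    "red":   np.array([[ 0.98,  0.02,  0.00],
                       [ 0.01,  0.97,  0.02],
                       [ 0.00,  0.03,  0.97]]),
}

def log_to_linear(rgb, a=5.555556, b=0.052272, c=0.247190, d=0.385537):
    """Generic log-to-linear curve (parameters are illustrative, not a real camera curve)."""
    return (10.0 ** ((rgb - d) / c) - b) / a

def to_working_space(rgb_log, camera):
    """Bring one camera's log-encoded RGB into the shared working space."""
    linear = log_to_linear(np.asarray(rgb_log, dtype=float))
    return CAMERA_TO_WORKING[camera] @ linear

# Footage from different cameras now lives in one space before the colorist sees it.
print(to_working_space([0.45, 0.40, 0.38], "alexa"))
print(to_working_space([0.45, 0.40, 0.38], "red"))
```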

Are you planning to use, or have you already begun using, any of these innovations in your work?

The digital cameras are very much part of my everyday life. Also, in working with a VFX house, I like to provide the knowledge and tools to help them view the imagery as it will be seen in the DI. The VFX artist spends an incredible amount of time and effort on every pixel they work on and it’s a real goal of mine to make sure that the work that they create is the best it can be throughout the DI.

Flavor Detroit welcomes VFX artist/designer Scott Stephens

Twenty-year industry veteran Scott Stephens has joined Flavor Detroit as senior VFX artist/designer. Previously lead designer at Postique, Stephens was also a key part of post boutique Section 8, which he co-founded and where he served as lead designer since its launch in 2001.

Known for his work with top brands and directors on major commercial campaigns for Blue Cross Blue Shield (BCBS), Chrysler, Expedia, Food Network, Mazda and Six Flags, to name but a few, Stephens also brings vast experience creating content that maximizes unique environments and screens of all sizes.

Recent projects include the Amazon Kindle release in Times Square, the Ford Focus theatrical release for the Electric Music Festival, BCBS media for the Pandora app, Buick’s multi-screen auto show installations and the Mount St. Helens installation for the National Park Service.


The A-list — Kong: Skull Island director Jordan Vogt-Roberts

By Iain Blair

Plucky explorers! Exotic locations! A giant ape! It can only mean one thing: King Kong is back… again. This time, Warner Bros. and Legendary Pictures’ new Kong: Skull Island re-imagines the origin of the mythic Kong in an original adventure from director Jordan Vogt-Roberts (The Kings of Summer).

Jordan Vogt-Roberts

With an all-star cast that includes Tom Hiddleston, Samuel L. Jackson, Oscar-winner Brie Larson, John Goodman and John C. Reilly, it follows a diverse team of explorers as they venture deep into an uncharted island in the Pacific — as beautiful as it is treacherous — unaware that they’re crossing into the domain of the mythic Kong.

The legendary Kong was brought to life on a whole new scale by Industrial Light & Magic, with two-time Oscar-winner Stephen Rosenbaum (Avatar, Forrest Gump) serving as visual effects supervisor.

To fully immerse audiences in the mysterious Skull Island, Vogt-Roberts, his cast and filmmaking team shot across three continents over six months, capturing its primordial landscapes on Oahu, Hawaii — where shooting commenced in October 2015 — on Australia’s Gold Coast and, finally, in Vietnam, where production took place across multiple locations, some of which have never before been seen on film. Kong: Skull Island was released worldwide in 2D, 3D and IMAX beginning March 10.

I spoke with Vogt-Roberts about making the film and his love of post.

What’s the eternal appeal of doing a King Kong movie?
He’s King Kong! But the appeal is also this burden, as you’re playing with film history and this cinematic icon of pop culture. Obviously, the 1933 film is this impeccable genre story, and I’m a huge fan of creature features and people like Ray Harryhausen. I liked the idea of taking my love for all that and then giving it my own point of view, my sense of style and my voice.

With just one feature film credit, you certainly jumped in the deep end with this — pun intended — monster production, full of complex moving parts and cutting-edge VFX. How scary was it?
Every movie is scary because I throw myself totally into it. I vanish from the world. If you asked my friends, they would tell you I completely disappear. Whether it’s big or small, any film’s daunting in that sense. When I began doing shorts and my own stuff, I did the shooting, the lighting, the editing and so on, and I thrived off all that new knowledge, so even all the complex VFX stuff wasn’t that scary to me. The truly daunting part is that a film like this is two and a half years of your life! It’s a big sacrifice, but I love a big challenge like this one.

What were the biggest challenges, and how did you prepare?
How do you make it special — and relevant in 2017? I’m a bit of a masochist when it comes to a challenge, and when I made the jump to The Kings of Summer it really helped train me. But there are certain things that are the same as they always are, such as there’s never enough time or money or daylight. Then there are new things on a movie of this size, such as the sheer endurance you need and things you simply can’t prepare yourself for, like the politics involved, all the logistics and so on. The biggest thing for me was, how do I protect my voice and point of view and make sure my soul is present in the movie when there are so many competing demands? I’m proud of it, because I feel I was able to do that.

How early on did you start integrating post and all the VFX?
Very early on — even before we had the script ready. We had concept artists and began doing previs and discussing all the VFX.

Did you do a lot of previs?
I’m not a huge fan of it. Third Floor did it and it’s a great tool for communicating what’s happening and how you’re going to execute it, but there’s also that danger of feeling like you’re already making the movie before you start shooting it. Think of all the great films like Blade Runner and the early Star Wars films, all shot before they even had previs, whereas now it’s very easy to become too reliant on it; you can see a movie sequence where it just feels like you’re watching previs come to life. It’s lost that sense of life and spontaneity. We only did three previs sequences — some only partially — and I really stressed with the crew that it was only a guide.

Where did you do the post?
It was all done at Pivotal in Burbank, and we began cutting as we shot. The sound mix was done at Skywalker and we did our score in London.

Do you like the post process?
I love post. I love all aspects of production, but post is where you write the film again and where it ceases being what was on the page and what you wanted it to be. Instead you have to embrace what it wants to be and what it needs to be. I love repurposing things and changing things around and having those 3am breakthroughs! If we move this and use that shot instead, then we can cut all that.

You had three editors — Richard Pearson, Bob Murawski and Josh Schaeffer. How did that work?
Rick and Bob ran point, and Rick was the lead. Josh was the editor who had done The Kings of Summer with me, and my shorts. He really understands my montages and comedy. It was so great that Rick and Bob were willing to bring him on, and they’re all very different editors with different skills — and all masters of their craft. They weren’t on set, except for Hawaii. Once we were really globe-trotting, they were in LA cutting.

VFX play a big role. Can you talk about working on them with VFX supervisor Jeff White and ILM, who did the majority of the effects work?
He ran the team there, and they’re all amazing. It was a dream come true for me. They’re so good at taking kernels of ideas and turning them into reality. I was able to do revisions as I got new ideas. Creating Kong was the big one, and it was very tricky because the way he moves isn’t totally realistic. It’s very stylized, and Jeff really tapped into my anime and videogame sensibility for all that. We also used Hybride and Rodeo for some shots.

What was the hardest VFX sequence to do?
The helicopter sequence was really very difficult, juggling the geography of that, with this 100-foot creature and people spread all over the island, and also the final battle sequence. The VFX team and I constantly asked ourselves, “Have we seen this before? Is it derivative? Is it redundant?” The goal was to always keep it fresh and exciting.

Where did you do the DI?
At Fotokem with colorist Dave Cole who worked on The Lord of the Rings and so many others. I love color, and we did a lot of very unusual stuff for a movie like this, with a lot of saturation.

Did the film turn out the way you hoped?
A movie never quite turns out the way you hope or think it will, but I love the end result and I feel it represents my voice. I’m very proud of what we did.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.


Recreating history for Netflix’s The Crown

By Randi Altman

If you, like me, binge-watched Netflix’s The Crown, you are now considerably better educated on the English monarchy, have a very different view of Queen Elizabeth, and were impressed with the show’s access to Buckingham Palace.

Well, it turns out they didn’t actually have access to the Palace. This is where London-based visual effects house One of Us came in. While the number of shots provided for the 10-part series varied, the average was 43 per episode.

In addition to Buckingham Palace, One of Us worked on photoreal digital set extensions, crowd replications and environments, including Downing Street and London Airport. The series follows a young Elizabeth who inherits the crown after her father, King George VI, dies. We see her transition from a vulnerable young married lady to a more mature woman who takes her role as monarch very seriously.

We reached out to One of Us VFX supervisor Ben Turner to find out more.

How early did you join the production?
One of Us was heavily involved during an eight-month pre-production process, until shooting commenced in July 2015.

Ben Turner

Did they have a clear vision of what they needed to be VFX vs. practical?
As we were involved from the pre-production stage, we were able to engage in discussions about how best to approach shooting the scenes with the VFX work in mind. It was important to us and the production that actors interacted with real set pieces and the VFX work would be “thrown away” in the background, not drawing attention to itself.

Were you on set?
I visited all relevant locations, assisted on set by Jon Pugh, who gathered all the VFX data required. I would attend all recces at these locations, and then supervise on the shoot days.

Did you do previs? If so, what software did you use?
We didn’t do much previs in the traditional sense. We did some tech-vis to help us figure out how best to film some things, such as the arrivals at the gates of Buckingham Palace and the Coronation sequence. We also did some concept images to help inform the shoot and design of some scenes. This work was all done in Autodesk Maya, The Foundry’s Nuke and Adobe Photoshop.

Were there any challenges in working in 4K? Did your workflow change at all, and how much of your work currently is in 4K?
Working in 4K didn’t really change our workflow too much. At One of Us, we are used to working on film projects that come in all different shapes and sizes (we recently completed work on Terrence Malick’s Voyage of Time in IMAX 5K), but for The Crown we invested in the infrastructure that enabled us to take it in our stride — larger and faster disks to hold the huge amounts of data, as well as a new 4K monitor to review all the work.


What were some of your favorite, or most challenging, VFX for the show?
The most challenging work was the kind of shots that many people are already very familiar with. So the Queen’s Coronation, for example, was watched by 20 million people in 1953, and with Buckingham Palace and Downing Street being two of the most famous and recognizable addresses in the world, there wasn’t really anywhere for us to hide!

Some of my favorite shots are the ones where we were recreating real events for which there are amazing archive references, such as the tilt down on the scaffolding at Westminster Abbey on the eve of the Coronation, or the unveiling of the statue of King George VI.


Can you talk about the tools you used, and did you create any proprietary tools during the workflow?
We used Enwaii and Maya for photogrammetry, Photoshop for digital matte painting and Nuke for compositing. For crowd replication we created our own in-house 2.5D tool in Nuke, which was a card generator that gave the artist a choice of crowd elements, letting them choose the costume, angle, resolution and actions required.
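
As an illustration of the concept (not One of Us’s actual tool), a card generator like this can be scripted with Nuke’s Python API: read a pre-shot crowd element and attach it to a positioned 3D card. The element paths and parameter choices below are hypothetical.

```python
# Minimal sketch of a 2.5D crowd-card generator using Nuke's Python API.
# The element library paths and naming are hypothetical; the real One of Us
# tool exposed costume, angle, resolution and action choices to artists.
import nuke

def add_crowd_card(element_path, translate=(0.0, 0.0, 0.0), rotate=(0.0, 0.0, 0.0), scale=1.0):
    """Read a pre-shot crowd element and place it on a 3D card in the scene."""
    read = nuke.nodes.Read(file=element_path)   # e.g. a keyed crowd plate
    card = nuke.nodes.Card2()                   # 2.5D card that carries the element
    card.setInput(0, read)
    card['translate'].setValue(list(translate))
    card['rotate'].setValue(list(rotate))
    card['uniform_scale'].setValue(scale)
    return card

# Example: scatter a few background spectators at increasing depth.
for i, z in enumerate([-20.0, -35.0, -50.0]):
    add_crowd_card("/elements/crowd/coronation_wave_v01.%04d.exr",
                   translate=(i * 2.5, 0.0, z), scale=1.0 + 0.02 * i)
```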

What are you working on now?
We are currently hard at work on Season 2 of The Crown, which is going to be even bigger and more ambitious, so watch this space! Recent work also includes King Arthur: Legend Of The Sword (Warner Bros.) and Assassin’s Creed (New Regency).


Dog in the Night director/DP Fletcher Wolfe

By Cory Choy

Silver Sound Showdown Music + Video Festival is unique in two ways. First, it is both a music video festival and a battle of the bands. Second, every year we pair up the Grand Prize winners, director and band, and produce a music video with them. The budget is determined by the festival’s ticket sales.

I conceived of the festival, which is held each year at Brooklyn Bowl, as a way to both celebrate and promote artistic collaboration between the film and music communities — two crowds that just don’t seem to intersect often enough. One of the most exciting things for me is then working with extremely talented filmmakers and musicians who have more often than not met for the first time at our festival.

Dog in the Night (song written by winning band Side Saddle) was one of our most ambitious videos to date — using a combination of practical and post effects. It was meticulously planned and executed by director/cinematographer Fletcher Wolfe, who was not only a pleasure to work with, but was gracious enough to sit down with me for a discussion about her process and the experience of collaborating.

What was your favorite part of making Dog in the Night?
As a music video director I consider it my first responsibility to get to know the song and its meaning very intimately. This was a great opportunity to stretch that muscle, as it was the first time I was collaborating with musicians who weren’t already close friends. In fact, I hadn’t even met them before the Showdown. I found it to be a very rewarding experience.

What is Dog in the Night about?
The song Dog in the Night is, quite simply, about a time when the singer Ian (a.k.a. Angler Boy) is enamored with a good friend, but that friend doesn’t share his romantic feelings. Of course, anyone who has been in that position (all of us?) knows that it’s never that simple. You can hear him holding out hope, choosing to float between friendship and possibly dating, and torturing himself in the process.

I decided to use dusk in the city to convey that liminal space between relationship labels. I also wanted to play on the nervous and lonely tenor of the track with images of Angler Boy surrounded by darkness, isolated in the pool of light coming from the lure on his head. I had the notion of an anglerfish roaming aimlessly in an abyss, hoping that another angler would find his light and end his loneliness. The ghastly head also shows that he doesn’t feel like he has anything in common with anybody around him except the girl he’s pining after, who he envisions having the same unusual head.

What did you shoot on?
I am a DP by trade, and always shoot the music videos I direct. It’s all one visual storytelling job to me. I shot on my Alexa Mini with a set of Zeiss Standard Speed lenses. We used the 16mm lens on the Snorricam in order to see the darkness around him and to distort him to accentuate his frantic wanderings. Every lens in the set weighed in at just 1.25lbs, which is amazing.

The camera and lenses were an ideal pairing, as I love the look of both, and their light weight allowed me to get the rig down to 11lbs in order to get the Snorricam shots. We didn’t have time to build our own custom Snorricam vest, so I found one that was ready to rent at Du-All Camera. The only caveats were that it could only handle up to 11lbs, and the vest was quite large, meaning we needed to find a way to hide the shoulders of the vest under Ian’s wardrobe. So, I took a cue from Requiem for a Dream and used winter clothing to hide the bulky vest. We chose a green and brown puffy vest that held its own shape over the rig-vest, and also suited the character.

I chose a non-standard 1.5:1 aspect ratio, because I felt it suited framing for the anglerfish head. To maximize resolution and minimize data, I shot 3.2K at a 1.78:1 aspect ratio and cropped the sides. It’s easy to build custom framelines in the Alexa Mini for accurate framing on set. On the Mini, you can also dial in any frame rate between 0.75-60fps (at 3.2K). Thanks to digital cinema cameras, it’s standard these days to over-crank and have the ability to ramp to slow motion in post. We did do some of that; each time Angler Boy sees Angler Girl, his world turns into slow motion.
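
For reference, the crop math works out as follows, assuming the Alexa Mini’s 3.2K mode records a 3200 x 1800 (1.78:1) frame:

```python
# Assuming a 3200 x 1800 (1.78:1) 3.2K recording, cropping the sides to the
# chosen 1.5:1 frame keeps full vertical resolution.
src_w, src_h = 3200, 1800
target_ar = 1.5

crop_w = round(src_h * target_ar)          # 2700
px_lost_per_side = (src_w - crop_w) // 2   # 250
print(crop_w, src_h, px_lost_per_side)     # 2700 1800 250
```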

In contrast, I wanted his walking around alone to be more frantic, so I did something much less common and undercranked to get a jittery effect. The opening shot was shot at 6fps with a 45-degree shutter, and Ian walked in slow motion to a recording of the track slowed down to quarter-time, so his steps are on the beat. There are some Snorricam shots that were shot at 6fps with a standard 180-degree shutter. I then had Ian spin around to get long motion blur trails of lights around him. I knew exactly what frame rate I wanted for each shot, and we wound up shooting at 6fps, 12fps, 24fps, 48fps and 60fps, each for a different emotion that Angler Boy is having.
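
Those frame-rate and shutter choices translate directly into per-frame exposure times, which is why 6fps at 45 degrees stays crisp while 6fps at 180 degrees smears lights into trails. A quick calculation using the standard rotary-shutter relation:

```python
def exposure_seconds(fps, shutter_angle_deg):
    """Per-frame exposure time for a rotary-shutter model."""
    return (shutter_angle_deg / 360.0) / fps

# 6 fps with a 45-degree shutter exposes each frame for ~1/48 s, the same
# blur per frame as normal 24 fps / 180 degrees, but played back at 24 fps
# the action runs 4x fast and feels jittery.
print(exposure_seconds(6, 45))    # ~0.0208 s
print(exposure_seconds(24, 180))  # ~0.0208 s

# 6 fps with a 180-degree shutter holds each frame open for 1/12 s,
# long enough to smear spinning lights into motion-blur trails.
print(exposure_seconds(6, 180))   # ~0.0833 s
```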

Why practical vs. CG for the head?
Even though the fish head is a metaphor for Angler Boy’s emotional state, and is not supposed to be real, I wanted it to absolutely feel real to both the actor and the audience. A practical, and slightly unwieldy, helmet/mask helped Ian find his character. His isolation needed to be tangible, and how much he is drawn to Angler Girl as a kindred spirit needed to be moving. It’s a very endearing and relatable song, and there’s something about homemade, practical effects that checks both those boxes. The lonely pool of light coming from the lure was also an important part of the visuals, and it needed to play naturally on their faces and the fish mask. I wired Lite Gear LEDs into the head, which was the easy part. Our incredibly talented fabricator, Lauren Genutis, had the tough job — fabricating the mask from scratch!

The remaining VFX hurdle then was duplicating the head. We only had the time and money to make one and fit it to both actors with foam inserts. I planned the shots so that you almost never see both actors in the same shot at the same time, which kept the number of composited shots to a minimum. It also served to maintain the emotional disconnect between his reality and hers. When you do see them in the same shot, it’s to punctuate when he almost tells her how he feels. To achieve this I did simple split screens, using the Pen Tool in Premiere to cut the mask around their actions, including when she touches his knee. To be safe, I shot takes where she doesn’t touch his knee, but none of them conveyed what she was trying to tell him. So, I did a little smooshing around of the two shots and some patching of the background to make it so the characters could connect.

Where did you do post?
We were on a very tight budget, so I edited at home, and I always use Adobe Premiere. I went to my usual colorist, Vladimir Kucherov, for the grade. He used Blackmagic Resolve, and I love working with him. He can always see how a frame could be strengthened by a little shaping with vignettes. I’ll finally figure out what nuance is missing, and when I tell him, he’s already started working on that exact thing. That kind of shaping was especially helpful on the day exteriors, since I had hoped for a strong sunset, but instead got two flat, overcast days.

The only place we didn’t see eye to eye on this project was saturation — I asked him to push saturation farther than he normally would advise. I wanted a cartoon-like heightening of Angler Boy’s world and emotions. He’s going through a period in which he’s feeling very deeply, but by the time of writing the song he is able to look back on it and see the humor in how dramatic he was being. I think we’ve all been there.

What did you use VFX for?
Besides having to composite shots of the two actors together, there were just a few other VFX shots, including dolly moves that I stabilized with the Warp Stabilizer plug-in within Premiere. We couldn’t afford a real dolly, so we put a two-foot riser on a Dana Dolly to achieve wide push-ins on Ian singing. We were rushing to catch dusk between rainstorms, and it was tough to level the track on grass.

The final shot is a cartoon night sky composited with a live shot. My very good friend, Julie Gratz of Kaleida Vision, made the sky and animated it. She worked in Adobe After Effects, which communicates seamlessly with Premiere. Julie and I share similar tastes for how unrealistic elements can coexist with a realistic world. She also helped me in prep, giving feedback on storyboards.

Do you like the post process?
I never used to like post. I’ve always loved being on set, in a new place every day, moving physical objects with my hands. But, with each video I direct and edit I get faster and improve my post working style. Now I can say that I really do enjoy spending time alone with my footage, finding all the ways it can convey my ideas. I have fun combining real people and practical effects with the powerful post tools we can access even at home these days. It’s wonderful when people connect with the story, and then ask where I got two anglerfish heads. That makes me feel like a wizard, and who doesn’t like that?! A love of movie magic is why we choose this medium to tell our tales.


Cory Choy, Silver Sound Showdown festival director and co-founder of Silver Sound Studios, produced the video.


Andrew Bell named MD at Boston’s Brickyard VFX, Amy Appleton now EP

Andrew Bell has been named managing director of Brickyard VFX in Boston, known as Brickyard VFX Atlantic. The studio has also promoted long-time senior VFX producer Amy Appleton to executive producer.

Bell comes to Brickyard from MPC, where he spent more than 15 years, most recently as managing director of MPC Los Angeles. He began his career in 1999 as a PA with MPC in London, working his way up through the company and handling production responsibilities for several years before coming to LA in 2008 to help open the studio’s first expansion facility as head of production. Bell became managing director in 2011, a role he held until fall 2016, when he began consulting for Apple.

Appleton has been at Brickyard for 10 years, producing hundreds of projects for clients such as Frito-Lay, Progressive Insurance, Cadillac, Columbia Sportswear and Royal Caribbean.

In addition to the management moves, Brickyard’s East Coast studio has recently upgraded its technology infrastructure and is installing a grand roof deck with views of the downtown Boston skyline.

“We have just received four Flame workstations featuring dual 12-core 3GHz processors and the new Nvidia P6000 graphics cards,” reports Brickyard founder Dave Waller. “For our VR department, we’ve upgraded to Cara VR 1.0 from The Foundry, and to improve efficiencies in our data center, we’re moving our archiving workflow over to an LTO-7 tape system. This will more than double the speed at which we archive and restore our work. All these improvements will enable us to respond to our dear clients’ requests even faster.”


Lost in Time game show embraces ‘Interactive Mixed Reality’

By Daniel Restuccio

The Future Group — which has partnered with Fremantle Media, Ross Video and Epic Games — has created a new super-agile entertainment platform that blends linear television and game technology into a hybrid format called “Interactive Mixed Reality.”

The brainchild of Bård Anders Kasin, this innovative content deployment medium generated a storm of industry buzz at NAB 2016, and its first production, Lost in Time — a weekly primetime game show — is scheduled to air this month on Norwegian television.

The Idea
The idea originated more than 13 years ago in Los Angeles. In 2003, at age 22, Kasin, a self-taught multimedia artist from Notodden, Norway, sent his CV and a bunch of media projects to Warner Bros. in Burbank, California, in hopes of working on The Matrix. They liked it. His interview was on a Wednesday and by Friday he had a job as a technical director.

Kasin immersed himself in the cutting-edge movie revolution that was The Matrix franchise. The Wachowskis’ visionary production was a masterful inspiration and featured a compelling sci-fi action story, Oscar-winning editing, breakthrough visual effects (“bullet-time”) and an expanded media universe that included video games and the anime-style short-film anthology The Animatrix. The Matrix Reloaded and The Matrix Revolutions were shot at the same time, as well as more than an hour of footage specifically designed for the video game. The Matrix Online, an Internet gaming platform, was a direct sequel to The Matrix Revolutions.

L-R: Bård Anders Kasin and Jens Petter Høili.

Fast forward to 2013 and Kasin has connected with software engineer and serial entrepreneur Jens Petter Høili, founder of EasyPark and Fairchance. “There was this producer I knew in Norway,” explains Kasin, “who runs this thing called the Artists’ Gala charity. He called and said, ‘There’s this guy you should meet. I think you’ll really hit it off.’” Kasin and Høili met for lunch and discussed the projects each was working on. “We both immediately felt there was a connection,” recalls Kasin. No persuading was necessary. “We thought that if we combined forces we were going to get something that’s truly amazing.”

That meeting of the minds led to the merging of their companies and the formation of The Future Group. The mandate of Oslo-based The Future Group is to revolutionize the television medium by combining linear TV production with cutting-edge visual effects, interactive gameplay, home viewer participation and e-commerce. Their IMR concept ditches the limiting, individual virtual reality (VR) headset but keeps the idea of creating content that is a multi-level, intricate and immersive experience.

Lost in Time
Fast forward again, this time to 2014. Through another mutual friend, The Future Group formed an alliance with Fremantle Media. Fremantle, a global media company, has produced some of the highest-rated and longest-running shows in the world, and is responsible for top international entertainment brands such as Got Talent, Idol and The X Factor.

Kasin started developing the first IMR prototype. At this point, the Lost in Time production had expanded to include Ross Video and Epic Games. Ross Video is a broadcast technology innovator and Epic Games is a video game producer and the inventor of the Unreal game engine. The Future Group, in collaboration with Ross Video, engineered the production technology and developed a broadcast-compatible version of the Unreal game engine called Frontier, shown at NAB 2016, to generate high-resolution, realtime graphics used in the production.

On January 15, 2015, the first prototype was shown. When Fremantle saw the prototype, they were amazed. They went directly to stage two, moving to the larger stages at Dagslys Studios. “Lost in Time has been the driver for the technology,” explains Kasin. “We’re a very content-driven company. We’ve used that content to drive the development of the platform and the technology, because there’s nothing better than having actual content to set the requirements for the technology rather than building technology for general purposes.”

In Lost in Time, three studio contestants are set loose on a greenscreen stage and perform timed, physical game challenges. The audience, which could be watching at home or on a mobile device, sees the contestant seamlessly blended into a virtual environment built out of realtime computer graphics. The environments are themed as western, ice age, medieval times and Jurassic period sets (among others) with interactive real props.

The audience can watch the contestants play the game or participate in the contest as players on their mobile device at home, riding the train or literally anywhere. They can play along or against contestants, performing customized versions of the scripted challenges in the TV show. The mobile content uses graphics generated from the same Unreal engine that created the television version.

“It’s a platform,” reports partner Høili, referring to the technology behind Lost in Time. A business model is a way you make money, notes tech blogger Jonathan Clarks, and a platform is something that generates business models. So while Lost in Time is a specific game show with specific rules, built on television technology, it’s really a business technology framework where multiple kinds of interactive content could be generated. Lost in Time is like the Unreal engine itself, software that can be used to create games, VR experiences and more, limited only by the imagination of the content creator. What The Future Group has done is create a high-tech kitchen from which any kind of cuisine can be cooked up.

Soundstages and Gear
Lost in Time is produced on two greenscreen soundstages at Dagslys Studios in Oslo. The main “gameplay set” takes up all of Studio 1 (5,393 square feet) and the “base station set” is on Studio 3 (1,345 square feet). Over 150 liters (40 gallons) of ProCyc greenscreen paint was used to cover both studios.

Ross Video, in collaboration with The Future Group, devised an integrated technology of hardware and software that supports the Lost in Time production platform. This platform consists of custom cameras, lenses, tracking, control, delay, chroma key, rendering, greenscreen, lighting and switcher technology. This system includes the new Frontier hardware, introduced at NAB 2016, which runs the Unreal game engine 3D graphics software.

Eight Sony HDC-2500 cameras running HZC-UG444 software are used for the production. Five are deployed on the “gameplay set.” One camera rides on a technocrane, two are on manual pedestal dollies and one is on Steadicam. For fast-action tracking shots, another camera sits on the Furio RC dolly that rides on a straight track that runs the 90-foot length of the studio. The Furio RC pedestal, controlled by SmartShell, guarantees smooth movement in virtual environments and uses absolute encoders on all axes to send complete 3D tracking data into the Unreal engine.

There is also one Sony HDC-P1 camera that is used as a static, center-stage ceiling cam flying 30 feet above the gameplay set. There are three cameras in the base station set, two on Furio Robo dollies and one on a technocrane. In the gameplay set, all cameras (except the ceiling cam) are tracked with the SolidTrack IR markerless tracking system.

All filming is done at 1080p25 and output as RGB 444 via SDI. They use a custom LUT on the cameras to avoid clipping and to preserve an expanded dynamic range for post work. All nine camera ISOs, the separate camera “clean feeds,” are recorded with a “flat” LUT in RGB 444. For all other video streams, including keying and compositing, they use LUT boxes to invert the signal back to Rec 709.

Barnfind provided the fiber optic network infrastructure that links all the systems. Ross Video Dashboard controls the BarnOne frames as well as the router, Carbonite switchers, Frontier graphics system and robotic cameras.

A genlock signal distributed via OpenGear syncs all the gear to a master clock. The Future Group added proprietary code to Unreal so the render engine can genlock, receive and record linear timecode (LTC) and output video via SDI in all industry standard formats. They also added additional functionality to the Unreal engine to control lights via DMX, send and receive GPI signals, communicate with custom sensors, buttons, switches and wheels used for interaction with the games and controlling motion simulation equipment.

In order for the “virtual cameras” in the graphics systems and the real cameras viewing the real elements to have exactly the same perspectives, an “encoded” camera lens is required that provides the lens focal length (zoom) and focus data. In addition, the virtual lens field of view (FOV) must be calibrated to match the FOV of the real lens. Full servo digital lenses with 16-bit encoders are needed for virtual productions. Lost in Time uses three Canon lenses with these specifications: Canon Hj14ex4.3B-IASE, Canon Hj22ex7.6B-IASE-A and Canon Kj17ex7.7B-IASE-A.
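
The FOV match described here follows from the pinhole relation between sensor width and focal length. Below is a small illustrative sketch; the linear encoder-to-focal-length mapping is a simplification (real lenses need a per-lens calibration table), and the sensor and lens figures are approximate.

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Pinhole-model horizontal field of view the virtual camera must reproduce."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

def encoder_to_focal_length(raw_16bit, f_min_mm, f_max_mm):
    """Hypothetical mapping from a 16-bit zoom encoder value to focal length.
    Real broadcast lenses require a per-lens calibration table rather than
    a simple linear fit, but the idea is the same."""
    t = raw_16bit / 65535.0
    return f_min_mm + t * (f_max_mm - f_min_mm)

# Example with a 2/3-inch broadcast sensor (~9.6 mm wide) and an
# approximate 7.6-168 mm zoom range; values are for illustration only.
sensor_w = 9.59
focal = encoder_to_focal_length(12000, 7.6, 168.0)
print(round(focal, 1), round(horizontal_fov_deg(sensor_w, focal), 2))
```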

The Lost in Time camera feeds are routed to the Carbonite family hardware: Ultrachrome HR, Carbonite production frame and Carbonite production switcher. Carbonite Ultrachrome HR is a stand-alone multichannel chroma key processor based on the Carbonite Black processing engine. On Lost in Time, the Ultrachrome switcher accepts the Sony camera RGB 444 signal and uses high-resolution chroma keyers, each with full control of delay management, fill color temperature for scene matching, foreground key and fill, and internal storage for animated graphics.
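
Ross’s Ultrachrome HR algorithm is proprietary, but the textbook green-difference key sketched below shows the kind of computation a chroma keyer performs: derive a matte from how far green exceeds the other channels, then suppress the leftover green spill.

```python
import numpy as np

def green_difference_key(rgb, gain=1.0):
    """Textbook green-difference keyer (not the Ultrachrome HR algorithm):
    matte from how much green exceeds the other channels, plus a simple despill."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    matte = np.clip(1.0 - gain * (g - np.maximum(r, b)), 0.0, 1.0)  # 0 = screen, 1 = foreground
    despilled = rgb.copy()
    despilled[..., 1] = np.minimum(g, np.maximum(r, b))             # clamp stray green spill
    return matte, despilled

# Two pixels: pure greenscreen vs. a skin tone.
pixels = np.array([[0.10, 0.85, 0.12],
                   [0.80, 0.60, 0.50]])
matte, fg = green_difference_key(pixels, gain=2.0)
print(matte)   # [0. 1.] -> screen pixel keyed out, skin tone kept
```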

Isolated feeds of all nine cameras are recorded, plus two quad-splits with the composited material and the program feed. Metus Ingest, a proprietary hardware solution from The Future Group, was used for all video recording. Metus Ingest can simultaneously capture and record up to six HD channels of video and audio from multiple devices on a single platform.

Post Production
While the system is capable of being broadcast live, they decided not to go live for the debut. Instead, they are doing only a modest amount of post to retain the live feel. That said, the post workflow on Lost in Time arguably sets a whole new post paradigm. “Post allows us to continue to develop the virtual worlds for a longer amount of time,” says Kasin. “This gives us more flexibility in terms of storytelling. We’re always trying to push the boundaries with the creative content: how we tell the story of the different challenges.”

All camera metadata, including position, rotation and lens data, as well as all game interaction, was recorded in the Unreal engine with a proprietary system. This allowed the graphics to be played back later as a recorded session, and it let the editors change any part of the graphics non-destructively. They could choose to replace 3D models or textures, or in post change the tracking or point of view of any of the virtual cameras, as well as add cameras for more virtual “coverage.”
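
As a rough illustration of what such a recording system has to capture per frame (the field names here are hypothetical, not The Future Group’s actual schema):

```python
# Sketch of the kind of per-frame record a tracked-camera replay system stores.
# Field names are hypothetical; the recorder built into Unreal is proprietary.
import json
from dataclasses import dataclass, asdict

@dataclass
class CameraSample:
    timecode: str                 # LTC, e.g. "10:24:13:07" at 25 fps
    camera_id: str                # which of the nine tracked cameras
    position: tuple               # studio-space translation (x, y, z)
    rotation: tuple               # pan, tilt, roll in degrees
    focal_length_mm: float        # from the lens zoom encoder
    focus_distance_m: float       # from the lens focus encoder

def record(samples, path):
    """Append samples as JSON lines so a session can be replayed or re-framed later."""
    with open(path, "a") as f:
        for s in samples:
            f.write(json.dumps(asdict(s)) + "\n")

record([CameraSample("10:24:13:07", "cam03", (120.0, 45.0, 180.0),
                     (12.5, -3.0, 0.0), 24.0, 3.2)], "session_ep01.jsonl")
```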

Lost in Time episodes were edited as a multicam project, based on the program feed, in Adobe Premiere CC. They have a multi-terabyte storage solution from Pixit Media running Tiger Technology’s workflow manager. “The EDL from the final edit is fed through a custom system, which then builds a timeline in Unreal to output EXR sequences for a final composite.”
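
The quoted EDL-driven conform starts with parsing the edit decisions themselves. A minimal sketch of that first step, converting CMX3600-style events into frame ranges at the show’s 25fps, might look like this; the reel naming is invented, and everything downstream of the parse (building the Unreal timeline, rendering EXRs) is The Future Group’s proprietary system and is only hinted at in the comments.

```python
# Sketch of the first step an EDL-driven conform needs: turning CMX3600-style
# events into frame ranges at 25 fps. The Unreal timeline build and EXR render
# that follow in the real pipeline are proprietary and not shown here.
import re

FPS = 25
EVENT = re.compile(
    r"^(\d+)\s+(\S+)\s+V\s+C\s+"
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})\s+"
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})"
)

def tc_to_frames(tc, fps=FPS):
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def parse_edl(text):
    """Yield (event, reel, source_in, source_out) in frames for each video cut."""
    for line in text.splitlines():
        m = EVENT.match(line.strip())
        if m:
            ev, reel, src_in, src_out, _, _ = m.groups()
            yield ev, reel, tc_to_frames(src_in), tc_to_frames(src_out)

sample = "001  CAM03  V  C  10:24:13:07 10:24:18:00 01:00:00:00 01:00:04:18"
print(list(parse_edl(sample)))   # [('001', 'CAM03', 936332, 936450)]
```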

That’s it for now, but be sure to visit this space again to see part two of our coverage on The Future Group’s Lost in Time. Our next story will cover the real and virtual lighting systems, the SolidTrack IR tracking system and the backend component, and will include interviews with Epic Games’ Kim Libreri about Unreal engine development/integration and with a Lost in Time episode editor.


Daniel Restuccio, who traveled to Oslo for this piece, is a writer, producer and teacher. He is currently multimedia department chairperson at California Lutheran University in Thousand Oaks.


Alkemy X adds creative director Geoff Bailey

Alkemy X, which offers live-action production, design, high-end VFX and post services, has added creative director Geoff Bailey to its New York office, which has now almost doubled in staff. The expansion comes after Alkemy X served as the exclusive visual effects company on M. Night Shyamalan’s Split.

Alkemy X and Bailey started collaborating in 2016 when the two worked together on a 360 experiential film project for EY (formerly Ernst & Young) and brand consultancy BrandPie. Bailey was creative director on the project, which was commissioned for EY’s Strategic Growth Forum held in Palm Desert, California, last November. The project featured Alkemy X’s live-action, VFX, animation, design and editorial work.

“I enjoy creating at the convergence of many disciplines and look forward to leveraging my branding knowledge to support Alkemy X’s hybrid creation pipeline — from ideation and strategy, to live-action production, design and VFX,” says Bailey.

Most recently, Bailey was a creative director at Loyalkaspar, where he creatively led the launch campaign for A&E’s Bates Motel. He also served as creative director/designer on the title sequence for the American launch of A&E’s The Returned, and as CD/director on a series of launch spots for the debut of Vice Media’s TV channel Viceland.

Prior to that, Bailey freelanced for several New York design firms as a director, designer and animator. His freelance résumé includes work for HBO, Showtime, Hulu, ABC, Cinemax, HP, Jay-Z, U2, Travel Channel, Comedy Central, CourtTV, Fuse, AMC Networks, Kiehl’s and many more. Bailey holds an MFA in film production from Columbia University.


Swedish post/VFX company Chimney opens in LA

Swedish post company Chimney has opened a Los Angeles facility, its first in the US and one of its 12 offices in eight countries. Founded in Stockholm in 1995, Chimney produces over 6,000 pieces for more than 60 countries each year, averaging 1,000 projects and 10,000 VFX shots. The company, which is privately held by 50 of its artists, is able to offer 24-hour service thanks to its many locations around the world.

When asked why Chimney decided to open an office in LA, founder Henric Larsson said, “It was not the palm trees and beaches that made us open up in LA. We’re film nerds and we want to work with the best talent in the world, and where do we find the top directors, DPs, ADs, CDs and producers if not in the US?”

The Chimney LA crew.

The Chimney LA team was busy from the start, working with Team One to produce two Lexus campaigns, including one that debuted during the Super Bowl. For the Lexus Man & Machine Super Bowl Spot, they took advantage of the talent at sister facilities in Poland and Sweden.

Chimney also reports that it has signed with Shortlist Mgmt, joining other companies like RSA, Caviar, Tool and No6 Editorial. Charlie McBrearty, founding partner of Shortlist Mgmt, says that Chimney has “been on our radar for quite some time, and we are very excited to be part of their US expansion. Shortlist is no stranger to managing director Jesper Palsson, and we are thrilled to be reunited with him after our past collaboration through Stopp USA.”

Tools used for VFX include Autodesk’s Flame and Maya, The Foundry’s Nuke and Adobe After Effects. Audio is via Avid Pro Tools. Color is done in Digital Vision’s Nucoda. For editing, they call on Avid Media Composer, Apple Final Cut Pro and Adobe Premiere.

Quick Chat: Brent Bonacorso on his Narrow World

Filmmaker Brent Bonacorso has written, directed and created visual effects for The Narrow World, which examines the sudden appearance of a giant alien creature in Los Angeles and the conflicting theories on why it’s there, what its motivations are, and why it seems to ignore all attempts at human interaction. It’s told through the eyes of three people with differing ideas of its true significance. Bonacorso shot on a Red camera with Panavision Primo lenses, along with a bit of Blackmagic Pocket Cinema Camera footage for random B-roll.

Let’s find out more…

Where did the idea for The Narrow World come from?
I was intrigued by the idea of subverting the traditional alien invasion story and using that as a way to explore how we interpret the world around us, and how our subconscious mind invisibly directs our behavior. The creature in this film becomes a blank canvas onto which the human characters project their innate desires and beliefs — its mysterious nature revealing more about the characters than the actual creature itself.

As with most ideas, it came to me in a flash, a single image that defined the concept. I was riding my bike along the beach in Venice, and suddenly, in my head, I saw a giant kaiju as big as a skyscraper sitting on the sand, gazing out at the sea. Not directly threatening, not exactly friendly either, with a mutual understanding with all the tiny humans around it — we don’t really understand each other at all, and probably never will. Suddenly, I knew why he was here, and what it all meant. I quickly sketched the image and the story followed.

What was the process like bringing the film to life as an independent project?
After I wrote the script, I shot principal photography with producer Thom Fennessey in two stages – first with the actor who plays Raymond Davis (Karim Saleh) and then with the actress playing Emily Field (Julia Cavanaugh).

I called in a lot of favors from my friends and connections here in LA and abroad — the highlight was getting some amazing Primo lenses and equipment from Panavision to use because they love the work of my cinematographer, Magdalena Górka. Altogether it was about four days of principal photography, a good bit of it guerrilla style, and then shooting lots of B-roll all over the city.

Kacper Sawicki, head of Papaya Films, which represents me for commercial work in Europe, got on board during post production to help bring The Narrow World to completion. Friends of mine in Paris and Luxembourg designed and textured the creature, and I did the lighting and animation in Maxon Cinema 4D and compositing in Adobe After Effects.

Our editor was the genius Jack Pyland (who cut on Adobe Premiere), based in Dallas. Sound design and color grading (via Digital Vision’s Nucoda) were completed by Polish companies Głośno and Lunapark, respectively. Our composer was Cedie Janson from Australia. So even though this was an indie project, it became an amazing global collaborative effort.

Of course, with any no-budget project like this, patience is key — lack of funds is offset by lots of time, which is free, if sometimes frustrating. Stick with it — directing is generally a war of attrition, and it’s won by the tenacious.

As a director, how did you pull off so much of the VFX work yourself, and what lessons do you have for other directors?
I realized early on in my career as a director that the more you understand about post, and the more you can do yourself, the more you can control the scope of the project from start to finish. If you truly understand the technology and what is possible with what kind of budget and what kind of manpower, it removes a lot of barriers.

I taught myself After Effects and Cinema 4D in graphic design school, and later I figured out how to make those tools work for me in visual effects and to stretch the boundaries of the short films I was making. It has proved invaluable in my career — in the early stages I did most of the visual effects in my work myself. Later on, when I began having VFX companies do the work, my knowledge and understanding of the process enabled me to communicate very efficiently with the artists on my projects.

What other projects do you have on the horizon?
In addition to my usual commercial work, I’m very excited about my first feature project coming up this year through Awesomeness Films and DreamWorks — You Get Me, starring Bella Thorne and Halston Sage.