Tag Archives: greenscreen

Lost in Time game show embraces ‘Interactive Mixed Reality’

By Daniel Restuccio

The Future Group — which has partnered with Fremantle Media, Ross Video and Epic Games — has created a super-agile entertainment platform that blends linear television and game technology into a hybrid format called “Interactive Mixed Reality” (IMR).

The brainchild of Bård Anders Kasin, this innovative content deployment medium generated a storm of industry buzz at NAB 2016, and the company’s first production, Lost in Time — a weekly primetime game show — is scheduled to air this month on Norwegian television.

The Idea
The idea originated more than 13 years ago in Los Angeles. In 2003, at age 22, Kasin, a self-taught multimedia artist from Notodden, Norway, sent his CV and a bunch of media projects to Warner Bros. in Burbank, California, in hopes of working on The Matrix. They liked it. His interview was on a Wednesday and by Friday he had a job as a technical director.

Kasin immersed himself in the cutting-edge movie revolution that was The Matrix franchise. The Wachowskis’ visionary production was a masterful inspiration, featuring a compelling sci-fi action story, Oscar-winning editing, breakthrough visual effects (“bullet time”) and an expanded media universe that included video games and an anime-style short, The Animatrix. The Matrix Reloaded and The Matrix Revolutions were shot at the same time, along with more than an hour of footage designed specifically for the video game. The Matrix Online, an online gaming platform, was a direct sequel to The Matrix Revolutions.

L-R: Bård Anders Kasin and Jens Petter Høili.

Fast forward to 2013, when Kasin connected with software engineer and serial entrepreneur Jens Petter Høili, founder of EasyPark and Fairchance. “There was this producer I knew in Norway,” explains Kasin, “who runs this thing called the Artists’ Gala charity. He called and said, ‘There’s this guy you should meet. I think you’ll really hit it off.’” Kasin and Høili met for lunch and discussed the projects each was working on. “We both immediately felt there was a connection,” recalls Kasin. No persuading was necessary. “We thought that if we combined forces we were going to get something that’s truly amazing.”

That meeting of the minds led to the merging of their companies and the formation of The Future Group. The mandate of the Oslo-based company is to revolutionize the television medium by combining linear TV production with cutting-edge visual effects, interactive gameplay, home-viewer participation and e-commerce. Their IMR concept ditches the limiting, individual virtual reality (VR) headset, but keeps the idea of creating content that is a multi-layered, intricate and immersive experience.

Lost in Time
Fast forward again, this time to 2014. Through another mutual friend, The Future Group formed an alliance with Fremantle Media. Fremantle, a global media company, has produced some of the highest-rated and longest-running shows in the world, and is responsible for top international entertainment brands such as Got Talent, Idol and The X Factor.

Kasin started developing the first IMR prototype. At this point, the Lost in Time production had expanded to include Ross Video and Epic Games. Ross Video is a broadcast technology innovator and Epic Games is a video game producer and the inventor of the Unreal game engine. The Future Group, in collaboration with Ross Video, engineered the production technology and developed a broadcast-compatible version of the Unreal game engine called Frontier, shown at NAB 2016, to generate high-resolution, realtime graphics used in the production.

The first prototype was shown on January 15, 2015. When Fremantle saw it, they were amazed and went directly to stage two, moving to the larger stages at Dagslys Studios. “Lost in Time has been the driver for the technology,” explains Kasin. “We’re a very content-driven company. We’ve used that content to drive the development of the platform and the technology, because there’s nothing better than having actual content to set the requirements for the technology rather than building technology for general purposes.”

In Lost in Time, three studio contestants are set loose on a greenscreen stage and perform timed, physical game challenges. The audience, watching at home or on a mobile device, sees the contestants seamlessly blended into virtual environments built out of realtime computer graphics. The environments are themed as western, ice age, medieval and Jurassic sets (among others), with real, interactive props.

The audience can simply watch the contestants play, or participate in the contest as players on their own mobile devices, whether at home, riding the train or literally anywhere. They can play along with or against the contestants, performing customized versions of the scripted challenges from the TV show. The mobile content uses graphics generated from the same Unreal engine that creates the television version.

“It’s a platform,” reports partner Høili, referring to the technology behind Lost in Time. A business model is a way you make money, notes tech blogger Jonathan Clarks, while a platform is something that generates business models. So while Lost in Time is a specific game show with specific rules, built on television technology, the underlying platform is really a business technology framework from which many kinds of interactive content can be generated. It is like the Unreal engine itself: software that can be used to create games, VR experiences and more, limited only by the imagination of the content creator. What The Future Group has done is create a high-tech kitchen in which any kind of cuisine can be cooked up.

Soundstages and Gear
Lost in Time is produced on two greenscreen soundstages at Dagslys Studios in Oslo. The main “gameplay set” takes up all of Studio 1 (5,393 square feet) and the “base station set” occupies Studio 3 (1,345 square feet). Over 150 liters (40 gallons) of ProCyc greenscreen paint were used to cover both studios.

Ross Video, in collaboration with The Future Group, devised an integrated hardware and software system that supports the Lost in Time production platform. It spans custom cameras, lenses, tracking, control, delay, chroma key, rendering, greenscreen, lighting and switcher technology, and includes the new Frontier hardware, introduced at NAB 2016, which runs the Unreal game engine’s 3D graphics software.

Eight Sony HDC-2500 cameras running HZC-UG444 software are used for the production. Five are deployed on the “gameplay set.” One camera rides on a technocrane, two are on manual pedestal dollies and one is on Steadicam. For fast-action tracking shots, another camera sits on the Furio RC dolly that rides on a straight track that runs the 90-foot length of the studio. The Furio RC pedestal, controlled by SmartShell, guarantees smooth movement in virtual environments and uses absolute encoders on all axes to send complete 3D tracking data into the Unreal engine.

There is also one Sony HDC-P1 used as a static, center-stage ceiling cam mounted 30 feet above the gameplay set. Three cameras cover the base station set: two on Furio Robo dollies and one on a technocrane. In the gameplay set, all cameras (except the ceiling cam) are tracked with the SolidTrack IR markerless tracking system.

All filming is done at 1080p25 and output as RGB 444 via SDI. A custom LUT is used on the cameras to avoid clipping and to preserve an expanded dynamic range for post work. All nine camera ISOs (the separate camera “clean feeds”) are recorded with this “flat” LUT in RGB 444. For all other video streams, including keying and compositing, LUT boxes invert the signal back to Rec 709.
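As a rough illustration of the idea (not The Future Group’s actual pipeline, and with purely hypothetical curve values), a 1D LUT is just a per-channel lookup curve, and “inverting” it is applying the inverse curve. The Python/NumPy sketch below shows a flat curve applied at record time and its inverse applied before keying.

import numpy as np

def apply_1d_lut(image, lut):
    # image: float RGB array normalized to [0, 1]; lut: 1D array of output values
    positions = np.linspace(0.0, 1.0, lut.size)   # input levels the LUT samples
    return np.interp(image, positions, lut)       # per-pixel, per-channel lookup

# Hypothetical "flat" curve that lifts shadows and rolls off highlights, plus its inverse
flat_lut = np.linspace(0.0, 1.0, 1024) ** 0.8
inverse_lut = np.interp(np.linspace(0.0, 1.0, 1024), flat_lut, np.linspace(0.0, 1.0, 1024))

frame = np.random.rand(1080, 1920, 3)             # stand-in for a camera frame
recorded = apply_1d_lut(frame, flat_lut)          # what the ISO recorders capture
keyer_feed = apply_1d_lut(recorded, inverse_lut)  # inverted back toward Rec 709 for keying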

Barnfind provided the fiber optic network infrastructure that links all the systems. Ross Video Dashboard controls the BarnOne frames as well as the router, Carbonite switchers, Frontier graphics system and robotic cameras.

A genlock signal distributed via OpenGear syncs all the gear to a master clock. The Future Group added proprietary code to Unreal so the render engine can genlock, receive and record linear timecode (LTC), and output video via SDI in all industry-standard formats. They also extended the Unreal engine to control lights via DMX, send and receive GPI signals, communicate with the custom sensors, buttons, switches and wheels used for interacting with the games, and control motion simulation equipment.
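As a small aside on the timecode piece, here is a generic sketch (ours, not The Future Group’s code) of converting between an LTC-style timecode string and an absolute frame count at the show’s 25fps rate.

FPS = 25  # Lost in Time shoots 1080p25

def timecode_to_frames(tc):
    # "HH:MM:SS:FF" -> absolute frame count
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * FPS + ff

def frames_to_timecode(frames):
    # absolute frame count -> "HH:MM:SS:FF"
    ff = frames % FPS
    ss = (frames // FPS) % 60
    mm = (frames // (FPS * 60)) % 60
    hh = frames // (FPS * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

# Round-trips cleanly: "10:00:00:12" -> 900012 frames -> "10:00:00:12"
assert frames_to_timecode(timecode_to_frames("10:00:00:12")) == "10:00:00:12"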

In order for the “virtual cameras” in the graphics systems and the real cameras viewing the real elements to share exactly the same perspective, an “encoded” camera lens is required that reports focal length (zoom) and focus data. In addition, the virtual lens’ field of view (FOV) must be properly calibrated to match the FOV of the real lens. Full-servo digital lenses with 16-bit encoders are needed for virtual productions. Lost in Time uses three Canon lenses that meet these specifications: the Canon Hj14ex4.3B-IASE, Canon Hj22ex7.6B-IASE-A and Canon Kj17ex7.7B-IASE-A.
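For a sense of the math involved, the horizontal FOV of the real lens follows from the encoded focal length and the sensor width via the standard pinhole relation, FOV = 2·atan(w / 2f). The 9.6mm width below assumes a nominal 2/3-inch broadcast sensor; that figure is our assumption for illustration, not a published spec of this production.

import math

def horizontal_fov_deg(focal_length_mm, sensor_width_mm=9.6):
    # Pinhole model: fov = 2 * atan(sensor_width / (2 * focal_length))
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# e.g. a 4.3mm wide end gives roughly a 96-degree FOV; zoomed to 60mm, roughly 9 degrees
print(horizontal_fov_deg(4.3))
print(horizontal_fov_deg(60.0))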

The Lost in Time camera feeds are routed to the Carbonite family hardware: Ultrachrome HR, Carbonite production frame and Carbonite production switcher. Carbonite Ultrachrome HR is a stand-alone multichannel chroma key processor based on the Carbonite Black processing engine. On Lost in Time, the Ultrachrome switcher accepts the Sony camera RGB 444 signal and uses high-resolution chroma keyers, each with full control of delay management, fill color temperature for scene matching, foreground key and fill, and internal storage for animated graphics.
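The Ultrachrome hardware is far more sophisticated than this, but the core idea behind any greenscreen keyer can be sketched in a few lines. The function below is a textbook green-dominance matte, not Ross Video’s algorithm, and the threshold/softness values are arbitrary.

import numpy as np

def green_screen_matte(rgb, threshold=0.1, softness=0.2):
    # rgb: float image in [0, 1]; returns alpha in [0, 1] (1 = keep foreground)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    green_dominance = g - np.maximum(r, b)           # how much green exceeds the other channels
    alpha = 1.0 - (green_dominance - threshold) / softness
    return np.clip(alpha, 0.0, 1.0)

def composite(fg, bg, alpha):
    # Standard over operation: foreground where alpha is 1, background where it is 0
    return fg * alpha[..., None] + bg * (1.0 - alpha[..., None])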

Isolated feeds of all nine cameras are recorded, plus two quad-splits with the composited material and the program feed. Metus Ingest, a proprietary hardware solution from The Future Group, was used for all video recording. It can simultaneously capture and record up to six HD channels of video and audio from multiple devices on a single platform.

Post Production
While the system is capable of broadcasting live, they decided not to go live for the debut, and instead are doing only a modest amount of post to retain the live feel. That said, the post workflow on Lost in Time arguably sets a whole new paradigm. “Post allows us to continue to develop the virtual worlds for a longer amount of time,” says Kasin. “This gives us more flexibility in terms of storytelling. We’re always trying to push the boundaries with the creative content, how we tell the story of the different challenges.”

All camera metadata (position, rotation, lens data, etc.) and all game interaction were recorded in the Unreal engine with a proprietary system, which allowed the graphics to be played back later as a recorded session. This also let the editors change any part of the graphics non-destructively: they could replace 3D models or textures, change the tracking or point of view of any of the virtual cameras in post, or add cameras for more virtual “coverage.”
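The article doesn’t describe the recording format, but conceptually each frame boils down to a timecoded sample of camera transform and lens data that can later drive the engine again. The structure below is purely illustrative, not The Future Group’s schema.

from dataclasses import dataclass, asdict
import json

@dataclass
class CameraSample:
    timecode: str        # e.g. "10:00:01:13" at 25fps
    position: tuple      # (x, y, z) in studio space
    rotation: tuple      # (pan, tilt, roll) in degrees
    zoom_mm: float       # encoded focal length
    focus_m: float       # encoded focus distance

def save_session(samples, path):
    # One record per frame; replaying the session re-drives the virtual cameras,
    # so models, textures or even the chosen camera angle can change in post.
    with open(path, "w") as f:
        json.dump([asdict(s) for s in samples], f)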

Lost in Time episodes were edited as a multicam project, based on the program feed, in Adobe Premiere CC. They have a multi-terabyte storage solution from Pixit Media running Tiger Technology’s workflow manager. “The EDL from the final edit is fed through a custom system, which then builds a timeline in Unreal to output EXR sequences for a final composite.”
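Their custom system isn’t documented publicly, but rebuilding a timeline from an EDL only requires each event’s reel and its source/record in and out points. Here is a generic sketch of pulling those fields from CMX3600-style event lines; it is illustrative, not their tool.

import re

# event#  reel  track  transition  src-in  src-out  rec-in  rec-out
EVENT = re.compile(
    r"^(\d+)\s+(\S+)\s+(\S+)\s+(\S+)\s+"
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})\s+"
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})"
)

def parse_edl(text):
    events = []
    for line in text.splitlines():
        m = EVENT.match(line.strip())
        if m:
            num, reel, track, transition, src_in, src_out, rec_in, rec_out = m.groups()
            events.append({"event": int(num), "reel": reel, "track": track,
                           "src_in": src_in, "src_out": src_out,
                           "rec_in": rec_in, "rec_out": rec_out})
    return events

sample = "001  CAM1  V  C  10:00:05:00 10:00:12:00 01:00:00:00 01:00:07:00"
print(parse_edl(sample))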

That’s it for now, but be sure to visit this space again for part two of our coverage of The Future Group’s Lost in Time. Our next story will cover the real and virtual lighting systems, the SolidTrack IR tracking system and the backend component, and will include interviews with Epic Games’ Kim Libreri about Unreal engine development/integration and with a Lost in Time episode editor.


Daniel Restuccio, who traveled to Oslo for this piece, is a writer, producer and teacher. He is currently the multimedia department chairperson at California Lutheran University in Thousand Oaks.

ILM’s Richard Bluff talks VFX for Marvel’s Doctor Strange

By Daniel Restuccio

Comic book fans have been waiting for over 30 years for Marvel’s Doctor Strange to come to the big screen, and dare I say it was worth the wait. This is in large part because of the technology now available to create the film’s stunning visual effects.

Fans have the option to see the film in traditional 2D, Dolby Cinema (worthy of an interstate or plane-fare pilgrimage, in my opinion) and IMAX 3D. Doctor Strange, Marvel Studios’ 14th film offering, is also receiving good critical reviews and VFX Oscar buzz — it’s currently on the list of 20 films still in the running in the Visual Effects category for the 89th Academy Awards.

The unapologetically dazzling VFX shots, in many cases directly inspired by Steve Ditko’s original comic visuals, were created by multiple visual effects houses, including Industrial Light & Magic, Luma Pictures, Lola VFX, Method Studios, Rise FX, Crafty Apes, Framestore, Perception and previs house The Third Floor. Check out our interview with the film’s VFX supervisor Stephane Ceretti.

Director Scott Derrickson said in a recent Reddit chat that Doctor Strange is “a fantastical superhero movie.”

“Watching the final cut of the film was deeply satisfying,” commented Derrickson. “A filmmaker cannot depend upon critical reviews or box office for satisfaction — even if they are good. The only true reward for any artist is to pick a worthy target and hit it. When you know you’ve hit your target that is everything. On this one, I hit my target.”

Since we got an overview of how the visual effects workflow went from Ceretti, we decided to talk to one of the studios that provided VFX for the film, specifically ILM and their VFX supervisor Richard Bluff.

Richard Bluff

According to Bluff, early in pre-production Marvel presented concept art, reference images and previsualization on “what were the boundaries of what the visuals could be.” After that, he says, they had the freedom to search within those bounds.

During VFX presentations with Marvel, they frequently showed three versions of the work. “They went with the craziest version to the point where the next time we would show three more versions and we continued to up the ante on the crazy,” recalls Bluff.

As master coordinator of this effort for ILM, Bluff encouraged his artists, “to own the visuals and try to work out how the company could raise the quality of the work or the designs on the show to another level. How could we introduce something new that remains within the fabric of the movie?”

As a result, says Bluff, they had some amazing ideas flow from individuals on the film. Jason Parks came up with the idea of traveling through the center of a subway train as it fractured. Matt Cowey invented the notion of continually rotating the camera to heighten the sense of vertigo. Andrew Graham designed the kaleidoscope-fighting arena “largely because his personal hobby is building and designing real kaleidoscopes.”

Unique to Doctor Strange is that the big VFX sequences are all very “self-contained.” For example, ILM did the New York and Hong Kong sequences, Luma did the Dark Dimension and Method did the multi-universe. ILM also designed and developed the original concept for the Eldritch Magic and provided all the shared “digital doubles” — CGI-rigged, animatable versions of the actors — that tied sequences together. The digital doubles were customized to the needs of each VFX house.

Previs
In some movies, previs material is generated and then thrown away. Not so with Doctor Strange. This time, ILM developed a previs workflow where they could actually hang assets on the previs and keep developing it, so it became part of the shot from the earliest iteration.

There was extensive previs done for Marvel by The Third Floor as a creative and technical guide across the movie, and further iterations internal to ILM done by ILM’s lead visualization artist, Landis Fields.

Warning! Spoiler! Once Doctor Strange moves the New York fight scene into the mirror universe, the city starts coming apart in an M.C. Escher-meets-Chris Nolan-Inception kind of way. To make that sequence, ILM created a massive tool kit of New York set pieces and geometry, including subway cars, buildings, vehicles and fire escapes.

In the previs, Fields started breaking apart, duplicating and animating those objects, like the fire escapes, to tell the story of what a kaleidoscoping city would look like. The artists then fleshed out a sequence of shots, a.k.a. “mini beats.” They absorbed the previs into the pipeline by later swapping out the rough geometry in Fields’ previs for the actual New York hero assets.

Strange Cam
Landis and the ILM team also designed and built what ILM dubbed the “strange cam,” a custom 3D-printed 360-degree GoPro rig that had to withstand the rigors of being slung off the edge of skyscrapers. ILM wanted to capture 360 degrees of rolling footage from that vantage point to use as moving background “plates” that could be reflected in New York City’s glass buildings.

VFX, Sound Design and the Hong Kong Sequence
One of the big challenges with the Hong Kong sequence was that time was reversing and moving forward at the same time. “What we had to do was ensure the viewer understands that time is reversing throughout that entire sequence,” says Bluff. During the tight hand-to-hand action moments that are moving forward in time, there’s not really much screen space to show time reversing in the background, so they designed the reversing destruction to work in concert with the sound design. “We realized we had to move away from a continuous shower of debris toward rhythmic beats of debris being sucked out of frame.”


Bluff says the VFX shot count on the film — 1,450 shots — was actually a lot lower than on Captain America: Civil War. From a VFX point of view, the Avengers movies lean on assets generated for Iron Man and Captain America, and the Thor movies help provide the context for what an Avengers movie should look and feel like. In Doctor Strange, “almost everything in the movie had to be designed (from scratch) because they haven’t already existed in a previous Marvel film. It’s a brand-new character to the Marvel world.”

Bluff started development on the movie in October 2014 and really began hands-on work in February 2016, frequently traveling between Vancouver, San Francisco and London. A typical day, working out of the ILM London office, would see him get in early and immediately deal with review requests from San Francisco. Then he would jump into “dailies” in London and work with that team until the afternoon. After “nightlies” with London there was a “dailies” session with San Francisco and Vancouver; he would work with them until evening, hit the hotel, grab some dinner, then come back around 11:30pm or midnight for nightlies with San Francisco. “It just kept the team together, and we never missed a beat.”

2D vs. IMAX 3D vs. Dolby Cinema
Bluff saw the entire movie for the first time in IMAX 3D, and is looking forward to seeing it in 2D. Considering sequences in the movie are surreal in nature and Escher-like, there’s an argument that suggests that IMAX 3D is a better way to see it because it enhances the already bizarre version of that world. However, he believes the 2D and 3D versions are really “two different experiences.”

Dolby Cinema is the merging of Dolby Atmos — 128-channel surround sound — with the high dynamic range of Dolby Vision, plus really comfortable seats. It is, arguably, the best way to see a movie. Bluff says as far as VFX goes, high dynamic range information has been there for years. “I’m just thankful that exhibition technology is finally catching up with what’s always been there for us on the visual effects side.”

During that Reddit interview, Derrickson commented, “The EDR (Extended Dynamic Range) print is unbelievable — if you’re lucky enough to live where an EDR print is playing. As for 3D and/or IMAX, see it that way if you like that format. If you don’t, see it 2D.”

Doctor Strange is probably currently playing in a theater near you, but go see it in Dolby Cinema if you can.


In addition to being a West Coast correspondent for postPerspective, Daniel Restuccio is the multimedia department chair at California Lutheran University and former Walt Disney Imagineer.

Making ‘Being Evel’: James Durée walks us through post

Compositing played a huge role in this documentary film.

By Randi Altman

Those of us of a certain age will likely remember being glued to the TV as a child watching Evel Knievel jump his motorcycle over cars and canyons. It felt like the world held its collective breath, hoping that something horrible didn’t happen… or maybe wondering what it would be like if something did.

Well, Johnny Knoxville, of Jackass and Bad Grandpa fame, was one of those kids, as witnessed by, well, his career. Knoxville and Oscar-winning filmmaker Daniel Junge (Saving Face) teamed up to make Being Evel, a documentary about the daredevil’s life and career. Produced by Knoxville’s Dickhouse Productions (yup, that’s right) and HeLo, it premiered at Sundance this year.


Quick Chat: Randall Dark partners with Bulltiger for new media productions

By Randi Altman

Randall Dark has always been on my radar, almost since I started in this business all those years ago. To me he was the guy who was working in HD long before it turned into the high def we see on our televisions today. He truly was a pioneer of the format, but he’s also more than that. He’s a producer, a director, a cameraman and a company owner.

Recently, Randall Dark partnered with Bulltiger Productions’ CEO and founder, Stephen Brent, on a new film studio in North Austin.

With a 10,000-square-foot soundstage (also available for rent) and a 2,500-square-foot greenscreen, Bulltiger’s goal is to create compelling stories for all types of screens.

DuArt adds 21 edit bays, two insert stages

 

NEW YORK — DuArt, a New York-based audio, video and digital media post studio, has added 21 additional edit bays, along with two new insert stages. The new edit bays, featuring a combination of Avid Media Composers and Apple Final Cut, were constructed on DuArt’s 7th floor, a fully renovated space that features seven-foot windows on all four exposures, 11-foot ceilings, exposed brick and appealing Midtown views. Each edit bay, which can also be quickly converted into production office space, provides fiber and Ethernet connectivity, as well as wireless and hard-wired Internet.

The new insert studios are ideal for greenscreen shoots or interviews, with hair/makeup and wardrobe rooms next door. A freight elevator and loading dock are available for easy load-in and load-out.

DuArt’s newest facilities have been used by clients including MTV2, Park Slope Productions, The Documentary Group and other short-term and long-term tenants.

This latest expansion complements the 54 existing DuArt production suites and edit bays. In May, the facility added three new audio production rooms for a total of seven – all of those suites are suitable for a wide range of VO uses, with an emphasis on audio books and television narration.

In addition to its complete list of post services, DuArt provides short, medium and long-term space and four-wall support to content production companies.