Tag Archives: Unreal Engine

Virtual Production Field Guide: Fox VFX Lab’s Glenn Derry

Just ahead of SIGGRAPH, Epic Games has published a resource guide called “The Virtual Production Field Guide”  — a comprehensive look at how virtual production impacts filmmakers, from directors to the art department to stunt coordinators to VFX teams and more. The guide is workflow-agnostic.

The use of realtime game engine technology has the potential to impact every aspect of traditional filmmaking, and the trend is increasingly being used in productions ranging from films like Avengers: Endgame and the upcoming Artemis Fowl to TV series like Game of Thrones.

The Virtual Production Field Guide offers an in-depth look at different types of techniques from creating and integrating high-quality CG elements live on set to virtual location scouting to using photoreal LED walls for in-camera VFX. It provides firsthand insights from award-winning professionals who have used these techniques – including directors Kenneth Branagh and Wes Ball, producers Connie Kennedy and Ryan Stafford, cinematographers Bill Pope and Haris Zambarloukos, VFX supervisors Ben Grossmann and Sam Nicholson, virtual production supervisors Kaya Jabar and Glenn Derry, editor Dan Lebental, previs supervisor Felix Jorge, stunt coordinators Guy and Harrison Norris, production designer Alex McDowell, and grip Kim Heath.

As mentioned, the guide is dense with information, so we decided to run an excerpt to give you an idea of what it covers.

Here is an interview with Glenn Derry, founder and VP of visual effects at Fox VFX Lab, which offers a variety of virtual production services with a focus on performance capture. Derry is known for his work as a virtual production supervisor on projects like Avatar, Real Steel and The Jungle Book.

Let’s find out more.

How has performance capture evolved since projects such as The Polar Express?
In those earlier eras, there was no realtime visualization during capture. You captured everything as a standalone piece, and then you did what they called the director layout. After the fact, you would assemble the animation sequences from the motion data captured. Today, we’ve got a combo platter where we’re able to visualize in realtime.
When we bring a cinematographer in, he can start lining up shots with another device called the hybrid camera. It’s a tracked reference camera that he can handhold. I can immediately toggle between an Unreal overview or a camera view of that scene.

The earlier process was minimal in terms of aesthetics. We did everything we could in MotionBuilder, and we made it look as good as it could. Now we can make a lot more mission-critical decisions earlier in the process because the aesthetics of the renders look a lot better.

What are some additional uses for performance capture?
Sometimes we’re working with a pitch piece, where the studio is deciding whether they want to make a movie at all. We use the capture stage to generate what the director has in mind tonally and how the project could feel. That might be a short pitch piece or, as with Call of the Wild, 20 minutes covering three key scenes from the film to show the studio we could make it work.

The second the movie gets greenlit, we flip over into preproduction. Now we’re breaking down the full script and working with the art department to create concept art. Then we build the movie’s world out around those concepts.

We have our team doing environmental builds based on sketches. Or in some cases, the concept artists themselves are in Unreal Engine doing the environments. Then our virtual art department (VAD) cleans those up and optimizes them for realtime.

Are the artists modeling directly in Unreal Engine?
The artists model in Maya, Modo, 3ds Max, etc. — we’re not particular about the application as long as the output is FBX. The look development, which is where the texturing happens, is all done within Unreal. We’ll also have artists working in Substance Painter and it will auto-update in Unreal. We have to keep track of assets through the entire process, all the way through to the last visual effects vendor.

How do you handle the level of detail decimation so realtime assets can be reused for visual effects?
The same way we would work on AAA games. We begin with high-resolution detail and then use combinations of texture maps, normal maps and bump maps. That allows us to get high-texture detail without a huge polygon count. There are also some amazing LOD [level of detail] tools built into Unreal, which enable us to take a high-resolution asset and derive something that looks pretty much identical unless you’re right next to it, but runs at a much higher frame rate.
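Unreal’s built-in LOD tools handle this automatically, but the underlying idea can be sketched in an engine-agnostic way: switch to a lower-polygon version of an asset once it covers less of the screen, while baked normal and bump maps preserve the look of the high-resolution model. The class names, triangle counts and thresholds below are illustrative assumptions, not Unreal’s API.

```python
# Minimal, engine-agnostic sketch of screen-size-based LOD selection.
# Names, triangle counts and thresholds are illustrative only.

import math
from dataclasses import dataclass

@dataclass
class LODLevel:
    name: str
    triangle_count: int
    min_screen_size: float  # fraction of the vertical FOV the object must cover

# LOD0 is the full-resolution asset; lower LODs are decimated versions whose
# surface detail is carried by texture/normal/bump maps baked from LOD0.
LOD_CHAIN = [
    LODLevel("LOD0_hero", 500_000, 0.60),
    LODLevel("LOD1", 120_000, 0.30),
    LODLevel("LOD2", 30_000, 0.10),
    LODLevel("LOD3_distant", 5_000, 0.00),
]

def estimate_screen_size(object_radius_m: float, distance_m: float, fov_deg: float = 90.0) -> float:
    """Rough fraction of the vertical field of view covered by the object."""
    angular_size = 2.0 * math.atan(object_radius_m / max(distance_m, 1e-6))
    return min(angular_size / math.radians(fov_deg), 1.0)

def pick_lod(object_radius_m: float, distance_m: float) -> LODLevel:
    screen_size = estimate_screen_size(object_radius_m, distance_m)
    for lod in LOD_CHAIN:
        if screen_size >= lod.min_screen_size:
            return lod
    return LOD_CHAIN[-1]

if __name__ == "__main__":
    for d in (2, 10, 50, 200):
        lod = pick_lod(object_radius_m=1.0, distance_m=d)
        print(f"{d:>4} m away -> {lod.name} ({lod.triangle_count} tris)")
```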

Do you find there’s a learning curve for crew members more accustomed to traditional production?
We’re the team productions come to when they want realtime on live-action sets. That’s pretty much all we do. That said, it requires prep, and if you want it to look great, you have to make decisions. Whether you were shooting rear projection back in the 1940s or the large rear-projection systems on Terminator 2, you still had to have all that material pre-shot to make it work.
It’s the same concept in realtime virtual production. If you want to see it look great in Unreal live on the day, you can’t just show up and decide. You have to pre-build that world and figure out how it’s going to integrate.

The visual effects team and the virtual production team have to be involved from day one. They can’t just be brought in at the last minute. And that’s a significant change for producers and productions in general. It’s not that it’s a tough pill to swallow, it’s just a very different methodology.

How does the cinematographer collaborate with performance capture?
There are two schools of thought: one is to work live with camera operators, shooting the tangible part of the action that’s going on, as the camera is an actor in the scene as much as any of the people are. You can choreograph it all out live if you’ve got the performers and the suits. The other version of it is treated more like a stage play. Then you come back and do all the camera coverage later. I’ve seen DPs like Bill Pope and Caleb Deschanel pick this right up.

How is the experience for actors working in suits and a capture volume?
One of the harder problems we deal with is eye lines. How do we assist the actors so that they’re immersed in this and not just looking around at a bunch of gray-box material on a set? On any modern visual effects movie, you’re going to be standing in front of a 50-foot-tall bluescreen at some point.

Performance capture is in some ways more actor-centric than a traditional set because there aren’t all the other distractions in a volume, such as complex lighting and camera setup time. The director gets to focus in on the actors. The challenge is getting the actors to interact with something unseen. We’ll project pieces of the set on the walls and use lasers for eye lines. The quality of the HMDs today is also excellent for showing the actors what they would be seeing.

How do you see performance capture tools evolving?
I think a lot of the stuff we’re prototyping today will soon be available to consumers, home content creators, YouTubers, etc. A lot of what Epic develops also gets released in the engine. Money won’t be the driver in terms of being able to use the tools; your creative vision will be.

My teenage son uses Unreal Engine to storyboard. He knows how to do fly-throughs and use the little camera tools we built — he’s all over it. As it becomes easier to create photorealistic visual effects in realtime with a smaller team and at very high fidelity, the movie business will change dramatically.

Something that used to cost $10 million to produce might be a million or less. It’s not going to take away from artists; you still need them. But you won’t necessarily need these behemoth post companies because you’ll be able to do a lot more yourself. It’s just like desktop video — what used to take hundreds of thousands of dollars’ worth of Flame artists, you can now do yourself in After Effects.

Do you see new opportunities arising as a result of this democratization?
Yes, there are a lot of opportunities. High-quality, good-looking CG assets are still expensive to produce and expensive to make look great. There are already stock sites like TurboSquid and CGTrader where you can purchase beautiful assets economically.

But when it comes to the final assembly and coalescing of environments and characters, there’s still a lot of need for talented people to do it effectively. I can see companies emerging out of that necessity. We spend a lot of time talking about assets because it’s the core of everything we do. You need to have a set to shoot on and you need compelling characters, which is why actors won’t go away.

What’s happening today isn’t even the tip of the iceberg. There are going to be 50 more big technological breakthroughs along the way. There’s tons of new content being created for Apple, Netflix, Amazon, Disney+, etc. And they’re all going to leverage virtual production.
What’s changing is previs’ role and methodology in the overall scheme of production.
While you might have previously conceived of previs as focused on the pre-production phase of a project and less integral to production, that conception shifts with a realtime engine. Previs is also typically a hands-off collaboration. In a traditional pipeline, a previs artist receives creative notes and art direction then goes off to create animation and present it back to creatives later for feedback.

In the realtime model, because the assets are directly malleable and rendering time is not a limiting factor, creatives can be much more directly and interactively involved in the process. This leads to higher levels of agency and creative satisfaction for all involved. This also means that instead of working with just a supervisor you might be interacting with the director, editor and cinematographer to design sequences and shots earlier in the project. They’re often right in the room with you as you edit the previs sequence and watch the results together in realtime.

Previs imagery has continued to increase in visual fidelity, which means a closer relationship between previs and final-pixel image quality. When the assets you develop as a previs artist are of sufficient quality, they may form the basis of the final models for visual effects. The line between pre and final will continue to blur.

The efficiency of modeling assets only once is evident to all involved. By spending the time early in the project to create models of a very high quality, post begins at the outset of a project. Instead of waiting until the final phase of post to deliver the higher-quality models, the production has those assets from the beginning. And the models can also be fed into ancillary areas such as marketing, games, toys and more.

Epic Games’ Unreal Engine 4.21 adds more mobile optimizations, efficiencies

Epic Games’ Unreal Engine 4.21 is designed to offer greater efficiency, performance and stability for developers working on any platform.

Unreal Engine 4.21 adds even more mobile optimizations to both Android and iOS, up to 60% speed increases when cooking content and more power and flexibility in the Niagara effects toolset for realtime VFX. Also, the new production-ready Replication Graph plugin enables developers to build multiplayer experiences at a scale that hasn’t been possible before, and Pixel Streaming allows users to stream interactive content directly to remote devices with no compromises on rendering quality.
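The Replication Graph itself is a C++ plugin inside Unreal Engine, so the following is only a toy Python sketch of the spatial-relevancy idea it is built around: bucket actors into a grid once, then let each connection consider only nearby actors instead of checking every actor against every connection each tick. The cell size, function names and actor counts are illustrative assumptions, not Unreal’s API.

```python
# Toy sketch of grid-based spatial relevancy, the core idea behind replication
# graphs. Conceptual illustration only -- not Unreal's Replication Graph API.

from collections import defaultdict

CELL_SIZE = 100.0  # world units per grid cell (illustrative)

def cell_of(position):
    x, y = position
    return (int(x // CELL_SIZE), int(y // CELL_SIZE))

def build_grid(actors):
    """actors: dict of actor_id -> (x, y). Returns cell -> [actor_id]."""
    grid = defaultdict(list)
    for actor_id, pos in actors.items():
        grid[cell_of(pos)].append(actor_id)
    return grid

def relevant_actors(grid, connection_pos, radius_cells=1):
    """Gather actors in the connection's cell and its neighbouring cells."""
    cx, cy = cell_of(connection_pos)
    nearby = []
    for dx in range(-radius_cells, radius_cells + 1):
        for dy in range(-radius_cells, radius_cells + 1):
            nearby.extend(grid.get((cx + dx, cy + dy), []))
    return nearby

if __name__ == "__main__":
    actors = {f"actor_{i}": (i * 37.0 % 1000, i * 61.0 % 1000) for i in range(1000)}
    grid = build_grid(actors)
    considered = relevant_actors(grid, connection_pos=(500.0, 500.0))
    print(f"{len(considered)} of {len(actors)} actors considered for this connection")
```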

Updates in Unreal Studio 4.21 also offer new capabilities and enhanced productivity for users in the enterprise space, including architecture, manufacturing, product design and other areas of professional visualization. Unreal Studio’s Datasmith workflow toolkit now includes support for Autodesk Revit and enhanced material translation for Autodesk 3ds Max, enabling more efficient design review and iteration.

Here is more about the key features:
Replication Graph: The Replication Graph plugin, which is now production-ready, makes it possible to customize network replication in order to build large-scale multiplayer games that would not be viable with traditional replication strategies.

Niagara Enhancements: The Niagara VFX feature set continues to grow, with substantial quality of life improvements and Nintendo Switch support added in Unreal Engine 4.21.

Sequencer Improvements: New capabilities within Sequencer allow users to record incoming video feeds to disk as OpenEXR frames and create a track in Sequencer, with the ability to edit and scrub the track as usual. This enables users to synchronize video with CG assets and play them back together from the timeline.

Pixel Streaming (Early Access): With the new Pixel Streaming feature, users can author interactive experiences such as product configurators or training applications, host them on a cloud-based GPU or local server, and stream them to remote devices via web browser without the need for additional software or porting.

Mobile Optimizations: The mobile development process gets even better thanks to all of the mobile optimizations that were developed for Fortnite‘s initial release on Android, in addition to all of the iOS improvements from Epic’s ongoing updates. With the help of Samsung, Unreal Engine 4.21 includes all of the Vulkan engineering and optimization work that was done to help ship Fortnite on the Samsung Galaxy Note 9 and is 100% feature compatible with OpenGL ES 3.1.

Much Faster Cook Times: The content cooking process has been optimized, low-level code now avoids unnecessary file system operations, and cooker timers have been streamlined.

Gauntlet Automation Framework (Early Access): The new Gauntlet automation framework enables developers to automate the process of deploying builds to devices, running one or more clients and/or servers, and processing the results. Gauntlet scripts can automatically profile points of interest, validate gameplay logic, check return values from backend APIs and more. Gauntlet has been battle-tested for months in the process of optimizing Fortnite, and is a key part of ensuring it runs smoothly on all platforms.

Animation System Optimizations and Improvements: Unreal Engine’s animation system continues to build on best-in-class features thanks to new workflow improvements, better surfacing of information, new tools, and more.

Blackmagic Video Card Support: Unreal Engine 4.21 also adds support for Blackmagic video I/O cards for those working in film and broadcast. Creatives in the space can now choose between Blackmagic and AJA Video Systems, the two most popular options for professional video I/O.

Improved Media I/O: Unreal Engine 4.21 now supports 10-bit video I/O, audio I/O, 4K, and Ultra HD output over SDI, as well as legacy interlaced and PsF HD formats, enabling greater color accuracy and integration of some legacy formats still in use by large broadcasters.

Windows Mixed Reality: Unreal Engine 4.21 natively supports the Windows Mixed Reality (WMR) platform and headsets, such as the HP Mixed Reality headset and the Samsung HMD Odyssey headset.

Magic Leap Improvements: Unreal Engine 4.21 supports all the features needed to develop complete applications on Magic Leap’s Lumin-based devices — rendering, controller support, gesture recognition, audio input/output, media, and more.

Oculus Avatars: The Oculus Avatar SDK includes an Unreal package to assist developers in implementing first-person hand presence for the Rift and Touch controllers. The package includes avatar hand and body assets that are viewable by other users in social applications.

Datasmith for Revit (Unreal Studio): Unreal Studio’s Datasmith workflow toolkit for streamlining the transfer of CAD data into Unreal Engine now includes support for Autodesk Revit. Supported elements include materials, metadata, hierarchy, geometric instancing, lights and cameras.

Multi-User Viewer Project Template (Unreal Studio): A new project template for Unreal Studio 4.21 enables multiple users to connect in a real-time environment via desktop or VR, facilitating interactive, collaborative design reviews across any work site.

Accelerated Automation with Jacketing and Defeaturing (Unreal Studio): Jacketing automatically identifies meshes and polygons that have a high probability of being hidden from view, and lets users hide, remove or move them to another layer; this command is also available through Python so Unreal Studio users can integrate this step into automated preparation workflows. Defeaturing automatically removes unnecessary detail (e.g. blind holes, protrusions) from mechanical models to reduce polygon count and boost performance.

Enhanced 3ds Max Material Translation (Unreal Studio): There is now support for most commonly used 3ds Max maps, improving visual fidelity and reducing rework. Those in the free Unreal Studio beta can now translate 3ds Max material graphs to Unreal graphs when exporting, making materials easier to understand and work with. Users can also leverage improvements in BRDF matching from V-Ray materials, especially metal and glass.

DWG and Alias Wire Import (Unreal Studio): Datasmith now supports DWG and Alias Wire file types, enabling designers to import more 3D data directly from Autodesk AutoCAD and Autodesk Alias.

IBC 2018: Convergence and deep learning

By David Cox

In the 20 years I’ve been traveling to IBC, I’ve tried to seek out new technology, work practices and trends that could benefit my clients and help them be more competitive. One thing that is perennially exciting about this industry is the rapid pace of change. Certainly, from a post production point of view, there is a mini revolution every three years or so. In the past, those revolutions have increased image quality or the efficiency of making those images. The current revolution is to leverage the power and flexibility of cloud computing. But those revolutions haven’t fundamentally changed what we do. The images might have gotten sharper, brighter and easier to produce, but TV is still TV. This year, though, there are some fascinating undercurrents that could herald a fundamental shift in the sort of content we create and how we create it.

Games and Media Collide
There is a new convergence on the horizon in our industry. A few years ago, all the talk was about the merge between telecommunications companies and broadcasters, as well as the joining of creative hardware and software for broadcast and film, as both moved to digital.

The new convergence is between media content creation as we know it and the games industry. It was subtle, but technology from gaming was present in many applications around the halls of IBC 2018.

One of the drivers for this is a giant leap forward in the quality of realtime rendering by the two main game engine providers: Unreal and Unity. I program with Unity for interactive applications, and its new High Definition Render Pipeline (HDRP) allows for incredible realism, even when rendering fast enough for 60+ frames per second. In order to create such high-quality images, those game engines must start with reasonably detailed models. This is a departure from the past, when less detailed models were used for games than for film CGI shots in order to protect realtime performance. So the first clear advantage created by the new realtime renderers is that a film and its inevitable related game can use the same or similar model data.

NCam

Being able to use the same scene data between final CGI and a realtime game engine allows for some interesting applications. Habib Zargarpour from Digital Monarch Media showed a system based on Unity that allows a camera operator to control a virtual camera in realtime within a complex CGI scene. The resulting camera moves feel significantly more real than if they had been keyframed by an animator. The camera operator chases high-speed action, jumps at surprises and reacts to unfolding scenes. The subtleties that these human reactions deliver via minor deviations in the movement of the camera can convey the mood of a scene as much as the design of the scene itself.

NCam was showing the possibilities of augmenting scenes with digital assets, using its system based on the Unreal game engine. The NCam system provides realtime tracking data to specify the position and angle of a freely moving physical camera. This data was fed to an Unreal engine, which then added in animated digital objects. They were also using an additional ultra-wide-angle camera to capture realtime lighting information from the scene, which was then passed back to Unreal to be used as a dynamic reflection and lighting map. This ensured that digitally added objects were lit by the physical lights in the real-world scene.

Even a seemingly unrelated (but very enlightening) chat with StreamGuys president Kiriki Delany about all things related to content streaming still referenced gaming technology. Delany talked about their tests to build applications with Unity to provide streaming services in VR headsets.

Unity itself has further aspirations to move into storytelling rather than just gaming. The latest version of Unity features an editing timeline and color grading. This allows scenes to be built and animated, then played out through various virtual cameras to create a linear story. Since those scenes are being rendered in realtime, tweaks to scenes such as positions of objects, lights and material properties are instantly updated.

Game engines not only offer us new ways to create our content, but they are a pathway to create a new type of hybrid entertainment, which sits between a game and a film.

Deep Learning
Other undercurrents at IBC 2018 were the possibilities offered by machine learning and deep learning software. Essentially, a normal computer program is hard-wired to give a particular output for a given input. Machine learning allows an algorithm to compare its output to a set of data and adjust itself if the output is not correct. Deep learning extends that principle by using neural network structures to make a vast number of assessments of input data, then draw conclusions and predictions from that data.
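As a toy illustration of that “compare and adjust” loop, here is a minimal single-weight example; it is purely illustrative and not any particular product’s code.

```python
# Toy illustration of the "compare and adjust" loop described above: a single-weight
# model nudges itself until its output matches the data. Real deep learning stacks
# many such adjustments across layers of a neural network.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x and target outputs y (here y = 2x)
weight = 0.0
learning_rate = 0.05

for step in range(200):
    for x, target in data:
        prediction = weight * x
        error = prediction - target          # how wrong the output is
        weight -= learning_rate * error * x  # adjust in the direction that reduces the error

print(f"learned weight is about {weight:.3f} (true value is 2.0)")
```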

Real-world applications are already prevalent, and in our industry they largely relate to processing viewing metrics. For example, Netflix suggests what we might want to watch next by comparing our viewing habits to those of others with a similar viewing pattern.

But deep learning offers — indeed threatens — much more. Of course, it is understandable to think that, say, delivery drivers might be redundant in a world where autonomous vehicles rule, but surely creative jobs are safe, right? Think again!

IBM was showing how its Watson Studio has used deep learning to provide automatically edited highlights packages for sporting events. The process is relatively simple to comprehend, although considerably more complicated in practice. A DL algorithm is trained to scan a video file and “listen” for a cheering crowd; this finds the highlight moment. Another algorithm works backward from that point to find the logical beginning of the moment, such as the forward pass or the start of the volley. Taking the score into account helps decide whether that highlight was pivotal to the outcome of the game. Joining all that up creates a highlights package without the services of an editor. This isn’t future stuff. This has been happening over the last year.
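The article doesn’t describe Watson’s internals, but the “listen for the crowd, then rewind to the start of the play” idea can be sketched in a few lines. This is a simplified illustration under my own assumptions (one-second energy windows, a median-based quiet threshold), not IBM’s pipeline.

```python
# Simplified sketch of the highlight logic described above: find the loudest crowd
# reaction in an audio track, then rewind to a quieter point that stands in for the
# start of the play. Illustrative only.

import numpy as np

def find_highlight(audio: np.ndarray, sample_rate: int, window_s: float = 1.0):
    window = int(window_s * sample_rate)
    n_windows = len(audio) // window
    # Short-term energy per window ("listening" for the crowd).
    energy = np.array([
        np.sqrt(np.mean(audio[i * window:(i + 1) * window] ** 2))
        for i in range(n_windows)
    ])
    peak = int(np.argmax(energy))           # the cheer = the highlight moment
    quiet_level = np.median(energy)
    start = peak
    while start > 0 and energy[start - 1] > quiet_level:
        start -= 1                          # rewind to the logical beginning of the moment
    return start * window_s, (peak + 1) * window_s

if __name__ == "__main__":
    sr = 8000
    n = 60 * sr                                           # one minute of synthetic audio
    crowd = 0.05 * np.random.randn(n)                     # background noise
    crowd[38 * sr:41 * sr] += 0.8 * np.random.randn(3 * sr)  # synthetic cheer around 38-41s
    clip_start, clip_end = find_highlight(crowd, sr)
    print(f"highlight clip: {clip_start:.0f}s to {clip_end:.0f}s")
```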

BBC R&D was talking about their trials to have DL systems control cameras at sporting events, as they could be trained to follow the “two thirds” framing rule and to spot moments of excitement that justified close-ups.

In post production, manual tasks such as rotoscoping and color matching in color grading could be automated. Even styles for graphics, color and compositing could be “learned” from other projects.

It’s certainly possible to see that deep learning systems could provide a great deal of assistance in the creation of day-to-day media. Tasks that are based on repetitiveness or formula would be the obvious targets. The truth is, much of our industry is repetitive and formulaic. Investors prefer content that is more likely to be a hit, and this leads to replication over innovation.

So, are we heading for “Skynet” and in need of Arnold to save us? I thought it was very telling that IBM occupied the central stand position in Hall 7 — traditionally the home of the tech companies that have driven creativity in post. Clearly, IBM and its peers are staking their claim. I have no doubt that DL and ML will make massive changes to this industry in the years ahead. Creativity is probably, though not necessarily, the only defence that will keep mere humans in the game.

That said, at IBC 2018 the most popular place for us mere humans to visit was a bar area called The Beach, where we largely drank Heineken. If the ultimate deep learning system were tasked with emulating media people, surely it would create digital alcohol and spend hours talking nonsense rather than try to take over the media world? So perhaps we have a few years left yet.


David Cox is a VFX compositor and colorist with 20-plus years of experience. He started his career with MPC and The Mill before forming his own London-based post facility. Cox recently created interactive projects with full body motion sensors and 4D/AR experiences.

Lost in Time game show embraces ‘Interactive Mixed Reality’

By Daniel Restuccio

The Future Group — which has partnered with Fremantle Media, Ross Video and Epic Games — has created a new super-agile entertainment platform that blends linear television and game technology into a hybrid format called “Interactive Mixed Reality.”

The brainchild of Bård Anders Kasin, this innovative content deployment medium generated a storm of industry buzz at NAB 2016, and the company’s first production, Lost in Time — a weekly primetime game show — is scheduled to air this month on Norwegian television.

The Idea
The idea originated more than 13 years ago in Los Angeles. In 2003, at age 22, Kasin, a self-taught multimedia artist from Notodden, Norway, sent his CV and a bunch of media projects to Warner Bros. in Burbank, California, in hopes of working on The Matrix. They liked what they saw: his interview was on a Wednesday, and by Friday he had a job as a technical director.

Kasin immersed himself in the cutting-edge movie revolution that was The Matrix franchise. The Wachowskis’ visionary production was a masterful inspiration, featuring a compelling sci-fi action story, Oscar-winning editing, breakthrough visual effects (“bullet time”) and an expanded media universe that included video games and the anime-style anthology The Animatrix. The Matrix Reloaded and The Matrix Revolutions were shot at the same time, along with more than an hour of footage designed specifically for the video game. The Matrix Online, an Internet gaming platform, was a direct sequel to The Matrix Revolutions.

L-R: Bård Anders Kasin and Jens Petter Høili.

Fast forward to 2013, when Kasin connected with software engineer and serial entrepreneur Jens Petter Høili, founder of EasyPark and Fairchance. “There was this producer I knew in Norway,” explains Kasin, “who runs this thing called the Artists’ Gala charity. He called and said, ‘There’s this guy you should meet. I think you’ll really hit it off.’” Kasin met Høili, had lunch and discussed the projects each was working on. “We both immediately felt there was a connection,” recalls Kasin. No persuading was necessary. “We thought that if we combined forces we were going to get something that’s truly amazing.”

That meeting of the minds led to the merging of their companies and the formation of The Future Group. The mandate of the Oslo-based company is to revolutionize the television medium by combining linear TV production with cutting-edge visual effects, interactive gameplay, home viewer participation and e-commerce. Its IMR concept ditches the individual, limiting virtual reality (VR) headset but conceptually keeps the idea of creating content that is a multi-level, intricate and immersive experience.

Lost in Time
Fast forward again, this time to 2014. Through another mutual friend, The Future Group formed an alliance with Fremantle Media. Fremantle, a global media company, has produced some of the highest-rated and longest-running shows in the world, and is responsible for top international entertainment brands such as Got Talent, Idol and The X Factor.

Kasin started developing the first IMR prototype. At this point, the Lost in Time production had expanded to include Ross Video and Epic Games. Ross Video is a broadcast technology innovator and Epic Games is a video game producer and the inventor of the Unreal game engine. The Future Group, in collaboration with Ross Video, engineered the production technology and developed a broadcast-compatible version of the Unreal game engine called Frontier, shown at NAB 2016, to generate high-resolution, realtime graphics used in the production.

On January 15, 2015, the first prototype was shown. When Fremantle saw it, they were amazed and went directly to stage two, moving to the larger stages at Dagslys Studios. “Lost in Time has been the driver for the technology,” explains Kasin. “We’re a very content-driven company. We’ve used that content to drive the development of the platform and the technology, because there’s nothing better than having actual content to set the requirements for the technology rather than building technology for general purposes.”

In Lost in Time, three studio contestants are set loose on a greenscreen stage and perform timed, physical game challenges. The audience, which could be watching at home or on a mobile device, sees the contestant seamlessly blended into a virtual environment built out of realtime computer graphics. The environments are themed as western, ice age, medieval times and Jurassic period sets (among others) with interactive real props.

The audience can watch the contestants play the game or participate in the contest as players on their mobile device at home, riding the train or literally anywhere. They can play along or against contestants, performing customized versions of the scripted challenges in the TV show. The mobile content uses graphics generated from the same Unreal engine that created the television version.

“It’s a platform,” reports partner Høili, referring to the technology behind Lost in Time. A business model is a way you make money, notes tech blogger Jonathan Clarks, and a platform is something that generates business models. So while Lost in Time is a specific game show with specific rules, built on television technology, it’s really a business technology framework where multiple kinds of interactive content could be generated. Lost in Time is like the Unreal engine itself, software that can be used to create games, VR experiences and more, limited only by the imagination of the content creator. What The Future Group has done is create a high-tech kitchen from which any kind of cuisine can be cooked up.

Soundstages and Gear
Lost in Time is produced on two greenscreen soundstages at Dagslys Studios in Oslo. The main “gameplay set” takes up all of Studio 1 (5,393 square feet) and the “base station set” is on Studio 3 (1,345 square feet). Over 150 liters (40 gallons) of ProCyc greenscreen paint was used to cover both studios.

Ross Video, in collaboration with The Future Group, devised an integrated technology of hardware and software that supports the Lost in Time production platform. This platform consists of custom cameras, lenses, tracking, control, delay, chroma key, rendering, greenscreen, lighting and switcher technology. This system includes the new Frontier hardware, introduced at NAB 2016, which runs the Unreal game engine 3D graphics software.

Eight Sony HDC-2500 cameras running HZC-UG444 software are used for the production. Five are deployed on the “gameplay set.” One camera rides on a technocrane, two are on manual pedestal dollies and one is on Steadicam. For fast-action tracking shots, another camera sits on the Furio RC dolly that rides on a straight track that runs the 90-foot length of the studio. The Furio RC pedestal, controlled by SmartShell, guarantees smooth movement in virtual environments and uses absolute encoders on all axes to send complete 3D tracking data into the Unreal engine.

There is also one Sony HDC-P1 camera that is used as a static, center stage, ceiling cam flying 30 feet above the gameplay set. There are three cameras in the home base set, two on Furio Robo dollies and one on a technocrane. In the gameplay set, all cameras (except the ceiling cam) are tracked with the SolidTrack IR markerless tracking system.

All filming is done at 1080p25 and output as RGB 444 over SDI. A custom LUT on the cameras avoids clipping and preserves an expanded dynamic range for post work. All nine camera ISOs, the separate camera “clean feeds,” are recorded with the “flat” LUT in RGB 444. For all other video streams, including keying and compositing, LUT boxes invert the signal back to Rec 709.

Barnfind provided the fiber optic network infrastructure that links all the systems. Ross Video Dashboard controls the BarnOne frames as well as the router, Carbonite switchers, Frontier graphics system and robotic cameras.

A genlock signal distributed via OpenGear syncs all the gear to a master clock. The Future Group added proprietary code to Unreal so the render engine can genlock, receive and record linear timecode (LTC) and output video via SDI in all industry standard formats. They also added additional functionality to the Unreal engine to control lights via DMX, send and receive GPI signals, communicate with custom sensors, buttons, switches and wheels used for interaction with the games and controlling motion simulation equipment.

In order for the “virtual cameras” in the graphics systems and the real cameras viewing the real elements to have exactly the same perspectives, an “encoded” camera lens is required that provides the lens focal length (zoom) and focus data. In addition, the virtual lens field of view (FOV) must be properly calibrated to match the FOV of the real lens. Full-servo digital lenses with 16-bit encoders are needed for virtual productions. Lost in Time uses three Canon lenses with these specifications: Canon HJ14ex4.3B-IASE, Canon HJ22ex7.6B-IASE-A and Canon KJ17ex7.7B-IASE-A.
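That FOV match comes down to the standard pinhole relation, FOV = 2·atan(sensor width / (2 · focal length)). The sketch below assumes a 2/3-inch 16:9 broadcast sensor (about 9.59mm wide) and uses example focal lengths; it illustrates the calibration math rather than reproducing calibration data from the production.

```python
# Field of view of the virtual camera must match the real lens, via the pinhole
# relation FOV = 2 * atan(sensor_width / (2 * focal_length)). The 9.59 mm sensor
# width assumes a 2/3-inch 16:9 broadcast sensor; focal lengths are examples only.

import math

def horizontal_fov_deg(focal_length_mm: float, sensor_width_mm: float = 9.59) -> float:
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

for f in (4.3, 7.6, 25.0, 100.0):  # wide ends of the Canon zooms, plus longer settings
    print(f"f = {f:6.1f} mm -> horizontal FOV of about {horizontal_fov_deg(f):5.1f} degrees")
```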

The Lost in Time camera feeds are routed to the Carbonite family hardware: Ultrachrome HR, Carbonite production frame and Carbonite production switcher. Carbonite Ultrachrome HR is a stand-alone multichannel chroma key processor based on the Carbonite Black processing engine. On Lost in Time, the Ultrachrome switcher accepts the Sony camera RGB 444 signal and uses high-resolution chroma keyers, each with full control of delay management, fill color temperature for scene matching, foreground key and fill, and internal storage for animated graphics.
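Ultrachrome HR does its keying in dedicated realtime hardware; as a rough illustration of what a basic green-screen key does, here is a toy software version. The dominance threshold and the tiny test images are arbitrary illustrative values, not the Ultrachrome implementation.

```python
# Toy chroma key: build a matte wherever green dominates the other channels, then
# composite the foreground over a background plate. Illustrative values only.

import numpy as np

def green_screen_composite(foreground: np.ndarray, background: np.ndarray,
                           dominance: float = 1.3) -> np.ndarray:
    """foreground/background: float32 RGB images in [0, 1] with identical shapes."""
    r, g, b = foreground[..., 0], foreground[..., 1], foreground[..., 2]
    is_green = (g > dominance * r) & (g > dominance * b)   # key where green clearly dominates
    matte = np.where(is_green, 0.0, 1.0)[..., None]        # 0 = transparent, 1 = keep foreground
    return matte * foreground + (1.0 - matte) * background

if __name__ == "__main__":
    fg = np.zeros((4, 4, 3), dtype=np.float32)
    fg[..., 1] = 1.0                 # a pure greenscreen frame...
    fg[1, 1] = [0.8, 0.5, 0.4]       # ...with one "contestant" pixel
    bg = np.full((4, 4, 3), 0.2, dtype=np.float32)
    comp = green_screen_composite(fg, bg)
    print(comp[1, 1], comp[0, 0])    # contestant pixel kept, green pixel replaced
```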

Isolated feeds of all nine cameras are recorded, plus two quad-splits with the composited material and the program feed. Metus Ingest, a proprietary hardware solution from The Future Group, was used for all video recording. Metus Ingest can simultaneously capture and record up to six HD channels of video and audio from multiple devices on a single platform.

Post Production
While the system is capable of being broadcast live, they decided not to go live for the debut. Instead they are only doing a modest amount of post to retain the live feel. That said, the potential of the post workflow on Lost in Time arguably sets a whole new post paradigm. “Post allows us to continue to develop the virtual worlds for a longer amount of time,” says Kasin. “This gives us more flexibility in terms of storytelling. We’re always trying to push the boundaries with the creative content. How we tell the story of the different challenges.”

All camera metadata, including position, rotation and lens data, as well as all game interaction, was recorded in the Unreal engine with a proprietary system. This allowed the graphics to be played back later as a recorded session. It also let the editors change any part of the graphics non-destructively: they could replace 3D models or textures, change the tracking or point of view of any of the virtual cameras in post, or add cameras for more virtual “coverage.”
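The recording system itself is proprietary, so the sketch below only illustrates the record-and-replay idea: write per-frame camera metadata during the shoot, then feed the same move back through the engine later with different assets or extra coverage. The field names and JSON format are my own assumptions for illustration.

```python
# Record-and-replay sketch: per-frame camera metadata is logged during the shoot
# and can be replayed later to drive a virtual camera again. Fields and file
# format are illustrative assumptions, not The Future Group's system.

import json

def record_frame(log, timecode, position, rotation, focal_length_mm, focus_distance_m):
    log.append({
        "timecode": timecode,
        "position": position,        # (x, y, z) in world units
        "rotation": rotation,        # (pitch, yaw, roll) in degrees
        "focal_length_mm": focal_length_mm,
        "focus_distance_m": focus_distance_m,
    })

def save_session(log, path):
    with open(path, "w") as f:
        json.dump(log, f, indent=2)

def load_session(path):
    with open(path) as f:
        return json.load(f)

if __name__ == "__main__":
    session = []
    record_frame(session, "10:00:00:00", (0.0, 0.0, 1.7), (0.0, 90.0, 0.0), 7.6, 3.5)
    record_frame(session, "10:00:00:01", (0.1, 0.0, 1.7), (0.0, 91.0, 0.0), 7.6, 3.5)
    save_session(session, "camera_take_01.json")
    # Later, in post, the recorded move can drive a virtual camera again,
    # with different assets, a new point of view, or extra "coverage" cameras.
    for frame in load_session("camera_take_01.json"):
        print(frame["timecode"], frame["position"], frame["rotation"])
```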

Lost in Time episodes were edited as a multicam project, based on the program feed, in Adobe Premiere CC. They have a multi-terabyte storage solution from Pixit Media running Tiger Technology’s workflow manager. “The EDL from the final edit is fed through a custom system, which then builds a timeline in Unreal to output EXR sequences for a final composite.”

That’s it for now, but be sure to visit this space again to see part two of our coverage on The Future Group’s Lost in Time. Our next story will cover the real and virtual lighting systems, the SolidTrack IR tracking system and the backend component, along with an interview with Epic Games’ Kim Libreri about Unreal engine development and integration, and a conversation with a Lost in Time episode editor.


Daniel Restuccio, who traveled to Oslo for this piece, is a writer, producer and teacher. He is currently multimedia department chairperson at California Lutheran University in Thousand Oaks.