
Virtual Production Field Guide: Fox VFX Lab’s Glenn Derry

Just ahead of SIGGRAPH, Epic Games has published a resource guide called “The Virtual Production Field Guide” — a comprehensive look at how virtual production impacts filmmakers, from directors to the art department to stunt coordinators to VFX teams and more. The guide is workflow-agnostic.

The use of realtime game engine technology has the potential to impact every aspect of traditional filmmaking, and the technology is increasingly being used in productions ranging from films like Avengers: Endgame and the upcoming Artemis Fowl to TV series like Game of Thrones.

The Virtual Production Field Guide offers an in-depth look at different types of techniques from creating and integrating high-quality CG elements live on set to virtual location scouting to using photoreal LED walls for in-camera VFX. It provides firsthand insights from award-winning professionals who have used these techniques – including directors Kenneth Branagh and Wes Ball, producers Connie Kennedy and Ryan Stafford, cinematographers Bill Pope and Haris Zambarloukos, VFX supervisors Ben Grossmann and Sam Nicholson, virtual production supervisors Kaya Jabar and Glenn Derry, editor Dan Lebental, previs supervisor Felix Jorge, stunt coordinators Guy and Harrison Norris, production designer Alex McDowell, and grip Kim Heath.

As mentioned, the guide is dense with information, so we decided to run an excerpt to give you an idea of what it covers.

Here is an interview with Glenn Derry, founder and VP of visual effects at Fox VFX Lab, which offers a variety of virtual production services with a focus on performance capture. Derry is known for his work as a virtual production supervisor on projects like Avatar, Real Steel and The Jungle Book.

Let’s find out more.

How has performance capture evolved since projects such as The Polar Express?
In those earlier eras, there was no realtime visualization during capture. You captured everything as a standalone piece, and then you did what they called the director’s layout. After the fact, you would assemble the animation sequences from the motion data captured. Today, we’ve got a combo platter where we’re able to visualize in realtime.
When we bring a cinematographer in, he can start lining up shots with another device called the hybrid camera. It’s a tracked reference camera that he can handhold. I can immediately toggle between an Unreal overview or a camera view of that scene.

The earlier process was minimal in terms of aesthetics. We did everything we could in MotionBuilder, and we made it look as good as it could. Now we can make a lot more mission-critical decisions earlier in the process because the aesthetics of the renders look a lot better.

What are some additional uses for performance capture?
Sometimes we’re working with a pitch piece, where the studio is deciding whether they want to make a movie at all. We use the capture stage to generate what the director has in mind tonally and how the project could feel. We could do either a short little pitch piece or something bigger; for Call of the Wild, we created 20 minutes covering three key scenes from the film to show the studio we could make it work.

The second the movie gets greenlit, we flip over into preproduction. Now we’re breaking down the full script and working with the art department to create concept art. Then we build the movie’s world out around those concepts.

We have our team doing environmental builds based on sketches. Or in some cases, the concept artists themselves are in Unreal Engine doing the environments. Then our virtual art department (VAD) cleans those up and optimizes them for realtime.

Are the artists modeling directly in Unreal Engine?
The artists model in Maya, Modo, 3ds Max, etc. — we’re not particular about the application as long as the output is FBX. The look development, which is where the texturing happens, is all done within Unreal. We’ll also have artists working in Substance Painter and it will auto-update in Unreal. We have to keep track of assets through the entire process, all the way through to the last visual effects vendor.
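As an editorial aside, here is a minimal sketch of what the FBX hand-off Derry describes can look like when scripted inside Unreal. It assumes the Python Editor Script Plugin is enabled; the file path and destination folder are hypothetical.

```python
# Minimal sketch: automated FBX import into Unreal via editor Python scripting.
# Assumes the Python Editor Script Plugin is enabled; paths are hypothetical.
import unreal

def import_fbx(fbx_path, destination="/Game/VAD/Incoming"):
    task = unreal.AssetImportTask()
    task.filename = fbx_path           # FBX exported from Maya, Modo, 3ds Max, etc.
    task.destination_path = destination
    task.automated = True              # suppress import dialogs
    task.save = True                   # save the imported .uasset immediately
    unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
    return list(task.imported_object_paths)

print(import_fbx("D:/concepts/saloon_door.fbx"))
```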

How do you handle the level of detail decimation so realtime assets can be reused for visual effects?
The same way we would work on AAA games. We begin with high-resolution detail and then use combinations of texture maps, normal maps and bump maps. That allows us to get high-texture detail without a huge polygon count. There are also some amazing LOD [level of detail] tools built into Unreal, which enable us to take a high-resolution asset and derive something that looks pretty much identical unless you’re right next to it, but runs at a much higher frame rate.
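To make the LOD derivation Derry mentions concrete, here is a minimal sketch that uses Unreal’s editor scripting to generate reduced LODs for a high-resolution mesh. It assumes the Editor Scripting Utilities and Python plugins are enabled; the asset path and reduction percentages are hypothetical.

```python
# Minimal sketch: derive LODs for a high-resolution StaticMesh in the Unreal editor.
# Assumes the Editor Scripting Utilities and Python plugins are enabled;
# the asset path and reduction percentages are hypothetical.
import unreal

mesh = unreal.EditorAssetLibrary.load_asset("/Game/VAD/Props/SM_HeroProp")

options = unreal.EditorScriptingMeshReductionOptions()
options.reduction_settings = [
    unreal.EditorScriptingMeshReductionSettings(percent_triangles=1.0, screen_size=1.0),    # LOD0: full detail
    unreal.EditorScriptingMeshReductionSettings(percent_triangles=0.5, screen_size=0.5),    # LOD1
    unreal.EditorScriptingMeshReductionSettings(percent_triangles=0.25, screen_size=0.25),  # LOD2
]

lod_count = unreal.EditorStaticMeshLibrary.set_lods(mesh, options)
unreal.EditorAssetLibrary.save_loaded_asset(mesh)
print(f"Mesh now has {lod_count} LODs")
```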

Do you find there’s a learning curve for crew members more accustomed to traditional production?
We’re the team productions come to when they want to do realtime on live-action sets. That’s pretty much all we do. That said, it requires prep, and if you want it to look great, you have to make decisions. Whether you were shooting rear projection back in the 1940s or the large rear-projection setups on Terminator 2, you still had to have all that material pre-shot to make it work.
It’s the same concept in realtime virtual production. If you want to see it look great in Unreal live on the day, you can’t just show up and decide. You have to pre-build that world and figure out how it’s going to integrate.

The visual effects team and the virtual production team have to be involved from day one. They can’t just be brought in at the last minute. And that’s a significant change for producers and productions in general. It’s not that it’s a tough nut to swallow, it’s just a very different methodology.

How does the cinematographer collaborate with performance capture?
There are two schools of thought: one is to work live with camera operators, shooting the tangible part of the action that’s going on, as the camera is an actor in the scene as much as any of the people are. You can choreograph it all out live if you’ve got the performers and the suits. The other version of it is treated more like a stage play. Then you come back and do all the camera coverage later. I’ve seen DPs like Bill Pope and Caleb Deschanel pick this right up.

How is the experience for actors working in suits and a capture volume?
One of the harder problems we deal with is eye lines. How do we assist the actors so that they’re immersed in the scene and not just looking around at a bunch of gray-box material on a set? On any modern visual effects movie, you’re going to be standing in front of a 50-foot-tall bluescreen at some point.

Performance capture is in some ways more actor-centric versus a traditional set because there aren’t all the other distractions in a volume such as complex lighting and camera setup time. The director gets to focus in on the actors. The challenge is getting the actors to interact with something unseen. We’ll project pieces of the set on the walls and use lasers for eye lines. The quality of the HMDs today is also excellent for showing the actors what they would be seeing.

How do you see performance capture tools evolving?
I think a lot of the stuff we’re prototyping today will soon be available to consumers, home content creators, YouTubers, etc. A lot of what Epic develops also gets released in the engine. Money won’t be the driver in terms of being able to use the tools; your creative vision will be.

My teenage son uses Unreal Engine to storyboard. He knows how to do fly-throughs and use the little camera tools we built — he’s all over it. As it becomes easier to create photorealistic visual effects in realtime with a smaller team and at very high fidelity, the movie business will change dramatically.

Something that used to cost $10 million to produce might be a million or less. It’s not going to take away from artists; you still need them. But you won’t necessarily need these behemoth post companies because you’ll be able to do a lot more yourself. It’s just like desktop video — what used to take hundreds of thousands of dollars’ worth of Flame artists, you can now do yourself in After Effects.

Do you see new opportunities arising as a result of this democratization?
Yes, there are a lot of opportunities. High-quality, good-looking CG assets are still expensive to produce and expensive to make look great. There are already stock sites like TurboSquid and CGTrader where you can purchase beautiful assets economically.

But the final assembly and coalescing of environments and characters still calls for a lot of talented people to do it effectively. I can see companies emerging out of that necessity. We spend a lot of time talking about assets because they’re the core of everything we do. You need to have a set to shoot on and you need compelling characters, which is why actors won’t go away.

What’s happening today isn’t even the tip of the iceberg. There are going to be 50 more big technological breakthroughs along the way. There’s tons of new content being created for Apple, Netflix, Amazon, Disney+, etc. And they’re all going to leverage virtual production.
What’s changing is previs’ role and methodology in the overall scheme of production. While you might previously have thought of previs as focused on the preproduction phase of a project and less integral to production, that conception shifts with a realtime engine. Previs has also typically been a hands-off collaboration: in a traditional pipeline, a previs artist receives creative notes and art direction, then goes off to create animation and presents it back to the creatives later for feedback.

In the realtime model, because the assets are directly malleable and rendering time is not a limiting factor, creatives can be much more directly and interactively involved in the process. This leads to higher levels of agency and creative satisfaction for all involved. This also means that instead of working with just a supervisor you might be interacting with the director, editor and cinematographer to design sequences and shots earlier in the project. They’re often right in the room with you as you edit the previs sequence and watch the results together in realtime.

Previs imagery has continued to increase in visual fidelity, which means a closer relationship between previs and final-pixel image quality. When the assets you develop as a previs artist are of sufficient quality, they may form the basis of final models for visual effects. The line between previs and final will continue to blur.

The efficiency of modeling assets only once is evident to all involved. By spending the time early in the project to create models of a very high quality, post begins at the outset of a project. Instead of waiting until the final phase of post to deliver the higher-quality models, the production has those assets from the beginning. And the models can also be fed into ancillary areas such as marketing, games, toys and more.

Epic Games’ Unreal Engine 4.21 adds more mobile optimizations, efficiencies

Epic Games’ Unreal Engine 4.21 is designed to offer greater efficiency, performance and stability for developers working on any platform.

Unreal Engine 4.21 adds even more mobile optimizations to both Android and iOS, up to 60% speed increases when cooking content and more power and flexibility in the Niagara effects toolset for realtime VFX. Also, the new production-ready Replication Graph plugin enables developers to build multiplayer experiences at a scale that hasn’t been possible before, and Pixel Streaming allows users to stream interactive content directly to remote devices with no compromises on rendering quality.

Updates in Unreal Studio 4.21 also offer new capabilities and enhanced productivity for users in the enterprise space, including architecture, manufacturing, product design and other areas of professional visualization. Unreal Studio’s Datasmith workflow toolkit now includes support for Autodesk Revit and enhanced material translation for Autodesk 3ds Max, enabling more efficient design review and iteration.

Here is more about the key features:

Replication Graph: The Replication Graph plugin, which is now production-ready, makes it possible to customize network replication in order to build large-scale multiplayer games that would not be viable with traditional replication strategies.

Niagara Enhancements: The Niagara VFX feature set continues to grow, with substantial quality of life improvements and Nintendo Switch support added in Unreal Engine 4.21.

Sequencer Improvements: New capabilities within Sequencer allow users to record incoming video feeds to disk as OpenEXR frames and create a track in Sequencer, with the ability to edit and scrub the track as usual. This enables users to synchronize video with CG assets and play them back together from the timeline.

Pixel Streaming (Early Access): With the new Pixel Streaming feature, users can author interactive experiences such as product configurations or training applications, host them on a cloud-based GPU or local server, and stream them to remote devices via web browser without the need for additional software or porting.

Mobile Optimizations: The mobile development process gets even better thanks to all of the mobile optimizations that were developed for Fortnite‘s initial release on Android, in addition to all of the iOS improvements from Epic’s ongoing updates. With the help of Samsung, Unreal Engine 4.21 includes all of the Vulkan engineering and optimization work that was done to help ship Fortnite on the Samsung Galaxy Note 9 and is 100% feature compatible with OpenGL ES 3.1.

Much Faster Cook Times: In addition to the optimized cooking process, low-level code avoids performing unnecessary file system operations, and cooker timers have been streamlined.

Gauntlet Automation Framework (Early access): The new Gauntlet automation framework enables developers to automate the process of deploying builds to devices, running one or more clients and/or servers, and processing the results. Gauntlet scripts can automatically profile points of interest, validate gameplay logic, check return values from backend APIs and more. Gauntlet has been battle-tested for months in the process of optimizing Fortnite and is a key part of ensuring it runs smoothly on all platforms.

Animation System Optimizations and Improvements: Unreal Engine’s animation system continues to build on best-in-class features thanks to new workflow improvements, better surfacing of information, new tools, and more.

Blackmagic Video Card Support: Unreal Engine 4.21 also adds support for Blackmagic video I/O cards for those working in film and broadcast. Creatives in the space can now choose between Blackmagic and AJA Video Systems, the two most popular options for professional video I/O.

Improved Media I/O: Unreal Engine 4.21 now supports 10-bit video I/O, audio I/O, 4K, and Ultra HD output over SDI, as well as legacy interlaced and PsF HD formats, enabling greater color accuracy and integration of some legacy formats still in use by large broadcasters.

Windows Mixed Reality: Unreal Engine 4.21 natively supports the Windows Mixed Reality (WMR) platform and headsets, such as the HP Mixed Reality headset and the Samsung HMD Odyssey headset.

Magic Leap Improvements: Unreal Engine 4.21 supports all the features needed to develop complete applications on Magic Leap’s Lumin-based devices — rendering, controller support, gesture recognition, audio input/output, media, and more.

Oculus Avatars: The Oculus Avatar SDK includes an Unreal package to assist developers in implementing first-person hand presence for the Rift and Touch controllers. The package includes avatar hand and body assets that are viewable by other users in social applications.

Datasmith for Revit (Unreal Studio): Unreal Studio’s Datasmith workflow toolkit for streamlining the transfer of CAD data into Unreal Engine now includes support for Autodesk Revit. Supported elements include materials, metadata, hierarchy, geometric instancing, lights and cameras.

Multi-User Viewer Project Template (Unreal Studio): A new project template for Unreal Studio 4.21 enables multiple users to connect in a real-time environment via desktop or VR, facilitating interactive, collaborative design reviews across any work site.

Accelerated Automation with Jacketing and Defeaturing (Unreal Studio): Jacketing automatically identifies meshes and polygons that have a high probability of being hidden from view, and lets users hide, remove or move them to another layer; this command is also available through Python so Unreal Studio users can integrate this step into automated preparation workflows. Defeaturing automatically removes unnecessary detail (e.g. blind holes, protrusions) from mechanical models to reduce polygon count and boost performance.

Enhanced 3ds Max Material Translation (Unreal Studio): There is now support for most commonly used 3ds Max maps, improving visual fidelity and reducing rework. Those in the free Unreal Studio beta can now translate 3ds Max material graphs to Unreal graphs when exporting, making materials easier to understand and work with. Users can also leverage improvements in BRDF matching from V-Ray materials, especially metal and glass.

DWG and Alias Wire Import (Unreal Studio): Datasmith now supports DWG and Alias Wire file types, enabling designers to import more 3D data directly from Autodesk AutoCAD and Autodesk Alias.

Epic Games launches Unreal Engine 4.20

Epic Games has introduced Unreal Engine 4.20, which allows developers to build even more realistic characters and immersive environments across games, film and TV, VR/AR/MR and enterprise applications. The Unreal Engine 4.20 release combines the latest realtime rendering advancements with improved creative tools, making it even easier to ship games across all platforms. With hundreds of optimizations, especially for iOS, Android and Nintendo Switch — which have been built for Fortnite and are now rolled into Unreal Engine 4.20 and released to all users — Epic is providing developers with the scalable tools they need for these types of projects.

Artists working in visual effects, animation, broadcast and virtual production will find enhancements for digital humans, VFX and cinematic depth of field, allowing them to create realistic images across all forms of media and entertainment. In the enterprise space, Unreal Studio 4.20 includes upgrades to the UE4 Datasmith plugin suite, such as SketchUp support, which make it easier to get CAD data prepped, imported and working in Unreal Engine.

Here are some key features of Unreal Engine 4.20:

A new proxy LOD system: Users can handle sprawling worlds via UE4’s production-ready Proxy LOD system for the easy reduction of rendering cost due to poly count, draw calls and material complexity. Proxy LOD offers big gains when developing for mobile and console platforms.

A smoother mobile experience: Over 100 mobile optimizations developed for Fortnite come to all 4.20 users, marking a major shift for easy “shippability” and seamless gameplay optimization across platforms. Major enhancements include improved Android debugging, mobile Landscape improvements, RHI thread on Android and occlusion queries on mobile.

Works better with Switch: Epic has improved Nintendo Switch development by releasing to all 4.20 users the many performance and memory improvements built for Fortnite on Nintendo Switch.

Niagara VFX (early access): Unreal Engine’s new programmable VFX editor, Niagara, is now available in early access and will help developers take their VFX to the next level. This new suite of tools is built from the ground up to give artists unprecedented control over particle simulation, rendering and performance for more sophisticated visuals. This tool will eventually replace the Unreal Cascade particle editor.

Cinematic depth of field: Unreal Engine 4.20 delivers tools for achieving depth of field at true cinematic quality in any scene. This brand-new implementation replaces the Circle DOF method. It’s faster, cleaner and provides a cinematic appearance through the use of a procedural bokeh simulation. Cinematic DOF also supports alpha channel and dynamic resolution stability, and has multiple settings for scaling up or down on console platforms based on project requirements. This feature debuted at GDC this year as part of the Star Wars “Reflections” demo by Epic, ILMxLAB and Nvidia.

Digital human improvements: In-engine tools now include dual-lobe specular/double Beckman specular models, backscatter transmission in lights, boundary bleed color subsurface scattering, iris normal slot for eyes and screen space irradiance to build the most cutting-edge digital humans in games and beyond.

Live record and replay: All developers now have access to code from Epic’s Fortnite Replay system. Content creators can easily use footage of recorded gameplay sessions to create incredible replay videos.

Sequencer cinematic updates: New features include frame accuracy, media tracking, curve editor/evaluation and Final Cut Pro 7 XML import/export.

Shotgun integration: Shotgun, a production management and asset tracking solution, is now supported. This will streamline workflows for Shotgun users in game development who are leveraging Unreal’s realtime performance. Shotgun users can assign tasks to specific assets within Unreal Engine.

Mixed reality capture support (early access): Users with virtual production workflows will now have mixed reality capture support that includes video input, calibration and in-game compositing. Supported webcams and HDMI capture devices allow users to pull real-world greenscreened video into the engine, and supported tracking devices can match your camera location to the in-game camera for more dynamic shots.

AR support: Unreal Engine 4.20 ships with native support for ARKit 2, which includes features for creating shared, collaborative AR experiences. Also included is the latest support for Magic Leap One and Google ARCore 1.2.

Metadata control: Import metadata from 3ds Max, SketchUp and other common CAD tools to batch-process objects by property or expose metadata via scripts. Metadata enables more creative uses of Unreal Studio, such as Python script commands for updating all meshes of a certain type, or displaying relevant information in interactive experiences (see the short sketch after this feature list).

Mesh editing tools: Unreal Engine now includes a basic mesh editing toolset for quick, simple fixes to imported geometry without repairing it in the source package and re-importing. These tools are ideal for small touch-ups that don’t warrant a round trip to another application. Datasmith also now includes a base Python script that can generate level of detail (LOD) meshes automatically.

Non-destructive re-import: Achieve faster iteration through the new parameter tracking system, which monitors updates in both the source data and Unreal Editor, and only imports changed elements. Previous changes to the scene within Unreal Editor are retained and reapplied when source data updates.
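Following up on the metadata control feature above, here is a minimal sketch of what batch-processing imported assets by a metadata property can look like from Unreal’s Python console. It assumes the importer exposed the source metadata as editor asset tags; the content path, tag name and value are hypothetical.

```python
# Minimal sketch: find every imported asset whose metadata tag matches a value
# and act on it. Assumes the Python Editor Script Plugin is enabled and that the
# import preserved metadata as asset tags; path, tag and value are hypothetical.
import unreal

CONTENT_PATH = "/Game/ImportedScene"
TAG, WANTED = "Category", "Door"

for asset_path in unreal.EditorAssetLibrary.list_assets(CONTENT_PATH, recursive=True):
    asset = unreal.EditorAssetLibrary.load_asset(asset_path)
    if unreal.EditorAssetLibrary.get_metadata_tag(asset, TAG) == WANTED:
        # Batch step goes here: re-tag, swap materials, collect for a report, etc.
        print(f"Matched {asset_path}")
```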

Epic Games, Nvidia team on enterprise solutions for VR app developers

Epic Games and Nvidia have teamed up to offer enterprise-grade solutions to help app developers create more immersive VR experiences.

To help ease enterprise VR adoption, Epic has integrated Nvidia Quadro professional GPUs into the test suite for Unreal Engine 4, the company’s realtime toolset for creating applications across PC, console, mobile, VR and AR platforms. This ensures Nvidia technologies integrate seamlessly into developers’ workflows, delivering results for everything from CAVEs and multi-projection systems through to enterprise VR and AR solutions.

“With our expanding focus on industries outside of games, we’ve aligned ourselves ever more closely with Nvidia to offer an enterprise-grade experience,” explains Marc Petit, GM of the Unreal Engine Enterprise business. “Nvidia Quadro professional GPUs empower artists, designers and content creators who need to work unencumbered with the largest 3D models and datasets, tackle complex visualization challenges and deliver highly immersive VR experiences.”

The Human Race

One project that has driven this effort is Epic’s collaboration with GM and The Mill on The Human Race, a realtime short film and mixed reality experience featuring a configurable Chevrolet Camaro ZL1, which was built using Nvidia Quadro pro graphics.

Says Bob Pette, VP of professional visualization at Nvidia, “Unreal, from version 4.16, is the first realtime toolset to meet Nvidia Quadro partner standards. Our combined solution provides leaders in these markets the reliability and performance they require for the optimum VR experience.”

Lost in Time game show embraces ‘Interactive Mixed Reality’

By Daniel Restuccio

The Future Group — which has partnered with Fremantle Media, Ross Video and Epic Games — has created a new super-agile entertainment platform that blends linear television and game technology into a hybrid format called “Interactive Mixed Reality.”

The brainchild of Bård Anders Kasin, this innovative content deployment medium generated a storm of industry buzz at NAB 2016, and the company’s first production, Lost in Time — a weekly primetime game show — is scheduled to air this month on Norwegian television.

The Idea
The idea originated more than 13 years ago in Los Angeles. In 2003, at age 22, Kasin, a self-taught multimedia artist from Notodden, Norway, sent his CV and a bunch of media projects to Warner Bros. in Burbank, California, in hopes of working on The Matrix. They liked it. His interview was on a Wednesday and by Friday he had a job as a technical director.

Kasin immersed himself in the cutting-edge movie revolution that was The Matrix franchise. The Wachowskis’ visionary production was a masterful inspiration, featuring a compelling sci-fi action story, Oscar-winning editing, breakthrough visual effects (“bullet-time”) and an expanded media universe that included video games and the anime-style short The Animatrix. The Matrix Reloaded and The Matrix Revolutions were shot at the same time, along with more than an hour of footage designed specifically for the video game. The Matrix Online, an Internet gaming platform, was a direct sequel to The Matrix Revolutions.

L-R: Bård Anders Kasin and Jens Petter Høili.

Fast forward to 2013, when Kasin connected with software engineer and serial entrepreneur Jens Petter Høili, founder of EasyPark and Fairchance. “There was this producer I knew in Norway,” explains Kasin, “who runs this thing called the Artists’ Gala charity. He called and said, ‘There’s this guy you should meet. I think you’ll really hit it off.’” Kasin and Høili met for lunch and discussed the projects each was working on. “We both immediately felt there was a connection,” recalls Kasin. No persuading was necessary. “We thought that if we combined forces we were going to get something that’s truly amazing.”

That meeting of the minds led to the merging of their companies and the formation of The Future Group. The mandate of the Oslo-based company is to revolutionize the television medium by combining linear TV production with cutting-edge visual effects, interactive gameplay, home viewer participation and e-commerce. Their Interactive Mixed Reality (IMR) concept ditches the limiting, individual virtual reality (VR) headset but keeps the idea of creating content that is a multi-level, intricate and immersive experience.

Lost in Time
Fast forward again, this time to 2014. Through another mutual friend, The Future Group formed an alliance with Fremantle Media. Fremantle, a global media company, has produced some of the highest-rated and longest-running shows in the world, and is responsible for top international entertainment brands such as Got Talent, Idol and The X Factor.

Kasin started developing the first IMR prototype. At this point, the Lost in Time production had expanded to include Ross Video and Epic Games. Ross Video is a broadcast technology innovator and Epic Games is a video game producer and the inventor of the Unreal game engine. The Future Group, in collaboration with Ross Video, engineered the production technology and developed a broadcast-compatible version of the Unreal game engine called Frontier, shown at NAB 2016, to generate high-resolution, realtime graphics used in the production.

On January 15, 2015, the first prototype was shown. When Fremantle saw the prototype, they were amazed. They went directly to stage two, moving to the larger stages at Dagslys Studios. “Lost in Time has been the driver for the technology,” explains Kasin. “We’re a very content-driven company. We’ve used that content to drive the development of the platform and the technology, because there’s nothing better than having actual content to set the requirements for the technology rather than building technology for general purposes.”

In Lost in Time, three studio contestants are set loose on a greenscreen stage and perform timed, physical game challenges. The audience, which could be watching at home or on a mobile device, sees the contestant seamlessly blended into a virtual environment built out of realtime computer graphics. The environments are themed as western, ice age, medieval times and Jurassic period sets (among others) with interactive real props.

The audience can watch the contestants play the game or participate in the contest as players on their own mobile devices at home, riding the train or literally anywhere. They can play along with or against the contestants, performing customized versions of the scripted challenges in the TV show. The mobile content uses graphics generated from the same Unreal engine that created the television version.

“It’s a platform,” reports partner Høili, referring to the technology behind Lost in Time. A business model is a way you make money, notes tech blogger Jonathan Clarks, and a platform is something that generates business models. So while Lost in Time is a specific game show with specific rules, built on television technology, it’s really a business technology framework where multiple kinds of interactive content could be generated. Lost in Time is like the Unreal engine itself, software that can be used to create games, VR experiences and more, limited only by the imagination of the content creator. What The Future Group has done is create a high-tech kitchen from which any kind of cuisine can be cooked up.

Soundstages and Gear
Lost in Time is produced on two greenscreen soundstages at Dagslys Studios in Oslo. The main “gameplay set” takes up all of Studio 1 (5,393 square feet) and the “base station set” is on Studio 3 (1,345 square feet). Over 150 liters (40 gallons) of ProCyc greenscreen paint was used to cover both studios.

Ross Video, in collaboration with The Future Group, devised an integrated technology of hardware and software that supports the Lost in Time production platform. This platform consists of custom cameras, lenses, tracking, control, delay, chroma key, rendering, greenscreen, lighting and switcher technology. This system includes the new Frontier hardware, introduced at NAB 2016, which runs the Unreal game engine 3D graphics software.

Eight Sony HDC-2500 cameras running HZC-UG444 software are used for the production. Five are deployed on the “gameplay set.” One camera rides on a technocrane, two are on manual pedestal dollies and one is on Steadicam. For fast-action tracking shots, another camera sits on the Furio RC dolly that rides on a straight track that runs the 90-foot length of the studio. The Furio RC pedestal, controlled by SmartShell, guarantees smooth movement in virtual environments and uses absolute encoders on all axes to send complete 3D tracking data into the Unreal engine.

There is also one Sony HDC-P1 camera that is used as a static, center stage, ceiling cam flying 30 feet above the gameplay set. There are three cameras in the home base set, two on Furio Robo dollies and one on a technocrane. In the gameplay set, all cameras (except the ceiling cam) are tracked with the SolidTrack IR markerless tracking system.

All filming is done at 1080p25 and output as RGB 444 via SDI. They use a custom LUT on the cameras to avoid clipping and retain an expanded dynamic range for post work. All nine camera ISOs, the separate camera “clean feeds,” are recorded with the “flat” LUT in RGB 444. For all other video streams, including keying and compositing, they use LUT boxes to bring the signal back to Rec 709.

Barnfind provided the fiber optic network infrastructure that links all the systems. Ross Video Dashboard controls the BarnOne frames as well as the router, Carbonite switchers, Frontier graphics system and robotic cameras.

A genlock signal distributed via OpenGear syncs all the gear to a master clock. The Future Group added proprietary code to Unreal so the render engine can genlock, receive and record linear timecode (LTC), and output video via SDI in all industry-standard formats. They also added functionality to the Unreal engine to control lights via DMX, send and receive GPI signals, and communicate with the custom sensors, buttons, switches and wheels used for interacting with the games and controlling motion-simulation equipment.
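For readers unfamiliar with LTC, the timecode the engine records is essentially a frame count written as hours:minutes:seconds:frames at the production frame rate. Here is a minimal sketch of that conversion at the show’s 25fps, offered as an illustration only rather than The Future Group’s actual code.

```python
# Minimal sketch: frame count <-> LTC-style timecode at 25 fps (illustration only).
FPS = 25

def frames_to_timecode(frame_count: int) -> str:
    ff = frame_count % FPS
    ss = (frame_count // FPS) % 60
    mm = (frame_count // (FPS * 60)) % 60
    hh = frame_count // (FPS * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def timecode_to_frames(tc: str) -> int:
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 3600 + mm * 60 + ss) * FPS) + ff

assert timecode_to_frames(frames_to_timecode(123456)) == 123456
```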

In order for the “virtual cameras” in the graphics systems and the real cameras viewing the real elements to share exactly the same perspective, an “encoded” camera lens is required that reports the lens focal length (zoom) and focus data. In addition, the virtual lens field of view (FOV) must be properly calibrated to match the FOV of the real lens. Full-servo digital lenses with 16-bit encoders are needed for virtual productions. Lost in Time uses three Canon lenses with these specifications: Canon Hj14ex4.3B-IASE, Canon Hj22ex7.6B-IASE-A and Canon Kj17ex7.7B-IASE-A.
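To illustrate the FOV matching described above, here is a minimal sketch of the basic pinhole relationship between the encoded focal length, the sensor’s active width and the horizontal field of view. The 2/3-inch sensor width and the 14mm focal length are assumed example values, and real calibration also has to account for lens distortion and zoom/focus breathing.

```python
# Minimal sketch: horizontal FOV from encoded focal length and sensor width,
# using a simple distortion-free pinhole model (example values are assumptions).
import math

def horizontal_fov_deg(focal_length_mm: float, sensor_width_mm: float) -> float:
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# e.g. a 2/3-inch broadcast sensor (~9.6 mm active width) with the lens encoder
# reporting a 14 mm focal length
print(round(horizontal_fov_deg(14.0, 9.6), 1), "degrees")
```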

The Lost in Time camera feeds are routed to the Carbonite family hardware: Ultrachrome HR, Carbonite production frame and Carbonite production switcher. Carbonite Ultrachrome HR is a stand-alone multichannel chroma key processor based on the Carbonite Black processing engine. On Lost in Time, the Ultrachrome switcher accepts the Sony camera RGB 444 signal and uses high-resolution chroma keyers, each with full control of delay management, fill color temperature for scene matching, foreground key and fill, and internal storage for animated graphics.

Isolated feeds of all nine cameras are recorded, plus two quad-splits with the composited material and the program feed. Metus Ingest, a proprietary hardware solution from The Future Group, was used for all video recording. Metus Ingest can simultaneously capture and record up to six HD channels of video and audio from multiple devices on a single platform.

Post Production
While the system is capable of broadcasting live, they decided not to go live for the debut. Instead, they are doing only a modest amount of post to retain the live feel. That said, the post workflow on Lost in Time arguably sets a whole new post paradigm. “Post allows us to continue to develop the virtual worlds for a longer amount of time,” says Kasin. “This gives us more flexibility in terms of storytelling. We’re always trying to push the boundaries with the creative content: how we tell the story of the different challenges.”

All camera metadata, including position, rotation and lens data, as well as all game interaction, was recorded in the Unreal engine with a proprietary system. This allowed the graphics to be played back later as a recorded session, and it let the editors change any part of the graphics non-destructively. They could choose to replace 3D models or textures, change the tracking or point of view of any of the virtual cameras in post, or add cameras for more virtual “coverage.”
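As a rough illustration of the kind of per-frame record such a system captures (not The Future Group’s proprietary format; all field names are hypothetical):

```python
# Hypothetical per-frame tracking record; replaying a list of these drives a
# virtual camera through the recorded move so shots can be re-framed in post.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class CameraSample:
    timecode: str                         # LTC, e.g. "10:24:13:07" at 25 fps
    position: Tuple[float, float, float]  # x, y, z in Unreal world space
    rotation: Tuple[float, float, float]  # pitch, yaw, roll in degrees
    focal_length_mm: float                # from the lens zoom encoder
    focus_distance_m: float               # from the lens focus encoder
    camera_id: str                        # which tracked studio camera
```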

Lost in Time episodes were edited as a multicam project, based on the program feed, in Adobe Premiere CC. They have a multi-terabyte storage solution from Pixit Media running Tiger Technology’s workflow manager. “The EDL from the final edit is fed through a custom system, which then builds a timeline in Unreal to output EXR sequences for a final composite.”

That’s it for now, but be sure to visit this space again for part two of our coverage on The Future Group’s Lost in Time. Our next story will cover the real and virtual lighting systems, the SolidTrack IR tracking system and the backend component, and will include interviews with Epic Games’ Kim Libreri about Unreal engine development/integration and with a Lost in Time episode editor.


Daniel Restuccio, who traveled to Oslo for this piece, is a writer, producer and teacher. He is currently multimedia department chairperson at California Lutheran in Thousand Oaks.