Category Archives: VR

An artist’s view of SIGGRAPH 2019

By Andy Brown

While I’ve been lucky enough to visit NAB and IBC several times over the years, this was my first SIGGRAPH. Of course, there are similarities. There are lots of booths, lots of demos, lots of branded T-shirts, lots of pairs of black jeans and a lot of beards. I fit right in. I know we’re not all the same, but we certainly looked like it. (The stats regarding women and diversity in VFX are pretty poor, but that’s another topic.)

Andy Brown

You spend your whole career in one industry and I guess you all start to look more and more like each other. That’s partly the problem for the people selling stuff at SIGGRAPH.

There were plenty of compositing demos from all sorts of software companies. (Blackmagic was running a hands-on class for 20 people at a time.) I’m a Flame artist, so I think that Autodesk’s offering is best, obviously. Everyone’s compositing tool can play back large files and color correct, composite, edit, track and deliver, so in the midst of a buzzy trade show, the differences feel far fewer than the similarities.

Mocap
Take the world of tracking and motion capture as another example. There were more booths demonstrating tracking and motion capture than anything else in the main hall, and all that tech came in different shapes and sizes and an interesting mix of hardware and software.

The motion capture solution required for a Hollywood movie isn’t the same as the one to create a live avatar on your phone, however. That’s where it gets interesting. There are solutions that can capture and translate the movement of everything from your fingers to your entire body using hardware from an iPhone X to a full 360-camera array. Some solutions used tracking ball markers, some used strips in the bodysuit and some used tiny proximity sensors, but the results were all really impressive.

Vicon

Some tracking solution companies had different versions of their software and hardware. If you don’t need all of the cameras and all of the accuracy, then there’s a basic version for you. But if you need everything to be perfectly tracked in real time, then go for the full-on pro version with all the bells and whistles. I had a go at live-animating a monkey using just my hands, and apart from ending with him licking a banana in a highly inappropriate manner, I think it worked pretty well.

AR/VR
AR and VR were everywhere, too. You couldn’t throw a peanut across the room without hitting someone wearing a VR headset. They’d probably be able to bat it away whilst thinking they were Joe Root or Max Muncy (I had to Google him), with the real peanut being replaced with a red or white leather projectile. Haptic feedback made a few appearances, too, so expect to be able to feel those virtual objects very soon. Some of the biggest queues were at the North stand, where the company was showing glasses that looked like the ones everyone was already wearing (like mine, obviously), except they incorporated a head-up display. I have mixed feelings about this. Google Glass didn’t last very long for a reason, although I don’t think North’s glasses have a camera in them, which makes things feel a bit more comfortable.

Nvidia

Data
One of the central themes for me was data, data and even more data. Whether you are interested in how to capture it, store it, unravel it, play it back or distribute it, there was a stand for you. This mass of data was being managed by really intelligent components and software. I was expecting to be writing all about artificial intelligence and machine learning from the show, and it’s true that there was a lot of software that used machine learning and deep neural networks to create things that looked really cool. Environments created using simple tools looked fabulously realistic because of deep learning. Basic pen strokes could be translated into beautiful pictures because of the power of neural networks. But most of that machine learning is in the background; it’s just doing the work that needs to be done to create the images, lighting and physical reactions that go to make up convincing and realistic images.

The Experience Hall
The Experience Hall was really great because no one was trying to sell me anything. It felt much more like an art gallery than a trade show. There were long waits for some of the exhibits (although not for the golf swing improver that I tried), and it was all really fascinating. I didn’t want to take part in the experiment that recorded your retina scan and made some art out of it, because, well, you know, it’s my retina scan. I also felt a little reluctant to check out the booth that made light-based animated artwork derived from your date of birth, time of birth and location of birth. But maybe all of these worries are because I’ve just finished watching the Netflix documentary The Great Hack. I can’t help but think that a better source of the data might be something a little less sinister.

The walls of posters back in the main hall described research projects that hadn’t yet made it into full production and gave more insight into what the future might bring. It was all about refinement, creating better algorithms, creating more realistic results. These uses of deep learning and virtual reality were applied to subjects as diverse as translating verbal descriptions into character design, virtual reality therapy for post-stroke patients, relighting portraits and haptic feedback anesthesia training for dental students. The range of the projects was wide. Yet everyone started from the same place, analyzing vast datasets to give more useful results. That brings me back to where I started. We’re all the same, but we’re all different.

Main Image Credit: Mike Tosti


Andy Brown is a Flame artist and creative director of Jogger Studios, a visual effects studio with offices in Los Angeles, New York, San Francisco and London.

Khronos releases OpenXR 1.0 for cross-platform AR/VR

The Khronos Group has ratified and released the OpenXR 1.0 specification, along with publicly available implementations. OpenXR is a unifying, royalty-free open standard that provides high-performance, cross-platform access to virtual reality (VR) and augmented reality (AR) — collectively known as XR — platforms and devices. The new specification can be found on the Khronos website and via GitHub.

“The feedback from the community on the provisional specification released in March has been invaluable to getting us to this significant milestone,” says Brent Insko, OpenXR working group chair and lead XR architect at Intel. “Our work continues as we now finalize a comprehensive test suite, integrate key game engine support, and plan the next set of features to evolve a truly vibrant, cross-platform standard for XR platforms and devices. Now is the time for software developers to start putting OpenXR to work.”

After gathering feedback from the XR community during the public review of the provisional specification, improvements were made to the OpenXR input subsystem, game engine editor support and loader. With this 1.0 release, the working group will evolve the standard while maintaining full backward compatibility from this point onward, giving software developers and hardware vendors a solid foundation upon which to deliver portable user experiences.

OpenXR implementations are shipping this week, including the Monado OpenXR open source implementation from Collabora, the OpenXR runtime for Windows Mixed Reality headsets from Microsoft, an Oculus OpenXR implementation for Rift and Oculus Quest support. Epic Games also plans to release OpenXR 1.0 support in Unreal Engine.

OptiTrack reveals new skeletal solver

OptiTrack has a new skeletal solver that brings artifact-free, realtime character animation to its optical motion capture systems.

Key features of the OptiTrack skeletal solver include:

– Accurate human movement tracking in realtime
– Major advances in solve quality and artifact-free streaming of character data
– Compatible with any OptiTrack system, including those used for live-action camera tracking, virtual camera tracking and virtual reality
– Supports industry-standard tools, including Epic Games’ Unreal Engine, Unity Technologies’ Unity realtime platform and Autodesk MotionBuilder
– Extremely low latency (less than 10 milliseconds)

As a complement to its new skeletal solver, OptiTrack has introduced an equally high-performing finger-tracking solution created in partnership with Manus VR. Embedded with OptiTrack’s signature pulse Active technology, Inertial Measurement Units (IMUs) and bend sensors, the gloves deliver accurate, continuous finger-tracking data in real time that is fully compatible with existing character animation and VR pipelines when used with OptiTrack systems.


Bipolar Studio gives flight to Uber Air campaign

Tech company Uber has announced its latest transportation offering — an aerial ride-sharing service. With plans to begin service within cities as soon as 2023, Uber Elevate launched its marketing campaign today at the Uber Elevate Summit. Uber Elevate picked LA-based creative and production boutique Bipolar Studio to create its integrated campaign, which includes a centerpiece film, experiential VR installation, print stills and social content.

The campaign’s centerpiece film Airborne includes stunning imagery that is 100 percent CGI. Beginning on an aerial mass transit platform at Mission Bay HQ, the flight travels across the city of San Francisco, above landmarks like where the Warriors play and Transamerica Tower. Lights blink on the ground below and buildings appear as silhouettes in the far background. The Uber Air flight lands in Santa Clara on Uber’s skytower with a total travel time of 18 minutes — compared to an hour or more driving through rush hour traffic. Multi-floor docking will allow Uber Air to land up to 1000 eVTOLs (those futuristic-looking vehicles that hover, take off and land vertically) per hour.

At the Uber Elevate Summit, attendees had the opportunity to experience a full flight inside a built-out physical cabin via a high-fidelity four-minute VR installation. After the Summit, the installation will travel to Uber events globally. Still images and social media content will further extend the campaign’s reach.

Uber Elevate’s head of design, John Badalamenti, explains, “We worked shoulder-to-shoulder with Bipolar Studio to create an entirely photoreal VR flight experience, detailed at a high level of accuracy from the physics of flight and advanced flight patterns, down to the dust on the windows. This work represents a powerful milestone in communicating our vision through immersive storytelling and creates a foundation for design iteration that advances our perspective on the rider experience. Bipolar took things a step beyond that as well, creating Airborne, our centerpiece 2D film, enabling future Uber Air passengers to take in the breadth and novelty of the journey outside the cabin from the perspective of the skyways.”

Bipolar developed a bespoke AI-fueled pipeline that could capture, manage and process miles and miles of actual data, then faithfully mirror the real terrain, buildings, traffic and scattered people in cities. The all-digital assets could then be reused, giving the team full freedom to digitally scout the city for Airborne’s locations. When shooting the spot, as with a live-action production, they could place the CG camera anywhere in the digital city to capture the aircraft. This gave the team a lot of room to play.

For the animation work, they built a new system in SideFX Houdini where the flight of the vehicle wasn’t animated but rather simulated with real physics. The team coded a custom plugin that let them punch in the aircraft’s speed, weight and direction, then have the AI do everything else. This allowed them to see the craft turn on the flight path, respond to wind turbulence and even oscillate when taking off. It also allowed them to easily iterate, change variables and get precise dynamics. They could then watch the simulations play out and see everything in realtime.
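
To make that concrete, here is a minimal point-mass sketch in Python. It is purely illustrative, not Bipolar’s actual Houdini plugin: you feed in speed, weight and heading, and the motion, including the response to wind gusts, falls out of the integration rather than out of keyframes.

```python
# Minimal point-mass flight sketch (illustrative only -- not Bipolar's Houdini plugin).
# The idea: instead of keyframing the aircraft, feed in speed, weight and heading
# and let a simulation produce the motion, including its response to wind turbulence.
import random

def simulate_flight(mass_kg=900.0, cruise_speed=50.0, heading=(1.0, 0.0, 0.0),
                    duration_s=60.0, dt=1.0 / 24.0, gust_strength=2.0):
    pos = [0.0, 300.0, 0.0]          # start on an elevated skyport (meters)
    vel = [0.0, 0.0, 0.0]
    frames = []
    for _ in range(int(duration_s / dt)):
        # Thrust pushes the velocity toward the desired cruise vector.
        desired = [heading[i] * cruise_speed for i in range(3)]
        thrust = [(desired[i] - vel[i]) * 0.8 * mass_kg for i in range(3)]
        # Simple quadratic drag plus random wind gusts.
        speed = sum(v * v for v in vel) ** 0.5
        drag = [-0.3 * speed * v for v in vel]
        gust = [random.uniform(-gust_strength, gust_strength) for _ in range(3)]
        accel = [(thrust[i] + drag[i] + gust[i]) / mass_kg for i in range(3)]
        vel = [vel[i] + accel[i] * dt for i in range(3)]
        pos = [pos[i] + vel[i] * dt for i in range(3)]
        frames.append(tuple(pos))     # one transform sample per frame at 24 fps
    return frames

path = simulate_flight()
print(len(path), "frames, final position:", path[-1])
```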

City Buildings
To bring this to life, Bipolar had to entirely digitize San Francisco. They spent a lot of time creating a pipeline and built the entire city with miles and miles of actual data that matched the terrain and buildings precisely. They then detailed the buildings and used AI to generate moving traffic — and even people, if you can spot them — to fill the streets. Some of the areas required a LIDAR scan for rebuilding. The end result is an incredibly detailed digital recreation of San Francisco. Each of the houses is a full model with windows, walls and doors. Each of the lights in the distance is a car. Even Alcatraz is there. They took the same approach to Santa Clara.

Data Management
Bipolar rendered out 32-bit EXRs in 4K, with each frame having multiple layers for maximum control by the client in the comp stage. That gave them a ton of data and a huge number of files to deal with. Thankfully, it wasn’t the studio’s first time dealing with massive amounts of data — their internal infrastructure is already set up to handle a high volume of data being worked on simultaneously. They were also able to use the SSDs on their servers, in certain cases, to render comps and pre-comps faster.
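
A quick back-of-envelope calculation shows why layered 32-bit EXRs add up so fast. The resolution, layer count and shot length below are illustrative assumptions, not figures from Bipolar’s actual renders.

```python
# Back-of-envelope estimate of uncompressed frame size for layered 32-bit EXRs.
# Resolution, channel/layer counts and shot length are illustrative assumptions,
# not figures from Bipolar's actual renders.
width, height = 4096, 2160          # 4K-class resolution
bytes_per_channel = 4               # 32-bit float
channels_per_layer = 4              # RGBA
layers = 8                          # beauty plus AOVs for comp control

frame_bytes = width * height * bytes_per_channel * channels_per_layer * layers
frame_gb = frame_bytes / 1024**3
shot_frames = 24 * 60 * 2           # a two-minute sequence at 24 fps

print(f"~{frame_gb:.2f} GB per uncompressed frame")
print(f"~{frame_gb * shot_frames / 1024:.1f} TB for the whole sequence")
```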


Lenovo intros next-gen ThinkPads

Lenovo has launched the next generation of its ThinkPad P Series with the release of five new ThinkPads: the ThinkPad P73, ThinkPad P53, ThinkPad P1 Gen 2, ThinkPad P53s and ThinkPad P43s.

The ThinkPad P53 features the Nvidia Quadro RTX 5000 GPU with RT and Tensor cores, offering realtime raytracing and AI acceleration. It now features Intel Xeon and 9th Gen Core-class CPUs with up to eight cores (including the Core i9), up to 128GB of memory and 6TB of storage.

This mobile workstation also boasts a new OLED touch display with Dolby Vision HDR for superb color and some of the deepest black levels ever. Building on the innovation behind the ThinkPad P1 power supply, Lenovo is also maximizing the portability of this workstation with a 35 percent smaller power supply. The ThinkPad P53 is designed to handle everything from augmented reality and VR content creation to the deployment of mobile AI or ISV workflows. The ThinkPad P53 will be available in July, starting at $1,799.

At 3.74 pounds and 17.2mm thin, Lenovo’s thinnest and lightest 15-inch workstation — the ThinkPad P1 Gen 2 — includes the latest Nvidia Quadro Turing T1000 and T2000 GPUs. The ThinkPad P1 also features eight-core Intel 9th Gen Xeon and Core CPUs and an OLED touch display with Dolby Vision HDR.

The ThinkPad P1 Gen 2 will be available at the end of June starting at $1,949.

With its 17.3-inch Dolby Vision 4K UHD screen and a 35% smaller power adaptor, Lenovo’s ThinkPad P73 offers users maximum workspace and mobility. Like the ThinkPad P53, it features Intel Xeon and Core processors and the most powerful Nvidia Quadro RTX graphics. The ThinkPad P73 will be available in August starting at $1,849.

The ThinkPad P43s features a 14-inch chassis and will be available in July starting at $1,499.

Rounding out the line is the ThinkPad P53s which combines the latest Nvidia Quadro graphics and Intel Core processors — all in a thin and light chassis. The ThinkPad P53s will be available in June, starting at $1,499.

For the first time, Lenovo is adding new X-Rite Pantone Factory Color Calibration to the ThinkPad P1 Gen 2, ThinkPad P53 and ThinkPad P73. The unique factory color calibration profile is stored in the cloud to ensure more accurate recalibration. This profile allows for dynamic switching between color spaces, including sRGB, Adobe RGB and DCI-P3 to ensure accurate ISV application performance.

The entire ThinkPad portfolio is also equipped with advanced ThinkShield security features – from ThinkShutter to privacy screens to a self-healing BIOS that recovers when attacked or corrupted – to help protect users from every angle and give them the freedom to innovate fearlessly.


Apple offers augmented reality with Reality Composer

By Barry Goch

In addition to introducing the new Mac Pro and the Pro Display XDR at its Worldwide Developers Conference (WWDC19), Apple had some pretty cool demos. The coolest, in my mind, was the Minecraft augmented reality presentation.

Across the street from the San Jose Convention Center, where the keynote was held, Apple set up “The Studio” in the San Jose Civic. One of the demos there was an AR experience with the new Mac Pro: in reality, you only saw the space frame of Apple’s tower, but in augmented reality you could animate an exploded view. The technology behind this demo is the just-announced ARKit 3 and Reality Composer.

Apple had a couple of stations demoing Reality Composer in The Studio. Apple has applied its famous legacy of enabling content creators by making new technology easy to use. Case in point is Reality Composer. I’ve tried building AR experiences in other apps and it’s not very straightforward. You have to learn a new interface and coding as well — and use yet another app for targeting your AR environment into the real world. The demo I saw of Reality Composer made it look easy, working in Motion with drag-and-drop prebuilt behaviors built into the app, along with multiple ways to target your AR experience in the real world.

AR QuickLook technology is part of iOS, and you can even get an AR experience of the new Mac Pro and Pro Display XDR through Apple’s website. Apple also mentioned its new file format for holding AR elements, USDZ, and has created a tool to convert other 3D file formats to USDZ.

With native AR support across Apple’s ecosystem, there is no better time to experiment and learn about augmented reality.


Barry Goch is a finishing artist at LA’s The Foundation and a UCLA Extension Instructor in post production. You can follow him on Twitter at @Gochya.


Dell intros two budget-friendly Precision mobile workstations

Dell is offering two new mobile workstations for designers and graphic artists who are looking for entry-level, workstation-class devices — Dell Precision 3540 and 3541. These budget-friendly machines offer a smaller footprint with high performance. Dell’s Precision line has traditionally been used for intensive workloads, such as machine learning and artificial intelligence, and these entry-level versions are designed to allow artists with smaller budgets access to the Precision line’s capabilities.

The Precision 3540 comes with the latest 8th generation 4-core Intel Core processors, up to 32GB of DDR4 memory, AMD Radeon Pro graphics with 2GB of dedicated memory and 2TB of storage. The Precision 3541 will offer additional power, with 9th generation 8-core Intel Core and 6-core Intel Xeon processor options. It will be available with Nvidia Quadro professional graphics with 4GB of dedicated memory. It will also have extreme battery life for on-the-go productivity.

Both models come with Thunderbolt 3 connectivity and optional features to enhance security, such as fingerprint and smartcard readers, an IR camera and a camera shutter. Both models also have a narrow-edge 15.6-inch display. The 3540 model weighs in at 4.04 pounds, and the 3541 model starts at 4.34 pounds.

The Dell Precision 3540 is available now on Dell.com starting at $799, while the Precision 3541 will be available in late May.


Marvel Studios’ Victoria Alonso to keynote SIGGRAPH 2019

Marvel Studios executive VP of production Victoria Alonso has been named keynote speaker for SIGGRAPH 2019, which will run from July 28 through August 1 in downtown Los Angeles. Registration is now open. The annual SIGGRAPH conference is a melting pot for researchers, artists and technologists, among other professionals.

“Victoria is the ultimate symbol of where the computer graphics industry is headed and a true visionary for inclusivity,” says SIGGRAPH 2019 conference chair Mikki Rose. “Her outlook reflects the future I envision for computer graphics and for SIGGRAPH. I am thrilled to have her keynote this summer’s conference and cannot wait to hear more of her story.”

One of the few women in Hollywood to hold such a prominent title, Alonso has long been admired for her dedication to the industry, which has earned her multiple awards and honors, including the 2015 New York Women in Film & Television Muse Award for Outstanding Vision and Achievement, the Advanced Imaging Society’s Harold Lloyd Award (as its first female recipient) and the 2017 VES Visionary Award (another female first). A native of Buenos Aires, she began her career in visual effects, including a four-year stint at Digital Domain.

Alonso’s film credits include productions such as Ridley Scott’s Kingdom of Heaven, Tim Burton’s Big Fish, Andrew Adamson’s Shrek, and numerous Marvel titles — Iron Man, Iron Man 2, Thor, Captain America: The First Avenger, Iron Man 3, Captain America: The Winter Soldier, Captain America: Civil War, Thor: The Dark World, Avengers: Age of Ultron, Ant-Man, Guardians of the Galaxy, Doctor Strange, Guardians of the Galaxy Vol. 2, Spider-Man: Homecoming, Thor: Ragnarok, Black Panther, Avengers: Infinity War, Ant-Man and the Wasp and, most recently, Captain Marvel.

“I’ve been attending SIGGRAPH since before there was a line at the ladies’ room,” says Alonso. “I’m very much looking forward to having a candid conversation about the state of visual effects, diversity and representation in our industry.”

She adds, “At Marvel Studios, we have always tried to push boundaries with both our storytelling and our visual effects. Bringing our work to SIGGRAPH each year offers us the opportunity to help shape the future of filmmaking.”

The 2019 keynote session will be presented as a fireside chat, allowing attendees the opportunity to hear Alonso discuss her life and career in an intimate setting.


Creating audio for the cinematic VR series Delusion: Lies Within

By Jennifer Walden

Delusion: Lies Within is a cinematic VR series from writer/director Jon Braver. It is available on the Samsung Gear VR and Oculus Go and Rift platforms. The story follows a reclusive writer named Elena Fitzgerald who penned a series of popular fantasy novels, but before the final book in the series was released, the author disappeared. Rumors circulated about the author’s insanity and supposed murder, so two avid fans decide to break into her mansion to search for answers. What they find are Elena’s nightmares come to life.

Delusion: Lies Within is based on an interactive play written by Braver and Peter Cameron. Interactive theater isn’t your traditional butts-in-the-seat passive viewing-type theater. Instead, the audience is incorporated into the story. They interact with the actors, search for objects, solve mysteries, choose paths and make decisions that move the story forward.

Like a film, the theater production is meticulously planned out, from the creature effects and stunts to the score and sound design. With all these components already in place, Delusion seemed like the ideal candidate to become a cinematic VR series. “In terms of the visuals and sound, the VR experience is very similar to the theatrical experience. With Delusion, we are doing 360° theater, and that’s what VR is too. It’s a 360° format,” explains Braver.

While the intent was to make the VR series match the theatrical experience as much as possible, there are some important differences. First, immersive theater allows the audience to interact with the actors and objects in the environment, but that’s not the case with the VR series. Second, the live theater show has branching story narratives and an audience member can choose which path he/she would like to follow. But in the VR series there’s one set storyline that follows a group who is exploring the author’s house together. The viewer feels immersed in the environment but can’t manipulate it.

L-R: Hamed Hokamzadeh and Thomas Ouziel

According to supervising sound editor Thomas Ouziel from Hollywood’s MelodyGun Group, “Unlike many VR experiences where you’re kind of on rails in the midst of the action, this was much more cinematic and nuanced. You’re just sitting in the space with the characters, so it was crucial to bring the characters to life and to design full sonic spaces that felt alive.”

In terms of workflow, MelodyGun sound supervisor/studio manager Hamed Hokamzadeh chose to use the Oculus Developers Kit 2 headset with Facebook 360 Spatial Workstation on Avid Pro Tools. “Post supervisor Eric Martin and I decided to keep everything within FB360 because the distribution was to be on a mobile VR platform (although it wasn’t yet clear which platform), and FB360 had worked for us marvelously in the past for mobile and Facebook/YouTube,” says Hokamzadeh. “We initially concentrated on delivering B-format (2nd Order AmbiX) playing back on Gear VR with a Samsung S8. We tried both the Audio-Technica ATH-M50 and Shure SRH840 headphones to make sure it translated. Then we created other deliverables: quad-binaurals, .tbe, 8-channel and a stereo static mix. The non-diegetic music and voiceover was head-locked and delivered in stereo.”
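
For readers unfamiliar with the format, second-order AmbiX is a nine-channel B-format signal. The sketch below shows the underlying encoding math for a single mono source, assuming the standard ACN channel order and SN3D normalization; FB360 and the Pro Tools panners handle this internally, so this is only a conceptual illustration.

```python
# Sketch of encoding a mono source into 2nd-order AmbiX (ACN channel order,
# SN3D normalization) -- nine channels, as in the B-format deliverable described above.
# A real FB360/Pro Tools session does this internally; this only illustrates the math.
import numpy as np

def encode_ambix_2nd_order(mono, azimuth_deg, elevation_deg):
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    gains = np.array([
        1.0,                                               # ACN 0: W
        np.sin(az) * np.cos(el),                           # ACN 1: Y
        np.sin(el),                                        # ACN 2: Z
        np.cos(az) * np.cos(el),                           # ACN 3: X
        np.sqrt(3) / 2 * np.sin(2 * az) * np.cos(el)**2,   # ACN 4: V
        np.sqrt(3) / 2 * np.sin(az) * np.sin(2 * el),      # ACN 5: T
        0.5 * (3 * np.sin(el)**2 - 1),                     # ACN 6: R
        np.sqrt(3) / 2 * np.cos(az) * np.sin(2 * el),      # ACN 7: S
        np.sqrt(3) / 2 * np.cos(2 * az) * np.cos(el)**2,   # ACN 8: U
    ])
    return gains[:, None] * mono[None, :]    # shape (9, n_samples)

# Example: a 1 kHz tone placed 45 degrees to the left, slightly above the listener.
sr = 48000
tone = np.sin(2 * np.pi * 1000 * np.arange(sr) / sr)
bformat = encode_ambix_2nd_order(tone, azimuth_deg=45, elevation_deg=10)
print(bformat.shape)   # (9, 48000)
```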

From an aesthetic perspective, the MelodyGun team wanted to have a solid understanding of the audience’s live theater experience and the characters themselves “to make the VR series follow suit with the world Jon had already built. It was also exciting to cross our sound over into more of a cinematic ‘film world’ than was possible in the live theatrical experience,” says Hokamzadeh.

Hokamzadeh and Ouziel assigned specific tasks to their sound team — Xiaodan Li was focused on sound editorial for the hard effects and Foley, and Kennedy Phillips was asked to design specific sound elements, including the fire monster and the alchemist freezing.

Ouziel, meanwhile, had his own challenges of both creating the soundscape and integrating the sounds into the mix. He had to figure out how to make the series sound natural yet cinematic, and how to use sound to draw the viewer’s attention while keeping the surrounding world feeling alive. “You have to cover every movement in VR, so when the characters split up, for example, you want to hear all their footsteps, but we also had to get the audience to focus on a specific character to guide them through. That was one of the biggest challenges we had while mixing it,” says Ouziel.

The Puppets
“Chapter Three: Trial By Fire” provides the best example of how Ouziel tackled those challenges. In the episode, Virginia (Britt Adams) finds herself stuck in Marion’s chamber. Marion (Michael J. Sielaff) is a nefarious puppet master who is clandestinely controlling a room full of people on puppet strings; some are seated at a long dining table and others are suspended from the ceiling. They’re all moving their arms as if dancing to the scratchy song that’s coming from the gramophone.

The sound for the puppet people needed to have a wiry, uncomfortable feel and the space itself needed to feel eerily quiet but also alive with movement. “We used a grating metallic-type texture for the strings so they’d be subconsciously unnerving, and mixed that with wooden creaks to make it feel like you’re surrounded by constant danger,” says Ouziel.

The slow wooden creaks in the ambience reinforce the idea that an unseen Marion is controlling everything that’s happening. Braver says, “Those creaks in Marion’s room make it feel like the space is alive. The house itself is a character in the story. The sound team at MelodyGun did an excellent job of capturing that.”

Once the sound elements were created for that scene, Ouziel then had to space each puppet’s sound appropriately around the room. He also had to fill the room with music while making sure it still felt like it was coming from the gramophone. Ouziel says, “One of the main sound tools that really saved us on this one was Audio Ease’s 360pan suite, specifically the 360reverb function. We used it on the gramophone in Marion’s chamber so that it sounded like the music was coming from across the room. We had to make sure that the reflections felt appropriate for the room, so that we felt surrounded by the music but could clearly hear the directionality of its source. The 360pan suite helped us to create all the environmental spaces in the series. We pretty much ran every element through that reverb.”

L-R: Thomas Ouziel and Jon Braver.

Hokamzadeh adds, “The session got big quickly! Imagine over 200 AmbiX tracks, each with its own 360 spatializer and reverb sends, plus all the other plug-ins and automation you’d normally have on a regular mix. Because things never go out of frame, you have to group stuff to simplify the session. It’s typical to make groups for different layers like footsteps, cloth, etc., but we also made groups for all the sounds coming from a specific direction.”

The 360pan suite reverb was also helpful on the fire monster’s sounds. The monster, called Ember, was sound designed by Phillips. His organic approach was akin to the bear monster in Annihilation, in that it felt half human/half creature. Phillips edited together various bellowing fire elements that sounded like breathing and then manipulated those to match Ember’s tormented movements. Her screams also came from a variety of natural screams mixed with different fire elements so that it felt like there was a scared young girl hidden deep in this walking heap of fire. Ouziel explains, “We gave Ember some loud sounds but we were able to play those in the space using the 360pan suite reverb. That made her feel even bigger and more real.”

The Forest
The opening forest scene was another key moment for sound. The series is set in South Carolina in 1947, and the author’s estate needed to feel like it was in a remote area surrounded by lush, dense forest. “With this location comes so many different sonic elements. We had to communicate that right from the beginning and pull the audience in,” says Braver.

Genevieve Jones, former director of operations at Skybound Entertainment and producer on Delusion: Lies Within, says, “I love the bed of sound that MelodyGun created for the intro. It felt rich. Jon really wanted to go to the south and shoot that sequence but we weren’t able to give that to him. Knowing that I could go to MelodyGun and they could bring that richness was awesome.”

Since the viewer can turn his/her head, the sound of the forest needed to change with those movements. A mix of six different winds spaced into different areas created a bed of textures that shifts with the viewer’s changing perspective. It makes the forest feel real and alive. Ouziel says, “The creative and technical aspects of this series went hand in hand. The spacing of the VR environment really affects the way that you approach ambiences and world-building. The house interior, too, was done in a similar approach, with low winds and tones for the corners of the rooms and the different spaces. It gives you a sense of a three-dimensional experience while also feeling natural and in accordance to the world that Jon made.”
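
The reason such a bed shifts with the viewer is that the renderer rotates the ambisonic sound field by the inverse of the head orientation before decoding to the headphones. Below is a minimal Python sketch of a yaw rotation, shown for the first-order components only; sign conventions vary between toolchains.

```python
# Sketch of why an ambisonic ambience bed shifts with head movement: the renderer
# rotates the sound field by the inverse of the listener's head yaw. Shown here for
# the first-order components only; sign conventions vary between toolchains.
import numpy as np

def rotate_foa_yaw(w, y, z, x, head_yaw_deg):
    # Rotate the sound field by -head_yaw about the vertical (Z) axis.
    theta = np.radians(-head_yaw_deg)
    x_rot = np.cos(theta) * x - np.sin(theta) * y
    y_rot = np.sin(theta) * x + np.cos(theta) * y
    return w, y_rot, z, x_rot        # W and Z are unaffected by yaw

# A wind gust encoded straight ahead appears to move to the listener's right
# when they turn their head 90 degrees to the left.
w, y, z, x = 1.0, 0.0, 0.0, 1.0
print(rotate_foa_yaw(w, y, z, x, head_yaw_deg=90))
```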

Bringing Live Theater to VR
The sound of the VR series isn’t a direct translation of the live theater experience. Instead, it captures the spirit of the live show in a way that feels natural and immersive, but also cinematic. Ouziel points to the sounds that bring puppet master Marion to life. Here, they had the opportunity to go beyond what was possible with the live theater performance. Ouziel says, “I pitched to Jon the idea that Marion should sound like a big, worn wooden ship, so we built various layers from these huge wooden creaks to match all his movements and really give him the size and gravitas that he deserved. His vocalizations were made from a couple elements including a slowed and pitched version of a raccoon chittering that ended up feeling perfectly like a huge creature chuckling from deep within. There was a lot of creative opportunity here and it was a blast to bring to life.”


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

IDEA launches to create specs for next-gen immersive media

The Immersive Digital Experiences Alliance (IDEA) will launch at NAB 2019 with the goal of creating a suite of royalty-free specifications that address all immersive media formats, including emerging light field technology.

Founding members — including CableLabs, Light Field Lab, Otoy and Visby — created IDEA to serve as an alliance of like-minded technology, infrastructure and creative innovators working to facilitate the development of an end-to-end ecosystem for the capture, distribution and display of immersive media.

Such a unified ecosystem must support all displays, including highly anticipated light field panels. Recognizing that the essential launch point would be to create a common media format specification that can be deployed on commercial networks, IDEA has already begun work on the new Immersive Technology Media Format (ITMF).

ITMF will serve as an interchange and distribution format that will enable high-quality conveyance of complex image scenes, including six-degrees-of-freedom (6DoF), to an immersive display for viewing. Moreover, ITMF will enable the support of immersive experience applications including gaming, VR and AR, on top of commercial networks.

Recognized for its potential to deliver an immersive true-to-life experience, light field media can be regarded as the richest and most dense form of visual media, thereby setting the highest bar for features that the ITMF will need to support and the new media-aware processing capabilities that commercial networks must deliver.

Jon Karafin, CEO/co-founder of Light Field Lab, explains that “a light field is a representation describing light rays flowing in every direction through a point in space. New technologies are now enabling the capture and display of this effect, heralding new opportunities for entertainment programming, sports coverage and education. However, until now, there has been no common media format for the storage, editing, transmission or archiving of these immersive images.”
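
To illustrate that density, the sketch below uses the common two-plane parameterization, in which a light field is stored as a grid of camera views indexed by (u, v), each holding an image indexed by (s, t). The grid size and resolution are arbitrary assumptions for illustration, not ITMF or IDEA figures.

```python
# Two-plane light field L(u, v, s, t): a grid of camera views (u, v), each storing
# an image (s, t). Numbers below are illustrative assumptions, not ITMF figures;
# they simply show why light field data is so dense.
import numpy as np

views_u, views_v = 16, 16            # camera positions on the (u, v) plane
height, width = 1080, 1920           # per-view image resolution on the (s, t) plane
bytes_per_pixel = 3                  # 8-bit RGB

raw_bytes = views_u * views_v * height * width * bytes_per_pixel
print(f"{views_u * views_v} views, ~{raw_bytes / 1024**3:.2f} GB of raw data per frame")

# A tiny in-memory light field with the same (u, v, s, t) structure, downscaled so
# the demo stays small. A real renderer interpolates between views; this lookup
# just snaps to the nearest captured ray.
lf = np.zeros((views_u, views_v, 8, 8, 3), dtype=np.uint8)

def sample_ray(u, v, s, t):
    return lf[int(round(u)), int(round(v)), int(round(s)), int(round(t))]

print(sample_ray(3.4, 7.8, 2.1, 5.9))   # nearest-view, nearest-pixel sample
```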

“We’re working on specifications and tools for a variety of immersive displays — AR, VR, stereoscopic 3D and light field technology, with light field being the pinnacle of immersive experiences,” says Dr. Arianne Hinds, Immersive Media Strategist at CableLabs. “As a display-agnostic format, ITMF will provide near-term benefits for today’s screen technology, including VR and AR headsets and stereoscopic displays, with even greater benefits when light field panels hit the market. If light field technology works half as well as early testing suggests, it will be a game-changer, and the cable industry will be there to help support distribution of light field images with the 10G platform.”

Starting with Otoy’s ORBX scene graph format, a well-established data structure widely used in advanced computer animation and computer games, IDEA will provide extensions to expand the capabilities of ORBX for light field photographic camera arrays, live events and other applications. Further specifications will include network streaming for ITMF and transcoding of ITMF for specific displays, archiving, and other applications. IDEA will preserve backwards-compatibility on the existing ORBX format.

IDEA anticipates releasing an initial draft of the ITMF specification in 2019. The alliance also is planning an educational seminar to explain more about the requirements for immersive media and the benefits of the ITMF approach. The seminar will take place in Los Angeles this summer.

Photo Credit: Light Field Lab, Inc. (all rights reserved). Future Vision concept art of a room-scale holographic display.

Behind the Title: Light Sail VR MD/EP Robert Watts

This creative knew as early as middle school that he wanted to tell stories. Now he gets to immerse people in those stories.

NAME: Robert Watts

COMPANY: LA-based Light Sail VR (@lightsailvr)

CAN YOU DESCRIBE YOUR COMPANY?
We’re an immersive media production company. We craft projects end-to-end in the VR360, VR180 and interactive content space, which starts from bespoke creative development all the way through post and distribution. We produce both commercial work and our own original IP — our first of which is called Speak of the Devil VR, which is an interactive, live-action horror experience where you’re a main character in your own horror movie.

WHAT’S YOUR JOB TITLE?
Managing Partner and Executive Producer

WHAT DOES THAT ENTAIL?
A ton. As a startup, we wear many hats. I oversee all production elements, acting as producer. I run operations, business development and the financials for the company. Then Matt Celia, my business partner and creative director, collaborates on the overall creative for each project to ensure the quality of the experience, as well as making sure it works natively in (i.e., is best suited to) the immersive medium.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I’m very hands-on on set, almost to a fault. So I’ve ended up with some weird (fake) credits, such as fog team, stand-in, underwater videographer, sometimes even assistant director. I do whatever it takes to get the job done — that’s a producer’s job.

WHAT TOOLS DO YOU USE?
Excluding all the VR headsets and tech, on the producing side Google Drive and Dropbox are a producer’s lifeblood, as well as Showbiz Budgeting from Media Services.

WHAT’S YOUR FAVORITE PART OF THE JOB?
I love being on set watching the days and weeks of pre-production and development coalesce. There’s an energy on set that’s both fun and professional, and that truly shows the crew’s dedication and focus to get the job done. As the exec producer, it’s nice being able to strike a balance between being on set and being in the office.

Light Sail VR partners (L-R): Matt Celia and Robert Watts

WHAT’S YOUR LEAST FAVORITE?
Tech hurdles. They always seem to pop up. We’re a production company working on the edge of the latest technology, so something always breaks, and there’s not always a YouTube tutorial on how to fix it. It can really set back one’s day.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
We do “Light Sail Sandwich Club” at lunch and cater a smorgasbord of sandwich fixings and crafty services for our teams, contractors and interns. It’s great to take a break from the day and sit down and connect with our colleagues in a personal way. It’s relaxed and fun, and I really enjoy it.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I love what I do, but I also like giving back. I think I’d be using my project management skills in a way that would be a force for good, perhaps at an NGO or entity working on tackling climate change.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Middle school. My family watched a lot of television and films. I wanted to be an archaeologist after watching Indiana Jones, a paleontologist after Jurassic Park, a submarine commander after Crimson Tide and I fancied being a doctor after watching ER. I got into theater and video productions in high school, and I realized I could be in entertainment and make all those stories I loved as a kid.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
At the tail end of 2018, we produced 10 360-degree episodes for Refinery29 (Sweet Digs 360), 10 VR180 episodes (Get Glam, Hauliday) and VR180 spots for Bon Appetit and Glamour. We also wrapped on a music video that’s releasing this year.

On top of it all, we’ve been hard at work developing our next original, which we will reveal more details about soon. We’ve been busy! I’m extremely thankful for the wonderful teams that helped us make it all happen.

Now Your Turn

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
I am very proud of the diversity project we did with Google, Google: Immerse, as well as our first original, Speak of the Devil. But I think our first original series Now Your Turn is the one I’m going to pick. It’s a five-episode VR180 series that features Geek & Sundry talent showcasing some amazing board games. It’s silly and fun, and we put in a number of easter eggs that make it even better when you’re watching in a headset. I’m proud of it because it’s an example of where the VR medium is going — series that folks tune into week to week.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My Mac for work and music — I’m constantly listening to music while I work. My Xbox One is where I watch all my content and, lastly, my Vive setup at home. I like to check out all the latest in VR, from experiences to gaming, and I even work out with it playing BoxVR or Beat Saber.

WHAT KIND OF MUSIC DO YOU LISTEN TO AT WORK?
My taste spans from classic rock to techno/EDM to Spanish guitar.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I try to have a work-life balance. I don’t set my email notifications to “push.” Instead, I make the choice of when I check my emails. I do it frequently enough that I don’t ever feel I’m out of the loop, but that small choice helps me feel in control of all the hundreds of things that happen on a day-to-day basis.

I make time every night and on the weekends to spend time with my lovely wife, Jessica. When we’re not watching stuff, we’re seeing friends and playing board games — we’re big nerds. It’s important to have fun!

Sandbox VR partners with Vicon on Amber Sky 2088 experience

VR gaming company Sandbox VR has partnered with Vicon, using its motion capture tools to create next-generation immersive experiences. By using Vicon’s motion capture cameras and its location-based VR (LBVR) software Evoke, the Hong Kong-based Sandbox VR is working to transport up to six people at a time into the Amber Sky 2088 experience, which takes place in a future where the fate of humanity lies in the balance.

Sandbox VR’s adventures resemble movies where the players become the characters. With two proprietary AAA-quality games already in operation across Sandbox VR’s seven locations, for its third title, Amber Sky 2088, a new motion capture solution was needed. In the futuristic game, users step into the role of androids, granting players abilities far beyond the average human while still scaling the game to their actual movements. To accurately convey that for multiple users in a free-roam environment, precision tracking and flexible scalability were vital. For that, Sandbox VR turned to Vicon.

Set in the twilight of the 21st century, Amber Sky 2088 takes players to a futuristic version of Hong Kong, then through the clouds to the edge of space to fight off an alien invasion. Android abilities allow players to react with incredible strength and move at speeds fast enough to dodge bullets. And while the in-game action is furious, participants in the real world — equipped with VR headsets — freely roam an open environment as Vicon LBVR motion capture cameras track their movement.

Vicon’s motion capture cameras record every player movement, then send the data to its Evoke software, a solution introduced last year as part of its LBVR platform, Origin. Vicon’s solution offers  precise tracking, while also animating player motion in realtime, creating a seamless in-game experience. Automatic re-calibration also makes the experience’s operation easier than ever despite its complex nature, and the system’s scalability means fewer cameras can be used to capture more movement, making it cost-effective for large scale expansion.

Since its founding in 2016, Sandbox VR has been creating interactive experiences by combining motion capture technology with virtual reality. After opening its first location in Hong Kong in 2017, the company has since expanded to seven locations across Asia and North America, with six new sites on the way. Each 30- to 60-minute experience is created in-house by Sandbox VR, and each can accommodate up to six players at a time.

The recent partnership with Vicon is the first step in Sandbox VR’s expansion plans that will see it open over 40 experience rooms across 12 new locations around the world by the end of the year. In considering its plans to build and operate new locations, the VR makers chose to start with five systems from Vicon, in part because of the company’s collaborative nature.

Lowepost offering Scratch training for DITs, post pros

Oslo, Norway-based Lowepost, which offers an online learning platform for post production, has launched an Assimilate Scratch Training Channel targeting DITs and post pros. The training includes an extensive series of tutorials that guide a post pro or DIT through the features of an entire Scratch workflow. Scratch covers everything from dailies and conform to color grading, visual effects, compositing, finishing, VR and live streaming.

“We’re offering in-depth training of Scratch via comprehensive tutorials developed by Lowepost and Assimilate,” says Stig Olsen, manager of Lowepost. “Our primary goal is to make Scratch training easily accessible to all users and post artists for building their skills in high-end tools that will advance their expertise and careers. It’s also ideal for DaVinci Resolve colorists who want to add another excellent conform, finishing and VR tool to their tool kit.”

Lowepost is offering three months of free access to the Scratch training. The first tutorial, Scratch Essential Training, is available now. A free 30-day trial of Scratch is available via the Assimilate website.

Lowepost’s Scratch Training Channel is available for an annual fee of $59 (US).

Behind the Title: Left Field Labs ECD Yann Caloghiris

NAME: Yann Caloghiris

COMPANY: Left Field Labs (@LeftFieldLabs)

CAN YOU DESCRIBE YOUR COMPANY?
Left Field Labs is a Venice, California-based creative agency dedicated to applying creativity to emerging technologies. We create experiences at the intersection of strategy, design and code for our clients, who include Google, Uber, Discovery and Estée Lauder.

But it’s how we go about our business that has shaped who we have become. Over the past 10 years, we have consciously moved away from the traditional agency model and have grown by deepening our expertise, sourcing exceptional talent and, most importantly, fostering a “lab-like” creative culture of collaboration and experimentation.

WHAT’S YOUR JOB TITLE?
Executive Creative Director

WHAT DOES THAT ENTAIL?
My role is to drive the creative vision across our client accounts, as well as our own ventures. In practice, that can mean anything from providing insights for ongoing work to proposing creative strategies to running ideation workshops. Ultimately, it’s whatever it takes to help the team flourish and push the envelope of our creative work.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Probably that I learn more now than I did at the beginning of my career. When I started, I imagined that the executive CD roles were occupied by seasoned industry veterans, who had seen and done it all, and would provide tried and tested direction.

Today, I think that cliché is out of touch with what’s required from agency culture and where the industry is going. Sure, some aspects of the role remain unchanged — such as being a supportive team lead or appreciating the value of great copy — but the pace of change is such that the role often requires both the ability to leverage past experience and accept that sometimes a new paradigm is emerging and assumptions need to be adjusted.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Working with the team, and the excitement that comes from workshopping the big ideas that will anchor the experiences we create.

WHAT’S YOUR LEAST FAVORITE?
The administrative parts of a creative business are not always the most fulfilling. Thankfully, tasks like timesheeting, expense reporting and invoicing are becoming less exhausting thanks to better predictive tools and machine learning.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
The early hours of the morning, usually when inspiration strikes — when we haven’t had to deal with the unexpected day-to-day challenges that come with managing a busy design studio.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I’d probably be somewhere at the cross-section between an artist, like my mum was, and an engineer like my dad. There is nothing more satisfying than to apply art to an engineering challenge or vice versa.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I went to school in France, and there wasn’t much room for anything other than school and homework. When I got my Baccalaureate, I decided that from that point onward, whatever I did would be fun, deeply engaging and at a place where being creative was an asset.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We recently partnered with ad agency RK Venture to craft a VR experience for the New Mexico Department of Transportation’s ongoing ENDWI campaign, which immerses viewers into a real-life drunk-driving scenario.

ENDWI

To best communicate and tell the human side of this story, we turned to rapid breakthroughs in volumetric capture and 3D scanning. Working with Microsoft’s Mixed Reality Capture Studio, we were able to bring every detail of an actor’s performance to life with volumetric performance capture in a way that previous techniques could not.

Bringing a real actor’s performance into a virtual experience is a game changer because of the emotional connection it creates. For ENDWI, the combination of rich immersion with compelling non-linear storytelling proved to affect the participants at a visceral level — with the goal of changing behavior further down the road.

Throughout this past year, we partnered with the VMware Cloud Marketing Team to create a one-of-a-kind immersive booth experience for VMworld Las Vegas 2018 and Barcelona 2018 called Cloud City. VMware’s cloud offering needed a distinct presence to foster a deeper understanding and greater connectivity between brand, product and customers stepping into the cloud.

Cloud City

Our solution was Cloud City, a destination merging future-forward architecture, light, texture, sound and interactions with VMware Cloud experts to give consumers a window into how the cloud, and more specifically VMware Cloud, can be an essential solution for them. VMworld is the brand’s premier engagement, where hands-on learning helped showcase its cloud offerings. Cloud City garnered 4,000-plus demos, which led to a 20% lead conversion in 10 days.

Finally, for Google, we designed and built a platform for the hosting of online events anywhere in the world: Google Gather. For its first release, teams across Google, including Android, Cloud and Education, used Google Gather to reach and convert potential customers across the globe. With hundreds of events to date, the platform now reaches enterprise decision-makers at massive scale, spanning far beyond what has been possible with traditional event marketing, management and hosting.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Recently, a friend and I shot and edited a fun video homage to the original technology boom-town: Detroit, Michigan. It features two cultural icons from the region, an original big block ‘60s muscle car and some gritty electro beats. My four-year-old son thinks it’s the coolest thing he’s ever seen. It’s going to be hard for me to top that.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Human flight, the Internet and our baby monitor!

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Instagram, Twitter, Medium and LinkedIn.

CARE TO SHARE YOUR FAVORITE MUSIC TO WORK TO?
Where to start?! Music has always played an important part of my creative process, and the joy I derive from what we do. I have day-long playlists curated around what I’m trying to achieve during that time. Being able to influence how I feel when working on a brief is essential — it helps set me in the right mindset.

Sometimes, it might be film scores when working on visuals, jazz to design a workshop schedule or techno to dial up productivity when doing expenses.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Spend time with my kids. They remind me that there is a simple and unpretentious way to look at life.

Lucid and EYS3D partner on VR180 depth camera module

EYS3D Microelectronics Technology, the company behind embedded camera modules in some top-tier AR/VR headsets, has partnered with AI startup Lucid, which will power its next-generation depth-sensing camera module, Axis. This means that a single, small, handheld device can capture accurate 3D depth maps with up to a 180-degree field of view at high resolution, allowing content creators to scan, reconstruct and output precise 3D point clouds.

This new camera module, which was demoed for the first time at CES, will give developers, animators and game designers a way to transform the physical world into a virtual one, ramping up content for 3D, VR and AR, all with superior performance in resolution and field of view at a lower cost than some technologies currently available.

A device that captures the environment exactly as you perceive it, but enhanced with precise depth, distance and scene understanding, could help eliminate the boundaries between what you see in the real world and what you can create in the VR and AR world. This is what the Lucid-powered EYS3D Axis camera module aims to bring to content creators, as they gain the “super power” of transforming anything in their vision into a 3D object or scene that others can experience, interact with and walk in.

What was previously only possible with eight to 16 high-end DSLR cameras and expensive software or depth sensors is now combined into one tiny camera module with stereo lenses paired with IR sensors. Axis will cover up to a 180-degree field of view while providing millimeter-accurate 3D in point cloud or depth map format. The device provides a simple plug-and-play experience through USB 3.1 Gen1/2 and supported Windows and Linux software suites, allowing users to further develop their own depth applications, such as 3D reconstructing an entire scene, scanning faces into 3D models or just determining how far away an object is.
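
The depth-map-to-point-cloud step such a module enables is standard pinhole-camera geometry. The sketch below is generic Python, not EYS3D’s or Lucid’s SDK, and the camera intrinsics are placeholder values.

```python
# Standard pinhole back-projection from a depth map to a 3D point cloud.
# This is generic geometry, not EYS3D's or Lucid's SDK; the intrinsics below
# (focal lengths fx, fy and principal point cx, cy) are placeholder values.
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]          # drop pixels with no valid depth

# Example with a synthetic 480x640 depth map (values in meters).
depth = np.full((480, 640), 2.0, dtype=np.float32)
cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
print(cloud.shape)      # (307200, 3)
```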

Lucid’s AI-enhanced 3D/depth solution, known as 3D Fusion Technology, is currently deployed in many devices, such as 3D cameras, robots and mobile phones, including the Red Hydrogen One, which just launched through AT&T and Verizon nationwide.

EYS3D’s new depth camera module powered by Lucid will be available in Q3 2019.

From artist to AR technologist: What I learned along the way

By Leon Hui

As an ARwall co-founder and chief technology officer (CTO), I manage all things relating to technology for the company. This includes overseeing software and technology development, design, engineering, IT, troubleshooting and everything in between. When we launched the company, I single-handedly developed the critical pieces of technology required to realize the ARwall concept.

I came into augmented reality (AR) as a game development software engineer, and that plays a big role in how I approach this new medium. Stepping into ARwall, it became my job to produce artistic realtime graphics for AR backdrops and settings, while also pursuing technological advancements that will move the AR industry forward.

Rene Amador presents in front of an ARwall screen. The TV monitor in the foreground shows the camera’s perspective.

Alongside CEO Rene Amador, the best way we found to make sure the company retained artistic values was to bring on highly talented artists, coders and engineers with a diverse skill set in both art and tech. It’s our mission to not let the scales tip one way or the other, and to focus on bringing in both artistic and tech talent.

With the continuing convergence of entertainment and technology, it is vital for a creative technology company to continue to advance, while maintaining and nurturing artistic integrity.

Here is what we have learned along the way in striking this balance:

Diversify Your Hiring
Going into AR, or any other immersive field, it is very important that one understands realtime graphics.

So, while it’s useful for my company to hire engineers that have graphics and coding backgrounds — as many game engineers do — it’s still crucial to hire for the individual strengths of both tech and art. At ARwall, our open roles could be combined for one gifted individual, or isolated with an emphasis on either artistry or coding, for those with specialties.

Because we are dealing with high-quality realtime graphics, the ARwall team would be similar to the team profiles of any AAA game studio. We never deviated from an artistic trajectory — we just brought technology along for the ride. We think of talent recruitment as a crucial process in our advancement and always have our eyes out for our next game developer to fill roles ranging from technical, environment, material and character artist to graphics, game engine and generalist engineer.

Expand Your Education
If someone with a background in film or TV post production came to work in a new tech industry like AR, they would need to expand their own education. It’s challenging, but not impossible. While my company’s current emphasis is on game developers and CG artists, the backgrounds of fellow co-founders Rene Amador, Eric Navarrette and Jocelyn Hsu span ad agencies, television digital development, post production and beyond.

Jocelyn Hsu on an XR set, a combination of physical set pieces with the CG set extension running in the background.

There are a variety of toolsets and concepts left to learn, including: the software development life cycle; Microsoft Project or Hansoft; Agile methodology; the definition of “realtime graphics” and how it works; the top-dog game engine tools, including Unity and Unreal Engine 4; and digital asset creation pipelines for game engines, among others.

The transition is largely based on one’s game development background but, of course, there is always a learning curve when entering a new industry.

Focus on the Balance
We understand that the core of a “technology company,” as we bill ourselves, is still the foundational technology. However, depending on the type of technology, companies need staffers that have a high-level mastery of the technology in order to demonstrate its full potential to others. It just happens that with AR technology there is an inherently visual aspect, which translates to a need for superior artistry in unison with the precise technology.

In order for AR technology to show at its best and look appealing, high-quality artistry is very much needed. This can be a difficult balance to maintain if focus or purpose is lost. For ARwall, we aim to hire talent that excels at art or engineering, or both.

ARwall expanded its offerings to stake its claim as a technology company, but built on each founder’s roots as artists, engineers and producers. Tech and art aren’t mutually exclusive; rather, with focus, education and time to search for the right talent, technology companies can excel at invention and keep their creative edge, all at once.


Leon Hui brings to the team more than 20 years of technical experience as a software engineer focusing on realtime 3D graphics, VR/AR and systems architecture. He has held lead and senior technical roles on 15 shipped AAA titles as a veteran of top developers including EA, Microsoft Studios and Konami Digital Entertainment. He was previously a TD at Skydance Interactive. ARwall is based in Burbank.

 

Full-service creative agency Carousel opens in NYC

Carousel, a new creative agency helmed by Pete Kasko and Bernadette Quinn, has opened its doors in New York City. Billing itself as “a collaborative collective of creative talent,” Carousel is positioned to handle projects from television series to ad campaigns for brands, media companies and advertising agencies.

Clients such as PepsiCo’s Pepsi, Quaker and Lays brands; Victoria’s Secret; Interscope Records; A&E Network and The Skimm have all worked with the company.

Designed to provide full 360 capabilities, Carousel allows its brand partners to partake of all its services or pick and choose specific offerings including strategy, creative development, brand development, production, editorial, VFX/GFX, color, music and mix. Along with its client relationships, Carousel has also been the post production partner for agencies such as McGarryBowen, McCann, Publicis and Virtue.

“The industry is shifting in how the work is getting done. Everyone has to be faster and more adaptable to change without sacrificing the things that matter,” says Quinn. “Our goal is to combine brilliant, high-caliber people, seasoned in all aspects of the business, under one roof together with a shared vision of how to create better content in a more efficient way.”

Managing director Dee Tagert comments, “The name Carousel describes having a full set of capabilities from ideation to delivery so that agencies or brands can jump on at any point in their process. By having a small but complete agency team that can manage and execute everything from strategy, creative development and brand development to production and post, we can prove more effective and efficient than a traditional agency model.”

Danielle Russo, Dee Tagert, AnaLiza Alba Leen

AnaLiza Alba Leen comes on board Carousel as creative director with 15 years of global agency experience, and executive producer Danielle Russo brings 12 years of agency experience.

Tagert adds, “The industry has been drastically changing over the last few years. As clients’ hunger for content is driving everything at a much faster pace, it was completely logical to us to create a fully integrative company to be able to respond to our clients in a highly productive, successful manner.”

Carousel is currently working on several upcoming projects for clients including Victoria’s Secret, DNTL, Subway, US Army, Tazo Tea and Range Rover.

Main Image: Bernadette Quinn and Pete Kasko

Nvidia intros Turing-powered Titan RTX

Nvidia has introduced its new Nvidia Titan RTX, a desktop GPU that provides the kind of massive performance needed for creative applications, AI research and data science. Driven by the new Nvidia Turing architecture, Titan RTX — dubbed T-Rex — delivers 130 teraflops of deep learning performance and 11 GigaRays per second of raytracing performance.

Turing features new RT Cores to accelerate raytracing, plus new multi-precision Tensor Cores for AI training and inferencing. These two engines — along with more powerful compute and enhanced rasterization — will help speed the work of developers, designers and artists across multiple industries.

Designed for computationally demanding applications, Titan RTX combines AI, realtime raytraced graphics, next-gen virtual reality and high-performance computing. It offers the following features and capabilities:
• 576 multi-precision Turing Tensor Cores, providing up to 130 Teraflops of deep learning performance
• 72 Turing RT Cores, delivering up to 11 GigaRays per second of realtime raytracing performance
• 24GB of high-speed GDDR6 memory with 672GB/s of bandwidth — two times the memory of previous-generation Titan GPUs — to fit larger models and datasets
• 100GB/s Nvidia NVLink, which can pair two Titan RTX GPUs to scale memory and compute
• Performance and memory bandwidth sufficient for realtime 8K video editing
• VirtualLink port, which provides the performance and connectivity required by next-gen VR headsets

Titan RTX provides multi-precision Turing Tensor Cores for breakthrough performance from FP32, FP16, INT8 and INT4, allowing faster training and inference of neural networks. It offers twice the memory capacity of previous-generation Titan GPUs, along with NVLink to allow researchers to experiment with larger neural networks and datasets.
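
To give a sense of what multi-precision training looks like from the developer’s side, here is a minimal PyTorch sketch of FP16 mixed-precision training, the kind of workload the Tensor Cores accelerate. The model, data and hyperparameters are placeholders rather than Nvidia benchmark code, and it assumes a CUDA-capable GPU.

```python
import torch
import torch.nn as nn

# Toy model and optimizer; any real network would follow the same pattern.
model = nn.Sequential(nn.Linear(1024, 2048), nn.ReLU(), nn.Linear(2048, 10)).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()              # scales the loss to avoid FP16 underflow

for step in range(100):
    x = torch.randn(256, 1024, device="cuda")     # stand-in training batch
    y = torch.randint(0, 10, (256,), device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():               # matmuls run in FP16 on Tensor Cores
        loss = nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```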

Titan RTX accelerates data analytics with RAPIDS. RAPIDS open-source libraries integrate seamlessly with the world’s most popular data science workflows to speed up machine learning.
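
As a rough illustration of what a RAPIDS workflow looks like, the short cuDF sketch below loads a CSV directly into GPU memory and aggregates it with a pandas-style API; the file name and columns are hypothetical.

```python
import cudf  # RAPIDS GPU DataFrame library

df = cudf.read_csv("render_log.csv")     # data lands in GPU memory
per_shot = df.groupby("shot_id").mean()  # aggregation runs on the GPU
print(per_shot.head())
```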

Titan RTX will be available later in December in the US and Europe for $2,499.

Storage for Interactive, VR

By Karen Moltenbrey

Every vendor in the visual effects and post production industries relies on data storage. However, studios working on new media or hybrid projects, which generate far more content in general, not only need a reliable solution, they need one that can handle terabytes upon terabytes of data.

Here, two companies in the VR space discuss their need for a storage solution that serves their business requirements.

Lap Van Luu

Magnopus
Located in downtown Los Angeles, Magnopus creates VR and AR experiences. While a fairly new company — it was founded in 2013 — its staff has an extensive history in the VFX and games industries, with Academy Award winners among its founders. So, there is no doubt that the group knows what it takes to create amazing content.

It also knows the necessity of a reliable storage solution, and one that can handle the large amounts of data generated by an AR or VR project. At Magnopus, the crew uses a custom-built solution leveraging Supermicro architecture. As Magnopus CTO Lap Van Luu points out, they are using an SSG-6048R-E1CR60N 4U chassis that the studio populates with two storage tiers: the cache read-and-write layer is NVMe, while the second tier is SAS. Both are in a RAID-10 configuration, with 1TB of NVMe and 500TB of SAS raw storage.

“This setup allows us to scale to a larger workforce and meet the demands of our artists,” says Luu. “We leverage faster NVMe Flash and larger SAS for the bulk of our storage requirements.”
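
As a back-of-the-envelope illustration, the snippet below estimates the usable capacity implied by those raw numbers, assuming the usual RAID-10 behavior in which mirrored pairs halve raw capacity; the figures are illustrative, not Magnopus’ published numbers.

```python
# RAID-10 mirrors every block, so usable capacity is roughly half of raw.
def raid10_usable_tb(raw_tb: float) -> float:
    return raw_tb / 2.0

print(f"SAS tier:  ~{raid10_usable_tb(500):.0f} TB usable from 500 TB raw")
print(f"NVMe tier: ~{raid10_usable_tb(1) * 1000:.0f} GB usable from 1 TB raw")
```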

Before Magnopus, Luu worked at companies with all kinds of storage systems over the past 20 years, including those from NetApp, BlueArc and Isilon, as well as custom builds of ZFS, FreeNAS, Microsoft Windows Storage Spaces and Hadoop configurations. However, since Magnopus opened, it has only switched to a bigger and faster version of its original setup, starting with a custom Supermicro system with 400GB of SSD and 250TB of SAS in the same configuration.

“We went with this configuration because as we were moving more into realtime production than traditional VFX, the need for larger renderfarms and storage IO demands dropped dramatically,” says Luu. “We also knew that we wanted to leverage smart caching due to the cost of Flash storage dropping to a reasonable price point. It was the ideal situation to be in. We were starting a new company with a less-demanding infrastructure with newer technology that was cheaper, faster and better overall.”

Nevertheless, choosing a specific solution was not a decision that was made lightly. “When you move away from your premier storage solution providers, there is always a concern for scalability and reliability. When working in realtime production, the concern to re-render elements wasn’t a factor of hours or days, but rather seconds and minutes. It was important for us to have redundant backups. But for the cost saving on storage, we could easily get mirrored servers and still be saving a significant amount of money.”

Luu knew the studio wanted to leverage Flash caching, so the big question was, How much Flash was necessary to meet the demands of their artists and processing farm? The processing farm was mainly used to generate textures and environments that were imported over to a real-time engine, such as Unity or Unreal Engine. To this end, Magnopus had to find out who offered a solution for caching that was as hands-off as possible and was invisible to all the users. “LSI, now Avago, had a solution with the RAID controller called cachecade, which dealt with all the caching,” he says. “All you had to do was set up some preferences and the RAID controller would take care of the rest.”

However, cachecade had a size limit on the caching layer of 512GB, so the studio had to do some testing to see if it would ever exceed that, and in a rare situation it did, says Luu. “But it was never a worry because behind the flash cache was a 60 SAS drive RAID-10 configuration.”

As Luu explains, when working with VFX, IOPS (IO operations per second) is always the biggest issue due to the heavy demand from certain types of applications. “VFX work and compositing can typically drive any storage solution to a grinding halt when you have a renderfarm taxing the production storage from your artists,” he explains. However, realtime development IO demands are significantly less, since the assets are created in a DCC application but imported into a game engine, where processing occurs in realtime and locally. So, storing all those traditional VFX elements is not necessary, and the overall capacity of storage dropped to one-tenth of what was required with VFX, Luu points out.

And since Magnopus has a Flash-based cache layer that is large enough to meet the company’s IO demands, it does not have to leverage localization to reduce the IO demand off the main production server; as a result, the user gets immediate server response. And, it means that all data within the pipeline resides on the company’s main production server — where the company starts and ends any project.

“Magnopus is a content-focused technology company,” Luu says. “All our assets and projects that we create are digital. Storage is extremely important because it is the lifeblood of everything we create. The storage server can be the difference between if a user can focus on creative content creation where the infrastructure is invisible or the frustration of constantly being blocked and delayed by hardware. Enabling everyone to work as efficiently as possible allows for the best results and products for our clients and customers.”

Light Sail VR
Light Sail VR is a Hollywood-based VR boutique that is a pioneer in cinematic virtual reality storytelling. Since its founding three years ago, the studio has been producing a range of interactive, 360- and 180-degree VR content, including original work and branded pieces for Google, ABC, GoPro and Paramount.

Matt Celia on set for Speak of the Devil.

Because Light Sail VR is a unique but small company, employees often have to wear a number of hats. For instance, co-founder Robert Watts is executive producer and handles many of the logistical issues. His partner, Matthew Celia, is creative director and handles more of the technical aspects of the business. So when it comes to managing the company’s storage needs, Celia is the guy. And, having a reliable system that keeps things running smoothly is paramount, as he is also juggling shoots and post-production work. No one can afford delays in production and post, but for a small company, it can be especially disastrous.

Light Sail VR does not simply dabble in VR; it is what the company does exclusively. Most of the projects thus far have been live action, though the group started its first game engine work this year. When the studio produced a piece with GoPro in the first year of its founding, it was on a sneakernet of G-Drives from G-Technology, “and I was going crazy!” says Celia. “VR is fantastic, but it’s very data-intensive. You can max out a computer’s processing very easily, and the render times are extraordinarily long. There’s a lot of shots to get through because every shot becomes a visual effects shot with either stitching, rotoscoping or compositing needed.”

He continues: “I told Robert [Watts] we needed to get a shared storage server so if I max out one computer while I’m working, I can just go to another computer and keep working, rather than wait eight to 10 hours for a render to finish.”

The Speak of the Devil shoot.

Celia had been dialed into the post world for some time. “Before diving into the world of VR, I was a Final Cut guy, and the LumaForge guys and [founder] Sam Mestman were people I always respected in the industry,” he says. So, Celia reached out to them with a cold call and explained that Light Sail VR was doing virtual reality, an uncharted, pioneering new thing, and was going to need a lot of storage — and needed it fast. “I told them, ‘We want to be hooked up to many computers, both Macs and PCs, and don’t want to deal with file structures and those types of things.’”

Celia points out that they are an independent and small boutique, so finding something that was cost effective and reliable was important. LumaForge responded with a solution called Jellyfish Mobile, geared for small teams and on-set work or portable office environments. “I think we got the 30TB NAS server that has four 10Gb Ethernet connections.” That enabled Light Sail VR to hook up the system to all its computers, “and it worked,” he adds. “I could work on one shot, hit render, and go to another computer and continue working on the next shot and hit render, then kind of ping-pong back and forth. It made our lives a lot easier.”

Light Sail VR has since graduated to the larger-capacity Jellyfish Rack system, which is a 160TB solution (expandable up to 1 petabyte).

The storage is located in Light Sail VR’s main office and is hooked up to its computers. The filmmakers shoot in the field and, if on location, download the data to drives, which they transport back to the office and load onto the server. Then they transcode all the media to DNx. (VR is captured in H.264 format, which is not user-friendly for editing due to the high-res frame size.)
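
For readers curious what that transcode step might look like, here is a minimal sketch that calls ffmpeg’s DNxHR encoder from Python. The file names and the DNxHR HQ profile are assumptions; Light Sail VR’s actual transcode settings aren’t specified.

```python
import subprocess

def transcode_to_dnxhr(src: str, dst: str) -> None:
    """Re-encode an H.264 source as edit-friendly DNxHR HQ with uncompressed audio."""
    subprocess.run(
        [
            "ffmpeg", "-i", src,
            "-c:v", "dnxhd", "-profile:v", "dnxhr_hq",  # intra-frame DNxHR HQ
            "-pix_fmt", "yuv422p",
            "-c:a", "pcm_s16le",
            dst,
        ],
        check=True,
    )

transcode_to_dnxhr("cam_left_eye.mp4", "cam_left_eye_dnxhr.mov")
```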

Currently, Celia is in New York, having just wrapped the 20th episode of original content for Refinery29, a media company focused on young women that produces editorial and video programming, live events and social, shareable content delivered across major social media platforms, and covers a variety of categories from style to politics and more. Eight of the episodes are currently in various stages of the post pipeline, due to come out later this year. “And having a solid storage server has been a godsend,” Celia says.

The studio backs up locally onto Seagate drives for archival purposes and sometimes employs G-Technology drives for on-set work. “We just got this new G-Tech SSD that’s 2TB. It’s been great for use on set because having an SSD and downloading all the cards while on set makes your wrap process so much faster,” Celia points out.

Lately, Light Sail VR is shooting a lot of VR-180, requiring two 64GB cards per camera — one for the right eye and one for the left eye. But when they are shooting with the Yi Halo next-gen 3D 360-degree Google Jump camera, they use 17 64GB cards. “That’s a lot of data,” says Celia. “You can have a really bad day if you have really bad drives.”
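
To put that in perspective, here is a quick worked example of the per-load data volume, using only the card counts and sizes quoted above.

```python
CARD_GB = 64

vr180_gb = 2 * CARD_GB     # two cards per VR-180 camera: one per eye
yi_halo_gb = 17 * CARD_GB  # a full card set for the Yi Halo Jump rig

print(f"VR-180 camera load: {vr180_gb} GB")
print(f"Yi Halo card set:   {yi_halo_gb} GB (~{yi_halo_gb / 1024:.1f} TB)")
```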

The studio’s previous solution operated via Thunderbolt 1 in a RAID-5. It only worked on a single machine and was not cross-platform. As the studio made the transition over to PC from Mac to take advantage of better hardware capable of supporting VR playback, that solution was just not practical. They also needed a solution that was plug and play, so they could just pop it into a 10Gb Ethernet connection — they did not want fiber, “which can get expensive.”

The Light Sail team.

“I just wanted something very simple that was cross-platform and could handle what we were doing, which is, by the way, 6K or 8K stereo at 60 frames per second – these workloads are larger than most feature films,” Celia says. “So, we needed a lot of storage. We needed it fast. We needed it to be shared.”

However, while Celia searched for a system, one thing became clear to him: The solutions were technical. “It seemed like I would have to be my own IT department.” And, that was just one more hat he did not want to have to wear. “At LumaForge, they are independent filmmakers. They understood what I was trying to do immediately, and were willing to go on that journey with us.”

Says Celia, “I always call hard drives or storage the underwear of the post production world because it’s the thing you hate spending a lot of money on, but you really need it to perform and work.”

Main Image: Magnopus


Karen Moltenbrey is a long-time VFX and post writer.

Behind the Title: Lobo EP for Europe, Loic Francois Marie Dubois

NAME: Loic Francois Marie Dubois

COMPANY: New York- and São Paulo, Brazil-based Lobo

CAN YOU DESCRIBE YOUR COMPANY?
We are a full-service creative studio offering design, live action, stop motion, 3D & 2D, mixed media, print, digital, AR and VR.

Day One spot Sunshine

WHAT’S YOUR JOB TITLE?
Creative executive producer for Europe and formerly head of production. I’m based in Brazil, but work out of the New York office as well.

WHAT DOES THAT ENTAIL?
Managing, hiring creative teams, designers, producers and directors for international productions (USA, Europe, Asia). Also, I have served as the creative executive director for TBWA Paris on the McDonald’s Happy Meal global campaign for the last five years. Now as creative EP for Europe, I am also responsible for streamlining information from pre-production to post production between all production parties for a more efficient and prosperous sales outcome.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The patience and the fun psychological side you need to have to handle all the production peeps, agencies, and clients.

WHAT TOOLS DO YOU USE?
Excel, Word, Showbiz, Keynote, Pages, Adobe Package (Photoshop, Illustrator, After Effects, Premiere, InDesign), Maya, Flame, Nuke and AR/VR technology.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Working with talented creative people on extraordinary projects with a stunning design and working on great narratives, such as the work we have done for clients including Interface, Autism Speaks, Imaginary Friends, Unicef and Travelers, to name a few.

WHAT’S YOUR LEAST FAVORITE?
Monday morning.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Early afternoon between Europe closing down and the West Coast waking up.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Meditating in Tibet…

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Since I was 13 years old. After shooting and editing a student short film (an Oliver Twist adaptation) with a Bolex 16mm on location in London and Paris, I was hooked.

Promoting Lacta 5Star chocolate bars

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
An animated campaign for the candy company Mondelez’s Lacta 5Star chocolate bars; an animated short film for the Imaginary Friends Society; a powerful animated short on the dangers of dating abuse and domestic violence for nonprofit Day One; a mixed media campaign for Chobani called FlipLand; and a broadcast spot for McDonald’s and Spider-Man.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
My three kids 🙂

It’s really hard to choose one project, as they are all equally different and amazing in their own way, but maybe D&AD Wish You Were Here. It stands out for the number of awards it won and the collective creative production process.

NAME PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
The Internet.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Meditation and yoga.

Panasas’ new ActiveStor Ultra targets emerging apps: AI, VR

Panasas has introduced ActiveStor Ultra, the next generation of its high-performance computing storage solution, featuring PanFS 8, a plug-and-play, portable, parallel file system. ActiveStor Ultra offers up to 75GB/s per rack on industry-standard commodity hardware.

ActiveStor Ultra comes as a fully integrated plug-and-play appliance running PanFS 8 on industry-standard hardware. PanFS 8 is the completely re-engineered Panasas parallel file system, which now runs on Linux and features intelligent data placement across three tiers of media — metadata on non-volatile memory express (NVMe), small files on SSDs and large files on HDDs — resulting in optimized performance for all data types.
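
To make the data-placement idea concrete, here is a purely conceptual Python sketch of a size-based tiering policy of that kind. It illustrates the principle only, not PanFS 8’s actual logic, and the 64KB small-file threshold is an assumption.

```python
SMALL_FILE_BYTES = 64 * 1024  # assumed cutoff between "small" and "large" files

def choose_tier(is_metadata: bool, size_bytes: int) -> str:
    if is_metadata:
        return "NVMe"   # lowest-latency tier for metadata
    if size_bytes <= SMALL_FILE_BYTES:
        return "SSD"    # small files benefit from flash IOPS
    return "HDD"        # large files stream efficiently from disk

print(choose_tier(False, 4 * 1024))           # -> SSD
print(choose_tier(False, 500 * 1024 * 1024))  # -> HDD
print(choose_tier(True, 256))                 # -> NVMe
```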

ActiveStor Ultra is designed to support the complex and varied data sets associated with traditional HPC workloads and emerging applications, such as artificial intelligence (AI), autonomous driving and virtual reality (VR). ActiveStor Ultra’s modular architecture and building-block design enables enterprises to start small and scale linearly. With dock-to-data in one hour, ActiveStor Ultra offers fast data access and virtually eliminates manual intervention to deliver the lowest total cost of ownership (TCO).

ActiveStor Ultra will be available early in the second half of 2019.

Epic Games’ Unreal Engine 4.21 adds more mobile optimizations, efficiencies

Epic Games’ Unreal Engine 4.21 is designed to offer developers greater efficiency, performance and stability for those working on any platform.

Unreal Engine 4.21 adds even more mobile optimizations to both Android and iOS, up to 60% speed increases when cooking content and more power and flexibility in the Niagara effects toolset for realtime VFX. Also, the new production-ready Replication Graph plugin enables developers to build multiplayer experiences at a scale that hasn’t been possible before, and Pixel Streaming allows users to stream interactive content directly to remote devices with no compromises on rendering quality.

Updates in Unreal Studio 4.21 also offer new capabilities and enhanced productivity for users in the enterprise space, including architecture, manufacturing, product design and other areas of professional visualization. Unreal Studio’s Datasmith workflow toolkit now includes support for Autodesk Revit and enhanced material translation for Autodesk 3ds Max, enabling more efficient design review and iteration.

Here is more about the key features:
Replication Graph: The Replication Graph plugin, which is now production-ready, makes it possible to customize network replication in order to build large-scale multiplayer games that would not be viable with traditional replication strategies.

Niagara Enhancements: The Niagara VFX feature set continues to grow, with substantial quality of life improvements and Nintendo Switch support added in Unreal Engine 4.21.

Sequencer Improvements: New capabilities within Sequencer allow users to record incoming video feeds to disk as OpenEXR frames and create a track in Sequencer, with the ability to edit and scrub the track as usual. This enables users to synchronize video with CG assets and play them back together from the timeline.

Pixel Streaming (Early Access): With the new Pixel Streaming feature, users can author interactive experiences such as product configurations or training applications, host them on a cloud-based GPU or local server, and stream them to remote devices via a web browser without the need for additional software or porting.

Mobile Optimizations: The mobile development process gets even better thanks to all of the mobile optimizations that were developed for Fortnite‘s initial release on Android, in addition to all of the iOS improvements from Epic’s ongoing updates. With the help of Samsung, Unreal Engine 4.21 includes all of the Vulkan engineering and optimization work that was done to help ship Fortnite on the Samsung Galaxy Note 9 and is 100% feature compatible with OpenGL ES 3.1.

Much Faster Cook Times: In addition to the optimized cooking process, low-level code avoids performing unnecessary file system operations, and cooker timers have been streamlined.

Gauntlet Automation Framework (Early Access): The new Gauntlet automation framework enables developers to automate the process of deploying builds to devices, running one or more clients and/or servers, and processing the results. Gauntlet scripts can automatically profile points of interest, validate gameplay logic, check return values from backend APIs and more. Gauntlet has been battle-tested for months in the process of optimizing Fortnite and is a key part of ensuring it runs smoothly on all platforms.

Animation System Optimizations and Improvements: Unreal Engine’s animation system continues to build on best-in-class features thanks to new workflow improvements, better surfacing of information, new tools, and more.

Blackmagic Video Card Support: Unreal Engine 4.21 also adds support for Blackmagic video I/O cards for those working in film and broadcast. Creatives in the space can now choose between Blackmagic and AJA Video Systems, the two most popular options for professional video I/O.

Improved Media I/O: Unreal Engine 4.21 now supports 10-bit video I/O, audio I/O, 4K, and Ultra HD output over SDI, as well as legacy interlaced and PsF HD formats, enabling greater color accuracy and integration of some legacy formats still in use by large broadcasters.

Windows Mixed Reality: Unreal Engine 4.21 natively supports the Windows Mixed Reality (WMR) platform and headsets, such as the HP Mixed Reality headset and the Samsung HMD Odyssey headset.

Magic Leap Improvements: Unreal Engine 4.21 supports all the features needed to develop complete applications on Magic Leap’s Lumin-based devices — rendering, controller support, gesture recognition, audio input/output, media, and more.

Oculus Avatars: The Oculus Avatar SDK includes an Unreal package to assist developers in implementing first-person hand presence for the Rift and Touch controllers. The package includes avatar hand and body assets that are viewable by other users in social applications.

Datasmith for Revit (Unreal Studio): Unreal Studio’s Datasmith workflow toolkit for streamlining the transfer of CAD data into Unreal Engine now includes support for Autodesk Revit. Supported elements include materials, metadata, hierarchy, geometric instancing, lights and cameras.

Multi-User Viewer Project Template (Unreal Studio): A new project template for Unreal Studio 4.21 enables multiple users to connect in a real-time environment via desktop or VR, facilitating interactive, collaborative design reviews across any work site.

Accelerated Automation with Jacketing and Defeaturing (Unreal Studio): Jacketing automatically identifies meshes and polygons that have a high probability of being hidden from view, and lets users hide, remove or move them to another layer; this command is also available through Python so Unreal Studio users can integrate this step into automated preparation workflows. Defeaturing automatically removes unnecessary detail (e.g. blind holes, protrusions) from mechanical models to reduce polygon count and boost performance.

Enhanced 3ds Max Material Translation (Unreal Studio): There is now support for most commonly used 3ds Max maps, improving visual fidelity and reducing rework. Those in the free Unreal Studio beta can now translate 3ds Max material graphs to Unreal graphs when exporting, making materials easier to understand and work with. Users can also leverage improvements in BRDF matching from V-Ray materials, especially metal and glass.

DWG and Alias Wire Import (Unreal Studio): Datasmith now supports DWG and Alias Wire file types, enabling designers to import more 3D data directly from Autodesk AutoCAD and Autodesk Alias.

Satore Tech tackles post for Philharmonia Orchestra’s latest VR film

The Philharmonia Orchestra in London debuted its latest VR experience at Royal Festival Hall alongside the opening two concerts of the Philharmonia’s new season. Satore Tech completed VR stitching for the Mahler 3: Live From London film. This is the first project completed by Satore Tech since it was launched in June of this year.

The VR experience placed users at the heart of the Orchestra during the final 10 minutes of Mahler’s Third Symphony, which was filmed live in October 2017. The stitching project was completed by creative technologist/SFX/VR expert Sergio Ochoa, who leads Satore Tech. The company used SGO Mistika technology to post the project, software Ochoa helped develop during his time at SGO, where he was creative technologist and CEO of the company’s French division.

Luke Ritchie, head of innovation and partnerships at the Philharmonia Orchestra, says, “We’ve been working with VR since 2015, it’s a fantastic technology to connect new audiences with the Orchestra in an entirely new way. VR allows you to sit at the heart of the Orchestra, and our VR experiences can transform audiences’ preconceptions of orchestral performance — whether they’re new to classical music or are a die-hard fan.”

It was a technically demanding project for Satore Tech to stitch together, as the concert was filmed live, in 360 degrees, with no retakes using Google’s latest Jump Odyssey VR camera. This meant that Ochoa was working with four to five different depth layers at any one time. The amount of fast movement also meant the resolution of the footage needed to be up-scaled from 4K to 8K to ensure it was suitable for the VR platform.

“The guiding principle for Satore Tech is we aspire to constantly push the boundaries, both in terms of what we produce and the technologies we develop to achieve that vision,” explains Ochoa. “It was challenging given the issues that arise with any live recording, but the ambition and complexity is what makes it such a very suitable initial project for us.”

Satore Tech’s next project is currently in development in Mexico, using experimental volumetric capture techniques with some of the world’s most famous dancers. It is slated for release early next year.

Our SIGGRAPH 2018 video coverage

SIGGRAPH is always a great place to wander around and learn about new and future technology. You can see amazing visual effects reels and hear from the artists themselves how the work was created. You can get demos of new products, and you can immerse yourself in a completely digital environment. In short, SIGGRAPH is educational and fun.

If you weren’t able to make it this year, or attended but couldn’t see it all, we would like to invite you to watch our video coverage from the show.

SIGGRAPH 2018

DeepMotion’s Neuron cloud app trains digital characters using AI

DeepMotion has launched presales of DeepMotion Neuron, the first tool for completely procedural, physical character animation. The cloud application trains digital characters to develop physical intelligence using advanced artificial intelligence (AI), physics and deep learning. With guidance and practice, digital characters can now achieve adaptive motor control just as humans do, in turn allowing animators and developers to create more lifelike and responsive animations than those possible using traditional methods.

DeepMotion Neuron is a behavior-as-a-service platform that developers can use to upload and train their own 3D characters, choosing from hundreds of interactive motions available via an online library. Neuron will enable content creators to tell more immersive stories by adding responsive actors to games and experiences. By handling large portions of technical animation automatically, the service also will free up time for artists to focus on expressive details.

DeepMotion Neuron is built on techniques identified by researchers from DeepMotion and Carnegie Mellon University who studied the application of reinforcement learning to the growing domain of sports simulation, specifically basketball, where real-world human motor intelligence is at its peak. After training and optimization, the researchers’ characters were able to perform interactive ball-handling skills in real-time simulation. The same technology used to teach digital actors how to dribble can be applied to any physical movement using Neuron.

DeepMotion Neuron’s cloud platform is slated for release in Q4 of 2018. During the DeepMotion Neuron prelaunch, developers and animators can register on the DeepMotion website for early access and discounts.

Dell EMC’s ‘Ready Solutions for AI’ now available

Dell EMC has made available its new Ready Solutions for AI, with specialized designs for Machine Learning with Hadoop and Deep Learning with Nvidia.

Dell EMC Ready Solutions for AI eliminate the need for organizations to individually source and piece together their own solutions. They offer a Dell EMC-designed and validated set of best-of-breed technologies for software — including AI frameworks and libraries — with compute, networking and storage. Dell EMC’s portfolio of services includes consulting, deployment, support and education.

Dell EMC’s Data Science Provisioning Portal offers an intuitive GUI that provides self-service access to hardware resources and a comprehensive set of AI libraries and frameworks, such as Caffe and TensorFlow. This reduces the steps it takes to configure a data scientist’s workspace to five clicks. Ready Solutions for AI’s distributed, scalable architecture offers the capacity and throughput of Dell EMC Isilon’s All-Flash scale-out design, which can improve model accuracy with fast access to larger data sets.

Dell EMC Ready Solutions for AI: Deep Learning with Nvidia solutions are built around Dell EMC PowerEdge servers with Nvidia Tesla V100 Tensor Core GPUs. Key features include Dell EMC PowerEdge R740xd and C4140 servers with four Nvidia Tesla V100 SXM2 Tensor Core GPUs; Dell EMC Isilon F800 All-Flash Scale-out NAS storage; and Bright Cluster Manager for Data Science in combination with the Dell EMC Data Science Provisioning Portal.

Dell EMC Ready Solutions for AI: Machine Learning with Hadoop includes an optimized solution stack, along with data science and framework optimization to get up and running quickly, and it allows expansion of existing Hadoop environments for machine learning.

Key features include Dell EMC PowerEdge R640 and R740xd servers; Cloudera Data Science Workbench for self-service data science for the enterprise; the Apache Spark open source unified data analytics engine; and the Dell EMC Data Science Provisioning Engine, which provides preconfigured containers that give data scientists access to the Intel BigDL distributed deep learning library on the Spark framework.

New Dell EMC Consulting services are available to help customers implement and operationalize the Ready Solution technologies and AI libraries, and scale their data engineering and data science capabilities. Dell EMC Education Services offers courses and certifications on data science and advanced analytics and workshops on machine learning in collaboration with Nvidia.

Composer and sound mixer Rob Ballingall joins Sonic Union

NYC-based audio studio Sonic Union has added composer/experiential sound designer/mixer Rob Ballingall to its team. He will be working out of both Sonic Union’s Bryant Park and Union Square locations. Ballingall brings with him experience in music and audio post, with an emphasis on the creation of audio for emerging technology projects, including experiential and VR.

Ballingall recently created audio for an experiential in-theatre commercial for Mercedes-Benz Canada, using Dolby Atmos, D-Box and 4DX technologies. In addition, for National Geographic’s One Strange Rock VR experience, directed by Darren Aronofsky, Ballingall created audio for custom VR headsets designed in the style of astronaut helmets, which contained a pinhole projector to display visuals on the inside of the helmet’s visor.

Formerly at Nylon Studios, Ballingall also composed music on brand campaigns for clients such as Ford, Kellogg’s and Walmart, and provided sound design/engineering on projects for AdCouncil and Resistance Radio for Amazon Studios and The Man in the High Castle, which collectively won multiple Cannes Lion, Clio and One Show awards, as well as garnering two Emmy nominations.

Born in London, Ballingall immigrated to the US eight years ago to seek a job as a mixer, assisting numerous Grammy Award-winning engineers at NYC’s Magic Shop recording studio. Having studied music composition and engineering from high school to college in England, he soon found his niche offering compositional and arranging counterpoints to sound design, mix and audio post for the commercial world. Following stints at other studios, including Nylon Studios in NYC, he transitioned to Sonic Union to service agencies, brands and production companies.

HP intros new entry-level HP Z lineup

HP is offering new entry-level workstations in its HP Z lineup, which is designed to help accelerate performance and secure pros’ workflows.

The HP Z2 Mini, HP Z2 Small Form Factor and HP Z2 Tower, as well as the HP EliteDesk 800 Workstation Edition, feature built-in end-to-end HP security services, providing protection from evolving malware threats with self-healing BIOS and an HP endpoint security controller. Users get protection from hardware-enforced security solutions, including HP Sure Start Gen4 and HP Sure Run, which help keep critical processes running, even if malware tries to stop them. Additionally, HP’s Manageability Kit Gen 2 manages multiple devices.

All HP Z2 workstations can now connect with Thunderbolt for fast device connections and offer an array of certifications for the apps pros are using in their day-to-day work lives. HP Performance Advisor is available to optimize software and drivers, and users can deploy Intel Xeon processors and ECC memory for added reliability. The customization, expandability, performance upgradeability and I/O options help future-proof HP Z workstation purchases.

Here are some details about the fourth-generation entry HP Z workstation family:

The HP Z2 Mini G4 workstation features what HP calls “next-level performance” in a small form factor (2.7 liters in total volume). Compared to the previous generation HP Z2 Mini, it offers two times more graphics power. Users can choose either the Nvidia Quadro P600 or Nvidia Quadro P1000 GPU. In addition, there is the option for AMD Radeon Pro WX4150 graphics.

Thanks to its size, users can mount it under a desk, behind a display or in a rack — up to 56 HP Z2 Mini workstations will fit in a standard 42U rack with the custom rackmount bracket accessory. With its flexible I/O, users can configure the system for connectivity of legacy serial ports, as well as support for up to six displays for peripheral and display connectivity needs. The HP Z2 Mini G4 comes with six-core Intel Xeon processors.

The HP Z2 Small Form Factor (SFF) G4 workstation offers 50 percent more processing power than the previous generation in the exact same compact size. The six-core CPU provides significant performance boosts. The HP Z2 SFF takes customization to the next level with flexible I/O options that free up valuable PCIe slots, while providing customization for legacy or specialized equipment, and for changing display needs.

The HP Z2 G4 SFF ships with four PCIe slots and dual M.2 storage slots. Its flexible I/O option enables users to customize networking, I/O or display needs without taking up PCIe slots or adding external adapters.

The HP Z2 Tower G4 workstation is designed for complex workloads like rendering with up to Ultra 3D graphics and the latest Intel Core or Intel Xeon processors. The HP Z2 tower can handle demanding 3D projects with over 60 percent more graphics power than the previous generation. With high clock speeds, users can get full, unthrottled performance, even with heavy workloads.

The HP EliteDesk 800 Workstation Edition targets users who want to upgrade to a workstation-class desktop with an integrated, ISV-certified application experience.

Designed for 2D/3D design, it is also out-of-the box optimized for leading VR engines and features the Nvidia GeForce GTX 1080.

The HP Z2 Mini, HP Z2 Small Form Factor and HP Z2 Tower are all expected to be available later this month for starting prices of $799, $749 and $769, respectively; the HP EliteDesk 800 is also expected later this month, starting at $642, including Nvidia Quadro P400 graphics.

Lenovo intros 15-inch VR-ready ThinkPad P52

Lenovo’s new ThinkPad P52 is a 15-inch, VR-ready and ISV-certified mobile workstation featuring an Nvidia Quadro P3200 GPU. The all-new hexa-core Intel Xeon CPU doubles the memory capacity to 128GB and increases PCIe storage. Lenovo says the ThinkPad excels in animation and visual effects project storage, the creation of large models and datasets, and realtime playback.

“More and more, M&E artists have the need to create on-the-go,” reports Lenovo senior worldwide industry manager for M&E Rob Hoffmann. “Having desktop-like capabilities in a 15-inch mobile workstation allows artists to remain creative anytime, anywhere.”

The workstation targets traditional ISV workflows, as well as AR and VR content creation or deployment of mobile AI. Lenovo points to Virtalis, a VR and advanced visualization company, as an example of who might take advantage of the workstation.

“Our virtual reality solutions help clients better understand data and interact with it. Being able to take these solutions mobile with the ThinkPad P52 gives us expanded flexibility to bring the technology to life for clients in their unique environments,” says Steve Carpenter, head of solutions development for Virtalis. “The ThinkPad P52 powering our Virtalis Visionary Render software is perfect for engineering and design professionals looking for a portable solution to take their first steps into the endless possibilities of VR.”

The P52 also will feature a 4K UHD display with 400nits, 100% Adobe color gamut and 10-bit color depth. There are dual USB-C Thunderbolt ports supporting the display of 8K video, allowing users to take advantage of the ThinkPad Thunderbolt Workstation Dock.

The ThinkPad P52 will be available later this month.

Combining 3D and 360 VR for The Cabiri: Anubis film

Whether you are using 360 VR or 3D, both allow audiences to feel part of the action and emotion of a film narrative or performance, but combine the two and you can create a highly immersive experience that brings the audience directly into the “reality” of the scenes.

This is exactly what film producers and directors Fred Beahm and Bogdan Darev have done in The Cabiri: Anubis, a 3D/360VR performance art film showing at the Seattle International Film Festival’s (SIFF) VR Zone on May 18 through June 10.

The Cabiri is a Seattle-based performance art group that creates stylistic and athletic dance and entertainment routines at theater venues throughout North America. The 3D/360VR film can now be streamed from the Pixvana app to the new Oculus Go headset, which is specifically designed for 3D and 360 streaming and viewing.

“As a director working in cinema to create worlds where reality is presented in highly stylized stories, VR seemed the perfect medium to explore. What took me by complete surprise was the emotional impact, the intimacy and immediacy the immersive experience allows,” says Darev. “VR is truly a medium that highlights our collective responsibility to create original and diverse content through the power of emerging technologies that foster curiosity and the imagination.”

“Other than a live show, 3D/360VR is the ideal medium for viewers to experience the rhythmic movement in The Cabiri’s performances. Because they have the feeling of being within the scene, the viewers become so engaged in the experience that they feel the emotional and dramatic impact,” explains Beahm, who is also the cinematographer, editor and post talent for The Cabiri film.

Beahm has a long list of credits to his name, and a strong affinity for the post process that requires a keen sense of the look and feel a director or producer is striving to achieve in a film. “The artistic and technical functions of the post process take a film from raw footage to a good result, and with the right post artist and software tools to a great film,” he says. “This is why I put a strong emphasis on the post process, because along with a great story and cinematography, it’s a key component of creating a noteworthy film. VR and 3D require several complex steps, and you want to use tools that simplify the process so you can save time, create high-quality results and stay within budget.”

For The Cabiri film, he used the Kandao Obsidian S camera, filming in 6K 3D360, then SGO’s Mistika VR for its stereo 3D optical-flow stitching. He edited in Adobe’s Premiere Pro CC 2018 and finished in Assimilate’s Scratch VR, using its 3D/360VR painting, tracking and color grading tools. He then delivered in 4K 3D360 to Pixvana’s Spin Studio.

“Scratch VR is fast. For example, with the VR transform-and-vector paint tools I can quickly paint out the nadir, or easily delete unwanted artifacts like portions of a camera rig and wires, or even a person. It’s also easy to add in graphics and visual effects with the built-in tracker and compositing tools. It’s also the only software I use that renders content in the background while you continue working on your project. Another advantage is that Scratch VR will automatically connect to an Oculus headset for viewing 3D and 360,” he continues. “During our color grading session, Bogdan would wear an Oculus Rift headset and give me suggestions about changes I should make, such as saturation and hues, and I could quickly do these on the fly and save the versions for comparison.”

Behind the Title: Spacewalk Sound’s Matthew Bobb

NAME: Matthew Bobb

COMPANY: Pasadena, California’s SpaceWalk Sound 

CAN YOU DESCRIBE YOUR COMPANY?
We are a full-service audio post facility specializing in commercials, trailers and spatial sound for virtual reality (VR). We have a heavy focus on branded content with clients such as Panda Express and Biore and studios like Warner Bros., Universal and Netflix.

WHAT’S YOUR JOB TITLE?
Partner/Sound Supervisor/Composer

WHAT DOES THAT ENTAIL?
I’ve transitioned more into the sound supervisor role. We have a fantastic group of sound designers and mixers that work here, plus a support staff to keep us on track and on budget. Putting my faith in them has allowed me to step away from the small details and look at the bigger picture on every project.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
We’re still a small company, so while I mix and compose a little less than before, I find my days being filled with keeping the team moving forward. Most of what falls under my role is approving mixes, prepping for in-house clients the next day, sending out proposals and following up on new leads. A lot of our work is short form, so projects are in and out the door pretty fast — sometimes it’s all in one day. That means I always have to keep one eye on what’s coming around the corner.

The Greatest Showman 360

WHAT’S YOUR FAVORITE PART OF THE JOB?
Lately, it has been showing VR to people who have never tried it or have had a bad first experience, which is very unfortunate since it is a great medium. However, that all changes when you see someone come out of a headset exclaiming, “Wow, that is a game changer!”

We have been very fortunate to work on some well-known and loved properties and to have people get a whole new experience out of something familiar is exciting.

WHAT’S YOUR LEAST FAVORITE?
Dealing with sloppy edits. We have been pushing our clients to bring us into the fold as early as v1 to make suggestions on the flow of each project. I’ll keep my eye tuned to the timing of the dialog in relation to the music and effects, while making sure attention has been paid to the pacing of the edit to the music. I understand that the editor and director will have their attention elsewhere, so I’m trying to bring up potential issues they may miss early enough that they can be addressed.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I would say 3pm is pretty great most days. I should have accomplished something major by this point, and I’m moments away from that afternoon iced coffee.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I’d be crafting the ultimate sandwich, trying different combinations of meats, cheeses, spreads and veggies. I’d have a small shop, preferably somewhere tropical. We’d be open for breakfast and lunch, close around 4pm, and then I’d head to the beach to sip on Russell’s Reserve Small Batch Bourbon as the sun sets. Yes, I’ve given this some thought.

WHY DID YOU CHOOSE THIS PROFESSION?
I came from music but quickly burned out on the road. Studio life suited me much more, except all the music studios I worked at seemed to lack focus, or at least the clientele lacked focus. I fell into a few sound design gigs on the side and really enjoyed the creativity and reward of seeing my work out in the world.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We had a great year working alongside SunnyBoy Entertainment on VR content for the Hollywood studios including IT: Float, The Greatest Showman 360, Annabelle Creation: Bee’s Room and Pacific Rim: Inside the Uprising 360. We also released our first piece of interactive content, IT: Escape from Pennywise, for Gear VR and iOS.

Most recently, I worked on Scoring The Last Jedi: A 360 VR Experience for Star Wars: The Last Jedi. It takes Star Wars fans on a VIP behind-the-scenes intergalactic expedition, giving them a virtual tour of The Last Jedi’s production and soundstages and dropping them face-to-face with Academy Award-winning film composer John Williams and film director Rian Johnson.

Personally, I got to compose two Panda Express commercials, which was a real treat considering I sustained myself through college on a healthy diet of orange chicken.

It: Float

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
It: Float was very special. It was exciting to take an existing property that was not only created by Stephen King but was also already loved by millions of people, and expand on it. The experience brought the viewer under the streets and into the sewers with Pennywise the clown. We were able to get very creative with spatial sound, using his voice to guide you through the experience without being able to see him. You never knew where he was lurking. The 360 audio really ramped up the terror! Plus, we had a great live activation at San Diego Comic Con where thousands of people came through and left pumped to see a glimpse of the film’s remake.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
It’s hard to imagine my life without these three: Spotify Premium, no ads! Philips Hue lights for those vibes. Lastly, Slack keeps our office running. It’s our not-so-secret weapon.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I treat social media as an escape. I’ll follow The Onion for a good laugh, or Anthony Bourdain to see some far flung corner of earth I didn’t know about.

DO YOU LISTEN TO MUSIC WHEN NOT MIXING OR EDITING?
If I’m doing busy work, I prefer something instrumental like Eric Prydz, Tycho, Bonobo — something with a melody and a groove that won’t make me fall asleep, but isn’t too distracting either.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
The best part about Los Angeles is how easy it is to escape Los Angeles. My family will hit the road for long weekends to Palm Springs, Big Bear or San Diego. We find a good mix of active (hiking) and inactive (2pm naps) things to do to recharge.

The-Artery embraces a VR workflow for Mercedes spots

The-Artery founder and director Vico Sharabani recently brought together an elite group of creative artists and skilled technologists to create a cross-continental VR production pipeline for Mercedes-Benz’s Masters tournament brand campaign called “What Makes Us.”

Emmy-nominated cinematographer Paul Cameron (Westworld) and VFX supervisor Rob Moggach co-directed the project, which features a series of six intense broadcast commercials — including two fully CGI spots that were “shot” in a completely virtual world.

The agency and The-Artery team, including Vico Sharabani (third from the right).

These two 30-second commercials, First and Can’t, are the first to be created using a novel, realtime collaborative VR software application called Nu Design with Atom View technology. While in Los Angeles, Cameron worked within a virtual world, choosing camera bodies and lenses inside the space that allowed him to “shoot” for POV and angles that would have taken weeks to complete in the real world.

The software enabled him to grab and move the camera while all artistic camera direction was recorded virtually and used for final renders. This allowed both Sharabani, who was in NYC, and Moggach, who was in Toronto, to interact live and in realtime as if they were standing together on a physical set.

We reached out to Sharabani, Cameron and Moggach for details on VR workflow, and how they see the technology impacting production and creativity.

How did you come to know about Nurulize and the Nu Design Atom View technology?
Vico Sharabani: Scott Metzger, co-founder of Nurulize, is a long-time friend, colleague and collaborator. We have all been supporting each other’s careers and initiatives, so as soon as the alpha version of Nu Design was operational, we jumped on the opportunity of deploying it in real production.

How does the ability to shoot in VR change the production paradigm moving forward?
Rob Moggach: From scout to pre-light to shoot, through to dailies and editorial, it allows us to collaborate on digital productions in a traditional filmmaking process with established roles and procedures that are known to work.

Instead of locking animated productions into a rigid board, previs, animation workflow, a director can make decisions on editorial and find unexpected moments in the capture that wouldn’t necessarily be boarded and animated otherwise. Being able to do all of this without geographical restriction and still feel like you’re together in the same room is remarkable.

What types of projects are ideal for this new production pipeline?
Sharabani: The really beautiful thing for The-Artery, as a first-time user of this technology, is to prove that this workflow can be used by companies like us on every project, and not only in films by Steven Spielberg and James Cameron. The obvious ideal fit is for projects like fully CGI productions; previs of big CGI environments that need to be considered in photography; virtual previs of scouted locations in remote or dangerous places; blocking of digital sets on pre-existing greenscreen or partially built stages; and multiple remote creative teams that need to share a vision and input.

What are the specific benefits?
Moggach: With a virtual pipeline, we are able to…
1) Work much faster than traditional previs to quickly capture multiple camera setups.
2) Visualize environments and CGI with a camera in-hand to find shots you didn’t know were there on screen.
3) Interact closely regardless of location and truly feel together in the same place.
4) Use known filmmaking processes, allowing us to capitalize on established wisdom and experience.

What impacts will it have to creativity?
Paul Cameron: For me, the VR workflow had a great impact on the overall creative approach for both commercials. It enabled me to go into the environment and literally grab a camera, move around the car, be in the middle of the car, pull the camera over the car. Basically, it allowed me to put the camera in places I always wanted to put the camera, but where it would take hours to get cranes or scaffolding for different positions.

The other fascinating thing is that you are able to scale the set up and down. For instance, I was able to scale the car down to 25% of its normal size and make a very drastic camera move over the car, handheld with a VR camera, and with the combination of slowing it down and smoothing it a bit, we were able to design camera moves that were very organic and very natural.

I think it also allowed me to achieve a greater understanding of the set size and space, the geometry of the set and the relationship of the car to the set. In the past, it would be a process of going through a wireframe, waiting for the rendering — in this case, the car — and programming camera moves. It basically helps with conceptualization of camera moves and shot design in a new way for me.

Also being a director of photography, it is very empowering to be able to grab the camera literally with a controller and move through that space. Again, it just takes a matter of seconds to make very dramatic camera moves, whereas even on set it could take upwards of an hour or two to move a technocrane and actually get a feel for that shot, so it is very empowering overall.

What does it now allow directors to achieve?
Cameron: One of the better features of the VR workflow is that you can actually teleport yourself around the set while you are inside of it. So, basically, you picture yourself inside this set, and with a controller in each hand you have the ability to teleport yourself to different perspectives: in this case, around the automobile and the wireframe geometry of the set. It gives you a very good idea of the perspectives from different angles, and you can move around really quickly.

The other thing that I found fascinating was that not only can you move around this set, in this case, I was able to fly… upwards of about 150 feet and look down on the set. This was, while you are immersed in the VR world, quite intoxicating. You are literally flying and hovering above the set, and it kind of feels like you are standing on a beam with no room to move forward or backward without falling.

Paul Cameron

So the ability to move around in an endless set perspective-wise and teleport yourself around and above the set looking down, was amazing. In the case of the Can’t commercial, I was able to teleport on the other side of the wind turbine and look back at the automobile.

Although we had the 3D CADs of sets in the past, and we were able to travel around and look at camera positions, somehow the immediacy and the power of being in the VR environment with the two controllers was quite powerful. I think for one of the sessions I had the glasses on for almost four hours straight. We recorded multiple camera moves, and everybody was quite shocked that I was in the environment for that long. But for me, it was like being on a set, almost like a pre-pre-light or something, where I was able to have my space as a director and move around and get to see my angles and design my shots.

What other tools did you use?
Sharabani: Houdini for CG, Redshift (with support from GridMarkets) for rendering, Nuke for compositing, Flame for finishing, Resolve for color grading and Premiere for editing.

NextComputing, Z Cam, Assimilate team on turnkey VR studio

NextComputing, Z Cam and Assimilate have teamed up to create a complete turnkey VR studio. The Foundation VR Studio is designed to cover all aspects of the immersive production process and help creatives be more creative.

According to Assimilate CEO Jeff Edson, “Partnering with Z Cam last year was an obvious opportunity to bring together the best of integrated 360 cameras with a seamless workflow for both live and post productions. The key is to continue to move the market from a technology focus to a creative focus. Integrated cameras took the discussions up a level of integration away from the pieces. There have been endless discussions regarding capable platforms for 360; the advantage we have is we work with just about every computer maker as well as the component companies, like CPU and GPU manufacturers. These are companies that are willing to create solutions. Again, this is all about trying to help the market focus on the creative as opposed to debates about the technology, and letting creative people create great experiences and content. Getting the technology out of their way and providing solutions that just work helps with this.”

The companies are offering two configurations: the Foundation VR Studio and the Power VR Studio.

The Foundation VR Studio, which costs $8,999 and is available now, includes:
• NextComputing Edge T100 workstation
  - CPU: 6-core Intel Core i7-8700K 3.7GHz processor
  - Memory: 16GB DDR4 2666MHz RAM
• Z Cam S1 6K professional VR camera
• Z Cam WonderStitch software for offline stitching and profile creation
• Assimilate Scratch VR Z post software and live streaming for Z Cam

Then there is the Power VR Studio, for $10,999, which is also available now. It includes:
• NextComputing Edge T100 workstation
  - CPU: 10-core Intel Core i9-7900X 3.3GHz processor
  - Memory: 32GB DDR4 2666MHz RAM
• Z Cam S1 6K professional VR camera
• Z Cam WonderStitch software for offline stitching and profile creation
• Assimilate Scratch VR Z post software and live streaming for Z Cam

These companies will be at NAB demoing the systems.

 

 

GTC embraces machine learning and AI

By Mike McCarthy

I had the opportunity to attend GTC 2018, Nvidia‘s 9th annual technology conference in San Jose this week. GTC stands for GPU Technology Conference, and GPU stands for graphics processing unit, but graphics makes up a relatively small portion of the show at this point. The majority of the sessions and exhibitors are focused on machine learning and artificial intelligence.

And the majority of the graphics developments are centered around analyzing imagery, not generating it. Whether that is classifying photos on Pinterest or giving autonomous vehicles machine vision, it is based on the capability of computers to understand the content of an image. Now DriveSim, Nvidia’s new simulator for virtually testing autonomous drive software, dynamically creates imagery for the other system in the Constellation pair of servers to analyze and respond to, but that is entirely machine-to-machine imagery communication.

The main exception to this non-visual usage trend is Nvidia RTX, which allows raytracing to be rendered in realtime on GPUs. RTX can be used through Nvidia’s OptiX API, as well as Microsoft’s DirectX RayTracing API, and eventually through the open source Vulkan cross-platform graphics solution. It integrates with Nvidia’s AI Denoiser to use predictive rendering to further accelerate performance, and can be used in VR applications as well.

Nvidia RTX was first announced at the Game Developers Conference last week, but the first hardware to run it was just announced here at GTC, in the form of the new Quadro GV100. This $9,000 card replaces the existing Pascal-based GP100 with a Volta-based solution. It retains the same PCIe form factor, the quad DisplayPort 1.4 outputs and the NV-Link bridge to pair two cards at 200GB/s, but it jumps the GPU RAM per card from 16GB to 32GB of HBM2 memory. The GP100 was the first Quadro offering since the K6000 to support double-precision compute processing at full speed, and the increase from 3,584 to 5,120 CUDA cores should provide a 40% increase in performance, before you even look at the benefits of the 640 Tensor Cores.

Hopefully, we will see simpler versions of the Volta chip making their way into a broader array of more budget-conscious GPU options in the near future. The fact that the new Nvidia RTX technology is stated to require Volta-architecture GPUs leads me to believe that they must be right on the horizon.

Nvidia also announced a new all-in-one GPU supercomputer — the DGX-2 supports twice as many Tesla V100 GPUs (16) with twice as much RAM each (32GB) compared to the existing DGX-1. This provides 81920 CUDA cores addressing 512GB of HBM2 memory, over a fabric of new NV-Link switches, as well as dual Xeon CPUs, Infiniband or 100GbE connectivity, and 32TB of SSD storage. This $400K supercomputer is marketed as the world’s largest GPU.

Nvidia and their partners had a number of cars and trucks on display throughout the show, showcasing various pieces of technology that are being developed to aid in the pursuit of autonomous vehicles.

Also on display in the category of “actually graphics related” was the new Max-Q version of the mobile Quadro P4000, which is integrated into PNY’s first mobile workstation, the Prevail Pro. Besides supporting professional VR applications, the HDMI and dual DisplayPort outputs allow a total of three external displays up to 4K each. It isn’t the smallest or lightest 15-inch laptop, but it is the only system under 17 inches I am aware of that supports the P4000, which is considered the minimum spec for professional VR implementation.

There are, of course, lots of other vendors exhibiting their products at GTC. I had the opportunity to watch 8K stereo 360 video playing off of a laptop with an external GPU. I also tried out the VRHero 5K Plus enterprise-level HMD, which brings the VR experience to a whole other level. Much more affordable is TP-Cast’s $300 wireless upgrade for Vive and Rift HMDs, the first of many untethered VR solutions. HTC has also recently announced the Vive Pro, which will be available in April for $800. It increases the resolution by 1/3 in both dimensions to 2880×1600 total, and moves from HDMI to DisplayPort 1.2 and USB-C. Besides VR products, they also had all sorts of robots in various forms on display.

Clearly the world of GPUs has extended far beyond the scope of accelerating computer graphics generation, and Nvidia is leading the way in bringing massive information processing to a variety of new and innovative applications. And if that leads us to hardware that can someday raytrace in realtime at 8K in VR, then I suppose everyone wins.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Supersphere offering flypacks for VR/360 streaming

Supersphere, a VR/360° production studio, will be at NAB this year debuting 12G glass-to-glass flypacks optimized for live VR/360° streaming. These multi-geometry (mesh/rectilinear/equirectangular) flypacks can handle 360°, 180°, 4K or HD production and seamlessly mix and match each geometry. They also include built-in VDN (video distribution network) encoding and delivery for live streaming to any platform or custom player.

“Live music, both in streaming and in ticket sales, has posted consistent growth in the US and worldwide. It’s a multibillion-dollar industry and only getting bigger. We are investing in the immersive streaming market because we see that trend reflected in our client requests,” explains Supersphere founder/EP Wilson. “Clients always want to provide audiences with the most engaging experience possible. An immersive environment is the way to do it.”

Each flypack comes standard with Z Cam K1 Pro 180° cameras and Z Cam S1 Pro 360° cameras, and is customizable to any camera as productions demand. They are also equipped with Blackmagic’s latest ATEM Production Studio 4K live production switchers to facilitate multi-camera live production across a range of video sources. The included Assimilate Scratch VR Z enables realtime geometry, stitching, color grading, finishing and ambisonic audio. The system also offers fully integrated transcoding and delivery — Teleos Media’s VDN (Video Distribution Network) delivers immersive experiences to any device with an instant-start experience, sustained 16Mbps at high frame rates and 4K + VR resolutions. This allows clients to easily build custom 360° video players on their websites or apps as a destination for live-streamed content, in addition to streaming directly to YouTube, Facebook and other popular platforms.

“These flypacks provide an incredibly robust workflow that takes the complexity out of immersive live production — capable of handling the data required for stunning high-resolution projects in one flexible end-to-end package,” says Wilson. “Plus with Teleos’ VDN capabilities, we make it easy for any client to live stream high-end content directly to whatever device or app best suits their customers’ needs, including the option to quickly build custom, fully integrated 360° live players.”

Z Cam, Assimilate reduce price of S1 VR camera/Scratch VR bundle

The Z Cam S1 VR camera/WonderStitch/Assimilate Scratch VR Z bundle, an integrated VR production workflow offering, is now $3,999, down from $4,999.

The Z Cam S1/Scratch VR Z bundle provides acquisition via Z Cam’s S1 pro VR camera, stitching via the WonderStitch software and a streamlined VR post workflow via Assimilate’s realtime Scratch VR Z tools.

Here are some details:
If streaming live 360 from the Z Cam S1 through Scratch VR Z, users can take advantage of realtime features such as inserting/compositing graphics/text overlays, including animations, and keying for elements like greenscreen — all streaming live to Facebook Live 360.

Scratch VR Z can be used to do live camera preview prior to shooting with the S1. During the shoot, Scratch VR Z is used for dailies and data management, including metadata; the camera connects directly to the PC via a high-speed Ethernet port. Stitching of the imagery is done in Z Cam’s WonderStitch, now integrated into Scratch VR Z, then comes traditional editing, color grading, compositing, multichannel audio from the S1 or adding external ambisonic sound, finishing and then publishing to all final online or stand-alone 360 platforms.

The Z Cam S1/Scratch VR Z bundle is available now.

Behind the Title: Light Sail VR’s Matthew Celia

NAME: Matthew Celia

COMPANY: LA’s Light Sail VR (@lightsailvr)

CAN YOU DESCRIBE YOUR COMPANY?
Light Sail VR is a virtual reality production company specializing in telling immersive narrative stories. We’ve built a strong branded content business over the last two years working with clients such as Google and GoPro, and studios like Paramount and ABC.

Whether it’s 360 video, cinematic VR or interactive media, we’ve built an end-to-end pipeline to go from script to final delivery. We’re now excited to be moving into creating original IP and more interactive content that fuses cinematic live-action film footage with game engine mechanics.

WHAT’S YOUR JOB TITLE?
Creative Director and Managing Partner

WHAT DOES THAT ENTAIL?
A lot! We’re a small boutique shop so we all wear many hats. First and foremost, I am a director and work hard to deliver a compelling story and emotional connection to the audience for each one of our pieces. Story first is our motto, and I try and approach every technical problem with a creative solution. Figuring out execution is a large part of that.

In addition to the production side, I also carry a lot of the technical responsibilities in post production, such as keeping our post pipeline humming and inventing new workflows. Most recently, I have been dabbling in programming interactive cinema using the Unity game engine.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I am in charge of washing the lettuce when we do our famous “Light Sail VR Sandwich Club” during lunch. Yes, you get fed for free if you work with us, and I make an amazing Italian sandwich.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Hard to say. I really like what I do. I like being on set and working with actors because VR is such a great medium for them to play in, and it’s exciting to collaborate with such creative and talented people.

National Parks Service

WHAT’S YOUR LEAST FAVORITE?
Render times and computer crashes. My tech life is in constant beta. Price we pay for being on the bleeding edge, I guess!

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I like the early morning because it is quiet, my brain is fresh, and I haven’t yet had 20 people asking something of me.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Probably the same, but at a large company. If I left the film business I’d probably teach. I love working with kids.

WHY DID YOU CHOOSE THIS PROFESSION?
I feel like I’ve wanted to be a filmmaker since I could walk. My parents like to drag out the home movies of me asking to look in my dad’s VHS video camera when I was 4. I spent most of high school in the theater and most people assumed I would be an actor. But senior year I fell in love with film when I shot and cut my first 16mm reversal stock on an old reel-to-reel editing machine. The process was incredibly fun and rewarding and I was hooked. I only recently discovered VR, but in many ways it feels like the right path for me because I think cinematic VR is the perfect intersection of filmmaking and theater.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
On the branded side, we just finished up two tourism videos. One was for the National Parks Service, a 360 tour of the Channel Islands with Jordan Fisher, and the other was a 360 piece for Princess Cruises. VR is really great for showing people the world. The last few months of my life have been consumed by Light Sail VR’s first original project, Speak of the Devil.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Speak of the Devil is at the top of that list. It’s the first live-action interactive project I’ve worked on and it’s massive. Crafted using the GoPro Odyssey camera in partnership with Google Jump, it features over 50 unique locations, 13 different endings and is currently taking up about 80TB of storage (and counting). It is the largest project I’ve worked on to date, and we’ve done it all on a shoestring budget thanks to the gracious contributions of talented creative folks who believed in our vision.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My instant-read grill meat thermometer, my iPhone and my Philips Hue bulbs. Seriously, if you have a baby, it’s a life saver being able to whisper, “Hey, Siri, turn off the lights.”

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I’m really active on several Facebook groups related to 360 video production. You can get a lot of advice and connect directly with vendors and software engineers. It’s a great community.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
I tend to pop on some music when I’m doing repetitive mindless tasks, but when I have to be creative or solve a tough tech problem, the music is off so that I can focus. My favorite music to work to tends to be Dave Matthews Band live albums. They get into 20-minute long jams and it’s great.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
De-stressing is really hard when you own your own company. I like to go walking, but if that doesn’t work, I’ll try diving into some cooking for my family, which forces me to focus on something not work related. I tend to feel better after eating a really good meal.

Rogue takes us on VR/360 tour of Supermodel Closets

Rogue is a NYC-based creative boutique that specializes in high-end production and post for film, advertising and digital. Since its founding two years ago, executive creative director Alex MacLean and his team have produced a large body of work, providing color grading, finishing and visual effects for clients such as HBO, Vogue, Google, Vice, Fader and more. For the past three years, MacLean has also been at the forefront of VR/360 content for narratives and advertising.

MacLean recently wrapped up post production on four five-minute episodes of 360-degree tours of Supermodel Closets. The series is a project of Conde Nast Entertainment and Vogue for Vogue’s 125th anniversary. If you’re into fashion, this VR tour gives you a glimpse at what supermodels wear in their daily lives. Viewers can look up, down and all around to feel immersed in the closet of each model as she shows her favorite fashions and shares the stories behind her most prized pieces.

 

Tours include the closets of Lily Aldridge, Cindy Crawford, Kendall Jenner and Amber Valletta.

MacLean worked with director Julina Tatlock, who is a co-founder and CEO of 30 Ninjas, a digital entertainment company that develops, writes and produces VR, multi-platform and interactive content. Rogue and 30 Ninjas worked together to determine the best workflow for the series. “I always think it’s best practice to collaborate with the directors, DPs and/or production companies in advance of a VR shoot to sort out any technical issues and pre-plan the most efficient production process from shoot to edit, stitching through all the steps of post-production,” reports MacLean. “Foresight is everything; it saves a lot of time, money, and frustration for everyone, especially when working in VR, as well as 3D.”

According to MacLean, they worked with a new camera format, the YI Halo camera, which is designed for professional VR data acquisition. “I often turn to the Assimilate team to discuss the format issues because they always support the latest camera formats in their Scratch VR tools. This worked well again because I needed to define an efficient VR and 3D workflow that would accommodate the conforming, color grading, creating of visual effects and the finishing of a massive amount of data at 6.7K x 6.7K resolution.”

 

The Post
“The post production process began by downloading 30 Ninjas’ editorial, stitched footage from the cloud to ingest into our MacBook Pro workstations to do the conform at 6K x 6K,” explains MacLean. “Organized data management is a critical step in our workflow, and Scratch VR is a champ at that. We were simultaneously doing the post for more than one episode, as well as other projects within the studio, so data efficiency is key.”

“We then moved the conformed 6.7K x 6.7K raw footage to our HP Z840 workstations to do the color grading, visual effects, compositing and finishing. You really need powerful workstations when working at this resolution and with this much data,” reports MacLean. “Spherical VR/360 imagery requires focused concentration, and then we’re basically doing everything twice when working in 3D. For these episodes, and for all VR/360 projects, we create a lat/long that breaks out the left eye and right eye into two spherical images. We then replicate the work from one eye to the next, and color correct any variances. The result is seamless color grading.

 

“We’re essentially using the headset as a creative tool with Scratch VR, because we can work in realtime in an immersive environment and see the exact results of work in each step of the post process,” he continues. “This is especially useful when doing any additional compositing, such as clean-up for artifacts that may have been missed or adding or subtracting data. Working in realtime eases the stress and time of doing a new composite of 360 data for the left eye and right eye 3D.”
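For readers unfamiliar with the per-eye “lat/long” split MacLean describes, the basic idea is easy to sketch in code. The snippet below is only an illustration, assuming a top/bottom (over/under) stereo equirectangular frame loaded as a numpy array; it is not Rogue’s Scratch VR pipeline, and the 6720-pixel dimensions are simply stand-ins for the article’s 6.7K x 6.7K footage.

```python
import numpy as np

def split_over_under(stereo_frame):
    """Split a top/bottom stereo equirectangular frame into two
    per-eye lat/long images (left eye on top is assumed here)."""
    half = stereo_frame.shape[0] // 2
    left_eye = stereo_frame[:half]    # top half of the sphere pair
    right_eye = stereo_frame[half:]   # bottom half
    return left_eye, right_eye

# Dummy 6.7K x 6.7K frame standing in for a conformed shot:
frame = np.zeros((6720, 6720, 3), dtype=np.uint8)
left, right = split_over_under(frame)
print(left.shape, right.shape)  # (3360, 6720, 3) for each 2:1 eye
```

Each eye then receives the same grade, with any variances corrected per eye, which is the replication step MacLean mentions.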

Playback of content in the studio is very important to MacLean and team, and he calls the choice of multiple headsets another piece of the VR/360 puzzle. “The VR/3D content can look different in each headset, so we need to determine a mid-point aesthetic look that displays well in each headset. We have our own playback black box that we use to preview the color grading and visual effects before committing to rendering. And then we do a final QC review of the content, and for these episodes we did so in Google Daydream (untethered), HTC Vive (tethered) and the Oculus Rift (tethered).”

MacLean sees rendering as one of their biggest challenges. “It’s really imperative to be diligent throughout all the internal and client reviews prior to rendering. It requires being very organized in your workflow from production through finishing, and a solid QC check. Content at 6K x 6K, VR/360 and 3D means extremely large files and numerous hours of rendering, so we want to restrict re-rendering as much as possible.”

Storage in the Studio: VFX Studios

By Karen Maierhofer

It takes talent and the right tools to generate visual effects of all kinds, whether it’s building breathtaking environments, creating amazing creatures or crafting lifelike characters cast in a major role for film, television, games or short-form projects.

Indeed, we are familiar with industry-leading content creation tools such as Autodesk’s Maya, Foundry’s Mari and more, which, when placed into the hands of creatives, result in pure digital magic. In fact, there is quite a bit of technological magic that occurs at visual effects facilities, including one kind in particular that may not have the inherent sparkle of modeling and animation tools but is just as integral to the visual effects process: storage. Storage solutions are the unsung heroes behind most projects, working behind the scenes to accommodate artists and keep their productive juices flowing.

Here we examine three VFX facilities and their use of various storage solutions and setups as they tackle projects large and small.

Framestore
Since it was founded in 1986, Framestore has placed its visual stamp on a plethora of Oscar-, Emmy- and British Academy Film Award-winning visual effects projects, including Harry Potter, Gravity and Guardians of the Galaxy. With increasingly more projects, Framestore expanded from its original UK location in London to North American locales such as Montreal, New York, Los Angeles and Chicago, handling films as well as immersive digital experiences and integrated advertisements for iconic brands, including Guinness, Geico, Coke and BMW.

Beren Lewis

As the company and its workload grew and expanded into other areas, including integrated advertising, so, too, did its storage needs. “Innovative changes, such as virtual-reality projects, brought on high demand for storage and top-tier performance,” says NYC-based Beren Lewis, CTO of advertising and applied technologies at Framestore. “The team is often required to swiftly accommodate multiple workflows, including stereoscopic 4K and VR.”

Without hesitation, Lewis believes storage is typically the most challenging aspect of technology within the VFX workflow. “If the storage isn’t working, then neither are the artists,” he points out. Furthermore, any issues with storage can potentially lead to massive financial implications for the company due to lost time and revenue.

According to Lewis, Framestore uses its storage solution — a Pixit PixStor General Parallel File System (GPFS) storage cluster using the NetApp E-Series hardware — for all its project data. This includes backups to remote co-location sites, video preprocessing, decompression, disaster recovery preparation, scalability and high performance for VFX, finishing and rendering workloads.

The studio moved all the integrated advertising teams over to the PixStor GPFS clusters this past spring. Currently, Framestore has five primary PixStor clusters using NetApp E-Series in use at each office in London, LA, Chicago and Montreal.

According to Lewis, Framestore partnered with Pixit Media and NetApp to take on increasingly complicated and resource-hungry VR projects. “This partnership has provided the global integrated advertising team with higher performance and nonstop access to data,” he says. “The Pixit Media PixStor software-defined scale-out storage solution running on NetApp E-Series systems brings fast, reliable data access for the integrated advertising division so the team can embrace performance and consistency across all five sites, take a cost-effective, simplified approach to disaster recovery and have a modular infrastructure to support multiple workflows and future expansion.”

BMW

Framestore selected its current solution after reviewing several major storage technologies. It was looking for a single namespace that was very stable, while providing great performance, but it also had to be scalable, Lewis notes. “The PixStor ticked all those boxes and provided the right balance between enterprise-grade hardware and support, and open-source standards,” he explains. “That balance allowed us to seamlessly integrate the PixStor into our network, while still maintaining many of the bespoke tools and services that we had developed in-house over the years, with minimum development time.”

In particular, the storage solution provides the required high performance so that the studio’s VFX, finishing and rendering workloads can all run “full-out with no negative effect on the finishing editors’ or graphic artists’ user experience,” Lewis says. “This is a game-changing capability for an industry that typically partitions off these three workloads to keep artists from having to halt operations. PixStor running on E-Series consolidates all three workloads onto a single IT infrastructure with streamlined end-to-end production of projects, which reduces both time to completion and operational costs, while both IT acquisition and maintenance costs are reduced.”

At Framestore, integrating storage into the workflow is simple. The first step after a project is green-lit is the establishment of a new file set on the PixStor GPFS cluster, where ingested footage and all the CG artist-generated project data will live. “The PixStor is at the heart of the integrated advertising storage workflow from start to finish,” Lewis says. Because the PixStor GPFS cluster serves as the primary storage for all integrated advertising project data, the division’s workstations, renderfarm, editing and finishing stations connect to the cluster for review, generation and storage of project content.

Prior to the move to PixStor/NetApp, Framestore had been using a number of different storage offerings. According to Lewis, they all suffered from the same issues in terms of scalability and degradation of performance under render load — and that load was getting heavier and more unpredictable with every project. “We needed a technology that scaled and allowed us to maintain a single namespace but not suffer from continuous slowdowns for artists due to renderfarm load during crunch times or project delivery.”

Geico

As Lewis explains, with the PixStor/NetApp solution, processing was running up to 270,000 IOPS (I/O operations per second), which was at least several times what Framestore’s previous infrastructure would have been able to handle in a single namespace. “Notably, the development workflow for a major theme-park ride was unhindered by all the VR preprocessing, while backups to remote co-location sites synched every two hours without compromising the artist, rendering or finishing workloads,” he says. “This provided a cost-effective, simplified approach to disaster recovery, and Framestore now has a fast, tightly integrated platform to support its expansion plans.”

To stay at the top of its game, Framestore is always reviewing new technologies, and storage is often part of that conversation. To this end, the studio plans to build on the success it has had with PixStor by expanding the storage to handle some additional editorial playback and render workloads using an all-Non-Volatile Memory Express (NVMe) flash tier. Other projects include a review of object storage technology for use as a long-term, off-premises storage target for archival data.

Without question, the industry’s visual demands are rapidly changing. Not long ago, Framestore could easily predict storage and render requirements for a typical project. But that is no longer the case, and the studio finds itself working in ever-increasing resolutions and frame rates. Whereas projects may have been as small as 3TB in the recent past, nowadays the studio regularly handles multiple projects of 300TB or larger. And the storage must be shared with other projects of varying sizes and scope.

“This new ‘unknowns’ element of our workflow puts many strains on all aspects of our pipeline, but especially the storage,” Lewis points out. “Knowing that our storage can cope with the load and can scale allows us to turn our attention to the other issues that these new types of projects bring to Framestore.”

As Lewis notes, working with high-resolution images and large renderfarms creates a unique set of challenges for any storage technology, challenges not seen in many other fields. VFX work will often test any storage technology well beyond what other industries are capable of. “If there’s an issue or a break point, we will typically find it in spectacular fashion,” he adds.

Rising Sun Pictures
As a contributor to the design and execution of computer-generated effects on more than 100 feature films since its inception 22 years ago, Rising Sun Pictures (RSP) has pushed the technical bar many times over in film as well as television projects. Based in Adelaide, South Australia, RSP has built a top team of VFX artists who have tackled such box-office hits as Thor: Ragnarok, X-Men and Game of Thrones, as well as the Harry Potter and Hunger Games franchises.

Mark Day

Such demanding, high-level projects require demanding, high-level effects, which, in turn, demand a high-performance, reliable storage solution capable of handling varying data I/O profiles. “With more than 200 employees accessing and writing files in various formats, the need for a fast, reliable and scalable solution is paramount to business continuity,” says Mark Day, director of engineering at RSP.

Recently, RSP installed an Oracle ZS5 storage appliance to handle this important function. This high-performance, unified storage system provides NAS and SAN cloud-converged storage capabilities that enable on-premises storage to seamlessly access Oracle Public Cloud. Its advanced hardware and software architecture includes a multi-threading SMP storage operating system for running multiple workloads and advanced data services without performance degradation. The offering also caches data on DRAM or flash cache for optimal performance and efficiency, while keeping data safely stored on high-capacity SSD (solid state disk) or HDD (hard disk drive) storage.

Previously, the studio had been using a Dell EMC Isilon storage cluster with Avere caching appliances, and the company is still employing that solution for parts of its workflow.

When it came time to upgrade to handle RSP’s increased workload, the facility ran a proof of concept with multiple vendors in September 2016 and benchmarked their systems. Impressed with Oracle, RSP began installation in early 2017. According to Day, RSP liked the solution’s ability to support larger packet sizes — now up to 1MB. In addition, he says its “exceptional” analytics engine gives introspection into a render job.

“It has a very appealing [total cost of ownership], and it has caching right out of the box, removing the need for additional caching appliances,” says Day. Storage is at the center of RSP’s workflow, storing all the relevant information for every department — from live-action plates that are turned over from clients, scene setup files and multi-terabyte cache files to iterations of the final product. “All employees work off this storage, and it needs to accommodate the needs of multiple projects and deadlines with zero downtime,” Day adds.

Machine Room

“Visual effects scenes are getting more complex, and in turn, data sizes are increasing. Working in 4K quadruples file sizes and, therefore, impacts storage performance,” explains Day. “We needed a solution that could cope with these requirements and future trends in the industry.”

According to Day, the data RSP deals with is broad, from small setup files to terabyte geocache files. A one-minute 2K DPX sequence is 17GB for the final pass, while 4K is 68GB. “Keep in mind this is only the final pass; a single shot could include hundreds of passes for a heavy computer-generated sequence,” he points out.
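Those figures line up with a quick back-of-the-envelope check. The sketch below assumes 10-bit RGB DPX (three 10-bit channels packed into 32 bits, i.e. 4 bytes per pixel), typical 2K/4K film-scan dimensions and 24fps, and it ignores file headers, so treat it as an estimate rather than RSP’s exact numbers.

```python
def dpx_minute_gib(width, height, fps=24, seconds=60):
    """Rough size of a one-minute DPX sequence in GiB, assuming
    10-bit RGB packed into 32 bits (4 bytes) per pixel."""
    bytes_per_frame = width * height * 4
    return bytes_per_frame * fps * seconds / (1024 ** 3)

print(round(dpx_minute_gib(2048, 1556), 1))  # ~17.1 -> the 2K "17GB" figure
print(round(dpx_minute_gib(4096, 3112), 1))  # ~68.4 -> the 4K "68GB" figure
```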

Thus, high-performance storage is important to the effective operation of a visual effects company like RSP. In fact, storage helps the artists stay on the creative edge by enabling them to iterate through the creative process of crafting a shot and a look. “Artists are required to iterate their creative process many times to perfect the look of a shot, and if they experience slowdowns when loading scenes, this can have a dramatic effect on how many iterations they can produce. And in turn, this affects employees’ efficiency and, ultimately, the profitability of the company,” says Day.

Thor: Ragnarok

Most recently, RSP used its new storage solution for work on the blockbuster Thor: Ragnarok, in particular for the Val’s Flashback sequence — which was extremely complex and involved extensive lighting and texture data, as well as high-frame-rate plates (sometimes more than 1,000fps for multiple live-action footage plates). “Before our storage refresh, early versions of this shot could take up to 24 hours to render on our server farm. But since installing our new storage, we saw this drastically reduced to six hours — that’s a 3x improvement, which is a fantastic outcome,” says Day.

Outpost VFX
A full-service VFX studio for film, broadcast and commercials, Outpost VFX, based in Bournemouth, England, has been operational since late 2012. Since that time, the facility has been growing by leaps and bounds, taking on major projects, including Life, Nocturnal Animals, Jason Bourne and 47 Meters Down.

Paul Francis

Due to this fairly rapid expansion, Outpost VFX has seen its storage needs increase. “As the company grows and as resolution increases and HDR comes in, file sizes increase, and we need much more capacity to deal with that effectively,” says CTO Paul Francis.

When setting up the facility five years ago, the decision was made to go with PixStor from Pixit Media and Synology’s NAS for its storage solution. “It’s an industry-recognized solution that is extremely resilient to errors. It’s fast, robust and the team at Pixit provides excellent support, which is important to us,” says Francis.

Foremost, the solution had to provide high capacity and high speeds. “We need lots of simultaneous connections to avoid bottlenecks and ensure speedy delivery of data,” Francis adds. “This is the only one we’ve used, really. It has proved to be stable enough to support us through our growth over the last couple of years — growth that has included a physical office move and an increase in artist capacity to 80 seats.”

Outpost VFX mainly works with image data and project files for use with Autodesk’s Maya, Foundry’s Nuke, Side Effects’ Houdini and other VFX and animation tools. The challenge this presents is twofold, both large and small: concern for large file sizes, and problems the group can face with small files, such as metadata. Francis explains: “Sequentially loading small files can be time-consuming due to the current technology, so moving to something that can handle both of these areas will be of great benefit to us.”

Locally, artists use a mix of HDDs from a number of different manufacturers to store reference imagery and so forth — older-generation PCs have mostly Western Digital HDDs while newer PCs have generic SSDs. When replacing or upgrading equipment, Outpost VFX uses Samsung 900 Series SSDs, depending on the required performance and current market prices.

Life

Like many facilities, Outpost VFX is always weighing its options when it comes to finding the best solution for its current and future needs. Presently, it is looking at splitting up some of its storage solutions into smaller segments for greater resilience. “When you only have one storage solution and it fails, everything goes down. We’re looking to break our setup into smaller, faster solutions,” says Francis.

Additionally, security is a concern for Outpost VFX when it comes to its clients. According to Francis, certain shows need to be annexed, meaning the studio will need a separate storage solution outside of its main network to handle that data.

When Outpost VFX begins a job, the group ingests all the plates it needs to work on, and they reside in a new job folder created by production and assigned to a specific drive for active jobs. This folder then becomes the go-to for all assets, elements and shot iterations created throughout the production. For security purposes, these areas of the server are only visible to and accessible by artists, who in turn cannot access the Internet; this ensures that the files are “watertight and immune to leaks,” says Francis, adding that with PixStor, the studio is able to set up different partitions for different areas that artists can jump between easily.

How important is storage to Outpost VFX? “Frankly, there’d be no operation without storage!” Francis says emphatically. “We deal with hundreds of terabytes of data in visual effects, so having high-capacity, reliable storage available to us at all times is absolutely essential to ensure a smooth and successful operation.”

47 Meters Down

Because the studio delivers visual effects across film, TV and commercials simultaneously, storage is an important factor no matter what the crew is working on. A recent film project like 47 Meters Down required the full gamut of visual effects work, as Outpost VFX was the sole vendor for the project. So, the studio needed the space and responsiveness of a storage system that enabled them to deliver more than 420 shots, a number of which featured heavy 3D builds and multiple layers of render elements.

“We had only about 30 artists at that point, so having a stable solution that was easy for our team to navigate and use was crucial,” Francis points out.

Main Image: From Outpost VFX’s Domestos commercial out of agency MullenLowe London.

Review: GoPro Fusion 360 camera

By Mike McCarthy

I finally got the opportunity to try out the GoPro Fusion camera I have had my eye on since the company first revealed it in April. The $700 camera uses two offset fish-eye lenses to shoot 360 video and stills, while recording ambisonic audio from four microphones in the waterproof unit. It can shoot a 5K video sphere at 30fps, or a 3K sphere at 60fps for higher motion content at reduced resolution. It records dual 190-degree fish-eye perspectives encoded in H.264 to separate MicroSD cards, with four tracks of audio. The rest of the magic comes in the form of GoPro’s newest application Fusion Studio.

Internally, the unit is recording dual 45Mb H.264 files to two separate MicroSD cards, with accompanying audio and metadata assets. This would be a logistical challenge to deal with manually, copying the cards into folders, sorting and syncing them, stitching them together and dealing with the audio. But with GoPro’s new Fusion Studio app, most of this is taken care of for you. Simply plug in the camera and it will automatically access the footage, and let you preview and select what parts of which clips you want processed into stitched 360 footage or flattened video files.

It also processes the multi-channel audio into ambisonic B-Format tracks, or standard stereo if desired. The app is a bit limited in user-control functionality, but what it does do it does very well. My main complaint is that I can’t find a way to manually set the output filename, but I can rename the exports in Windows once they have been rendered. Trying to process the same source file into multiple outputs is challenging for the same reason.

Setting | Recorded Resolution (Per Lens) | Processed Resolution (Equirectangular)
5Kp30 | 2704×2624 | 4992×2496
3Kp60 | 1568×1504 | 2880×1440
Stills | 3104×3000 | 5760×2880
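A quick calculation from the table shows how much of the captured resolution survives processing in 5K mode; the difference largely reflects the two lenses’ overlapping fields of view being merged into one sphere (my rough reading, not an official GoPro spec).

```python
# Pixel budget in the Fusion's 5K mode, using the table above.
recorded = 2 * 2704 * 2624   # two fish-eye sensors
processed = 4992 * 2496      # stitched equirectangular output
print(recorded, processed, round(processed / recorded, 2))
# 14190592 12460032 0.88 -> roughly 12% of the captured pixels
# disappear into the overlap between the two lenses.
```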

With the Samsung Gear 360, I researched five different ways to stitch the footage, because I wasn’t satisfied with the included app. Most of those will also work with Fusion footage, and you can read about those options here, but they aren’t really necessary when you have Fusion Studio.

You can choose between H.264, Cineform or ProRes, your equirectangular output resolution and ambisonic or stereo audio. That gives you pretty much every option you should need to process your footage. There is also a “Beta” option to stabilize your footage, which, once I got used to it, I really liked. It should be thought of more as a “remove rotation” option, since it’s not for stabilizing out sharp motions — which still leave motion blur — but for maintaining the viewer’s perspective even if the camera rotates in unexpected ways. Processing was about 6x run-time on my Lenovo ThinkPad P71 laptop, so a 10-minute clip would take an hour to stitch to 360.
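One way to picture “remove rotation” is as counter-rotating every frame by the camera’s recorded orientation so the viewer’s perspective stays pinned to the first frame. The sketch below is purely conceptual: it assumes per-frame orientation quaternions are available and uses SciPy’s rotation class. It is not GoPro’s Fusion Studio implementation, and the exact sign convention depends on how the orientations are defined.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def stabilizing_rotations(orientations_xyzw):
    """Given per-frame camera orientations (quaternions, x-y-z-w),
    return the correction that pins the view to the first frame
    by undoing each frame's rotation relative to that reference."""
    frames = R.from_quat(np.asarray(orientations_xyzw))
    reference = frames[0]
    return reference * frames.inv()   # counter-rotation per frame

# Hypothetical orientation log: the camera slowly yawing 2 deg/frame.
quats = R.from_euler("z", np.arange(10) * 2, degrees=True).as_quat()
corrections = stabilizing_rotations(quats)
print(corrections.as_euler("zyx", degrees=True)[:3])  # ~0, -2, -4 deg yaw
```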

The footage itself looks good, higher quality than my Gear 360, and the 60p stuff is much smoother, which is to be expected. While good VR experiences require 90fps to be rendered to the display to avoid motion sickness, that does not necessarily mean that 30fps content is a problem. When rendering the viewer’s perspective, the same frame can be sampled three times, shifting the image as they move their head, even from a single source frame. That said, 60p source content does give smoother results than the 30p footage I am used to watching in VR, but 60p did give me more issues during editorial. I had to disable CUDA acceleration in Adobe Premiere Pro to get Transmit to work with the WMR headset.

Once you have your footage processed in Fusion Studio, it can be edited in Premiere Pro — like any other 360 footage — but the audio can be handled a bit differently. Exporting as stereo will follow the usual workflow, but selecting ambisonic will give you a special spatially aware audio file. Premiere can use this in a 4-track multi-channel sequence to line up the spatial audio with the direction you are looking in VR, and if exported correctly, YouTube can do the same thing for your viewers.

In the Trees
Most GoPro products are intended for use capturing action moments and unusual situations in extreme environments (which is why they are waterproof and fairly resilient), so I wanted to study the camera in its “native habitat.” The most extreme thing I do these days is work on ropes courses, high up in trees or telephone poles. So I took the camera out to a ropes course that I help out with, curious to see how the recording at height would translate into the 360 video experience.

Ropes courses are usually challenging to photograph because of the scale involved. When you are zoomed out far enough to see the entire element, you can’t see any detail, and if you are zoomed in close enough to see faces, you have no good concept of how high up they are. 360 photography is helpful in that it is designed to be panned through when viewed flat. This allows you to give the viewer a better sense of the scale, and they can still see the details of the individual elements or people climbing. And in VR, you should have a better feel for the height involved.

I had the Fusion camera and Fusion Grip extendable tripod handle, as well as my Hero6 kit, which included an adhesive helmet mount. Since I was going to be working at heights and didn’t want to drop the camera, the first thing I did was rig up a tether system. A short piece of 2mm cord fit through a slot in the bottom of the center post and a triple fisherman knot made a secure loop. The cord fit out the bottom of the tripod when it was closed, allowing me to connect it to a shock-absorbing lanyard, which was clipped to my harness. This also allowed me to dangle the camera from a cord for a free-floating perspective. I also stuck the quick release base to my climbing helmet, and was ready to go.

I shot segments in both 30p and 60p, depending on how I had the camera mounted, using higher frame rates for the more dynamic shots. I was worried that the helmet mount would be too close, since GoPro recommends keeping the Fusion at least 20cm away from what it is filming, but the helmet wasn’t too bad. Another inch or two would shrink it significantly from the camera’s perspective, similar to my tripod issue with the Gear 360.

I always climbed up with the camera mounted on my helmet and then switched it to the Fusion Grip to record the guy climbing up behind me and my rappel. Hanging the camera from a cord, even 30 feet below me, worked much better than I expected. It put GoPro’s stabilization feature to the test, but it worked fantastically. With the camera rotating freely, the perspective is static, although you can see the seam lines constantly rotating around you. When I am holding the Fusion Grip, the extended pole is completely invisible to the camera, giving you what GoPro has dubbed “Angel View.” It is as if the viewer is floating freely next to the subject, especially when viewed in VR.

Because I have ways to view 360 video in VR, and because I don’t mind panning around on a flat screen view, I am personally less excited about GoPro’s OverCapture functionality, but I recognize it is a useful feature that will greatly extend the use cases for this 360 camera. It is designed for people using the Fusion as a more flexible camera to produce flat content, instead of to produce VR content. I edited together a couple of OverCapture shots intercut with footage from my regular Hero6 to demonstrate how that would work.

Ambisonic Audio
The other new option that Fusion brings to the table is ambisonic audio. Editing ambisonics works in Premiere Pro using a 4-track multi-channel sequence. The main workflow kink here is that you have to manually override the audio settings every time you import a new clip with ambisonic audio, in order to set the audio channels to Adaptive with a single timeline clip. Turn on Monitor Ambisonics by right-clicking in the monitor panel and match the Pan, Tilt and Roll in the Panner-Ambisonics effect to the values in your VR Rotate Sphere effect (note that they are listed in a different order), and your audio should match the video perspective.
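To see why those rotation values have to match, it helps to look at what rotating a first-order B-format signal actually does: the W (omni) and Z (height) channels are untouched by a pan/yaw, while X and Y are mixed by an ordinary 2D rotation. The snippet below is just that math in numpy, for illustration only; channel ordering and sign conventions differ between tools, so it is not a description of Premiere’s internals.

```python
import numpy as np

def rotate_b_format_yaw(w, x, y, z, pan_degrees):
    """Pan (yaw) a first-order B-format signal about the vertical axis.
    W and Z pass through; X and Y get a standard 2D rotation.
    The sign of the angle depends on each tool's convention."""
    theta = np.radians(pan_degrees)
    x_rot = np.cos(theta) * x - np.sin(theta) * y
    y_rot = np.sin(theta) * x + np.cos(theta) * y
    return w, x_rot, y_rot, z

# One second of hypothetical 4-channel ambisonic audio at 48kHz:
w, x, y, z = np.random.randn(4, 48000)
w2, x2, y2, z2 = rotate_b_format_yaw(w, x, y, z, pan_degrees=90)
```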

When exporting an MP4 in the audio panel, set Channels to 4.0 and check the Audio is Ambisonics box. From what I can see, the Fusion Studio conversion process compensates for changes in perspective, including “stabilization” when processing the raw recorded audio for Ambisonic exports, so you only have to match changes you make in your Premiere sequence.

While I could have intercut the footage at both settings together into a 5Kp60 timeline, I ended up creating two separate 360 videos. This also makes it clear to the viewer which shots were 5K/p30 and which were recorded at 3K/p60. They are both available on YouTube, and I recommend watching them in VR for the full effect. But be warned that they are recorded at heights up to 80 feet up, so it may be uncomfortable for some people to watch.

Summing Up
GoPro’s Fusion camera is not the first 360 camera on the market, but it brings more pixels and higher frame rates than most of its direct competitors, and more importantly it has the software package to assist users in the transition to processing 360 video footage. It also supports ambisonic audio and offers the OverCapture functionality for generating more traditional flat GoPro content.

I found it to be easier to mount and shoot with than my earlier 360 camera experiences, and it is far easier to get the footage ready to edit and view using GoPro’s Fusion Studio program. The Stabilize feature totally changes how I shoot 360 videos, giving me much more flexibility in rotating the camera during movements. And most importantly, I am much happier with the resulting footage that I get when shooting with it.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Behind the Title: Start VR Producer Ela Topcuoglu

NAME: Ela Topcuoglu

COMPANY: Start VR (@Start_VR)

CAN YOU DESCRIBE YOUR COMPANY?
Start VR is a full-service production studio (with offices in Sydney, Australia and Marina Del Rey, California) specializing in immersive and interactive cinematic entertainment. The studio brings together expertise in entertainment and technology, pairing feature-film-quality visuals with interactive content to create original and branded narrative experiences in VR.

WHAT’S YOUR JOB TITLE?
Development Executive and Producer

WHAT DOES THAT ENTAIL?
I am in charge of expanding Start VR’s business in North America. That entails developing strategic partnerships and increasing business development in the entertainment, film and technology sectors.

I am also responsible for finding partners for our original content slate as well as seeking existing IP that would fit perfectly in VR. I also develop relationships with brands and advertising agencies to create branded content. Beyond business development, I also help produce the projects that we move forward with.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The title comes with the responsibility of convincing people to invest in something that is constantly evolving, which is the biggest challenge. My job also requires me to be very creative in coming up with a native language to this new medium. I have to wear many hats to ensure that we create the best experiences out there.

WHAT’S YOUR FAVORITE PART OF THE JOB?
My favorite part of the job is that I get to wear lots of different hats. Being in the emerging field of VR, every day is different. I don’t have a traditional 9-to-5 office job, and I am constantly moving and hustling to set up business meetings and stay updated on the latest industry trends.

Also, being in the ever-evolving technology field, I learn something new almost every day, which is extremely essential to my professional growth.

WHAT’S YOUR LEAST FAVORITE?
Convincing people to invest in virtual reality and to see its incredible potential. That usually changes once they experience truly immersive VR, but regardless, selling the future is difficult.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
My favorite part of the day is the morning. I start my day with a much-needed shot of Nespresso, get caught up on emails, take a look at my schedule and take a quick breather before I jump right into the madness.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
If I wasn’t working in VR, I would be investing my time in learning more about artificial intelligence (AI) and using that to advance medicine/health and education.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I loved entertaining people from a very young age, and I was always looking for an outlet to do that, so the entertainment business was the perfect fit. There is nothing like watching someone’s reaction to a great piece of content. Virtual reality is the ultimate entertainment outlet and I knew that I wanted to create experiences that left people with the same awe reaction that I had the moment I experienced it.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I worked and assisted in the business and legal affairs department at Media Rights Capital and had the opportunity to work on amazing TV projects, including House of Cards, Baby Driver and Ozark.

Awake: First Contact

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
The project that I am most proud of to date is the project that I am currently producing at Start VR. It’s called Awake: First Contact. It was a project I read about and said, “I want to work on that.”

I am incredibly proud that I get to work on a virtual reality project that is pushing the boundaries of the medium both technically and creatively.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My phone, laptop and speakers.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Twitter, Facebook and LinkedIn

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Yes, especially if I’m working on a pitch deck. It really keeps me in the moment. I usually listen to my favorite DJ mixes on Soundcloud. It really depends on my vibe that day.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I have recently started surfing, so that is my outlet at the moment. I also meditate regularly. It’s also important for me to make sure that I am always learning something new and unrelated to my industry.

A Closer Look: VR solutions for production and post

By Alexandre Regeffe

Back in September, I traveled to Amsterdam to check out new tools relating to VR and 360 production and post. As a producer based in Paris, France, I have been working in the virtual reality part of the business for over two years. While IBC took place in September, the information I have to share is still quite relevant.

KanDao

I saw some very cool technology at the show regarding VR and 360 video, especially within the cinematic VR niche. And niche is the perfect word — I see the market slightly narrowing after the wave of hype that happened a couple of years ago. Personally, I don’t think the public has been reached yet, but pardon my French pessimism. Let’s take a look…

Cameras
One new range of products I found amazing was the Obsidian line from manufacturer Kandao. This Chinese brand has a smart product line with its 3D/360 cameras. Starting with the Obsidian Go, the range reaches pro cinematic levels with the Obsidian R (for resolution, 8K per eye) and the Obsidian S (for speed, capturing at 120fps). They offer a small radial form factor with only six lenses to produce very smooth stereoscopy, and a very high resolution per eye, which is one of the keys to achieving a good feeling of immersion in an HMD.

Kandao’s features are promising, including 6DoF handling with depth map generation. To me, this is the future of cinematic VR production — as the viewer, you will be able to have more freedom, slightly translating your point of view to see behind objects with natural parallax in realtime! Let me call it “extended” stereoscopic 360.

I can’t speak about professional 360 cameras without also mentioning the Ozo from Nokia. Considered by users to be the first pro VR camera, the Ozo got an upgrade this year: the Ozo+ launched with a new ISP and offers astonishing new features, especially when you transfer your shots into the Ozo Creator tool, now in version 2.1.

Nokia Ozo+

Powerful tools, like highlight and shadow recovery, haze removal, auto stabilization and better denoising, are there to improve the overall image quality. Another big thing at the Nokia booth was version 2.0 of the Ozo Live system. Yes, you can now webcast your live event in stereoscopic 360 with a 4K-per-eye resolution! And you can simply use a (boosted) laptop to do it! All the VR tools from Nokia are part of what they call Ozo Reality, an integrated ecosystem where you can create, deliver and experience cinematic VR.

VR Post
When you talk about VR post you have to talk about stitching — assembling all of the sources into a single 360 image. As a French-educated man, you know I have to complain somehow: I hate stitching. And I often yell at those who shoot from the wrong camera positions. Spending hours (and money) dealing with seam lines is not my tasse de thé.
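
To illustrate why seams hurt: where two lenses overlap, the stitcher has to blend the two views across the overlap, and any leftover misalignment or exposure mismatch becomes a visible line. Below is a toy sketch of a simple linear feather blend in Python, assuming two already-aligned, same-height images; real stitchers like the ones discussed here also warp the sources, match color and search for the least painful seam path.

    import numpy as np

    def feather_blend(left, right, overlap):
        # left, right: HxWx3 float arrays sharing `overlap` columns, already aligned.
        # A real stitcher also warps, color matches and picks an optimal seam path.
        alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]  # 1 -> 0 across the overlap
        blended = left[:, -overlap:] * alpha + right[:, :overlap] * (1.0 - alpha)
        return np.concatenate([left[:, :-overlap], blended, right[:, overlap:]], axis=1)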

A few months before IBC, I found my saving grace: Mistika VR from SGO. Well known for its color grading tool Mistika Ultima (one of the finest for stereoscopic work), SGO launched a stitching tool for 360 video. Fantastic results. Fantastic development team.

In this very intuitive tool, you can stitch sources from almost all existing cameras and rigs available on the market now, from the Samsung Gear 360 to Jaunt. With amazing optical flow algorithms, seam line fine adjustments, color matching and many other features, it is to me by far the best tool for outputting a clean, seamless equirectangular image. And the upcoming Mistika VR 3D for stitching stereoscopic sources is very promising. You know what? Thanks to Mistika VR, the stitching process could be fun. Even for me.

In general, optical flow is a huge improvement for stitching, and we can find this parameter in the Kandao Studio stitching tool (designed only for Obsidian cameras), for instance. When you’re happy with your stitch, you can then edit, color grade and maybe add VFX and interactivity in order to bring a really good experience to viewers.

Immersive video within Adobe Premiere.

Today, Adobe CC takes the lead in the editing scene with its specific 360 tools, such as its contextual viewer. But the big hit was Adobe’s acquisition of the Skybox plugin suite from Mettle, which will be integrated natively into the next Adobe CC version (for Premiere and After Effects).

With this set of tools you can easily manipulate your equirectangular sources and do tripod removal, sky replacements and all the invisible effects that were tricky to do without Skybox. You can then add contextual 360 effects like text, blur, transitions, greenscreen and much more, in monoscopic and even stereoscopic mode. All this while viewing your timeline directly in your Oculus Rift and in realtime! And, incredibly, it works — I use these tools all day long.

So let’s talk about the Mettle team. Founded by two artists back in 1992, the company joined the VR movement three years ago with the Skybox suite. They understood they had to bring tech to creative people, and as a result they made smart tools with very well-designed GUIs. For instance, look at Mettle’s new Mantra creative toolset for After Effects and Premiere. It is incredible to work with because you get the power to create very artistic 360 designs inside Adobe CC. And if you’re a solid VFX tech, wait for their Volumatrix depth-related VR FX tools. Working in collaboration with Facebook, Mettle will launch the next big tool for doing VFX in 3D/360 environments using camera-generated depth maps. It will open awesome new possibilities for content creators.

You know, the current main issue in cinematic 360 is image quality. Of course, we could talk about resolution or pixels per eye, but I think we should focus on color grading. This task is very creative — bringing emotions to the viewers. For me, the best 360 color grading tool to achieve these goals with uncompromised quality is Scratch VR from Assimilate. Beautiful. Formidable. Scratch is a very powerful color grading system, always on top in terms of technology. Now that they’ve added VR capabilities, you can color grade your stereoscopic equirectangular sources as easily as normal sources. My favorite is the mask repeater function, which lets you naturally handle masks even across the back seam, something that is almost impossible in other color grading tools. And you can also view your results directly in your HMD.

Scratch VR and ZCam collaboration.

At NAB 2017, they introduced Scratch VR Z, an integrated workflow in collaboration with ZCam, the manufacturer of the S1 and S1 Pro. In this workflow you can, for instance, stitch sources directly in Scratch and do super high-quality color grading with realtime live streaming, along with logo insertion, greenscreen capabilities, layouts, etc. Crazy. For finishing, the Scratch VR output module is also very useful, enabling you to render your result in ProRes (even on Windows), in 10-bit H.264 and in many other formats.

Finishing and Distribution
So your cinematic VR experience is finished (you’ll notice I’ve skipped the sound part of the process, but since it’s not the part I work on I will not speak about this essential stage). But maybe you want to add some interactivity for a better user experience?

I visited IBC’s Future Zone to talk with the Liquid Cinema team. What is it? Simply put, it’s a set of tools enabling you to enhance your cinematic VR experience. One important word is storytelling — with Liquid Cinema you can add an interactive layer to your story. The first tool you need is the authoring application, where you drop in your sources, which can be movies, stills, and 360 and 2D material. Then create and enjoy.

For example, you can add graphic layers and enable the viewer’s gaze function, create multibranching scenarios based on intelligent timelines, and play with forced perspective features so your viewer never misses an important thing… you must try it.

The second part of the suite is about VR distribution. As a content creator, you want your experience to be on all existing platforms, HMDs and channels… no easy feat, but with Liquid Cinema it’s possible. Their player is compatible with Samsung Gear VR, Oculus Rift, HTC Vive, iOS, Android, Daydream and more. It’s coming to Apple TV soon.

IglooVision

The third part of the suite is the management of your content. Liquid Cinema has a CMS tool, which is very simple, allows changes like geoblocking to be made easily, and provides useful analytics tools like heat maps. And you can use your Vimeo Pro account as a CDN if needed. Perfect.

Also in the Future Zone was the igloo from IglooVision. This is one of the best “social” ways to experience cinematic VR that I have ever seen. Enter this room with your friends and you can watch 360 all around while finishing your drink (try that with an HMD). Comfortable, isn’t it? You can also use it as a “shared VR production suite” by connecting Adobe Premiere or your favorite tool directly to the system. Boom. You now have an immersive 360-degree monitor around you and your post production team.

So that was my journey into the VR stuff of IBC 2017. Of course, this is a non-exhaustive list of tools, with nothing about sound (which is very important in VR), but it’s my personal choice. Period.

One last thing: VR people. I have met a lot of enthusiastic, smart, interesting and happy women and men, helping content producers like me to push their creative limits. So thanks to all of them and see ya.


Paris-based Alexandre Regeffe is a 25-year veteran of TV and film. He is currently VR post production manager at Neotopy, a VR studio, as well as a VR effects specialist working on After Effects and the entire Adobe suite. His specialty is cinematic VR post workflows.

Sonic Union adds Bryant Park studio targeting immersive, broadcast work

New York audio house Sonic Union has launched a new studio and creative lab. The uptown location, which overlooks Bryant Park, will focus on emerging spatial and interactive audio work, as well as continued work with broadcast clients. The expansion is led by principal mix engineer/sound designer Joe O’Connell, now partnered with original Sonic Union founders/mix engineers Michael Marinelli and Steve Rosen and their staff, who will work out of both its Union Square and Bryant Park locations. O’Connell helmed sound company Blast as co-founder, and has now teamed up with Sonic Union.

In other staffing news, mix engineer Owen Shearer advances to also serve as technical director, with an emphasis on VR and immersive audio. Former Blast EP Carolyn Mandlavitz has joined as Sonic Union Bryant Park studio director. Executive creative producer Halle Petro, formerly senior producer at Nylon Studios, will support both locations.

The new studio, which features three Dolby Atmos rooms, was created and developed by Ilan Ohayon of IOAD (Architect of Record), with architectural design by Raya Ani of RAW-NYC. Ani also designed Sonic’s Union Square studio.

“We’re installing over 30 of the new ‘active’ JBL System 7 speakers,” reports O’Connell. “Our order includes some of the first of these amazing self-powered speakers. JBL flew a technician from Indianapolis to personally inspect each one on site to ensure it will perform as intended for our launch. Additionally, we created our own proprietary mounting hardware for the installation as JBL is still in development with their own. We’ll also be running the latest release of Pro Tools (12.8) featuring tools for Dolby Atmos and other immersive applications. These types of installations really are not easy as retrofits. We have been able to do something really unique, flexible and highly functional by building from scratch.”

Working as one team across two locations, this emerging creative audio production arm will also include a roster of talent outside of the core staff engineering roles. The team will now be integrated to handle non-traditional immersive VR, AR and experiential audio planning and coding, in addition to casting, production music supervision, extended sound design and production assignments.

Main Image Caption: (L-R) Halle Petro, Steve Rosen, Owen Shearer, Joe O’Connell, Adam Barone, Carolyn Mandlavitz, Brian Goodheart, Michael Marinelli and Eugene Green.

 

Tackling VR storytelling challenges with spatial audio

By Matthew Bobb

From virtual reality experiences for brands to top film franchises, VR is making a big splash in entertainment and evolving the way creators tell stories. But, as with any medium and its production, bringing a narrative to life is no easy feat, especially when it’s immersive. VR comes with its own set of challenges unique to the platform’s capacity to completely transport viewers into another world and replicate reality.

Making high-quality immersive experiences, especially for a film franchise, is extremely challenging. Creators must place the viewer into a storyline crafted by the studios and properly guide them through the experience in a way that allows them to fully grasp the narrative. One emerging strategy is to emphasize audio — specifically, 360 spatial audio. VR offers a sense of presence no other medium today can offer. Spatial audio offers an auditory presence that augments a VR experience, amplifying its emotional effects.

My background as audio director for VR experiences includes top film franchises such as Warner Bros. and New Line Cinema’s IT: Float — A Cinematic VR Experience, The Conjuring 2 — Experience Enfield VR 360, Annabelle: Creation VR — Bee’s Room, and the upcoming Greatest Showman VR experience for 20th Century Fox. In the emerging world of VR, I have seen production teams encounter numerous challenges that call for creative solutions. For some of the most critical storytelling moments, it’s crucial for creators to understand the power of spatial audio and its potential to solve some of the most prevalent challenges that arise in VR production.

Most content creators — even some of those involved in VR filmmaking — don’t fully know what 360 spatial audio is or how its implementation within VR can elevate an experience. With any new medium, there are early adopters who are passionate about the process. As the next wave of VR filmmakers emerge, they will need to be informed about the benefits of spatial audio.

Guiding Viewers
Spatial audio is an incredible tool that helps make a VR experience feel believable. It can present sound from several locations, which allows viewers to identify their position within a virtual space in relation to the surrounding environment. With the ability to provide location-based sound from any direction and distance, spatial audio can be used to produce directional auditory cues that grab the viewer’s attention and prompt them to look in a certain direction.
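
To make that concrete, here is a minimal sketch of how a mono cue can be encoded to first-order ambisonics (the four-channel B-format many 360 pipelines carry) given an azimuth and elevation. This is generic textbook math in the traditional FuMa convention, not the specific toolchain used on these projects; delivery formats like YouTube’s ambiX use a different channel order and normalization.

    import numpy as np

    def encode_foa(mono, azimuth_deg, elevation_deg):
        # First-order ambisonic B-format (W, X, Y, Z), FuMa convention.
        # Azimuth is counterclockwise from straight ahead; elevation is up from the horizon.
        az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
        w = mono / np.sqrt(2.0)             # omnidirectional component
        x = mono * np.cos(az) * np.cos(el)  # front/back
        y = mono * np.sin(az) * np.cos(el)  # left/right
        z = mono * np.sin(el)               # up/down
        return np.stack([w, x, y, z])

    # A footstep cue placed 90 degrees to the viewer's left, at ear height:
    footstep = np.random.randn(48000).astype(np.float32)  # stand-in for a real recording
    bformat = encode_foa(footstep, azimuth_deg=90, elevation_deg=0)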

VR is still unfamiliar territory for a lot of people, and the viewing process isn’t as straightforward as a 2D film or game, so dropping viewers into an experience can leave them feeling lost and overwhelmed. Inexperienced viewers are also more apprehensive and rarely move around or turn their heads while in a headset. Spatial audio cues prompting them to move or look in a specific direction are critical, steering them to instinctively react and move naturally. On Annabelle: Creation VR — Bee’s Room, viewers go into the experience knowing it’s from the horror genre and may be hesitant to look around. We strategically used audio cues, such as footsteps, slamming doors and a record player that mysteriously turns on and off, to encourage viewers to turn their head toward the sound and the chilling visuals that await.

Lacking Footage
Spatial audio can also be a solution for challenging scene transitions, or when there is a dearth of visuals to work with in a sequence. Well-crafted aural cues can paint a picture in a viewer’s mind without bombarding the experience with visuals that are often unnecessary.

A big challenge when creating VR experiences for beloved film franchises is the need for the VR production team to work in tandem with the film’s production team, making recording time extremely limited. When working on IT: Float, we faced a tight time constraint for shooting Pennywise the Clown. Consequently, there was not an abundance of footage of him to place in the promotional VR experience. Beyond the lack of footage, the studio also didn’t want to give away the notorious clown’s much-anticipated appearance before the film’s theatrical release. The solution to that production challenge was spatial audio. Pennywise’s voice was strategically used to lead the experience and guide viewers throughout the sewer tunnels, heightening the suspense while also providing the illusion that he was surrounding the viewer.

Avoiding Visual Overkill
Similar to film and video games, sound is half of the experience in VR. With the unique perspective the medium offers, creators no longer have to fully rely on a visually-heavy narrative, which can overwhelm the viewer. Instead, audio can take on a bigger role in the production process and make the project a well-rounded sensory experience. In VR, it’s important for creators to leverage sensory stimulation beyond visuals to guide viewers through a story and authentically replicate reality.

As VR storytellers, we are reimagining ways to immerse viewers in new worlds. It is crucial for us to leverage the power of audio to smooth out bumps in the road and deliver a vivid sense of physical presence unique to this medium.


Matthew Bobb is the CEO of the full-service audio company Spacewalk Sound. He is a spatial audio expert whose work can be seen in top VR experiences for major film franchises.

Editing 360 Video in VR (Part 2)

By Mike McCarthy

In the last article I wrote on this topic, I looked at the options for shooting 360-degree video footage, and what it takes to get footage recorded on a Gear 360 ready to review and edit on a VR-enabled system. The remaining steps in the workflow will be similar regardless of which camera you are using.

Previewing your work is important, so if you have a VR headset, you will want to make sure it is installed and functioning with your editing software. I will be basing this article on using an Oculus Rift to view my work in Adobe Premiere Pro 11.1.2 on a Thinkpad P71 with an Nvidia Quadro P5000 GPU. Premiere requires an extra set of plugins to interface with the Rift headset. Adobe acquired Mettle’s Skybox VR Player plugin back in June, and has made it available to Creative Cloud users upon request, which you can do here.

Skybox VR player

Skybox can project the Adobe UI to the Rift, as well as the output, so you could leave the headset on when making adjustments, but I have not found that to be as useful as I had hoped. Another option is to use the GoPro VR Player plugin to send the Adobe Transmit output to the Rift, which can be downloaded for free here (use the 3.0 version or above). I found this to have slightly better playback performance, but fewer options (no UI projection, for example). Adobe is expected to integrate much of this functionality into the next release of Premiere, which should remove the need for most of the current plugins and increase the overall functionality.

Once our VR editing system is ready to go, we need to look at the footage we have. In the case of the Gear 360, the dual spherical image file recorded by the camera is not directly usable in most applications and needs to be processed to generate a single equirectangular projection, stitching the images from both cameras into a single continuous view.

There are a number of ways to do this. One option is to use the application Samsung packages with the camera: Action Director 360. You can download the original version here, but you will need the activation code that came with the camera in order to use it. Upon import, the software automatically processes the original stills and video into equirectangular 2:1 H.264 files. Instead of exporting from that application, I pull the temp files that it generates on media import and use them in Premiere. By default, they should be located in C:\Users\[Username]\Documents\CyberLink\ActionDirector\1.0\360. While this is the simplest solution for PC users, it introduces an extra transcoding step to H.264 (after the initial H.265 recording), and I frequently encountered an issue where there was a black hexagon in the middle of the stitched image.
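
If you would rather grab those temp files with a script than dig through Explorer, something like the following will list what Action Director has generated (a sketch; the folder is an assumption based on the default install and may vary with the version):

    from pathlib import Path

    # Default location of Action Director's stitched temp files on Windows;
    # adjust the version folder if your install differs.
    temp_dir = Path.home() / "Documents" / "CyberLink" / "ActionDirector" / "1.0" / "360"

    # List the processed clips and stills, newest first.
    media = sorted(temp_dir.glob("*.*"), key=lambda p: p.stat().st_mtime, reverse=True)
    for f in media:
        print(f.name, f.stat().st_size // (1024 * 1024), "MB")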

Action Director

Activating Automatic Angle Compensation in the Preferences->Editing panel gets around this bug, while trying to stabilize your footage to some degree. I later discovered that Samsung had released a separate Version 2 of Action Director, available for Windows or Mac, which solves this issue. But I couldn’t get the stitched files to work directly in the Adobe apps, so I had to export them, which added yet another layer of video compression. You will need the Samsung activation code that came with the Gear 360 to use either version, and both versions took twice as long to stitch a clip as its run time on my P71 laptop.

An option that gives you more control over the stitching process is to do it in After Effects. Adobe’s recent acquisition of Mettle’s SkyBox VR toolset makes this much easier, but it is still a process. Currently you have to manually request and install your copy of the plugins as a Creative Cloud subscriber. There are three separate installers, and while this stitching process only requires Skybox Suite AE, I would install both the AE and Premiere Pro versions for use in later steps, as well as the Skybox VR player if you have an HMD to preview with. Once you have them installed, you can use the Skybox Converter effect in After Effects to convert from the Gear 360’s fisheye files to the equirectangular assets that Premiere requires for editing VR.

Unfortunately, Samsung’s format is not one of the default conversions supported by the effect, so it requires a little more creativity. The two sensor images have to be cropped into separate comps, with the plugin applied to each of them. Setting the input to fisheye and the output to equirectangular for each image gives the desired distortion. A feathered mask applied to the circle adjusts the seam, and the overlap can be tuned with the FOV and re-orient camera values.
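
For anyone curious what that conversion is actually doing, it is a per-pixel remap: each output pixel of the equirectangular frame is turned into a direction on the sphere, and that direction is looked up in whichever fisheye covers it. Here is a bare-bones sketch for a single, idealized equidistant fisheye in Python with OpenCV; the real plugin also handles the seam feathering, overlap and re-orientation described above, and the Gear 360’s lenses are not perfectly ideal.

    import numpy as np
    import cv2

    def fisheye_to_equirect(fisheye, out_w=3840, out_h=1920, fov_deg=195.0):
        # Remap one square fisheye (lens looking down +Z) into an equirectangular frame.
        # Assumes an ideal equidistant lens; fov_deg is approximate for the Gear 360.
        h, w = fisheye.shape[:2]
        cx, cy, radius = w / 2.0, h / 2.0, w / 2.0

        # Longitude/latitude of every output pixel.
        lon = (np.arange(out_w) / out_w - 0.5) * 2.0 * np.pi   # -pi .. +pi
        lat = (0.5 - np.arange(out_h) / out_h) * np.pi         # +pi/2 .. -pi/2
        lon, lat = np.meshgrid(lon, lat)

        # Direction vector for each pixel (x right, y up, z forward).
        x = np.cos(lat) * np.sin(lon)
        y = np.sin(lat)
        z = np.cos(lat) * np.cos(lon)

        # Angle from the optical axis, and where that lands on the fisheye image.
        theta = np.arccos(np.clip(z, -1.0, 1.0))
        r = theta / np.radians(fov_deg / 2.0) * radius
        norm = np.sqrt(x * x + y * y) + 1e-9
        map_x = (cx + r * x / norm).astype(np.float32)
        map_y = (cy - r * y / norm).astype(np.float32)

        # Pixels outside the lens coverage fall off the source and come back black.
        return cv2.remap(fisheye, map_x, map_y, cv2.INTER_LINEAR)

The second lens is the same math with the directions rotated 180 degrees, and the two results are then blended across the overlap.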

Since this can be challenging to set up in After Effects, I have posted an AE template that is already configured for footage from the Gear 360. The included directions should be easy to follow, and the projection, overlap and stitch can be further tweaked by adjusting the position, rotation and mask settings in the sub-comps, and the re-orientation values in the Skybox Converter effects. Hopefully, once you find the correct adjustments for your individual camera, they should remain the same for all of your footage, unless you want to mask around an object crossing the stitch boundary. More info on those types of fixes can be found here. It took me five minutes to export 60 seconds of 360 video using this approach, and there is no stabilization or other automatic image analysis.

Video Stitch Studio

Orah makes VideoStitch Studio, which is a similar product but with a slightly different feature set and approach. One limitation I couldn’t find a way around is that the program expects the various fisheye source images to be in separate files, and unlike AVP, I couldn’t get the source cropping tool to work without rendering the dual fisheye images into separate square video source files. There should be a way to avoid that step, but I couldn’t find one. (You can use the crop effect to remove 1920 pixels on one side or the other to make the conversions in Media Encoder relatively quickly.) Splitting the source file and rendering separate fisheye spheres adds a workflow step and render time, and my one-minute clip took 11 minutes to export. This is a slower option, which might be significant if you have hours of footage to process instead of minutes.
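
If you want to avoid burning a Media Encoder pass on that splitting step, the same crop can be scripted with ffmpeg (a sketch, assuming the Gear 360’s 3840x1920 dual-fisheye layout and a hypothetical clip name; wrap it in a loop for a card full of files):

    import subprocess

    src = "360_0042.MP4"  # hypothetical Gear 360 clip, 3840x1920 dual fisheye

    # First 1920x1920 square (one lens), then the second square (the other lens).
    subprocess.run(["ffmpeg", "-i", src, "-vf", "crop=1920:1920:0:0",
                    "-c:v", "libx264", "-crf", "18", "lens_a.mp4"], check=True)
    subprocess.run(["ffmpeg", "-i", src, "-vf", "crop=1920:1920:1920:0",
                    "-c:v", "libx264", "-crf", "18", "lens_b.mp4"], check=True)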

Clearly, there are a variety of ways to get your raw footage stitched for editing. The results vary greatly between the different programs, so I made a video to compare the different stitching options on the same source clip. My first attempt was with a locked-off shot in the park, but that shot was too simple to show the differences, and it didn’t allow for comparison of the stabilization options available in some of the programs. I shot some footage from a moving vehicle to see how well the motion and shake would be handled by the various programs. The result is now available on YouTube, fading between each of the five labeled options over the course of the minute-long clip. I would categorize this as testing how well the various applications can handle non-ideal source footage, which happens a lot in the real world.

I didn’t feel that any of the stitching options were perfect solutions, so hopefully we will see further developments in that regard in the future. You may want to explore them yourself to determine which one best meets your needs. Once your footage is correctly mapped to equirectangular projection, ideally in a 2:1 aspect ratio, and the projects are rendered and exported (I recommend Cineform or DNxHR), you are ready to edit your processed footage.

Launch Premiere Pro and import your footage as you normally would. If you are using the Skybox Player plugin, turn on Adobe Transmit with the HMD selected as the only dedicated output (in the Skybox VR configuration window, I recommend setting the hot corner to top left, to avoid accidentally hitting the start menu, desktop hide or application close buttons during preview). In the playback monitor, you may want to right click the wrench icon and select Enable VR to preview a pan-able perspective of the video, instead of the entire distorted equirectangular source frame. You can cut, trim and stack your footage as usual, and apply color corrections and other non-geometry-based effects.

In version 11.1.2 of Premiere, there is basically one VR effect (VR Projection), which allows you to rotate the video sphere along all three axes. If you have the Skybox Suite for Premiere installed, you will have some extra VR effects. The Skybox Rotate Sphere effect is basically the same. You can add titles and graphics and use the Skybox Project 2D effect to project them into the sphere where you want. Skybox also includes other effects for blurring and sharpening the spherical video, as well as denoise and glow. If you have Kolor AVP installed, that adds two new effects as well. GoPro VR Horizon is similar to the other sphere rotation effects, but allows you to drag the image around in the monitor window to rotate it, instead of manually adjusting the axis values, so it is faster and more intuitive. The GoPro VR Reframe effect is applied to equirectangular footage to extract a flat perspective from within it. The field of view can be adjusted and rotated around all three axes.
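
One handy property of equirectangular footage is that the most common of those adjustments, spinning the sphere around the vertical axis to change where “front” is, amounts to a horizontal circular shift of the frame. A toy illustration (the plugins do this per pixel on the GPU, and a full three-axis re-orientation needs a proper spherical rotation rather than a shift):

    import numpy as np

    def yaw_equirect(frame, degrees):
        # Rotate an equirectangular frame around the vertical axis by shifting columns.
        h, w = frame.shape[:2]
        shift = int(round(degrees / 360.0 * w))
        return np.roll(frame, shift, axis=1)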

Most of the effects are pretty easy to figure out, but Skybox Project 2D may require some experimentation to get the desired results. Avoid placing objects near the edges of the 2D frame that you apply it to, to keep them facing toward the viewer. The rotate projection values control where the object is placed relative to the viewer. The rotate source values rotate the object at the location it is projected to. Personally, I think they should be placed in the reverse order in the effects panel.

Encoding the final output is not difficult; just send it to Adobe Media Encoder using either the H.264 or H.265 format. Make sure the “Video is VR” box is checked at the bottom of the Video Settings pane, and in this case that the frame layout is set to monoscopic. There are presets for some of the common frame sizes, but I would recommend lowering the bitrates, at least if you are using Gear 360 footage. Also, if you have ambisonic audio, set the channels to 4.0 in the audio pane.
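
If you prefer to script that delivery step, a rough ffmpeg equivalent of those settings looks like the sketch below. The bitrate is just a starting point for Gear 360 material, and the 360/ambisonic metadata still has to be added afterward, either via AME’s checkbox or the injector mentioned below.

    import subprocess

    # Encode an equirectangular master to H.264 for upload; tune bitrate to taste.
    subprocess.run([
        "ffmpeg", "-i", "edit_master.mov",
        "-c:v", "libx264", "-b:v", "30M", "-pix_fmt", "yuv420p",
        "-c:a", "aac", "-ac", "4",   # keep four channels if the mix is first-order ambisonic
        "delivery_360.mp4",
    ], check=True)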

Once the video is encoded, you can upload it directly to Facebook. If you want to upload to YouTube, exports from AME with the VR box checked should work fine, but for videos from other sources you will need to modify the metadata with this app here.  Once your video is uploaded to YouTube, you can embed it on any webpage that supports 2D web videos. And YouTube videos can be streamed directly to your Rift headset using the free DeoVR video player.

That should give you a 360-video production workflow from start to finish. I will post more updated articles as new software tools are developed, and as I get new 360 cameras with which to test and experiment.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

GoPro intros Hero6 and its first integrated 360 solution, Fusion

By Mike McCarthy

Last week, I traveled to San Francisco to attend GoPro’s launch event for its new Hero6 and Fusion cameras. The Hero6 is the next logical step in the company’s iteration of action cameras, increasing the supported frame rates to 4Kp60 and 1080p240, as well as adding integrated image stabilization. The Fusion, on the other hand, is a totally new product for them: an action-cam for 360-degree video. GoPro has developed a variety of other 360-degree video capture solutions in the past, based on rigs using many of their existing Hero cameras, but Fusion is their first integrated 360-video solution.

While the Hero6 is available immediately for $499, the Fusion is expected to ship in November for $699. While we got to see the Fusion and its footage, most of the hands-on aspects of the launch event revolved around the Hero6. Each of the attendees was provided a Hero6 kit to record the rest of the day’s events. My group was provided a ride on the RocketBoat through San Francisco Bay. This adventure took advantage of a number of the camera’s features, including the waterproofing, the slow motion and the image stabilization.

The Hero6

The big change within the Hero6 is the inclusion of GoPro’s new custom-designed GP1 image processing chip. This allows them to process and encode higher frame rates, and allows for image stabilization at many frame-rate settings. The camera itself is physically similar to the previous generations, so all of your existing mounts and rigs will still work with it. It is an easy swap out to upgrade the Karma drone with the new camera, which also got a few software improvements. It can now automatically track the controller with the camera to keep the user in the frame while the drone is following or stationary. It can also fly a circuit of 10 waypoints for repeatable shots, and overcoming a limitation I didn’t know existed, it can now look “up.”

There were fewer precise details about the Fusion. It is stated to be able to record a 5.2K video sphere at 30fps and a 3K sphere at 60fps. This is presumably the circumference of the sphere in pixels, and therefore the width of an equirectangular output. That would lead us to conclude that each individual fisheye recording is about 2,600 pixels wide, plus a little overlap for the stitch. (In this article, GoPro’s David Newman details how the company arrives at 5.2K.)
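
The arithmetic behind that conclusion is simple enough to sanity-check, treating “5.2K” as roughly 5,200 pixels of sphere circumference (my reading, not a published spec) and reserving a guessed overlap per seam:

    sphere_circumference = 5200   # "5.2K" equirectangular width, in pixels (approximate)
    overlap = 100                 # rough guess at per-seam overlap reserved for stitching
    per_lens = sphere_circumference / 2 + overlap
    print(per_lens)               # ~2700, i.e. each fisheye is in the 2,600-pixel range plus overlap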

GoPro Fusion for 360

The sensors are slightly laterally offset from one another, allowing the camera to be thinner and decreasing the parallax shift at the side seams, but adding a slight offset at the top and bottom seams. If the camera is oriented upright, those seams are the least important areas in most shots. They also appear to have a good solution for hiding the camera support pole within the stitch, based on the demo footage they were showing. It will be interesting to see what effect the Fusion camera has on the “culture” of 360 video. It is not the first affordable 360-degree camera, but it will definitely bring 360 capture to new places.

A big part of the equation for 360 video is the supporting software and the need to get the footage from the camera to the viewer in a usable way. GoPro already acquired Kolor’s Autopano Video Pro a few years ago to support image stitching for their larger 360 video camera rigs, so certain pieces of the underlying software ecosystem to support 360-video workflow are already in place. The desktop solution for processing the 360 footage will be called Fusion Studio, and is listed as coming soon on their website.

They have a pretty slick demonstration of flat image extraction from the video sphere, which they are marketing as “OverCapture.” This allows a cellphone to pan around the 360 sphere, which is pretty standard these days, but by recording that viewing in realtime they can output standard flat videos from the 360 sphere. This is a much simpler and more intuitive approach to virtual cinematography than trying to control the view with angles and keyframes in a desktop app.

This workflow should result in a flat video with a very fisheye look, similar to more traditional GoPro shots, due to the similar lens characteristics. There are a variety of possible approaches to handling the fisheye look. GoPro’s David Newman was explaining to me some of the solutions he has been working on to re-project GoPro footage into a sphere, to reframe or alter the field of view in a virtual environment. Based on their demo reel, it looks like they also have some interesting tools coming for using the unique functionality that 360 makes available to content creators, using various 360 projections for creative purposes within a flat video.
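
Under the hood, that kind of reframing is the inverse of stitching: build a virtual pinhole camera, work out the viewing direction of each of its pixels, and sample the equirectangular frame at the matching longitude and latitude. A minimal sketch (fixed yaw and pitch, no roll, idealized lens), not GoPro’s actual implementation:

    import numpy as np
    import cv2

    def reframe(equi, yaw_deg=0.0, pitch_deg=0.0, fov_deg=90.0, out_w=1920, out_h=1080):
        # Extract a flat perspective view from an equirectangular frame.
        eh, ew = equi.shape[:2]
        f = (out_w / 2.0) / np.tan(np.radians(fov_deg) / 2.0)  # focal length in pixels

        # Rays through each output pixel of the virtual pinhole camera.
        xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2.0, np.arange(out_h) - out_h / 2.0)
        x, y, z = xs, -ys, np.full_like(xs, f)

        # Rotate the rays by pitch (around X), then yaw (around Y).
        p, yw = np.radians(pitch_deg), np.radians(yaw_deg)
        y, z = y * np.cos(p) - z * np.sin(p), y * np.sin(p) + z * np.cos(p)
        x, z = x * np.cos(yw) + z * np.sin(yw), -x * np.sin(yw) + z * np.cos(yw)

        # Convert each ray to longitude/latitude, then to equirectangular pixel coordinates.
        lon = np.arctan2(x, z)
        lat = np.arcsin(y / np.sqrt(x * x + y * y + z * z))
        map_x = (((lon / (2 * np.pi) + 0.5) * ew) % ew).astype(np.float32)
        map_y = ((0.5 - lat / np.pi) * eh).astype(np.float32)
        return cv2.remap(equi, map_x, map_y, cv2.INTER_LINEAR)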

GoPro Software
On the software front, GoPro has also been developing tools to help its camera users process and share their footage. One of the inherent issues of action-camera footage is that there is basically no trigger discipline. You hit record long before anything happens, and then get back to the camera after the event in question is over. I used to get one-hour roll-outs that had 10 seconds of usable footage within them. The same is true when recording many attempts to do something before one of them succeeds.

Remote control of the recording process has helped with this a bit, but regardless you end up with tons of extra footage that you don’t need. GoPro is working on software tools that use AI and machine learning to sort through your footage and find the best parts automatically. The next logical step is to start cutting together the best shots, which is what Quikstories in their mobile app is beginning to do. As someone who edits video for a living, and is fairly particular and precise, I have a bit of trouble with the idea of using something like that for my videos, but for someone to whom the idea of “video editing” is intimidating, this could be a good place to start. And once the tools get to a point where their output can be trusted, automatically sorting footage could make even very serious editing a bit easier when there is a lot of potential material to get through. In the meantime though, I find their desktop tool Quik to be too limiting for my needs and will continue to use Premiere to edit my GoPro footage, which is the response I believe they expect of any professional user.

There are also a variety of new camera mount options available, including small extendable tripod handles in two lengths, as well as a unique “Bite Mount” (pictured, left) for POV shots. It includes a colorful padded float in case it pops out of your mouth while shooting in the water. The tripods are extra important for the forthcoming Fusion, to support the camera with minimal obstruction of the shot. And I wouldn’t recommend using the Fusion on the Bite Mount, unless you want a lot of head in the shot.

Ease of Use
Ironically, as someone who has processed and edited hundreds of hours of GoPro footage, and even worked for GoPro for a week on paper (as an NAB demo artist for Cineform during their acquisition), I don’t think I had ever actually used a GoPro camera. The fact that at this event we were all handed new cameras with zero instructions and expected to go out and shoot is a testament to how confident GoPro is that their products are easy to use. I didn’t have any difficulty with it, but the engineer within me wanted to know the details of the settings I was adjusting. Bouncing around with water hitting you in the face is not the best environment for learning how to do new things, but I was able to use pretty much every feature the camera had to offer during that ride with no prior experience. (Obviously I have extensive experience with video, just not with GoPro usage.) And I was pretty happy with the results. Now I want to take it sailing, skiing and other such places, just like a “normal” GoPro user.

I have pieced together a quick highlight video of the various features of the Hero6:


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Making the jump to 360 Video (Part 1)

By Mike McCarthy

VR headsets have been available for over a year now, and more content is constantly being developed for them. We should expect that rate to increase as new headset models are being released from established technology companies, prompted in part by the new VR features expected in Microsoft’s next update to Windows 10. As the potential customer base increases, the software continues to mature, and the content offerings broaden. And with the advances in graphics processing technology, we are finally getting to a point where it is feasible to edit videos in VR, on a laptop.

While a full VR experience requires true 3D content in order to render a custom perspective based on the position of the viewer’s head, there is a “video” version of VR, which is called 360 video. The difference between full VR and 360 video is that while both allow you to look around in every direction, 360 video is pre-recorded from a particular point, and you are limited to the view from that spot. You can’t move your head to see around behind something, like you can in true VR. But 360 video can still offer a very immersive experience and arguably better visuals, since they aren’t being rendered on the fly. 360 video can be recorded in stereoscopic or flat, depending on the capabilities of the cameras used.

Stereoscopic is obviously more immersive, less of a video dome and inherently supported by the nature of VR HMDs (Head Mounted Displays). I expect that stereoscopic content will be much more popular in 360 Video than it ever was for flat screen content. Basically the viewer is already wearing the 3D glasses, so there is no downside, besides needing twice as much source imagery to work with, similar to flat screen stereoscopic.

There are a variety of options for recording 360 video, from a single ultra-wide fisheye lens on the Fly360, to dual 180-degree lens options like the Gear 360, Nikon KeyMission and Garmin Virb. GoPro is releasing the Fusion, which will fall into this category as well. The next step up is more lenses, with cameras like the Orah 4i or the Insta360 Pro. Beyond that, you are stepping into the much more expensive rigs with lots of lenses and lots of stitching, but usually much higher final image quality, like the GoPro Omni or the Nokia Ozo. There are also countless rigs that use an array of standard cameras to capture 360 degrees, but these solutions are much less integrated than the all-in-one products that are now entering the market. Regardless of the camera you use, you are going to be recording one or more files in a pixel format fairly unique to that camera that will need to be processed before it can be used in the later stages of the post workflow.

Affordable cameras

The simplest and cheapest 360 camera option I have found is the Samsung Gear 360. There are two totally different models with the same name, usually differentiated by the year of their release. I am using the older 2016 model, which has a higher resolution sensor, but records UHD instead of the slightly larger full 4K video of the newer 2017 model.

The Gear 360 records two fisheye views that are just over 180 degrees, from cameras situated back to back in a 2.5-inch sphere. Both captured image circles are recorded onto a single frame, side by side, resulting in files with a 2:1 aspect ratio. These are encoded into JPEG (7776×3888 stills) or HEVC (3840×1920 video) at 30Mb/s and saved onto a MicroSD card. The camera is remarkably simple to use, with only three buttons and a tiny UI screen to select the recording mode and resolution. If you have a Samsung Galaxy phone, a variety of other functions become available, like remote control and streaming the output to the phone as a viewfinder. Even without a Galaxy phone, the camera did everything I needed to generate 360 footage to stitch and edit with, but it was cool to have a remote viewfinder for the driving shots.

Pricier cameras

One of the big challenges of shooting with any 360 camera is how to avoid getting gear and rigging in the shot, since the camera records everything around it. Even the tiny integrated tripod on the Gear 360 is visible in the shots, and putting it on the plate of my regular DSLR tripod fills the bottom of the footage. My solution was to use the thinnest support I could to keep the rest of the rigging as far from the camera as possible, and therefore smaller from its perspective. I created a couple of options to shoot with, which are pictured below. The results are much less intrusive in the recorded images. Obviously, besides the camera support, there is the issue of everything else in the shot, including the operator. Since most 360 videos are locked off, an operator may not be needed, but there is no “behind the camera” for hiding gear or anything else. Your set needs to be considered in every direction, since it will all be visible to your viewer. If you can see the camera, it can see you.

There are many different approaches to storing 360 images, which are inherently spherical, as a video file, which is inherently flat. This is the same issue that cartographers have faced for hundreds of years — creating flat paper maps of a planet that is inherently curved. While there are sphere map, cube map and pyramid projection options (among others) based on the way VR headsets work, the equirectangular format has emerged as the standard for editing and distribution encoding, while other projections are occasionally used for certain effects processing or other playback options.

Usually the objective of the stitching process is to get the images from all of your lenses combined into a single frame with the least amount of distortion and the fewest visible seams. There are a number of software solutions that do this, from After Effects plugins, to dedicated stitching applications like Kolor AVP and Orah VideoStitch-Studio to unique utilities for certain cameras. Once you have your 360 video footage in the equirectangular format, most of the other steps of the workflow are similar to their flat counterparts, besides VFX. You can cut, fade, title and mix your footage in an NLE and then encode it in the standard H.264 or H.265 formats with a few changes to the metadata.

Technically, the only thing you need to add to an existing 4K editing workflow in order to make the jump to 360 video is a 360 camera. Everything else could be done in software, but the other thing you will want is a VR headset or HMD. It is possible to edit 360 video without an HMD, but it is a lot like grading a film using scopes but no monitor. The data and tools you need are all right there, but without being able to see the results, you can’t be confident of what the final product will be like. You can scroll around the 360 video in the view window, or see the whole projected image all distorted, but it won’t have the same feel as experiencing it in a VR headset.

360 Video is not as processing intensive as true 3D VR, but it still requires a substantial amount of power to provide a good editing experience. I am using a Thinkpad P71 with an Nvidia Quadro P5000 GPU to get smooth performance during all these tests.

Stay tuned for Part 2 where we focus on editing 360 Video.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been working on new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Director Ava DuVernay named VES Summit’s keynote speaker

Director/producer/writer Ava DuVernay has been named keynote speaker at the 2017 VES Summit, “Inspiring Change: Building on 20 Years of VES Innovation.” The forum, which takes place Saturday, October 28, celebrates the Visual Effects Society’s 20th anniversary and brings together creatives, executives and visionaries from a variety of disciplines to discuss the evolution of visual imagery and the VFX industry landscape in a TED Talks-like atmosphere.

At the 2012 Sundance Film Festival, DuVernay won the Best Director Prize for her second feature film Middle of Nowhere, which she also wrote and produced. Her 2014 film Selma was nominated for the Academy Award for Best Picture. In 2017, she was nominated for the Academy Award for Best Documentary Feature for her film 13th. Her current directorial work includes the dramatic television series Queen Sugar and the upcoming Disney feature film A Wrinkle in Time.

DuVernay made her directorial debut with the acclaimed 2008 hip-hop documentary This Is The Life, and she has gone on to direct several network documentaries, including Venus Vs. for ESPN. She has also directed significant short-form work, including August 28: A Day in the Life of a People, commissioned by The Smithsonian’s National Museum of African American History and Culture, as well as fashion and beauty films for Prada and Apple.

Other speakers include:
– Syd Mead, visual futurist and conceptual artist
– President of IMAX Home Entertainment Jason Brenek on “Evolution in Entertainment: VR, Cinema and Beyond”
– CEO of SSP Blue Hemanshu Nigam on “When Hackers Attack: How Can Hollywood Fight Back?”
– Head of Adobe Research Gavin Miller on “Will the Future Look More Like Harry Potter or Star Trek?”
– Senior research engineer at Autodesk Evan Atherton on “The Age of Imagination”
– Founder/CEO of the Emblematic Group Nonny de la Peña on “Creating for Virtual, Augmented and Mixed Realities”

Additional speakers and roundtable moderators will be announced soon. The 2017 VES Summit takes place at the Sofitel Hotel Beverly Hills.

Review: Blackmagic’s Fusion 9

By David Cox

At Siggraph in August, Blackmagic Design released a new version of its compositing software Fusion. For those not familiar with Fusion, it is a highly flexible node-based compositor that can composite in 2D and 3D spaces. Its closest competitor is Nuke from The Foundry.

The raft of new updates in Version 9 could be categorized into one of two areas: features created in response to user requests, and a set of tools for VR. Also announced with the new release is a price drop to $299 for the full studio version, which, judging by global resellers instantly running out of stock (Fusion ships via dongle), seems to have been a popular move!

As with other manufacturers in the film and broadcast area, the term “VR” is a little misused as they are really referring to “360 video.” VR, although a more exciting term, would demand interactivity. That said, as a post production suite for 360 video, Fusion already has a very strong tool set. It can create, manipulate, texture and light 3D scenes made from imported CGI models and built-in primitives and particles.

Added in Version 9 is a spherical camera that can capture a scene as a 360 2D or stereo 3D image. In addition, new tools are provided to cross-convert between many 360 video image formats. Another useful tool allows a portion of a 360-degree image to be unwrapped (or un-distorted) so that restoration or compositing work can be easily carried out on it before it is perfectly re-wrapped back into the 360-degree image.

There is also a new stabilizer for 360 wrap-around shots. A neat feature is that Fusion 9 can directly drive VR headsets such as Oculus Rift. Within Fusion, any node can be routed to any viewing monitor and the VR headset simply presents itself as an extra one of those.

Notably, Blackmagic has opted not to tackle 360-degree image stitching — the process by which images from multiple cameras facing in different directions are “stitched” together to form a single wrap-around view. I can understand this — on one hand, there are numerous free or cheap apps that perform stitching and so there’s no need for Blackmagic to reinvent that wheel. On the other hand, Blackmagic targets the mass user area, and given that 360 video production is a niche activity, productions that strap together multiple cameras form an even smaller and decreasing niche due to the growing number of single-step 360-degree cameras that provide complete wrap-around images without the need for stitching.

Moving on from VR/360, Fusion 9 now boasts some very significant additional features. While some Fusion users had expressed concern that Blackmagic was favoring Resolve, it is now clear that the Fusion development team has been very busy indeed.

Camera Tracker
First up is an embedded camera tracker and solver. Such a facility aims to deduce how the original camera in a live-action shoot moved through the scene and what lens must have been on it. From this, a camera tracker produces a virtual 3D scene into which a compositor can add objects that then move precisely with the original shot.

Fusion 9’s new camera tracker performed well in tests. It requires the user to break the process down into three logical steps: track, refine and export. Fusion initially offers auto-placed trackers, which follow scores of details in the scene quite quickly. The operator then removes any obviously silly trackers (like the ones chasing around the moving people in a scene) and sets Fusion about the task of “solving” the camera move.

Once done, Fusion presents a number of features to allow the user to measure the accuracy of the resulting track and to locate and remove trackers that are adversely affecting that result. This is a circular process by which the user can incrementally improve the track. The final track is then converted into a 3D scene with a virtual camera and a point cloud to show where the trackers would exist in 3D space. A ground plane is also provided, which the user can locate during the tracking process.
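
The “track” stage of that process is conceptually the same as any point tracker: find well-textured features and follow them from frame to frame, with the clever part, the solve, happening afterward. A tiny sketch of that first stage using OpenCV, purely as an illustration of the idea rather than anything Fusion does internally:

    import cv2

    cap = cv2.VideoCapture("plate.mp4")  # hypothetical live-action plate
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    # Auto-place trackers on strong corners, like the initial automatic pass.
    points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300, qualityLevel=0.01, minDistance=10)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Follow each point into the new frame with pyramidal Lucas-Kanade optical flow.
        new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
        points = new_points[status.flatten() == 1].reshape(-1, 1, 2)  # drop lost trackers
        prev_gray = gray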

While Fusion 9’s camera tracker perhaps doesn’t have all the features of a dedicated 3D tracker such as SynthEyes from Andersson Technologies, it does satisfy the core need and has plenty of controls to ensure that the tool is flexible enough to deal with most scenarios. It will certainly be received as a welcome addition.

Planar Tracker
Next up is a built-in “planar” tracker. Planar trackers work differently than classic point trackers, which simply try to follow a small area of detail. A planar tracker follows a larger area of a shot, which makes up a flat plane — such as a wall or table top. From this, the planar tracker can deduce rotation, location, scale and perspective.
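
Mathematically, what a planar tracker recovers for each frame is a homography, the 3x3 transform that maps the reference plane into the current frame, and rotation, location, scale and perspective all fall out of it. A hedged OpenCV sketch of that estimation plus a simple corner pin; the matched points are assumed to come from a feature tracker, which is the part the dedicated tools do best:

    import numpy as np
    import cv2

    # Four (or more) matched points on the flat surface: reference frame vs. current frame.
    ref_pts = np.float32([[100, 100], [500, 110], [510, 400], [95, 390]])
    cur_pts = np.float32([[120, 130], [515, 125], [540, 430], [110, 415]])

    # The homography maps the reference plane into the current frame;
    # invert it to stabilize the plane back to its reference position.
    H, _ = cv2.findHomography(ref_pts, cur_pts, cv2.RANSAC, 3.0)

    # Corner-pin an insert (e.g. a screen replacement) onto the tracked plane.
    insert = cv2.imread("insert.png")  # hypothetical graphic to comp in
    src_corners = np.float32([[0, 0], [insert.shape[1], 0],
                              [insert.shape[1], insert.shape[0]], [0, insert.shape[0]]])
    H_pin, _ = cv2.findHomography(src_corners, cur_pts)
    warped = cv2.warpPerspective(insert, H_pin, (1920, 1080))  # assumed output frame size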

Fusion 9 Studio’s new planar tracker also performed well in tests. It assessed the track quickly and was not easily upset by foreground objects obscuring parts of the tracked area. The resulting track can either be used directly to insert another image into the resulting plane or to stabilize the shot, or indirectly by producing a separate Planar Transform node. This is used to warp any other asset such as a matte for rotoscoping work.

Inevitably, any planar tracker will be compared to the long-established “daddy” of them all, Mocha Pro from Boris FX. At a basic level, Fusion’s planar tracker worked just as well as Mocha, creating solid tracks from a user-defined area nicely and quickly. However, I would think that for complex rotoscoping, where a user will have many roto layers, driven by many tracking sources, with other layers acting as occlusion masks, Mocha’s working environment would be easier to control. Such a task would lead to many, many wired-up nodes in Fusion, whereas Mocha would present the same functions within a simpler layer list. Of course, Mocha Pro is available as an OFX plug-in for Fusion Studio anyway, so users can have the best of both worlds.

Delta Keyer
Blackmagic also added a new keyer to Fusion called the Delta Keyer. It is a color difference keyer with a wide range of controls to refine the resulting matte and the edges of the key. It worked well when tested against one of my horrible greenscreens, something I keep for these very occasions!

The Delta Keyer can also take a clean plate as a reference input, which is essentially a frame of the green/bluescreen studio without the object to be keyed. The Delta Keyer then uses this to understand which deviations from the screen color represent the foreground object and which are just part of an uneven screen color.

To assist with this process, there is also a new Clean Plate node, which is designed to create an estimate of a clean plate in the absence of one being available from the shoot (for example, if the camera was moving). The combination of the clean plate and the Delta Keyer produced good results when challenged to extract subtle object shadows from an unevenly lit greenscreen shot.
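
At its simplest, a color difference key builds the matte from how much greener a pixel is than it is red or blue, and a clean plate lets you normalize that difference against the real, uneven screen rather than a single reference color. A stripped-down sketch of the idea, nowhere near the refinement controls the Delta Keyer actually exposes:

    import numpy as np

    def color_difference_matte(frame, clean_plate):
        # frame, clean_plate: HxWx3 RGB float arrays in [0, 1] from a greenscreen shot.
        # Returns an alpha matte: 1.0 = foreground, 0.0 = screen.
        def green_excess(img):
            r, g, b = img[..., 0], img[..., 1], img[..., 2]
            return g - np.maximum(r, b)  # how strongly the pixel "looks like screen"

        screen = np.clip(green_excess(clean_plate), 1e-4, None)  # per-pixel screen reference
        ratio = green_excess(frame) / screen   # ~1 on bare screen, ~0 on the subject
        return np.clip(1.0 - ratio, 0.0, 1.0)  # shadows land in between, so they survive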

Studio Player
Studio Player is also new for Fusion 9 Studio; it’s a multi-station shot review tool. Multiple versions of clips and comps can be added to the Studio Player’s single layer timeline, where simple color adjustments and notes can be added. A neat feature is that multiple studio players in different locations can be slaved together so that cross-facility review sessions can take place, with everyone looking at the same thing at the same time, which helps!

Fusion 9 Studio also supports the writing of Apple-approved ProRes from all its supported platforms, including Windows and Linux. Yep – you read that right. Other format support has also been widened and improved, with faster native handling of DNxHR codecs, for example.

Summing Up
All in all, the updates to Fusion 9 are comprehensive and very much in line with what professional users have been asking for. I think it certainly demonstrates that Blackmagic is as committed to Fusion as Resolve, and at $299, it’s a no-brainer for any professional VFX artist to have available to them.

Of course, the price drop shows that Blackmagic is also aiming Fusion squarely at the mass independent filmmaker market. Certainly, with Resolve and Fusion, those users will have pretty much all the post tools they will need.

Fusion by its nature and heritage is a more complex beast to learn than Resolve, but it is well supported with a good user manual, forums and video tutorials. I would think it likely that for this market, Fusion might benefit from some minor tweaks to make it more intuitive in certain areas. I also think the join between Resolve and Fusion will provide a lot of interest going forward for this market. Adobe has done a masterful job bridging Premiere and After Effects. The join between Resolve and Fusion is more rudimentary, but if Blackmagic gets this right, they will have a killer combination.

Finally, Fusion 9 extends what was already a very powerful and comprehensive compositing suite. It has become my primary compositing device and the additions in version 9 only serve to cement that position.


David Cox is a VFX compositor and colorist with 20+ years experience. He started his career with MPC and The Mill before forming his own London-based post facility. Cox recently created interactive projects with full body motion sensors and 4D/AR experiences.

Charlieuniformtango adds director Elliot Dillman to roster

Director Elliot Dillman has joined Charlieuniformtango for national commercial and VR representation. His directorial experience spans broadcast commercials, branded VR content, music videos and multi-camera live events.

He has been at the helm of national ad campaigns for a number of top agencies including GSD&M, CP+B, Leo Burnett, Y&R and BBDO, directing spots for brands such as Kraft, Nerf, GMC and Subway.

Recently, Dillman’s work with Verizon and Momentum Worldwide received two 2017 Clio Awards for the “Virtual Gridiron” VR experience at Super Bowl LI. Also, his short film, “In Harmony,” part of the Oculus VR for Good program, premiered at the Oculus house during the 2017 Sundance Film Festival and was an official selection of SXSW 2017. The film examines the Harmony Project’s work to help Los Angeles kids stay in school through educational music programs.

Dillman is the oldest son of Emmy-winning director Ray Dillman, so he grew up on film sets. He started working regularly as a PA at age 12 for production companies like Gartner and MJZ. While studying at Loyola Marymount University’s School of Film and Television, Dillman simultaneously began his directing career with multi-camera live concert shoots for the ESPN Summer and Winter X Games. Shortly after graduating, he directed feature film tie-in spots for Sony Pictures, Warner Bros. and Paramount, along with ad campaigns for the Walt Disney Company and Royal Purple Motor Oil, amongst others.