Lucid and EYS3D partner on VR180 depth camera module

EYS3D Microelectronics Technology, the company behind embedded camera modules in some top-tier AR/VR headsets, has partnered with AI startup Lucid, whose technology will power EYS3D's next-generation depth-sensing camera module, Axis. This means that a single, small, handheld device can capture accurate 3D depth maps with up to a 180-degree field of view at high resolution, allowing content creators to scan, reconstruct and output precise 3D point clouds.

This new camera module, which was demoed for the first time at CES, will give developers, animators and game designers a way to transform the physical world into a virtual one, ramping up content for 3D, VR and AR, all with superior resolution and field of view at a lower cost than some technologies currently available.

A device that captures the environment exactly as you perceive it, but enhanced with precise depth and distance understanding, could help eliminate the boundaries between what you see in the real world and what you can create in the VR and AR world. This is what EYS3D's Lucid-powered Axis camera module aims to bring to content creators, as they gain the "super power" of transforming anything in their vision into a 3D object or scene that others can experience, interact with and walk through.

What was previously possible only with eight to 16 high-end DSLR cameras and expensive software or depth sensors is now combined into one tiny camera module with stereo lenses paired with IR sensors. Axis will cover up to a 180-degree field of view while providing millimeter-accurate 3D in point cloud or depth map format. The device offers a simple plug-and-play experience through USB 3.1 Gen 1/2 and supported Windows and Linux software suites, allowing users to develop their own depth applications, such as 3D reconstructing an entire scene, scanning faces into 3D models or simply determining how far away an object is.
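As a rough illustration of what such a depth application can look like (a generic sketch, not EYS3D's or Lucid's SDK, with placeholder camera intrinsics), a per-pixel depth map can be back-projected into a 3D point cloud:

```python
import numpy as np

def depth_map_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into an Nx3 point cloud.

    depth  : 2D array of per-pixel depth in meters
    fx, fy : focal lengths in pixels (assumed intrinsics)
    cx, cy : principal point in pixels (assumed intrinsics)
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # pinhole back-projection
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Example with synthetic data (a real module would stream depth over USB):
depth = np.full((480, 640), 2.0)                      # flat wall 2 m away
cloud = depth_map_to_point_cloud(depth, 525.0, 525.0, 320.0, 240.0)
print(cloud.shape)  # (307200, 3)
```

A simple pinhole model like this only approximates a narrow field of view; a real 180-degree module would need a fisheye or equirectangular projection model instead.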

Lucid’s AI-enhanced 3D/depth solution, known as 3D Fusion Technology, is currently deployed in many devices, such as 3D cameras, robots and mobile phones, including the Red Hydrogen One, which just launched through AT&T and Verizon nationwide.

EYS3D’s new depth camera module powered by Lucid will be available in Q3 2019.

HPA releases 2019 Tech Retreat program, includes eSports

The Hollywood Professional Association (HPA) has set its schedule for the 2019 HPA Tech Retreat, set for February 11-15. The Tech Retreat, which is celebrating its 25th year, takes place over the course of a week at the JW Marriott Resort & Spa in Palm Desert, California.

The HPA Tech Retreat spans five days of sessions, technology demonstrations and events. During this week, important aspects of production, broadcast, post, distribution and related M&E trends are explored. One of the key differentiators of the Tech Retreat is its strict adherence to a non-commercial focus: marketing-oriented presentations are prohibited except at breakfast roundtables.

“Once again, we’ve received many more submissions than we could use,” says Mark Schubin, the Program Maestro of the HPA Tech Retreat. “To say this year’s were ‘compelling’ is an understatement. We could have programmed a few more days. Rejecting terrific submissions is always the hardest thing we have to do. I’m really looking forward to learning the latest on HDR, using artificial intelligence to restore old movies and machine learning to deal with grunt work, the Academy’s new software foundation, location-based entertainment with altered reality and much more.”

This year’s program is as follows:

Monday, February 11: TR-X
eSports: Dropping the Mic on Center Stage
Separate registration required
A half day of targeted panels, speakers and interaction, TR-X will focus on the rapidly growing arena of eSports, with a keynote from Yvette Martinez, CEO – North America of eSports organizer and production company ESL North America.
Tuesday, February 12: Supersession
Next-Gen Workflows and Infrastructure: From the Set to the Consumer

Wednesday, February 13: Main Program Highlights
• Mark Schubin’s Technology Year in Review
• Washington Update (Jim Burger, Thompson Coburn LLP)
The highly anticipated review of legislation and its impact on our business from a leading Washington attorney.

• Deep Fakes (Moderated by Debra Kaufman, ETCentric; Panelists Marc Zorn, HBO; Ed Grogan, Department of Defense; Alex Zhukov, Video Gorillas)
It might seem nice to be able to use actors long dead, but the concept of “fake news” takes a terrifying new turn with deepfakes, the term that Wikipedia describes as a portmanteau of “deep learning” and “fake.” Although people have been manipulating images for centuries – long before the creation of Adobe Photoshop – the new AI-powered tools allow the creation of very convincing fake audio and video.

• The Netflix Media Database (Rohit Puri, Netflix)
An optimized user interface, meaningful personalized recommendations, efficient streaming and a high-quality catalog of content are the principal factors that define the Netflix end-user experience. A myriad of business workflows of varying complexities come together to realize this experience. Under the covers, they use computationally expensive computer vision, audio processing and natural language processing-based media analysis algorithms. These algorithms generate temporally and spatially dynamic metadata that is shared across the various use cases. The Netflix Media DataBase (NMDB) is a multi-tenant data system that is used to persist this deeply technical metadata about various media assets at Netflix and that enables querying the same at scale. The "shared nothing" distributed database architecture allows NMDB to store large amounts of media timeline data, thus forming the backbone for various Netflix media processing systems.
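To make "temporally dynamic metadata" concrete, here is a toy model of time-ranged annotations and an interval query; the field names and labels are illustrative and are not the actual NMDB schema:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MediaAnnotation:
    """One time-ranged metadata record (illustrative, not the real NMDB schema)."""
    asset_id: str
    label: str            # e.g. "face", "loud_music", "text_on_screen"
    start_ms: int         # start of the interval on the media timeline
    end_ms: int           # end of the interval

def query_overlapping(annotations: List[MediaAnnotation],
                      asset_id: str, t0_ms: int, t1_ms: int) -> List[MediaAnnotation]:
    """Return annotations on one asset that overlap the window [t0_ms, t1_ms)."""
    return [a for a in annotations
            if a.asset_id == asset_id and a.start_ms < t1_ms and a.end_ms > t0_ms]

# Toy usage: find everything detected between 10 s and 20 s of an asset.
store = [MediaAnnotation("tt123", "face", 8_000, 12_000),
         MediaAnnotation("tt123", "text_on_screen", 15_000, 16_500)]
print(query_overlapping(store, "tt123", 10_000, 20_000))
```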

• AI Film Restoration at 12 Million Frames per Second (Alex Zhukov, Video Gorillas)

• Is More Media Made for Subways Than for TV and Cinema? (and does it Make More $$$?) (Andy Quested, BBC)

• Broadcasters Panel (Moderator: Matthew Goldman, MediaKind)

• CES Review (Peter Putman, ROAM Consulting)
Pete Putman traveled to Las Vegas to see what’s new in the world of consumer electronics and returns to share his insights with the HPA Tech Retreat audience.

• 8K: Whoa! How’d We Get There So Quickly (Peter Putman, ROAM Consulting)

• Issues with HDR Home Video Deliverables for Features (Josh Pines, Technicolor)

• HDR “Mini” Session
• HDR Intro: Seth Hallen, Pixelogic
• Ambient Light Compensation for HDR Presentation: Don Eklund, Sony Pictures Entertainment
• HDR in Anime: Haruka Miyagawa, Netflix
• Pushing the Limits of Motion Appearance in HDR: Richard Miller, Pixelworks
• Downstream Image Presentation Management for Consumer Displays:
• Moderator: Michael Chambliss, International Cinematographers Guild
• Michael Keegan, Netflix
• Annie Chang, UHD Alliance
• Steven Poster, ASC, International Cinematographers Guild
• Toshi Ogura, Sony

• Solid Cinema Screens with Front Sound: Do They Work? (Julien Berry, Delair Studios)
Direct-view displays bring high image quality to the cinema but suffer from a low pixel fill factor that can lead to heavy moiré and aliasing patterns. Cinema projectors have a much better fill factor, which avoids most of those issues, although some moiré can still be produced by the screen perforations needed for the audio. With the advent of high-contrast, EDR and soon HDR image quality in cinema, screen perforations affect the perceived brightness and contrast of the image, though the effect has never been quantified, since some perforations have always been needed for cinema audio. With the advent of high-quality cinema audio systems that work with solid screens, it is now possible to quantify this effect.

Thursday, February 14: Main Program Highlights

• A Study Comparing Synthetic Shutter and HFR for Judder Reduction (Ianik Beitzel and Aaron Kuder, ARRI and Stuttgart Media University (HdM))

• Using Drones and Photogrammetry Techniques to Create Detailed (High Resolution) Point Cloud Scenes (Eric Pohl, Singularity Imaging)
Drone aerial photography may be used to create multiple geotagged images that are processed to create a 3D point cloud set of a ground scene. The point cloud may be used for production previsualization or background creation for videogames or VR/AR new-media products.

• Remote and Mobile Production Panel (Moderator: Mark Chiolis, Mobile TV Group; Wolfgang Schram, PRG; Scott Rothenberg, NEP)
With a continuing appetite for content from viewers of all the major networks, as well as niche networks, streaming services, web, eGames/eSports and venue and concert-tour events, the battle is on to make it possible to watch almost every sporting and entertainment event that takes place, all live as it is happening. Key members of the remote and mobile community explore what’s new and what workflows are behind the content production and delivery in today’s fast-paced environments. Expect to hear about new REMI applications, IP workflows, AI, UHD/HDR, eGames, and eSports.

• IMSC 1.1: A Single Subtitle and Caption Format for the Entertainment Chain (Pierre-Anthony Lemieux, Sandflow Consulting (supported by MovieLabs); Dave Kneeland, Fox)
IMSC is a W3C standard for worldwide subtitles/captions, and the result of an international collaboration. The initial version of IMSC (IMSC 1) was published in 2016, and has been widely adopted, including by SMPTE, MPEG, ATSC and DVB. With the recent publication of IMSC 1.1, we now have the opportunity to converge on a single subtitle/caption format across the entire entertainment chain, from authoring to consumer devices. IMSC 1.1 improves on IMSC 1 with support for HDR, advanced Japanese language features, and stereoscopic 3D. Learn about IMSC’s history, capabilities, operational deployment, implementation experience, and roadmap — and how to get involved.

• ACESNext and the Academy Digital Source Master: Extensions, Enhancements and a Standardized Deliverable (Andy Maltz, Academy of Motion Picture Arts & Sciences; Annie Chang, Universal Pictures)

• Mastering for Multiple Display and Surround Brightness Levels Using the Human Perceptual Model to Ensure the Original Creative Intent Is Maintained (Bill Feightner, Colorfront)
Maintaining a consistent creative look across today's many different cinema and home displays can be a big challenge, especially with the wide disparity in possible display brightness and contrast, as well as in viewing environments or surrounds. Even if it were possible to have individual creative sessions for each, maintaining creative consistency would be very difficult at best. By using knowledge of how the human visual system works (the perceptual model), processing that fits source content to a given display's brightness and surround can be applied automatically while maintaining the original creative intent with little to no trimming.

• Cloud: Where Are We Now? (Moderator: Erik Weaver, Western Digital)

• Digitizing Workflow – Leveraging Platforms for Success (Roger Vakharia, Salesforce)
While the business of content creation hasn't changed much over time, the technologies enabling processes around production, digital supply chain and marketing resource management, among other areas, have become increasingly complex. Enabling an agile, platform-based workflow can help decrease time and complexity, but cost, scale and business sponsorship are often inhibitors to success.

Driving efficiency at scale can be daunting but many media leaders have taken the plunge to drive agility across their business process. Join this discussion to learn best practices, integrations, workflows and techniques that successful companies have used to drive simplicity and rigor around their workflow and business process.

• Leveraging Machine Learning in Image Processing (Rich Welsh, Sundog Media Toolkit)
How to use AI (ML and DL networks) to perform "creative" tasks that are boring and that humans spend time doing but would rather not, with working real-world examples included.

• Leveraging AI in Post Production: Keeping Up with Growing Demands for More Content (Van Bedient, Adobe)
Expectations for more and more content continue to increase, yet staffing remains the same or only marginally bigger. How can advancements in machine learning help content creators? AI can be an incredible boon, removing repetitive tasks and tedious steps and allowing humans to concentrate on the creative; ultimately, AI can provide the one currency creatives yearn for more than anything else: time.

• Deploying Component-Based Workflows: Experiences from the Front Lines (Moderator: Pierre-Anthony Lemieux, Sandflow Consulting (supported by MovieLabs))
The content landscape is shifting, with an ever-expanding essence and metadata repertoire, viewing experiences, global content platforms and automated workflows. Component-based workflows and formats, such as the Interoperable Master Format (IMF) standard, are being deployed to meet the challenges brought by this shift. Come and join us for a first-hand account from those on the front lines.

• Content Rights, Royalties and Revenue Management via Blockchain (Adam Lesh, SingularDTV)
The blockchain entertainment economy: adding transparency, disintermediating the supply chain, and empowering content creators to own, manage and monetize their IP to create sustainable, personal and connected economies. As we all know, rights and revenue (including royalties, residuals, etc.) management is a major pain point for content creators in the entertainment industry.

Friday, February 15: Main Program Highlights

• Beyond SMPTE Time Code: The TLX Project (Peter Symes)
SMPTE Time Code, ST 12, was developed and standardized in the 1970s to support the emerging field of electronic editing. It has been, and continues to be, a robust standard; its application is almost universal in the media industry, and the standard has found use in other industries. However, ST 12 was developed using criteria and restrictions that are not appropriate today, and it has many shortcomings in today’s environment.

A new project in SMPTE, the Extensible Time Label (TLX), is gaining traction and appears to have the potential to meet a wide range of requirements. TLX is designed to be transport-agnostic and to use a modern data structure.
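As a point of reference, classic ST 12-style timecode arithmetic (non-drop-frame, integer frame rates only) can be sketched as below; the helper names are illustrative, and the example hints at why a fixed HH:MM:SS:FF label becomes awkward for today's high and fractional frame rates:

```python
def timecode_to_frames(tc: str, fps: int) -> int:
    """Convert 'HH:MM:SS:FF' (non-drop-frame) to a frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    if ff >= fps:
        raise ValueError("frame field exceeds the frame rate")
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_timecode(frames: int, fps: int) -> str:
    """Convert a frame count back to 'HH:MM:SS:FF' (non-drop-frame)."""
    ff = frames % fps
    ss = (frames // fps) % 60
    mm = (frames // (fps * 60)) % 60
    hh = frames // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

# 01:00:00:00 at 24 fps is 86,400 frames; at 120 fps the same label means a
# different duration, and 23.976/29.97 fps need drop-frame rules that this
# simple fixed-field model cannot express.
print(timecode_to_frames("01:00:00:00", 24))   # 86400
print(frames_to_timecode(86400, 24))           # 01:00:00:00
```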

• Blindsided: The Game-Changers We Might Not See Coming (Mark Harrison, Digital Production Partnership)
The world's number one company for gaming revenue makes as much as Sony and Microsoft combined. It isn't American or Japanese. Marketers project that by 2019, spending on video advertising on out-of-home displays will be as important as spending on TV. Meanwhile, a single US tech giant could buy every franchise of the top five US sports leagues. From its off-shore reserves. And still have $50 billion in change.

We all know consumers like OTT video. But that’s the least of it. There are trends in the digital economy that, if looked at globally, could have sudden, and profound, implications for the professional content creation industry. In this eye-widening presentation, Mark Harrison steps outside the western-centric, professional media industry perspective to join the technology, consumer and media dots and ask: what could blindside us if we don’t widen our point of view?

• Interactive Storytelling: Choose What Happens Next (Andy Schuler, Netflix)
Looking to experiment with nonlinear storytelling, Netflix launched its first interactive episodes in 2017. Both in children's programming, the shows encouraged even the youngest of viewers to touch or click on their screens to control the trajectory of the story (think Choose Your Own Adventure books from the 1980s). How did Netflix overcome some of the more interesting technical challenges of the project (i.e., mastering, encoding, streaming), how was SMPTE IMF used to streamline the process, and why are more formalized mastering practices needed for future projects?

• HPA Engineering Excellence Award Winners (Moderator: Joachim Zell, EFILM, Chair HPA Engineering Excellence Awards; Joe Bogacz, Canon; Paul Saccone, Blackmagic Design; Lance Maurer, Cinnafilm; Michael Flathers, IBM; Dave Norman, Telestream).

Since the HPA launched in 2008, the HPA Awards for Engineering Excellence have honored some of the most groundbreaking, innovative, and impactful technologies. Spend a bit of time with a select group of winners and their contributions to the way we work and the industry at large.

• The Navajo Strategic Digital Plan (John Willkie, Luxio)

• Adapting to a COTS Hardware World (Moderator: Stan Moote, IABM)
Transitioning to off-the-shelf hardware is one of the biggest topics on all sides of the industry, from manufacturers, software and service providers through to system integrators, facilities and users themselves. It's also incredibly uncomfortable. Post production was an early adopter of specialized workstations (e.g. SGI) and has now embraced a further migration up the stack to COTS hardware and IP networks, whether bare metal, virtualized, hybrid or fully cloud-based. As the industry deals with the global acceleration of formats, platforms and workflows, what are the limits of COTS hardware when software innovation is continually testing the limits of general-purpose CPUs, GPUs and network protocols? The panel covers "hidden" issues in using COTS hardware from the point of view of users and facility operators, as well as manufacturers, services and systems integrators.

• Academy Software Foundation: Enabling Cross-Industry Collaboration for Open Source Projects (David Morin, Academy Software Foundation)
In August 2018, the Academy of Motion Picture Arts and Sciences and The Linux Foundation launched the Academy Software Foundation (ASWF) to provide a neutral forum for open source software developers in the motion picture and broader media industries to share resources and collaborate on technologies for image creation, visual effects, animation and sound. This presentation will explain why the Foundation was formed and how it plans to increase the quality and quantity of open source contributions by lowering the barrier to entry for developing and using open source software across the industry.

Panasas’ new ActiveStor Ultra targets emerging apps: AI, VR

Panasas has introduced ActiveStor Ultra, the next generation of its high-performance computing storage solution, featuring PanFS 8, a plug-and-play, portable, parallel file system. ActiveStor Ultra offers up to 75GB/s per rack on industry-standard commodity hardware.

ActiveStor Ultra comes as a fully integrated plug-and-play appliance running PanFS 8 on industry-standard hardware. PanFS 8 is the completely re-engineered Panasas parallel file system, which now runs on Linux and features intelligent data placement across three tiers of media — metadata on non-volatile memory express (NVMe), small files on SSDs and large files on HDDs — resulting in optimized performance for all data types.
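A toy version of that three-tier placement policy might look like the following; the size threshold and the decision rule are assumptions for illustration, not published PanFS 8 parameters:

```python
SMALL_FILE_THRESHOLD = 64 * 1024  # assumed cutoff in bytes, not a PanFS constant

def choose_tier(is_metadata: bool, size_bytes: int) -> str:
    """Toy placement rule mirroring the three-tier description above."""
    if is_metadata:
        return "NVMe"           # metadata: lowest-latency tier
    if size_bytes <= SMALL_FILE_THRESHOLD:
        return "SSD"            # small files: avoid HDD seek overhead
    return "HDD"                # large files: cheap sequential bandwidth

print(choose_tier(True, 512))          # NVMe
print(choose_tier(False, 4 * 1024))    # SSD
print(choose_tier(False, 2 * 10**9))   # HDD
```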

ActiveStor Ultra is designed to support the complex and varied data sets associated with traditional HPC workloads and emerging applications, such as artificial intelligence (AI), autonomous driving and virtual reality (VR). ActiveStor Ultra’s modular architecture and building-block design enables enterprises to start small and scale linearly. With dock-to-data in one hour, ActiveStor Ultra offers fast data access and virtually eliminates manual intervention to deliver the lowest total cost of ownership (TCO).

ActiveStor Ultra will be available early in the second half of 2019.

Satore Tech tackles post for Philharmonia Orchestra’s latest VR film

The Philharmonia Orchestra in London debuted its latest VR experience at Royal Festival Hall alongside the opening two concerts of the Philharmonia’s new season. Satore Tech completed VR stitching for the Mahler 3: Live From London film. This is the first project completed by Satore Tech since it was launched in June of this year.

The VR experience placed users at the heart of the Orchestra during the final 10 minutes of Mahler's Third Symphony, which was filmed live in October 2017. The stitching project was completed by creative technologist/SFX/VR expert Sergio Ochoa, who leads Satore Tech. The company used SGO Mistika technology to post the project, software Ochoa helped to develop during his time at SGO, where he was creative technologist and CEO of the company's French division.

Luke Ritchie, head of innovation and partnerships at the Philharmonia Orchestra, says, “We’ve been working with VR since 2015, it’s a fantastic technology to connect new audiences with the Orchestra in an entirely new way. VR allows you to sit at the heart of the Orchestra, and our VR experiences can transform audiences’ preconceptions of orchestral performance — whether they’re new to classical music or are a die-hard fan.”

It was a technically demanding project for Satore Tech to stitch together, as the concert was filmed live, in 360 degrees, with no retakes, using Google's latest Jump Odyssey VR camera. This meant that Ochoa was working with four to five different depth layers at any one time. The amount of fast movement also meant the resolution of the footage needed to be upscaled from 4K to 8K to ensure it was suitable for the VR platform.

“The guiding principle for Satore Tech is we aspire to constantly push the boundaries, both in terms of what we produce and the technologies we develop to achieve that vision,” explains Ochoa. “It was challenging given the issues that arise with any live recording, but the ambition and complexity is what makes it such a very suitable initial project for us.”

Satore Tech’s next project is currently in development in Mexico, using experimental volumetric capture techniques with some of the world’s most famous dancers. It is slated for release early next year.

30 Ninjas' Julina Tatlock to keynote SMPTE 2018, will focus on emerging tech

30 Ninjas CEO Julina Tatlock, an award-winning writer-producer, virtual reality director and social TV specialist, will present the keynote address at the SMPTE 2018 conference, which takes place October 22-25 in downtown Los Angeles. Tatlock's keynote will take place on October 23 at 9am, immediately following the SMPTE annual general membership meeting.

Tatlock specializes in producing and directing VR, creating social media and web-based narrative games for movies and broadcast, as well as collaborating with developers on integrating new tech intellectual property into interactive stories.

During her keynote, she will discuss the ways that content creation and entertainment production can leverage emerging technologies. Tatlock will also address topics such as how best to evaluate what might be the next popular entertainment technology and platform, as well as how to write, direct and build for technology and platforms that don’t exist yet.

Tatlock's 30 Ninjas is an award-winning immersive-entertainment company she founded with director Doug Liman (The Bourne Identity, Mr. & Mrs. Smith, Edge of Tomorrow, American Made). 30 Ninjas creates original narratives and experiences in new technologies such as virtual reality, augmented reality, mixed reality and location-based entertainment for clients such as Warner Bros., USA Network, Universal Cable Productions and HarperCollins.

Tatlock also is the executive producer and director of episodes three and four of the six-part VR miniseries “Invisible,” with production partners Condé Nast Entertainment, Jaunt VR and Samsung.

Before founding 30 Ninjas, she spent eight years at Oxygen Media, where she was VP of programming strategy. In an earlier role with Martha Stewart Living Omnimedia, Tatlock wrote and produced more than 100 of NBC’s Martha Stewart Living morning show segments.

Registration is open for both SMPTE 2018 and for the SMPTE 2018 Symposium, an all-day session that will precede the technical conference and exhibition on Oct. 22. Pre-registration pricing is available through Oct. 13. Further details are available at smpte2018.org.

Assimilate intros media toolkit, Scratch Play Pro

Assimilate is now offering Scratch Play Pro, which includes a universal professional format player, immersive media player, look creator (with version management), transcoder and QC tool.

Play Pro is able to play back most formats, including camera formats (even raw), deliverable formats of any kind and still-frame formats. You can also show the image full screen on a second/output display, either attached to the GPU or through SDI video I/O (AJA, Blackmagic, Bluefish444). Users can also quickly load and play as much media as they can store.

Part of Play Pro is the Construct (timeline) environment, a graphical media manager that allows users to load and manage stills/shots/timelines. It runs on Windows or OS X.

As an immersive video player, Play Pro supports equirectangular 360, cubic/cubic packed 360, 180° VR, stereo, mono, side-by-side or over/under, embedded ambisonic audio and realtime mesh de-warping of 180° VR media. Playback is on screen or through immersive headsets like Oculus Rift, HTC Vive and HMDs supporting OpenVR on both Windows and Mac. In addition to playback and CDL color correction, Play Pro can directly publish your 360/180 media to Facebook 360 or YouTube 360.

As a look creator, Play Pro supports 1D and 3D LUT formats of any size for import and export. It also supports import and export of CDLs in both CDL and CC formats. In addition, it allows you to combine two different LUTs and still add a display LUT on top. A CDL-based toolset, which is compatible with all other color tools, allows you to modify looks and/or create complete new looks.
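For context, the ASC CDL math such a toolset is built around is a simple slope/offset/power operation per channel followed by a saturation step; the sketch below is a generic implementation with placeholder grade values, not Assimilate's own code:

```python
import numpy as np

# Rec. 709 luma weights used by the ASC CDL saturation step
LUMA_WEIGHTS = np.array([0.2126, 0.7152, 0.0722])

def apply_cdl(rgb, slope, offset, power, saturation=1.0):
    """Apply ASC CDL slope/offset/power, then saturation, to an (..., 3) RGB array in [0, 1]."""
    rgb = np.asarray(rgb, dtype=np.float64)
    out = np.clip(rgb * slope + offset, 0.0, None) ** power   # per-channel SOP
    luma = np.tensordot(out, LUMA_WEIGHTS, axes=([-1], [0]))[..., None]
    return np.clip(luma + saturation * (out - luma), 0.0, 1.0)

# Placeholder grade: warm the image slightly and desaturate a touch.
graded = apply_cdl([[0.18, 0.18, 0.18]],
                   slope=np.array([1.05, 1.0, 0.95]),
                   offset=np.array([0.01, 0.0, -0.01]),
                   power=np.array([1.0, 1.0, 1.0]),
                   saturation=0.9)
print(graded)
```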

It can also export LUTs in different depths and sizes to fit different LUT boxes, cameras and monitors. The ability to create looks in production or to import looks created by post production allows you to establish a consistent color pipeline from on-set to delivery. Using the Construct (timeline) environment, users can store all look versions and apply them at any time in the production process.

Play Pro reads in all formats and can transcode to ProRes, H.264 and H.265. For VR delivery, it supports H.264 rendering up to 8K, including the metadata needed for online portals such as YouTube and Facebook. Users can add custom metadata, such as scene and take information, and include it in any exported file, or export it as a separate ALE file for use further down the pipeline.

As a QC tool, Play Pro can be used on-set and in post. It supports SDI output, split-screen, A-B overlay and audio monitoring and routing. It also comes with a number of QC tools for video measurement, like a vectorscope, waveform, curves and histogram, as well as extensive annotation capabilities through its notes feature.

All metadata and comments can be exported as a report in different styles, including an HDR-analysis report that calculates MaxFall and MaxCLL. Action- and title-safe guides, as well as blanking and letterboxing, can be enabled as an overlay for review.
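As background, MaxCLL and MaxFALL can be computed along these lines, assuming frames have already been converted to per-pixel light levels in nits; this is the generic definition rather than Play Pro's specific implementation:

```python
import numpy as np

def maxcll_maxfall(frames):
    """Compute MaxCLL and MaxFALL from an iterable of HxWx3 RGB frames in nits.

    MaxCLL  = brightest single pixel value (max of R, G, B) across the whole program.
    MaxFALL = highest per-frame average of the per-pixel max(R, G, B).
    """
    maxcll = 0.0
    maxfall = 0.0
    for frame in frames:
        per_pixel_max = np.max(frame, axis=-1)        # max(R, G, B) per pixel
        maxcll = max(maxcll, float(per_pixel_max.max()))
        maxfall = max(maxfall, float(per_pixel_max.mean()))
    return maxcll, maxfall

# Two synthetic frames: mostly 100-nit gray, one with a single 1000-nit highlight.
dim = np.full((1080, 1920, 3), 100.0)
spec = dim.copy()
spec[0, 0] = [1000.0, 1000.0, 1000.0]
print(maxcll_maxfall([dim, spec]))  # MaxCLL 1000.0, MaxFALL just above 100.0
```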

Scratch Play Pro is available now for $19 per month, or $199 for a yearly license.

Lenovo intros 15-inch VR-ready ThinkPad P52

Lenovo’s new ThinkPad P52 is a 15-inch, VR-ready and ISV-certified mobile workstation featuring an Nvidia Quadro P3200 GPU. The all-new hexa-core Intel Xeon CPU doubles the memory capacity to 128GB and increases PCIe storage. Lenovo says the ThinkPad excels in animation and visual effects project storage, the creation of large models and datasets, and realtime playback.

“More and more, M&E artists have the need to create on-the-go,” reports Lenovo senior worldwide industry manager for M&E Rob Hoffmann. “Having desktop-like capabilities in a 15-inch mobile workstation, allows artists to remain creative anytime, anywhere.”

The workstation targets traditional ISV workflows, as well as AR and VR content creation or deployment of mobile AI. Lenovo points to Virtalis, a VR and advanced visualization company, as an example of who might take advantage of the workstation.

“Our virtual reality solutions help clients better understand data and interact with it. Being able to take these solutions mobile with the ThinkPad P52 gives us expanded flexibility to bring the technology to life for clients in their unique environments,” says Steve Carpenter, head of solutions development for Virtalis. “The ThinkPad P52 powering our Virtalis Visionary Render software is perfect for engineering and design professionals looking for a portable solution to take their first steps into the endless possibilities of VR.”

The P52 will also feature a 4K UHD display with 400 nits of brightness, 100% Adobe RGB color gamut and 10-bit color depth. There are dual USB-C Thunderbolt ports supporting the display of 8K video, allowing users to take advantage of the ThinkPad Thunderbolt Workstation Dock.

The ThinkPad P52 will be available later this month.

Combining 3D and 360 VR for The Cabiri: Anubis film

Whether you are using 360 VR or 3D, both allow audiences to feel part of the action and emotion of a film narrative or performance, but combine the two and you can create a highly immersive experience that brings the audience directly into the "reality" of the scenes.

This is exactly what film producers and directors Fred Beahm and Bogdan Darev have done in The Cabiri: Anubis, a 3D/360VR performance art film showing at the Seattle International Film Festival's (SIFF) VR Zone from May 18 through June 10.

The Cabiri is a Seattle-based performance art group that creates stylistic and athletic dance and entertainment routines at theater venues throughout North America. The 3D/360VR film can now be streamed from the Pixvana app to the new Oculus Go headset, which is specifically designed for 3D and 360 streaming and viewing.

“As a director working in cinema to create worlds where reality is presented in highly stylized stories, VR seemed the perfect medium to explore. What took me by complete surprise was the emotional impact, the intimacy and immediacy the immersive experience allows,” says Darev. “VR is truly a medium that highlights our collective responsibility to create original and diverse content through the power of emerging technologies that foster curiosity and the imagination.”

“Other than a live show, 3D/360VR is the ideal medium for viewers to experience the rhythmic movement in The Cabiri’s performances. Because they have the feeling of being within the scene, the viewers become so engaged in the experience that they feel the emotional and dramatic impact,” explains Beahm, who is also the cinematographer, editor and post talent for The Cabiri film.

Beahm has a long list of credits to his name, and a strong affinity for the post process that requires a keen sense of the look and feel a director or producer is striving to achieve in a film. “The artistic and technical functions of the post process take a film from raw footage to a good result, and with the right post artist and software tools to a great film,” he says. “This is why I put a strong emphasis on the post process, because along with a great story and cinematography, it’s a key component of creating a noteworthy film. VR and 3D require several complex steps, and you want to use tools that simplify the process so you can save time, create high-quality results and stay within budget.”

For The Cabiri film, he used the Kandao Obsidian S camera, filming in 6K 3D360, then used SGO's Mistika VR for stereo 3D optical-flow stitching. He edited in Adobe's Premiere Pro CC 2018 and finished in Assimilate's Scratch VR, using its 3D/360VR painting, tracking and color grading tools. He then delivered in 4K 3D360 to Pixvana's Spin Studio.

“Scratch VR is fast. For example, with the VR transform-and-vector paint tools I can quickly paint out the nadir, or easily delete unwanted artifacts like portions of a camera rig and wires, or even a person. It’s also easy to add in graphics and visual effects with the built-in tracker and compositing tools. It’s also the only software I use that renders content in the background while you continue working on your project. Another advantage is that Scratch VR will automatically connect to an Oculus headset for viewing 3D and 360,” he continues. “During our color grading session, Bogdan would wear an Oculus Rift headset and give me suggestions about changes I should make, such as saturation and hues, and I could quickly do these on the fly and save the versions for comparison.”

VR at NAB 2018: A Parisian’s perspective

By Alexandre Regeffe

Even though my cab driver from the airport to my hotel offered these words of wisdom — "What happens in Vegas, stays in Vegas" — I've decided not to listen to him and instead share with you the things that impressed me in the VR world at NAB 2018.

Back in September of 2017, I shared with you my thoughts on the VR offerings at the IBC show in Amsterdam. In case you don’t remember my story, I’m a French guy who jumped into the VR stuff three years ago and started a cinematic VR production company called Neotopy with a friend. Three years is like a century in VR. Indeed, this medium is constantly evolving, both technically and financially.

So what has become of VR today? Lots of different things. VR is a big bag where people throw AR, MR, 360, LBE, 180 and 3D. And from all of that, XR (Extended Reality) was born, which means everything.

Insta360 Titan

But if this blurred concept leads to some misunderstanding, is it really good for consumers? Even we pros find it difficult to explain what exactly VR is these days.

While at NAB, I saw a presentation from Nick Bicanic during which he used the term "frameless media." And, thank you, Nick, because I think that is exactly what's in this big bag called VR… or XR. Today, we consume a lot of content through a frame, which is our TV, computer, smartphone or cinema screen. VR allows us to go beyond the frame, and this is a very important shift for cinematographers and content creators.

But enough concepts and ideas, let us start this journey on the NAB show floor! My first stop was the VR pavilion, also called the “immersive storytelling pavilion” this year.

My next stop was to see SGO Mistika. For over a year, the SGO team has been delivering incredible stitching software with Mistika VR. In my opinion, there is a "before" and an "after" this tool. Thanks to its optical flow capabilities, you can achieve seamless stitching 99% of the time, even in very difficult shooting situations. The latest version of the software adds features like stabilization, keyframe capabilities, more camera presets and easy integration with Kandao and Insta360 camera profiles. VR pros used Mistika's booth as sort of a base camp, meeting the development team directly.

A few steps from Mistika was Insta360, with a large, yellow booth. This Chinese company is a success story with its consumer product, the Insta360 One, a small 360 camera for the masses. But I was more interested in the Insta360 Pro, their 8K stereoscopic 3D360 flagship camera used by many content creators.

At the show, Insta360's big announcement was the Titan, a premium version of the Insta360 Pro offering better lenses and sensors. It will be available later this year. Oh, and there was a lightfield camera prototype, the company's first step into the volumetric capture world.

Another interesting camera manufacturer at the show was Human Eyes Technology, presenting their Vuze+. With this affordable 3D360 camera you can dive into stereoscopic 360 content and learn the basics about this technology. Side note: The Vuze+ was chosen by National Geographic to shoot some stunning sequences in the International Space Station.

Kandao Obsidian

My favorite VR camera company, Kandao, was at NAB showing new features for its Obsidian R and S cameras. One of the best is its 6DoF capability. With this technology, you can generate a depth map directly from the camera in Kandao Studio, the stitching software, which comes free when you buy an Obsidian. With the combination of a 360 stitched image and a depth map, you can "walk" into your movie. It's an awesome technique for better immersion. For me, this was by far the best innovation in VR technology presented on the show floor.
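To get a feel for how a stitched 360 image plus a depth map enables that kind of positional movement, here is a minimal sketch (not Kandao Studio's actual algorithm) that converts an equirectangular depth map into a 3D point cloud, which a renderer could then reproject from a slightly shifted viewpoint:

```python
import numpy as np

def equirect_depth_to_points(depth):
    """Turn an HxW equirectangular depth map (meters) into an (H*W)x3 point cloud.

    Columns map to longitude (-pi..pi), rows to latitude (+pi/2..-pi/2).
    """
    h, w = depth.shape
    lon = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    x = depth * np.cos(lat) * np.sin(lon)     # spherical -> Cartesian
    y = depth * np.sin(lat)
    z = depth * np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# A small viewer offset could then be applied to the points and the cloud
# reprojected to simulate stepping "into" the stitched scene.
points = equirect_depth_to_points(np.full((512, 1024), 3.0))
print(points.shape)  # (524288, 3)
```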

The live capabilities of Obsidian cameras have been improved, with a dedicated Kandao Live software, which allows you to live stream 4K stereoscopic 360 with optical flow stitching on the fly! And, of course, do not forget their new Qoocam camera. With its three-lens-equipped little stick, you can either do VR 180 stereoscopic or 360 monoscopic, while using depth map technology to refocus or replace the background in post — all with a simple click. Thanks to all these innovations, Kandao is now a top player in the cinematic VR industry.

One Kandao competitor is ZCam. They were there with a couple of new products: the ZCam V1, a 3D360 camera with a tiny form factor. It's very interesting for shooting scenes where things are very close to the camera, as it keeps good stereoscopy even on nearby objects, which is a major issue with most VR cameras and rigs. The second is the small E2; while it's not really a VR camera, it can be used as an underwater rig, for example.

ZCam K1 Pro

The ZCam product range is really impressive and completely targeting professionals, from ZCam S1 to ZCam V1 Pro. Important note: take a look at their K1 Pro, a VR 180 camera, if you want to produce high-end content for the Google VR180 ecosystem.

Another VR camera at NAB was Samsung's Round, offering stereoscopic capabilities. This relatively compact device comes with a proprietary software suite for stitching and viewing 360 shots. Thanks to its IP65 rating, you can use this camera outdoors in difficult weather conditions like rain, dust or snow. It was great to see live streaming of 4K 3D360 operating on the show floor, using several Round cameras combined with powerful Next Computing hardware.

VR Post
Adobe Creative Cloud 2018 remains the must-have tool for achieving VR post production without losing your mind. Numerous 360-specific functionalities have been added during the last year, after Adobe bought the Mettle Skybox suite. The most impressive feature is that you can now stay in your 360 environment for editing: you just put on your Oculus Rift headset, manipulate your Premiere timeline with the Touch controllers and proceed to edit your shots. Think of it as a Minority Report-style editing interface! I am sure we can expect more amazing VR tools from Adobe this year.

Google’s Lightfield technology

Mettle was at the Dell booth showing their new Adobe CC 360 plugin, called Flux. After an impressive Mantra release last year, Flux is now available for VR artists, allowing them to do 3D volumetric fractals and to create entire futuristic worlds. It was awesome to see the results in a headset!

Distributing VR
So once you have produced your cinematic VR content, how can you distribute it? One option is to use the Liquid Cinema platform. They were at NAB with a major update and some new features, including seamless transitions between a “flat” video and a 360 video. As a content creator you can also manage your 360 movies in a very smart CMS linked to your app and instantly add language versions, thumbnails, geoblocking, etc. Another exciting thing is built-in 6DoF capability right in the editor with a compatible headset — allowing you to walk through your titles, graphics and more!

I can't leave without mentioning Voysys for live-streaming VR; Kodak PixPro and its new cameras; Google's next move into lightfield technology; Bonsai's launch of a new version of the Excalibur rig; and many other great manufacturers, software developers and partners.

See you next time, Sin City.

Dell makes updates to its Precision mobile workstation line

Recently, Dell made updates to its line of Precision mobile workstations targeting the media and entertainment industries. The Dell Precision 7730 and 7530 mobile workstations feature the latest eighth-generation Intel Core and Xeon processors, AMD Radeon WX and Nvidia Quadro professional graphics, 3200MHz SuperSpeed memory and memory capacity up to 128GB.

The Dell Precision 7530 is a 15-inch VR-ready mobile workstation with large PCIe SSD storage capacity, especially for a 15-inch mobile workstation — up to 6TB. Dell says the 7730 enables new uses such as AI and machine learning development and edge inference systems.

Also new is the 15-inch Dell Precision 5530 two-in-one, which targets content creation and editing and features a very thin design. A flexible 360-degree hinge enables multiple modes of interaction, including support for touch and pen. It features the next-generation InfinityEdge 4K Ultra HD display. The Dell Premium pen offers precise pressure sensitivity (4,096 pressure points), tilt functionality and low latency for an experience that is reminiscent of drawing on paper. The new MagLev keyboard design reduces keyboard thickness “without compromising critical keyboard shortcuts in content creation workflows,” and ultra-thin GORE Thermal Insulation keeps the system cool.

This workstation weighs 3.9 pounds and delivers next-generation professional graphics up to Nvidia Quadro P2000. With enhanced 2666MHz memory speeds up to 32GB, users can accelerate their complicated workflows. And with up to 4TB of SSD storage, users can access, transfer and store large 3D, video and multimedia files quickly and easily.

The fully customizable 15-inch Dell Precision 3530 mobile workstation features eighth-generation Intel Core and next-generation Xeon processors, memory speeds up to 2666MHz and Nvidia Quadro P600 professional graphics. It also features a 92WHr battery and wide range of ports, including HDMI 2.0, Thunderbolt and VGA.