
Timecode and GoPro partner to make posting VR easier

Timecode Systems and GoPro’s Kolor team recently worked together to create a new timecode sync feature for Kolor’s Autopano Video Pro stitching software. By combining their technologies, the two companies have developed a VR workflow solution that brings the efficiency benefits of professional-standard timecode synchronization to VR and 360 filming.

Time-aligning files from the multiple cameras in a 360° VR rig can be a manual and time-consuming process if there is no easy synchronization point, especially when synchronizing with separate audio. Visually timecode-slating cameras is a disruptive manual process, and using the clap of a slate (or another visual or audio cue) as a sync marker can be unreliable when it comes to the edit process.

The new sync feature, included in the Version 3.0 update to Autopano Video Pro, incorporates full support for MP4 timecode generated by Timecode’s products. The solution is compatible with a range of custom, multi-camera VR rigs, including rigs using GoPro’s Hero 4 cameras with SyncBac Pro for timecode and also other camera models using alternative Timecode Systems products. This allows VR filmmakers to focus on the creative and not worry about whether every camera in the rig is shooting in frame-level synchronization. Whether filming using a two-camera GoPro Hero 4 rig or 24 cameras in a 360° array creating resolutions as high as 32K, the solution syncs with the same efficiency. The end results are media files that can be automatically timecode-aligned in Autopano Video Pro with the push of a button.
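Once every camera's files carry trustworthy timecode, the alignment step reduces to simple arithmetic. Here is a minimal sketch of that idea — the clip names, start timecodes and frame rate are invented for illustration; a real tool would read the timecode from each MP4's metadata track:

```python
# Hypothetical sketch: aligning multi-camera clips by their start timecode.
# Clip names and timecodes are invented; real MP4 timecode would be read
# from the file's metadata track.

FPS = 30  # assume a common, frame-locked rate across the rig

def tc_to_frames(tc: str, fps: int = FPS) -> int:
    """Convert HH:MM:SS:FF timecode to an absolute frame count."""
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def align(clips: dict) -> dict:
    """Return each clip's offset (in frames) from the earliest start."""
    starts = {name: tc_to_frames(tc) for name, tc in clips.items()}
    base = min(starts.values())
    return {name: s - base for name, s in starts.items()}

rig = {
    "cam_front": "01:00:00:00",
    "cam_left":  "01:00:00:12",
    "cam_rear":  "01:00:01:00",
}
print(align(rig))  # cam_front starts earliest; the others trail by 12 and 30 frames
```

The point of frame-level sync is that these offsets are exact integers, so the stitcher can line clips up without any visual or audio cue.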

“We’re giving VR camera operators the confidence that they can start and stop recording all day long without the hassle of having to disturb filming to manually slate cameras; that’s the understated benefit of timecode,” says Paul Bannister, chief science officer of Timecode Systems.

“To create high-quality VR output using multiple cameras to capture high-quality spherical video isn’t enough; the footage that is captured needs to be stitched together as simply as possible — with ease, speed and accuracy, whatever the camera rig,” explains Alexandre Jenny, senior director of Immersive Media Solutions at GoPro. “Anyone who has produced 360 video will understand the difficulties involved in relying on a clap or visual cue to mark when all the cameras start recording to match up video for stitching. To solve that issue, either you use an integrated solution like GoPro Omni with a pixel-level synchronization, or now you have the alternative to use accurate timecode metadata from SyncBac Pro in a custom, scalable multicamera rig. It makes the workflow much easier for professional VR content producers.”

Hobo’s Howard Bowler and Jon Mackey on embracing full-service VR

By Randi Altman

New York-based audio post house Hobo, which offers sound design, original music composition and audio mixing, recently embraced virtual reality by launching a 360 VR division. Wanting to offer clients a full-service solution, they partnered with New York production/post production studios East Coast Digital and Hidden Content, allowing them to provide concepting through production, post, music and final audio mix in an immersive 360 format.

The studio is already working on some VR projects, using their “object-oriented audio mix” skills to enhance the 360 viewing experience.

We touched base with Hobo’s founder/president, Howard Bowler, and post production producer Jon Mackey to get more info on their foray into VR.

Why was now the right time to embrace 360 VR?
Bowler: We saw the opportunity stemming from the advancement of the technology not only in the headsets but also in the tools necessary to mix and sound design in a 360-degree environment. The great thing about VR is that we have many innovative companies trying to establish what the workflow norm will be in the years to come. We want to be on the cusp of those discoveries to test and deploy these tools as the ecosystem of VR expands.

As an audio shop you could have just offered audio-for-VR services only, but instead aligned with two other companies to provide a full-service experience. Why was that important?
Bowler: This partnership provides our clients with added security when venturing out into VR production. Since the medium is relatively new in the advertising and film world, partnering with experienced production companies gives us the opportunity to better understand the nuances of filming in VR.

How does that relationship work? Will you be collaborating remotely? Same location?
Bowler: Thankfully, we are all based in West Midtown, so the collaboration will be seamless.

Can you talk a bit about object-based audio mixing and its challenges?
Mackey: The challenge of object-based mixing is not only mixing in a 360-degree environment, or converting traditional audio into something that moves with the viewer, but also determining which objects, with their sound cues, will lead the viewer into another part of the environment.

Bowler: It’s the creative challenge that inspires us in our sound design. With traditional 2D film, the editor controls what you see with their cuts. With VR, the partnership between sight and sound becomes much more important.

Howard Bowler pictured embracing VR.

How different is your workflow — traditional broadcast or spot work versus VR/360?
Mackey: The VR/360 workflow isn’t much different than traditional spot work. It’s the testing and review that is a game changer. Things generally can’t be reviewed live unless you have a custom rig that runs its own headset. It’s a lot of trial and error in checking the mixes, sound design, and spatial mixes. You also have to take into account the extra time and instruction for your clients to review a project.

What has surprised you the most about working in this new realm?
Bowler: The great thing about the VR/360 space is the amount of opportunity there is. What surprised us the most is the passion of all the companies that are venturing into this area. It’s different than talking about conventional film or advertising; there’s a new spark, and it’s fueling the rise of the industry and allowing larger companies to connect with smaller ones to create an atmosphere where passion is the only thing that counts.

What tools are you using for this type of work?
Mackey: The audio tools we use are the ones that best fit into our Avid Pro Tools workflow. This includes plug-ins from G-Audio and others that we are experimenting with.

Can you talk about some recent projects?
Bowler: We’ve completed projects for Samsung with East Coast Digital, and there are more on the way.

Main Image: Howard Bowler and Jon Mackey

Comprimato plug-in manages Ultra HD, VR files within Premiere

Comprimato, makers of GPU-accelerated storage compression and video transcoding solutions, has launched Comprimato UltraPix. This video plug-in offers proxy-free, auto-setup workflows for Ultra HD, VR and more on hardware running Adobe Premiere Pro CC.

The challenge for post facilities finishing in 4K or 8K Ultra HD, or working on immersive 360 VR projects, is managing the massive amount of data. The files are large, requiring a lot of expensive storage, which can be slow and cumbersome to load, and achieving realtime editing performance is difficult.

Comprimato UltraPix addresses this, building on JPEG2000, a compression format that offers high image quality (including mathematically lossless mode) to generate smaller versions of each frame as an inherent part of the compression process. Comprimato UltraPix delivers the file at a size that the user’s hardware can accommodate.
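The "smaller versions as an inherent part of compression" point comes from JPEG2000's wavelet structure: each resolution level of the transform is half the size of the one above it, so a decoder can stop early and get a proxy essentially for free. The sketch below is illustrative only (not Comprimato's implementation); a 2x2 box average stands in for the wavelet's low-pass (LL) band:

```python
import numpy as np

# Illustrative sketch: JPEG2000's wavelet transform stores an image as a
# pyramid in which each level is half the resolution of the one above.
# A 2x2 box average stands in for the LL wavelet band here.

def half(img: np.ndarray) -> np.ndarray:
    """Downsample by 2 in each dimension (cropping any odd edge row/column)."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def pyramid(img: np.ndarray, levels: int) -> list:
    """Full-resolution image plus `levels` successively halved proxies."""
    out = [img]
    for _ in range(levels):
        out.append(half(out[-1]))
    return out

frame = np.random.rand(2160, 3840)            # one UHD frame, luma only
sizes = [p.shape for p in pyramid(frame, 3)]
print(sizes)  # [(2160, 3840), (1080, 1920), (540, 960), (270, 480)]
```

Three levels down from UHD already lands near quarter-HD, which is why a plug-in can pick whichever level the hardware can play back in realtime without rendering separate proxy files.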

Once Comprimato UltraPix is loaded on any hardware, it configures itself with auto-setup, requiring no specialist knowledge from the editor who continues to work in Premiere Pro CC exactly as normal. Any workflow can be boosted by Comprimato UltraPix, and the larger the files the greater the benefit.

Comprimato UltraPix is a multi-platform video processing software for instant video resolution in realtime. It is a lightweight, downloadable video plug-in for OS X, Windows and Linux systems. Editors can switch between 4K, 8K, full HD, HD or lower resolutions without proxy-file rendering or transcoding.

“JPEG2000 is an open standard, recognized universally, and post production professionals will already be familiar with it as it is the image standard in DCP digital cinema files,” says Comprimato founder/CEO Jiří Matela. “What we have achieved is a unique implementation of JPEG2000 encoding and decoding in software, using the power of the CPU or GPU, which means we can embed it in realtime editing tools like Adobe Premiere Pro CC. It solves a real issue, simply and effectively.”

“Editors and post professionals need tools that integrate ‘under the hood’ so they can focus on content creation and not technology,” says Sue Skidmore, partner relations for Adobe. “Comprimato adds a great option for Adobe Premiere Pro users who need to work with high-resolution video files, including 360 VR material.”

Comprimato UltraPix plug-ins are currently available for Adobe Premiere Pro CC and Foundry Nuke and will be available on other post and VFX tools soon. You can download a free 30-day trial or buy Comprimato UltraPix for $99 a year.

Assimilate’s Scratch VR Suite 8.6 now available

Back in February, Assimilate announced the beta version of its Scratch VR Suite 8.6. Well, now the company is back with a final version of the product, including user requests for features and functions.

Scratch VR Suite 8.6 is a realtime post solution and workflow for VR/360 content. With added GPU stitching of 360 video and ambisonic audio support, as well as live streaming, the Scratch VR Suite 8.6 gives VR content creators — DPs, DITs, post artists — a streamlined, end-to-end workflow for VR/360 content.

The Scratch VR Suite 8.6 workflow automatically includes all the basic post tools: dailies, color grading, compositing, playback, cloud-based reviews, finishing and mastering.

New features and updates include:
• 360 stitching functionality: Load the source media of multiple shots from your 360 cameras into Scratch VR and easily wrap them into a stitch node to combine the sources into an equirectangular image.
• Support for various stitch template formats, such as AutoPano, Hugin, PTGui and PTStitch scripts.
• Either render out the equirectangular format first or just continue to edit, grade and composite on top of the stitched nodes and render the final result.
• Ambisonic audio: Load, set and play back ambisonic audio files to complete the 360 immersive experience.
• Video with 360 sound can be published directly to YouTube 360.
• Additional overlay handles added to the existing 2D-equirectangular feature for more easily positioning 2D elements in a 360 scene.
• Support for Oculus Rift, Samsung Gear VR, HTC Vive and Google Cardboard.
• Several new features and functions make working in HDR just as easy as SDR.
• Increased format support: Added support for all the latest formats for even greater efficiency in the DIT and post production processes.
• Simplified DIT reporting: Added features and functions enable even greater efficiencies in a single, streamlined workflow.
• User interface: Numerous updates have been made to enhance and simplify the UI for content creators, such as the log-in screen, matrix layout, swipe sensitivity, player stack, tool bar and tool tips.

SMPTE’s ETCA conference takes on OTT, cloud, AR/VR, more

SMPTE has shared program details for its Entertainment Technology in the Connected Age (ETCA) conference, taking place in Mountain View, California, May 8-9 at the Microsoft Silicon Valley Campus.

Called “Redefining the Entertainment Experience,” this year’s conference will explore emerging technologies’ impact on current and future delivery of compelling connected entertainment experiences.

Bob DeHaven, GM of worldwide communications & media at Microsoft Azure, will present the first conference keynote, titled “At the Edge: The Future of Entertainment Carriage.” The growth of on-demand programming and mobile applications, the proliferation of the cloud and the advent of the “Internet of things” demand that video content be available closer to the end user to improve both availability and the quality of the experience.

DeHaven will discuss the relationships taking shape to embrace these new requirements and will explore the roles network providers, content delivery networks (CDNs), network optimization technologies and cloud platforms will play in achieving the industry’s evolving needs.

Hanno Basse, chief technical officer at Twentieth Century Fox Film, will present “Next-Generation Entertainment: A View From the Fox.” Fox distributes content via multiple outlets, ranging from cinema to Blu-ray, over-the-top (OTT) and even VR. Basse will share his views on the technical challenges of enabling next-generation entertainment in a connected age and how Fox plans to address them.

The first conference session, “Rethinking Content Creation and Monetization in a Connected Age,” will focus on multiplatform production and monetization using the latest creation, analytics and search technologies. The session “Is There a JND in It for Me?” will take a second angle, exploring what new content creation, delivery and display technology innovations will mean for the viewer. Panelists will discuss the parameters required to achieve original artistic intent while maintaining a just noticeable difference (JND) quality level for the consumer viewing experience.

“Video Compression: What’s Beyond HEVC?” will explore emerging techniques and innovations, outlining evolving video coding techniques and their ability to handle new types of source material, including HDR and wide color gamut content, as well as video for VR/AR.

Moving from content creation and compression into delivery, “Linear Playout: From Cable to the Cloud” will discuss the current distribution landscape, looking at the consumer apps, smart TV apps, and content aggregators/curators that are enabling cord-cutters to watch linear television, as well as the new business models and opportunities shaping services and the consumer experience. The session will explore tools for digital ad insertion, audience measurement and monetization while considering the future of cloud workflows.

“Would the Internet Crash If Everyone Watched the Super Bowl Online?” will shift the discussion to live streaming, examining the technologies that enable today’s services as well as how technologies such as transparent caching, multicast streaming, peer-assisted delivery and User Datagram Protocol (UDP) streaming might enable live streaming at a traditional broadcast scale and beyond.

“Adaptive Streaming Technology: Entertainment Plumbing for the Web” will focus specifically on innovative technologies and standards that will enable the industry to overcome inconsistencies of the bitrate quality of the Internet.

“IP and Thee: What’s New in 2017?” will delve into the upgrade to Internet Protocol infrastructure and the impact of next-generation systems such as the ATSC 3.0 digital television broadcast system, the Digital Video Broadcast (DVB) suite of internationally accepted open standards for digital television, and fifth-generation mobile networks (5G wireless) on Internet-delivered entertainment services.

Moving into the cloud, “Weather Forecast: Clouds and Partly Scattered Fog in Your Future” examines how local networking topologies, dubbed “the fog,” are complementing the cloud by enabling content delivery and streaming via less traditional — and often wireless — communication channels such as 5G.

“Giving Voice to Video Discovery” will highlight the ways in which voice is being added to pay television and OTT platforms to simplify searches.

In a session that explores new consumption models, “VR From Fiction to Fact” will examine current experimentation with VR technology, emerging use cases across mobile devices and high-end headsets, and strategies for addressing the technical demands of this immersive format.

You can register for the conference here.

Sound editor/mixer Korey Pereira on 3D audio workflows for VR

By Andrew Emge

As the technologies for VR and 360 video rapidly advance and become more accessible, media creators are realizing the crucial role that sound plays in achieving realism. Sound designers are exploring this new frontier of 3D audio at the same time that tools for the medium are being developed and introduced. When everything is so new and constantly evolving, how does one learn where to start or decide where to invest time and experimentation?

To better understand this process, I spoke with Korey Pereira, a sound editor and mixer based in Austin, Texas. He recently entered the VR/360 audio world and has started developing a workflow.

Can you provide some background about who you are, the work you’ve done, and what you’ve been up to lately?
I’m the owner/creative director at Soularity Sound, an Austin-based post company. We primarily work with indie filmmakers, but also do some television and ad work. In addition to my work at Soularity, I also work as a sound editor and mixer at a few other Austin post facilities, including Soundcrafter. My credits with them include Richard Linklater’s Boyhood and Everybody Wants Some, as well as TV shows such as Shipping Wars and My 600lb Life.

You recently purchased the Pro Sound Effects NYC Ambisonics library. Can you talk about some VR projects you are working on?
In the coming months I plan to start creating audio content for VR with a local content creator, Deepak Chetty. Over the years we have collaborated on a number of projects; most recently, I worked on his stereoscopic 3D sci-fi/action film Hard Reset, which won the 2016 “Best 3D Live Action Short” award from the Advanced Imaging Society.

Deepak Chetty shooting a VR project.

I love sci-fi as a genre, because there really are no rules. It lets you really go for it as far as sound. Deepak has been shifting his creative focus toward 360 content and we are hoping to start working together in that aspect in the near future.

Deepak is currently working mostly on non-fiction and documentary-based content in 360 — mainly environment capture with a through line of audio storytelling that serves as the backbone of the piece. He is also looking forward to experimenting with fiction-based narratives in the 360 space, especially with the use of spatial audio to enhance immersion for the viewer.

Prior to meeting Deepak, did you have any experience working with VR/3D audio?
No, this is my first venture into the world of VR audio or 3D audio. I have been mixing in surround for over a decade, but I am excited about the additional possibilities this format brings to the table.

What have been the most helpful sources for studying up and figuring out a workflow?
The Internet! There is such a wealth of information out there, and you kind of just have to dive in. The benefit of 360 audio being a relatively new format is that people are still willing to talk openly about it.

Was there anything particularly challenging to get used to or wrap your head around?
In a lot of ways designing audio for VR is not that different from traditional sound mixing for film. You start with a bed of ambiences and then place elements within a surround space. I guess the most challenging part of the transition is anticipating how the audience might hear your mix. If the viewer decides to watch a whole video facing the surrounds, how will it sound?

Can you describe the workflow you’ve established so far? What are some decisions you’ve made regarding DAW, monitoring, software, plug-ins, tools, formats and order of operation?
I am a Pro Tools guy, so my main goal was finding a solution that works seamlessly inside the Pro Tools environment. As I started looking into different options, the Two Big Ears Spatial Workstation really stood out to me as being the most intuitive and easiest platform to hit the ground running with. (Two Big Ears recently joined Facebook, so Spatial Workstation is now available for free!)

Basically, you install a Pro Tools plug-in that works as a 3D audio engine and gives you a Pro Tools project with all the routing and tracks laid out for you. There are object-based tracks that allow you to place sounds within a 3D environment as well as ambience tracks that allow you to add stereo or ambisonic beds as a basis for your mix.

The coolest thing about this platform is that it includes a 3D video player that runs in sync with Pro Tools. There is a binaural preview pathway in the template that lets you hear the shift in perspective as you move the video around in the player. Pretty cool!

In September 2016, another audio workflow for VR in Pro Tools entered the market from the Dutch company Audio Ease and their 360 pan suite. Much like the Spatial Workstation, the suite offers an object-based panner (360 pan) that when placed on every audio track allows you to pan individual items within the 360-degree field of view. The 360 pan suite also includes the 360 monitor, which allows you to preview head tracking within Pro Tools.

Where the 360 pan suite really stands out is with their video overlay function. By loading a 360 video inside of Pro Tools, Audio Ease adds an overlay on top of the Pro Tools video window, letting you pan each track in real time, which is really useful. For the features it offers, it is relatively affordable. The suite does not come with its own template, but they have a quick video guide to get you up and going fairly easily.
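Under the hood, what an object-based panner computes is well understood even if each plug-in's internals differ: a mono source at a given direction is encoded into first-order B-format channels. The sketch below uses the textbook FuMa encoding equations — it is a conceptual illustration, not the implementation of Spatial Workstation or the 360 pan suite:

```python
import math

# Sketch of first-order ambisonic (B-format, FuMa) encoding of a mono
# sample at a given direction. Textbook formula, not a plug-in's internals.

def encode_fuma(sample: float, azimuth_deg: float, elevation_deg: float):
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample / math.sqrt(2)                   # omni channel, -3 dB by convention
    x = sample * math.cos(az) * math.cos(el)    # front/back figure-of-eight
    y = sample * math.sin(az) * math.cos(el)    # left/right figure-of-eight
    z = sample * math.sin(el)                   # up/down figure-of-eight
    return w, x, y, z

# A source dead ahead contributes fully to X and nothing to Y or Z:
print(encode_fuma(1.0, 0.0, 0.0))  # ~ (0.707, 1.0, 0.0, 0.0)
```

Panning an "object" around the viewer is then just re-evaluating these gains as the azimuth and elevation change over time, which is why head tracking can be applied as a simple rotation of the B-format channels at playback.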

Are there any aspects that you’re still figuring out?
Delivery is still a bit up in the air. You may need to export in multiple formats to be able to upload to Facebook, YouTube, etc. I was glad to see that YouTube is supporting the ambisonic format for delivery, but I look forward to seeing workflows become more standardized across the board.

Any areas in which you see the need for further development, and/or where the tech just isn’t there yet?
I think the biggest limitation with VR is the lack of affordable and easy-to-use 3D audio capture devices. I would love to see a super-portable ambisonic rig that filmmakers can easily use in conjunction with shooting 360 video. Especially as media giants like YouTube are gravitating toward the ambisonic format for delivery, it would be great for them to be able to capture the actual space in the same format.

In January 2017, Røde announced the VideoMic Soundfield — an on-camera ambisonic, 360-degree surround sound microphone — though pricing and release dates have not yet been made public.

One new product I am really excited about is the Sennheiser Ambeo VR mic, which is around $1,650. That’s a bit pricey for the most casual user once you factor in a 4-track recorder, but for the professional user that already has a 788T, the Ambeo VR mic offers a nice turnkey solution. I like that the mic looks a little less fragile than some of the other options on the market. It has a built-in windscreen/cage similar to what you would see on a live handheld microphone. It also comes with a Rycote shockmount and cable to 4-XLR, which is nice.

Some leading companies have recently selected ambisonics as the standard spatial audio format — can you talk a bit about how you use ambisonics for VR?
Yeah, I think this is a great decision. I like the “future proof” nature of the ambisonic format. Even in traditional film mixing, I like having the option to export to stereo, 5.1 or 7.1 depending on the project. Until ambisonic becomes more standardized, I like that the Two Big Ears/FB 360 encoder allows you to export to the .tbe B-Format (FuMa or ambiX/YouTube) as well as quad-binaural.
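The FuMa versus ambiX distinction mentioned here is, at first order, just a channel reorder plus a gain change: FuMa orders channels W, X, Y, Z with W attenuated by 3 dB, while ambiX (the convention YouTube uses) orders them W, Y, Z, X with SN3D gains. A hedged sketch of that conversion, for first order only:

```python
import math

# First-order FuMa -> ambiX conversion sketch: reorder the channels into
# ACN order (W, Y, Z, X) and restore W's -3 dB FuMa attenuation. Higher
# orders need per-channel gain tables; this covers first order only.

def fuma_to_ambix(w: float, x: float, y: float, z: float):
    return (w * math.sqrt(2), y, z, x)

print(fuma_to_ambix(0.707, 1.0, 0.0, 0.0))  # ~ (1.0, 0.0, 0.0, 1.0)
```

Knowing the two conventions differ only this much (at first order) is why encoders like the one described can offer both as export targets from the same mix.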

I am a huge fan of the ambisonic format in general. The Pro Sound Effects NYC Ambisonics Library (and now Chicago and Tokyo as well) was my first experience using the format and I was blown away. In a traditional mixing environment it adds another level of depth to the backgrounds. I really look forward to being able to bring it to the VR format as well.


Andrew Emge is operations manager at Pro Sound Effects.

Lenovo intros VR-ready ThinkStation P320

Lenovo launched its VR-ready ThinkStation P320 at Develop3D Live, a UK-based conference that puts special focus on virtual reality as a productivity tool in design workflows. The ThinkStation P320 is the latest addition to the Lenovo portfolio of VR-ready certified workstations and is designed for power users looking to balance both performance and their budgets.

The workstation’s pro VR certification allows ThinkStation P320 users to more easily add virtual reality into their workflow without requiring an initial high-end hardware and software investment.

The refreshed workstation will be available in both full-size tower and small form factor (SFF) and comes equipped with Intel’s newest Xeon processors and Core i7 processors — offering speeds of up to 4.5GHz with Turbo Boost (on the tower). Both form factors will also support the latest Nvidia Quadro graphics cards, including support for dual Nvidia Quadro P1000 GPUs in the small form factor.

The ISV-certified ThinkStation P320 supports up to 64GB of DDR4 memory and customization via the Flex Module. In terms of environmental sustainability, the P320 is Energy Star-qualified, as well as EPEAT Gold and Greenguard-certified.

The Lenovo ThinkStation P320 full-size tower and SFF will be available at the end of April.

Timecode’s new firmware paves the way for VR

Timecode Systems, which makes wireless technologies for sharing timecode and metadata, has launched a firmware upgrade that enhances the accuracy of its wireless genlock.

Promising sub-line-accurate synchronization, the system allows Timecode Systems products to stay locked in sync more accurately, setting the scene for development of a wireless sensor sync solution able to meet the requirements of VR/AR and motion capture.

“The industry benchmark for synchronization has always been ‘frame-accurate’, but as we started exploring the absolutely mission-critical sync requirements of virtual reality, augmented reality and motion capture, we realized sync had to be even tighter,” said Ashok Savdharia, chief technical officer at Timecode Systems. “With the new firmware and FPGA algorithms released in our latest update, we’ve created a system offering wireless genlock to sub-line accuracy. We now have a solid foundation on which to build a robust and immensely accurate genlock, HSYNC and VSYNC solution that will meet the demands of VR and motion capture.”
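To put "sub-line accuracy" in perspective, some back-of-envelope arithmetic helps. The figures below assume 1080-line HD at 25fps with the standard 1125 total scan lines per frame; Timecode Systems' actual tolerances are not published in the article:

```python
# Rough comparison of sync windows: "frame-accurate" vs. "line-accurate".
# Assumes 25 fps and the 1125-total-line HD raster; illustrative only.

FPS = 25
LINES_PER_FRAME = 1125            # total scan lines per HD frame

frame_time_us = 1_000_000 / FPS                  # duration of one frame
line_time_us = frame_time_us / LINES_PER_FRAME   # duration of one scan line

print(f"frame-accurate window: {frame_time_us:.0f} us")   # 40000 us
print(f"line-accurate window:  {line_time_us:.1f} us")    # ~35.6 us
```

Going from frame accuracy to sub-line accuracy therefore tightens the sync window by more than three orders of magnitude, which is the scale VR and motion-capture genlock demands.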

A veteran in camera and image sensor technology, Savdharia joined Timecode Systems last year. In addition to building up the company’s multi-camera range of solutions, he is leading a development team pioneering a wireless sync system for the VR and motion capture market.

HPA Tech Retreat takes on realities of virtual reality

By Tom Coughlin

The HPA Tech Retreat, run by the Hollywood Professional Association in association with SMPTE, began with an insightful one-day VR seminar, Integrating Virtual Reality/Augmented Reality into Entertainment Applications. Lucas Wilson from SuperSphere kicked off the sessions and helped with much of the organization of the seminar.

The seminar addressed virtual reality (VR), augmented reality (AR) and mixed reality (MR, a subset of AR where the real world and the digital world interact, like Pokémon Go). As in traditional planar video, 360-degree video still requires a director to tell a story and direct the eye to see what is meant to be seen. Successful VR requires understanding how people look at things, how they perceive reality, and using that understanding to help tell a story. Some things that may help with this are reinforcement of the viewer’s gaze with color and sound that may vary with the viewer — e.g. these may be different for the “good guy” and the “bad guy.”

VR workflows are quite different from traditional ones, with many elements changing with multiple-camera content. For instance, it is much more difficult to keep a camera crew out of the image, and providing proper illumination for all the cameras can be a challenge. The image below from Jaunt shows their 360-degree workflow, including the use of their cloud-based computational image service to stitch the images from the multiple cameras.
Snapchat is the biggest MR application, said Wilson, and Snapchat Stories could be the basis of future post tools.

Because stand-alone headsets (head-mounted displays, or HMDs) are expensive, most users of VR rely on smartphone-based displays. There are also some places that allow one or more people to experience VR, such as the IMAX center in Los Angeles. Activities such as VR viewing will be one of the big drivers for higher-resolution mobile device displays.

Tools that allow artists and directors to get fast feedback on their shots are still in development. But progress is being made, and today over 50 percent of VR is used for video viewing rather than games. Participants in a VR/AR market session, moderated by the Hollywood Reporter’s Carolyn Giardina and including Marcie Jastrow, David Moretti, Catherine Day and Phil Lelyveld, seemed to agree that the biggest immediate opportunity is probably with AR.

Koji Gardiner from Jaunt gave a great talk on their approach to VR. He discussed the various ways that 360-degree video can be captured and the processing required to create finished stitched video. For an array of cameras with some separation between them (no common axis point for the imaging cameras), there will be areas between camera images that need to be stitched together using common reference points, as well as blind spots near the cameras where they are not capturing images.

If all of the cameras share a single axis, there are effectively no blind spots and no stitching is needed, as shown in the image below. Covering the full 360-degree space, however, requires additional cameras located on that axis.

The Fraunhofer Institute, in Germany, has been showing a 360-degree video camera with an effective single axis for several cameras for several years, as shown below. They do this using mirrors to reflect images to the individual cameras.

As the number of cameras is increased, the mathematical work to stitch the 360-degree images together is reduced.

Stitching
There are two approaches commonly used in VR stitching of multiple camera videos. The easiest to implement is a geometric approach that uses known geometries and distances to objects. It requires limited computational resources but results in unavoidable ghosting artifacts at seams from the separate images.

The Optical Flow approach synthesizes every pixel by computing correspondences between neighboring cameras. This approach eliminates the ghosting artifacts at the seams but has its own more subtle artifacts and requires significantly more processing capability. The Optical Flow approach requires computational capabilities far beyond those normally available to content creators. This has led to a growing market to upload multi-camera video streams to cloud services that process the stitching to create finished 360-degree videos.
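The seam artifacts of the geometric approach are easy to see in a toy model. In the sketch below, two cameras' image strips overlap by a fixed number of columns and a linear feather cross-fades between them; when the calibrated geometry is wrong (an object closer than the assumed distance), the strips disagree in the overlap and the blend shows as ghosting. This is a conceptual illustration, not any stitcher's actual code:

```python
import numpy as np

# Toy sketch of the seam blend in the geometric stitching approach:
# a linear feather cross-fades between two overlapping camera strips.

def feather_blend(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    h, wl = left.shape
    wr = right.shape[1]
    out = np.zeros((h, wl + wr - overlap))
    out[:, :wl - overlap] = left[:, :wl - overlap]     # left-only region
    out[:, wl:] = right[:, overlap:]                   # right-only region
    alpha = np.linspace(1.0, 0.0, overlap)             # fade left out, right in
    out[:, wl - overlap:wl] = alpha * left[:, -overlap:] + (1 - alpha) * right[:, :overlap]
    return out

a = np.ones((4, 8))          # camera A strip (bright)
b = np.zeros((4, 8))         # camera B strip (dark)
print(feather_blend(a, b, 4).shape)   # (4, 12)
```

If the two strips were perfectly consistent in the overlap, the blend would be invisible; any disparity between them is smeared across the feather, which is exactly the ghosting the optical-flow approach avoids by synthesizing per-pixel correspondences instead.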

Files from the Jaunt One camera system are first downloaded and organized on a laptop computer and then uploaded to Jaunt’s cloud server to be processed and create the stitching to make a 360 video. Omni-directionally captured audio can also be uploaded and mixed ambisonically, resulting in advanced directionality in the audio tied to the VR video experience.

Google and Facebook also have cloud-based resources for computational photography used for this sort of image stitching.

The Jaunt One 360-degree camera has a 1-inch 20MP rolling-shutter sensor with frame rates up to 60fps, a maximum ISO of 3200 and 29dB SNR at ISO 800. Each camera module delivers 10 stops of dynamic range with a 130-degree diagonal FOV and f/2.9 optics, and the system outputs up to 16K resolution (8K per eye). At 60fps, the Jaunt One produces 200GB per minute uncompressed, which can fill a 1TB SSD in five minutes, so compression is required to make currently affordable storage devices practical. Compressed, the camera writes 11GB per minute, filling a 1TB SSD in about 90 minutes.
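The storage arithmetic above is easy to verify. The sketch below reproduces the article's figures, taking 1TB as 1000GB:

```python
def minutes_to_fill(ssd_gb: float, rate_gb_per_min: float) -> float:
    """How long a capture at a given data rate takes to fill an SSD."""
    return ssd_gb / rate_gb_per_min

# Figures from the article: ~200GB/min uncompressed, ~11GB/min compressed.
print(minutes_to_fill(1000, 200))        # 5.0 minutes uncompressed
print(round(minutes_to_fill(1000, 11)))  # 91 minutes compressed (~90)
```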

The actual stitched image, laid out flat, looks like a distorted projection. But when viewed in a stereoscopic viewer it appears as a natural image of the world around the viewer, giving an immersive experience. At any point in time the viewer does not see the entire image, only the restricted region they are looking at directly, as shown in the red box in the figure below.

The full 360-degree image can be quite high resolution, but unless special steps are taken, the resolution inside the region being viewed at any moment will be much less than the resolution of the overall scene.

The image below shows that for a 4K 360-degree video, the resolution within the field of view (FOV) may be only about 1K, a loss that is quite perceptible to the human eye.
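That 4K-to-1K relationship follows directly from the geometry of an equirectangular panorama: the headset sees only its angular fraction of the full 360-degree width. A minimal sketch, assuming an illustrative headset FOV of roughly 90 degrees:

```python
def fov_pixels(pano_width_px: int, fov_deg: float) -> float:
    """Horizontal pixels landing inside the headset's field of view,
    for an equirectangular panorama spanning 360 degrees."""
    return pano_width_px * fov_deg / 360.0

# A "4K" (3840-pixel-wide) 360-degree video viewed through a ~90-degree
# horizontal FOV leaves only about 1K of horizontal resolution:
print(fov_pixels(3840, 90))  # 960.0 -> roughly "1K"
```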

In order to provide a better viewing experience, either the resolution of the entire view must be higher (e.g., the high-resolution Jaunt One delivers 8K per eye, or 16K total displayed resolution) or there must be a way to increase the resolution in the most significant FOV in a video, so that at least within that FOV the resolution produces a greater feeling of reality.
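Running the same geometry in reverse shows how much total resolution a sharp view demands. The sketch below (again assuming an illustrative 90-degree horizontal FOV) computes the panorama width required to keep a target pixel count inside the FOV:

```python
def required_pano_width(target_fov_px: int, fov_deg: float) -> float:
    """Total equirectangular width needed so that `fov_deg` of view
    still contains `target_fov_px` horizontal pixels."""
    return target_fov_px * 360.0 / fov_deg

# Keeping ~2K (2048 px) inside a 90-degree FOV requires a panorama
# roughly 8K wide, consistent with Jaunt One's 8K-per-eye output:
print(required_pano_width(2048, 90))  # 8192.0
```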

Virtual reality, augmented reality and mixed reality create new ways of interacting with the world around us and will drive consumer technologies and the need for 360-degree video. New tools and stitching software, much of this cloud-based, will enable these workflows for folks who want to participate in this revolution in content. The role of a director is as important as ever as new methods are needed to tell stories and guide the viewer to engage in this story.

2017 Creative Storage Conference
You can learn more about the growth in VR content in professional video and how this will drive new digital storage demand and technologies to support the high data rates needed for captured content and cloud-based VR services at the 2017 Creative Storage Conference — taking place May 24, 2017 in Culver City.


Thomas M. Coughlin of Coughlin Associates is a storage analyst and consultant. He has over 30 years in the data storage industry and is the author of Digital Storage in Consumer Electronics: The Essential Guide.

Deluxe VFX

Craig Zerouni joins Deluxe VFX as head of technology

Deluxe has named Craig Zerouni as head of technology for Deluxe Visual Effects. In this role, he will focus on continuing to unify software development and systems architecture across Deluxe’s Method studios in Los Angeles, Vancouver, New York and India, and its Iloura studios in Sydney and Melbourne, as well as LA’s Deluxe VR.

Based in LA and reporting to president/GM of Deluxe VFX and VR Ed Ulbrich, Zerouni will lead VFX and VR R&D and software development teams and systems worldwide, working closely with technology teams across Deluxe’s Creative division.

Zerouni has been working in media technology and production for nearly three decades, joining Deluxe most recently from DreamWorks, where he was director of technology at its Bangalore, India-based facility, overseeing all technology. Prior to that he spent nine years at Digital Domain, where he was first head of R&D, responsible for software strategy and teams in five locations across three countries, then senior director of technology, overseeing software, systems, production technology, technical directors and media systems. He has also directed engineering, products and teams at software/tech companies Silicon Grail, Side Effects Software and Critical Path. In addition, he was co-founder of London-based computer animation company CFX.

Zerouni’s work has contributed to features including Tron: Legacy, Iron Man 3, Maleficent, X-Men: Days of Future Past, Ender’s Game and more than 400 commercials and TV IDs and titles. He is a member of BAFTA, ACM/SIGGRAPH, IEEE and the VES. He has served on the AMPAS Digital Imaging Technology Subcommittee and is the author of the technical reference book “Houdini on the Spot.”

Says Ulbrich on the new hire: “Our VFX work serves both the features world, which is increasingly global, and the advertising community, which is increasingly local. Behind the curtain at Method, Iloura, and Deluxe, in general, we have been working to integrate our studios to give clients the ability to tap into integrated global capacity, technology and talent anywhere in the world, while offering a high-quality local experience. Craig’s experience leading global technology organizations and distributed development teams, and building and integrating pipelines is right in line with our focus.”