Category Archives: AR

Timecode and GoPro partner to make posting VR easier

Timecode Systems and GoPro’s Kolor team recently worked together to create a new timecode sync feature for Kolor’s Autopano Video Pro stitching software. By combining their technologies, the two companies have developed a VR workflow solution that brings the efficiency benefits of professional-standard timecode synchronization to VR and 360° filming.

Time-aligning files from the multiple cameras in a 360° VR rig can be a manual and time-consuming process if there is no easy synchronization point, especially when synchronizing with separate audio. Visually timecode-slating cameras is a disruptive manual process, and using the clap of a slate (or another visual or audio cue) as a sync marker can be unreliable when it comes to the edit process.

The new sync feature, included in the Version 3.0 update to Autopano Video Pro, incorporates full support for MP4 timecode generated by Timecode Systems’ products. The solution is compatible with a range of custom, multi-camera VR rigs, including rigs using GoPro’s Hero 4 cameras with SyncBac Pro for timecode, as well as other camera models using alternative Timecode Systems products. This allows VR filmmakers to focus on the creative and not worry about whether every camera in the rig is shooting in frame-level synchronization. Whether filming with a two-camera GoPro Hero 4 rig or a 24-camera 360° array shooting at resolutions as high as 32K, the solution syncs with the same efficiency. The end results are media files that can be automatically timecode-aligned in Autopano Video Pro with the push of a button.
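The alignment idea itself is simple arithmetic once every file carries a start timecode: convert each clip’s embedded timecode to frames, then offset every clip against the earliest start. Here is a minimal sketch of that idea in Python — the file names, timecode values and frame rate are hypothetical examples, not values from the Autopano Video Pro implementation:

```python
# Minimal sketch: align multi-camera clips by their embedded start
# timecodes. File names, timecode values and the frame rate are
# hypothetical examples, not values from Autopano Video Pro.

FPS = 30  # all cameras assumed genlocked to one frame rate

def tc_to_frames(tc: str, fps: int = FPS) -> int:
    """Convert an 'HH:MM:SS:FF' timecode string to an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

clips = {
    "cam_front.mp4": "14:03:21:12",
    "cam_left.mp4":  "14:03:19:00",
    "cam_right.mp4": "14:03:19:18",
}

starts = {name: tc_to_frames(tc) for name, tc in clips.items()}
earliest = min(starts.values())

# The offset (in frames) each clip must be shifted so all line up.
for name, start in sorted(starts.items()):
    print(f"{name}: offset {start - earliest} frames")
```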

“We’re giving VR camera operators the confidence that they can start and stop recording all day long without the hassle of having to disturb filming to manually slate cameras; that’s the understated benefit of timecode,” says Paul Bannister, chief science officer of Timecode Systems.

“To create high-quality VR output using multiple cameras to capture high-quality spherical video isn’t enough; the footage that is captured needs to be stitched together as simply as possible — with ease, speed and accuracy, whatever the camera rig,” explains Alexandre Jenny, senior director of Immersive Media Solutions at GoPro. “Anyone who has produced 360 video will understand the difficulties involved in relying on a clap or visual cue to mark when all the cameras start recording to match up video for stitching. To solve that issue, either you use an integrated solution like GoPro Omni with a pixel-level synchronization, or now you have the alternative to use accurate timecode metadata from SyncBac Pro in a custom, scalable multicamera rig. It makes the workflow much easier for professional VR content producers.”

Hobo’s Howard Bowler and Jon Mackey on embracing full-service VR

By Randi Altman

New York-based audio post house Hobo, which offers sound design, original music composition and audio mixing, recently embraced virtual reality by launching a 360 VR division. Wanting to offer clients a full-service solution, they partnered with New York production/post production studios East Coast Digital and Hidden Content, allowing them to provide concepting through production, post, music and final audio mix in an immersive 360 format.

The studio is already working on some VR projects, using their “object-oriented audio mix” skills to enhance the 360 viewing experience.

We touched base with Hobo’s founder/president, Howard Bowler, and post production producer Jon Mackey to get more info on their foray into VR.

Why was now the right time to embrace 360 VR?
Bowler: We saw the opportunity stemming from the advancement of the technology not only in the headsets but also in the tools necessary to mix and sound design in a 360-degree environment. The great thing about VR is that we have many innovative companies trying to establish what the workflow norm will be in the years to come. We want to be on the cusp of those discoveries to test and deploy these tools as the ecosystem of VR expands.

As an audio shop you could have offered audio-for-VR services only, but you instead aligned with two other companies to provide a full-service experience. Why was that important?
Bowler: This partnership provides our clients with added security when venturing out into VR production. Since the medium is relatively new in the advertising and film world, partnering with experienced production companies gives us the opportunity to better understand the nuances of filming in VR.

How does that relationship work? Will you be collaborating remotely? Same location?
Bowler: Thankfully, we are all based in West Midtown, so the collaboration will be seamless.

Can you talk a bit about object-based audio mixing and its challenges?
Mackey: The challenge of object-based mixing is not only mixing in a 360-degree environment, or converting traditional audio into something that moves with the viewer, but also determining which objects, through their sound cues, will lead the viewer into another part of the environment.

Bowler: It’s the creative challenge that inspires us in our sound design. With traditional 2D film, the editor controls what you see with their cuts. With VR, the partnership between sight and sound becomes much more important.

Howard Bowler pictured embracing VR.

How different is your workflow — traditional broadcast or spot work versus VR/360?
Mackey: The VR/360 workflow isn’t much different than traditional spot work. It’s the testing and review that is a game changer. Things generally can’t be reviewed live unless you have a custom rig that runs its own headset. It’s a lot of trial and error in checking the mixes, sound design and spatial mixes. You also have to take into account the extra time and instruction your clients need to review a project.

What has surprised you the most about working in this new realm?
Bowler: The great thing about the VR/360 space is the amount of opportunity there is. What surprised us the most is the passion of all the companies that are venturing into this area. It’s different than talking about conventional film or advertising; there’s a new spark, and it’s fueling the rise of the industry and allowing larger companies to connect with smaller ones to create an atmosphere where passion is the only thing that counts.

What tools are you using for this type of work?
Mackey: The audio tools we use are the ones that best fit into our Avid Pro Tools workflow. This includes plug-ins from G-Audio and others that we are experimenting with.

Can you talk about some recent projects?
Bowler: We’ve completed projects for Samsung with East Coast Digital, and there are more on the way.

Main Image: Howard Bowler and Jon Mackey


Comprimato plug-in manages Ultra HD, VR files within Premiere

Comprimato, maker of GPU-accelerated storage compression and video transcoding solutions, has launched Comprimato UltraPix. This video plug-in offers proxy-free, auto-setup workflows for Ultra HD, VR and more on hardware running Adobe Premiere Pro CC.

The challenge for post facilities finishing in 4K or 8K Ultra HD, or working on immersive 360 VR projects, is managing the massive amount of data. The files are large, requiring a lot of expensive storage, which can be slow and cumbersome to load, and achieving realtime editing performance is difficult.

Comprimato UltraPix addresses this by building on JPEG2000, a compression format that offers high image quality (including a mathematically lossless mode) and generates smaller versions of each frame as an inherent part of the compression process. Comprimato UltraPix delivers the file at a size that the user’s hardware can accommodate.
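UltraPix’s implementation is proprietary, but the JPEG2000 property it builds on is easy to demonstrate: a decoder can pull a reduced-resolution image straight out of the same codestream without decoding the full frame. A sketch using the open-source glymur library — the file name is a hypothetical example:

```python
# Sketch of JPEG2000's built-in multi-resolution decode using the
# open-source glymur library (pip install glymur). The file name is
# hypothetical; UltraPix's own implementation is proprietary.
import glymur

j2k = glymur.Jp2k("frame_0001.jp2")

full = j2k[:]            # decode the full-resolution frame
half = j2k[::2, ::2]     # decode directly at 1/2 resolution
quarter = j2k[::4, ::4]  # or 1/4, without ever decoding full pixels

print(full.shape, half.shape, quarter.shape)
```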

Once Comprimato UltraPix is loaded on any hardware, it configures itself with auto-setup, requiring no specialist knowledge from the editor, who continues to work in Premiere Pro CC exactly as normal. Any workflow can be boosted by Comprimato UltraPix, and the larger the files, the greater the benefit.

Comprimato UltraPix is multi-platform video-processing software that switches video resolution instantly, in realtime. It is a lightweight, downloadable video plug-in for OS X, Windows and Linux systems. Editors can switch between 4K, 8K, full HD, HD or lower resolutions without proxy-file rendering or transcoding.

“JPEG2000 is an open standard, recognized universally, and post production professionals will already be familiar with it as it is the image standard in DCP digital cinema files,” says Comprimato founder/CEO Jiří Matela. “What we have achieved is a unique implementation of JPEG2000 encoding and decoding in software, using the power of the CPU or GPU, which means we can embed it in realtime editing tools like Adobe Premiere Pro CC. It solves a real issue, simply and effectively.”

“Editors and post professionals need tools that integrate ‘under the hood’ so they can focus on content creation and not technology,” says Sue Skidmore, partner relations for Adobe. “Comprimato adds a great option for Adobe Premiere Pro users who need to work with high-resolution video files, including 360 VR material.”

Comprimato UltraPix plug-ins are currently available for Adobe Premiere Pro CC and Foundry Nuke and will be available on other post and VFX tools soon. You can download a free 30-day trial or buy Comprimato UltraPix for $99 a year.


The importance of audio in VR

By Anne Jimkes

While some might not be aware, sound is 50 percent of the experience in VR, as well as in film, television and games. Because we can’t physically see the audio, it might not get as much attention as the visual side of the medium. But the balance and collaboration between visual and aural is what creates the most effective, immersive and successful experience.

More specifically, sound in VR can be used to ease people into the experience, what we also call “onboarding.” It can be used subtly and subconsciously to guide viewers by motivating them to look in a specific direction of the virtual world, which completely surrounds them.

In every production process, it is important to discuss how sound can be used to benefit the storytelling and the overall experience of the final project. In VR, especially the many low-budget independent projects, it is crucial to keep the importance and use of audio in mind from the start to save time and money in the end. Oftentimes, there are no real opportunities or means to record ADR after a live-action VR shoot, so it is important to give the production mixer ample opportunity to capture the best production sound possible.

Anne Jimkes at work.

This involves capturing wild lines, making sure there is time to plant and check the mics, and recording room tone. These things are already required, albeit not always granted, on regular shoots, but they are even more important on a set where a boom operator cannot be used because of the camera’s 360-degree view. The post process is also very similar to that for TV or film up to the point of actual spatialization. We come across similar issues of having to clean up dialogue and fill in the world through sound. What producers must be aware of, however, is that after all the necessary elements of the soundtrack have been prepared, we have to manually and meticulously place and move around all the “audio objects” and various audio sources throughout the space. Whenever people decide to re-orient the video — meaning when they change what is considered the initial point of facing forward or “north” — we have to rewrite all this information that established the location and movement of the sound, which takes time.
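For the ambisonic bed itself, at least, a change of “north” is mathematically just a rotation of the B-format channels about the vertical axis; the time-consuming part Jimkes describes is re-authoring every placed audio object. A first-order sketch of that rotation — the channel arrays are silent placeholders, not a real mix:

```python
# Sketch: re-orienting a first-order ambisonic (B-format) bed is a
# rotation of the X/Y channels about the vertical axis; W and Z are
# untouched by yaw. The channel arrays here are silent placeholders.
import numpy as np

def rotate_foa_yaw(w, x, y, z, degrees):
    """Rotate a B-format sound field by `degrees` around the Z (up) axis."""
    t = np.radians(degrees)
    x_rot = x * np.cos(t) - y * np.sin(t)
    y_rot = x * np.sin(t) + y * np.cos(t)
    return w, x_rot, y_rot, z

# One second of placeholder audio at 48kHz per channel:
w = x = y = z = np.zeros(48000)
w2, x2, y2, z2 = rotate_foa_yaw(w, x, y, z, degrees=90.0)
```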

Capturing Audio for VR
To capture audio for virtual reality we have learned a lot about planting and hiding mics as efficiently as possible. Unlike regular productions, it is not possible to use a boom mic, which tends to be the primary and most natural-sounding microphone. Aside from the more common lavalier mics, we also use ambisonic mics, which capture a full sphere of audio that matches the 360 picture — if the mic is placed correctly on axis with the camera. Most of the time we work with Sennheiser and use their Ambeo microphone to capture 360 audio on set, after which we add the rest of the spatialized audio during post production. Playing back the spatialized audio has become easier lately, because more and more platforms and VR apps accept some form of 360 audio playback. There is still a difference between the file formats to which we can encode our audio outputs, meaning that some are more precise and others are a little more blurry regarding spatialization. With VR, there is not yet a standard for deliverables and specs, unlike the film/television workflow.
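The spatialization added in post boils down to compact math: a first-order ambisonic (B-format) encode of a mono element is four gain-weighted copies of the signal, derived from its direction. A sketch using the traditional FuMa weighting — the tone and angles are placeholder values:

```python
# Sketch: first-order ambisonic (B-format, traditional FuMa weighting)
# encode of a mono element at a given direction. The tone and angles
# are placeholder values.
import numpy as np

def encode_foa_fuma(signal, azimuth_deg, elevation_deg):
    a = np.radians(azimuth_deg)
    e = np.radians(elevation_deg)
    w = signal * (1.0 / np.sqrt(2.0))    # omni channel, -3dB per FuMa
    x = signal * np.cos(a) * np.cos(e)   # front/back figure-eight
    y = signal * np.sin(a) * np.cos(e)   # left/right figure-eight
    z = signal * np.sin(e)               # up/down figure-eight
    return np.stack([w, x, y, z])

tone = np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)
bformat = encode_foa_fuma(tone, azimuth_deg=30.0, elevation_deg=10.0)
```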

What matters most in the end is that people are aware of how the creative use of sound can enhance their experience, and how important it is to spend time on capturing good dialogue on set.


Anne Jimkes is a composer, sound designer, scholar and visual artist from the Netherlands. Her work includes VR sound design at EccoVR and work with the IMAX VR Centre. With a Master’s Degree from Chapman University, Jimkes previously served as a sound intern for the Academy of Television Arts & Sciences.


Sound editor/mixer Korey Pereira on 3D audio workflows for VR

By Andrew Emge

As the technologies for VR and 360 video rapidly advance and become more accessible, media creators are realizing the crucial role that sound plays in achieving realism. Sound designers are exploring this new frontier of 3D audio at the same time that tools for the medium are being developed and introduced. When everything is so new and constantly evolving, how does one learn where to start or decide where to invest time and experimentation?

To better understand this process, I spoke with Korey Pereira, a sound editor and mixer based in Austin, Texas. He recently entered the VR/360 audio world and has started developing a workflow.

Can you provide some background about who you are, the work you’ve done, and what you’ve been up to lately?
I’m the owner/creative director at Soularity Sound, an Austin-based post company. We primarily work with indie filmmakers, but also do some television and ad work. In addition to my work at Soularity, I also work as a sound editor and mixer at a few other Austin post facilities, including Soundcrafter. My credits with them include Richard Linklater’s Boyhood and Everybody Wants Some, as well as TV shows such as Shipping Wars and My 600lb Life.

You recently purchased the Pro Sound Effects NYC Ambisonics library. Can you talk about some VR projects you are working on?
In the coming months I plan to start creating audio content for VR with a local content creator, Deepak Chetty. Over the years we have collaborated on a number of projects; most recently I worked on his stereoscopic 3D sci-fi/action film, Hard Reset, which won the 2016 “Best 3D Live Action Short” award from the Advanced Imaging Society.

Deepak Chetty shooting a VR project.

I love sci-fi as a genre, because there really are no rules. It lets you really go for it as far as sound. Deepak has been shifting his creative focus toward 360 content and we are hoping to start working together in that aspect in the near future.

Deepak is currently working mostly on non-fiction and documentary-based content in 360 — mainly environment capture with a through-line of audio storytelling that serves as the backbone of the piece. He is also looking forward to experimenting with fiction-based narratives in the 360 space, especially with the use of spatial audio to enhance immersion for the viewer.

Prior to meeting Deepak, did you have any experience working with VR/3D audio?
No, this is my first venture into the world of VR audio or 3D audio. I have been mixing in surround for over a decade, but I am excited about the additional possibilities this format brings to the table.

What have been the most helpful sources for studying up and figuring out a workflow?
The Internet! There is such a wealth of information out there, and you kind of just have to dive in. The benefit of 360 audio being a relatively new format is that people are still willing to talk openly about it.

Was there anything particularly challenging to get used to or wrap your head around?
In a lot of ways designing audio for VR is not that different from traditional sound mixing for film. You start with a bed of ambiences and then place elements within a surround space. I guess the most challenging part of the transition is anticipating how the audience might hear your mix. If the viewer decides to watch a whole video facing the surrounds, how will it sound?

Can you describe the workflow you’ve established so far? What are some decisions you’ve made regarding DAW, monitoring, software, plug-ins, tools, formats and order of operation?
I am a Pro Tools guy, so my main goal was finding a solution that works seamlessly inside the Pro Tools environment. As I started looking into different options, the Two Big Ears Spatial Workstation really stood out to me as being the most intuitive and easiest platform to hit the ground running with. (Two Big Ears recently joined Facebook, so Spatial Workstation is now available for free!)

Basically, you install a Pro Tools plug-in that works as a 3D audio engine and gives you a Pro Tools project with all the routing and tracks laid out for you. There are object-based tracks that allow you to place sounds within a 3D environment as well as ambience tracks that allow you to add stereo or ambisonic beds as a basis for your mix.

The coolest thing about this platform is that it includes a 3D video player that runs in sync with Pro Tools. There is a binaural preview pathway in the template that lets you hear the shift in perspective as you move the video around in the player. Pretty cool!

In September 2016, another audio workflow for VR in Pro Tools entered the market from the Dutch company Audio Ease and their 360 pan suite. Much like the Spatial Workstation, the suite offers an object-based panner (360 pan) that when placed on every audio track allows you to pan individual items within the 360-degree field of view. The 360 pan suite also includes the 360 monitor, which allows you to preview head tracking within Pro Tools.

Where the 360 pan suite really stands out is with their video overlay function. By loading a 360 video inside of Pro Tools, Audio Ease adds an overlay on top of the Pro Tools video window, letting you pan each track in real time, which is really useful. For the features it offers, it is relatively affordable. The suite does not come with its own template, but they have a quick video guide to get you up and going fairly easily.

Are there any aspects that you’re still figuring out?
Delivery is still a bit up in the air. You may need to export in multiple formats to be able to upload to Facebook, YouTube, etc. I was glad to see that YouTube is supporting the ambisonic format for delivery, but I look forward to seeing workflows become more standardized across the board.

Any areas in which you see the need for further development, and/or where the tech just isn’t there yet?
I think the biggest limitation with VR is the lack of affordable and easy-to-use 3D audio capture devices. I would love to see a super-portable ambisonic rig that filmmakers can easily use in conjunction with shooting 360 video. Especially as media giants like YouTube are gravitating toward the ambisonic format for delivery, it would be great for them to be able to capture the actual space in the same format.

In January 2017, Røde announced the VideoMic Soundfield — an on-camera ambisonic, 360-degree surround sound microphone — though pricing and release dates have not yet been made public.

One new product I am really excited about is the Sennheiser Ambeo VR mic, which is around $1,650. That’s a bit pricey for the most casual user once you factor in a 4-track recorder, but for the professional user that already has a 788T, the Ambeo VR mic offers a nice turnkey solution. I like that the mic looks a little less fragile than some of the other options on the market. It has a built-in windscreen/cage similar to what you would see on a live handheld microphone. It also comes with a Rycote shockmount and cable to 4-XLR, which is nice.

Some leading companies have recently selected ambisonics as the standard spatial audio format — can you talk a bit about how you use ambisonics for VR?
Yeah, I think this is a great decision. I like the “future proof” nature of the ambisonic format. Even in traditional film mixing, I like having the option to export to stereo, 5.1 or 7.1 depending on the project. Until ambisonic becomes more standardized, I like that the Two Big Ears/FB 360 encoder allows you to export to the .tbe B-Format (FuMa or ambiX/YouTube) as well as quad-binaural.

I am a huge fan of the ambisonic format in general. The Pro Sound Effects NYC Ambisonics Library (and now Chicago and Tokyo as well) was my first experience using the format and I was blown away. In a traditional mixing environment it adds another level of depth to the backgrounds. I really look forward to being able to bring it to the VR format as well.
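The formats Pereira names differ less than they might sound: at first order, FuMa and ambiX B-format differ only in channel order and in a single gain on the W channel, so conversion is a reorder plus one scale. A sketch, assuming first-order material only:

```python
# Sketch: converting first-order FuMa (channels W, X, Y, Z) to ambiX
# (ACN order W, Y, Z, X with SN3D normalization). At first order the
# only gain difference is FuMa's -3dB on W. Assumes a (4, n) array.
import numpy as np

def fuma_to_ambix_foa(fuma):
    w, x, y, z = fuma
    return np.stack([w * np.sqrt(2.0),  # undo FuMa's 1/sqrt(2) on W
                     y, z, x])          # reorder to ACN: W, Y, Z, X

fuma_mix = np.zeros((4, 48000))         # placeholder B-format audio
ambix_mix = fuma_to_ambix_foa(fuma_mix)
```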


Andrew Emge is operations manager at Pro Sound Effects.


Quick Chat: Scott Gershin from The Sound Lab at Technicolor

By Randi Altman

Veteran sound designer and feature film supervising sound editor Scott Gershin is leading the charge at the recently launched The Sound Lab at Technicolor, which, in addition to film and television work, focuses on immersive storytelling.

Gershin has more than 100 films to his credit, including American Beauty (which earned him a BAFTA nomination), Guillermo del Toro’s Pacific Rim and Dan Gilroy’s Nightcrawler. But films aren’t the only medium Gershin has tackled — in addition to television work (he has an Emmy nom for the TV series Beauty and the Beast), this audio post pro has created the sound for game titles such as Resident Evil, Gears of War and Fable. One of his most recent projects was contributing to id Software’s Doom.

We recently reached out to Gershin to find out more about his workflow and this new Burbank-based audio entity.

Can you talk about what makes this facility different than what Technicolor has at Paramount? 
The Sound Lab at Technicolor works in concert with our other audio facilities, tackling film, broadcast and gaming projects. In doing so we are able to use Technicolor’s world-class dubbing, ADR and Foley stages.

One of the focuses of The Sound Lab is to identify and use cutting-edge technologies and workflows not only in traditional mediums, but in those new forms of entertainment such as VR, AR, 360 video/films, as well as dedicated installations using mixed reality. The Sound Lab at Technicolor is made up of audio artists from multiple industries who create a “brain trust” for our clients.

Scott Gershin and The Sound Lab team.

As an audio industry veteran, how has the world changed since you started?
I was one of the first sound people to use computers in the film industry. When I moved from the music industry into film post production, I brought that knowledge and experience with me. It gave me access to a huge number of tools that helped me tell better stories with audio. The same happened when I expanded into the game industry.

Learning the interactive tools of gaming is now helping me navigate into these new immersive industries, combining my film experience to tell stories and my gaming experience using new technologies to create interactive experiences.

One of the biggest changes I’ve seen is that there are so many opportunities for the audience to ingest entertainment — creating competition for their time — whether it’s traveling to a theatre, watching TV (broadcast, cable and streaming) on a new 60- or 70-inch TV, or playing video games alone on a phone or with friends on a console.

There are so many choices, which means that the creators and publishers of content have to share a smaller piece of the pie. This forces budgets to be smaller since the potential audience size is smaller for that specific project. We need to be smarter with the time that we have on projects and we need to use the technology to help speed up certain processes — allowing us more time to be creative.

Can you talk about your favorite tools?
There are so many great technologies out there. Each one adds a different color to my work and provides me with information that is crucial to my sound design and mix. For example, Nugen has great metering and loudness tools that help me zero in on my clients’ LKFS requirements. With each client having their own loudness spec, these tools allow me to stay creative while still meeting their requirements.

Audi’s The Duel

What are some recent projects you’ve worked on?
I’ve been working on a huge variety of projects lately. Recently, I finished a commercial for Audi called The Duel, a VR piece called My Brother’s Keeper, 10 Webisodes of The Strain and a VR music piece for Pentatonix. Each one had a different requirement.

What is your typical workflow like?
When I get a job in, I look at what the project is trying to accomplish. What is the story or the experience about? I ask myself, how can I use my craft, shaping audio, to better enhance the experience? Once I understand how I am going to approach the project creatively, I look at what the release platform will be. What are the technical challenges, and what frequencies and spatial options are open to me, whether that means a film in Dolby Atmos or a VR project on the Rift? Once I understand both the creative and technical challenges, I start working within the schedule allotted me.

Speed and flow are essential… the tools need to be like musical instruments to me, where it goes from brain to fingers. I have a bunch of monitors in front of me, each one supplying me with different and crucial information. It’s one of my favorite places to be — flying the audio starship and exploring the never-ending vista of the imagination. (Yeah, I know it’s corny, but I love what I do!)


HPA Tech Retreat takes on realities of virtual reality

By Tom Coughlin

The HPA Tech Retreat, run by the Hollywood Professional Association in association with SMPTE, began with an insightful one-day VR seminar — Integrating Virtual Reality/Augmented Reality into Entertainment Applications. Lucas Wilson from SuperSphere kicked off the sessions and helped with much of the organization of the seminar.

The seminar addressed virtual reality (VR), augmented reality (AR) and mixed reality (MR, a subset of AR where the real world and the digital world interact, like Pokémon Go). As in traditional planar video, 360-degree video still requires a director to tell a story and direct the eye to see what is meant to be seen. Successful VR requires understanding how people look at things and how they perceive reality, and using that understanding to help tell a story. One technique that may help is reinforcing the viewer’s gaze with color and sound cues that vary with where the viewer looks — e.g., these may be different for the “good guy” and the “bad guy.”

VR workflows are quite different from traditional ones, with many elements changing with multiple-camera content. For instance, it is much more difficult to keep a camera crew out of the image, and providing proper illumination for all the cameras can be a challenge. The image below from Jaunt shows their 360-degree workflow, including the use of their cloud-based computational image service to stitch the images from the multiple cameras.

Snapchat is the biggest MR application, said Wilson, and Snapchat Stories could be the basis of future post tools.

Because stand-alone headsets (head-mounted displays, or HMDs) are expensive, most users of VR rely on smartphone-based displays. There are also some venues that allow one or more people to experience VR, such as the IMAX center in Los Angeles. Activities such as VR viewing will be one of the big drivers for higher-resolution mobile device displays.

Tools that allow artists and directors to get fast feedback on their shots are still in development. But progress is being made, and today over 50 percent of VR is used for video viewing rather than games. Participants in a VR/AR market session, moderated by the Hollywood Reporter’s Carolyn Giardina and including Marcie Jastrow, David Moretti, Catherine Day and Phil Lelyveld, seemed to agree that the biggest immediate opportunity is probably with AR.

Koji Gardiner from Jaunt gave a great talk on their approach to VR. He discussed the various ways that 360-degree video can be captured and the processing required to create finished stitched video. For an array of cameras with some separation between them (no common axis point for the imaging cameras), there will be areas between camera images that need to be stitched together using common reference points, as well as blind spots near the cameras where no images are captured.

If there is a single axis point for all of the cameras, then there are effectively no blind spots and no parallax errors to stitch away, as shown in the image below. Covering all the space for a 360-degree video then requires additional cameras located on that axis.

The Fraunhofer Institute in Germany has for several years been showing a 360-degree video camera that gives several cameras an effective single axis, as shown below. They do this using mirrors to reflect images to the individual cameras.

As the number of cameras is increased, the mathematical work to stitch the 360-degree images together is reduced.

Stitching
There are two approaches commonly used in VR stitching of multiple camera videos. The easier to implement is a geometric approach that uses known geometries and distances to objects. It requires limited computational resources, but it results in unavoidable ghosting artifacts at the seams between the separate images.

The Optical Flow approach synthesizes every pixel by computing correspondences between neighboring cameras. This approach eliminates the ghosting artifacts at the seams but has its own more subtle artifacts and requires significantly more processing capability. The Optical Flow approach requires computational capabilities far beyond those normally available to content creators. This has led to a growing market to upload multi-camera video streams to cloud services that process the stitching to create finished 360-degree videos.
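The correspondence step at the core of the optical-flow approach can be sketched with OpenCV’s dense Farneback flow. This is not any particular vendor’s stitcher — production pipelines add calibration, warping and seam blending on top — and the file names are hypothetical:

```python
# Sketch: dense per-pixel correspondences between the overlapping
# regions of two neighboring cameras -- the core computation of
# optical-flow stitching. File names are hypothetical; production
# stitchers add calibration, warping and seam blending on top.
import cv2

left = cv2.imread("cam_left_overlap.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("cam_right_overlap.png", cv2.IMREAD_GRAYSCALE)

# flow[y, x] = (dx, dy): where each pixel of `left` lands in `right`.
flow = cv2.calcOpticalFlowFarneback(
    left, right, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

# Seam pixels can then be synthesized by warping each view halfway
# along the flow vectors and blending, which removes ghosting.
print(flow.shape)  # (height, width, 2)
```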

Files from the Jaunt One camera system are first downloaded and organized on a laptop computer and then uploaded to Jaunt’s cloud server, where they are processed and stitched into a 360 video. Omni-directionally captured audio can also be uploaded and mixed ambisonically, resulting in advanced directionality in the audio tied to the VR video experience.

Google and Facebook also have cloud-based resources for computational photography used for this sort of image stitching.

The Jaunt One 360-degree camera has a 1-inch 20MP rolling-shutter sensor per module, with frame rates up to 60fps, a 3200 ISO maximum and 29dB SNR at ISO 800. Each camera module offers 10 stops of dynamic range, a 130-degree diagonal FOV and f/2.9 optics, and the system delivers up to 16K resolution (8K per eye). At 60fps, the Jaunt One produces 200GB per minute uncompressed, which can fill a 1TB SSD in five minutes, so they are forced to use compression to be able to use currently affordable storage devices. Compressed, the camera creates 11GB per minute, which fills a 1TB SSD in 90 minutes.
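Those fill times follow directly from the data rates; a quick sanity check:

```python
# Sanity check on the fill-rate figures above (1TB taken as 1,000GB).
SSD_GB = 1000

for label, gb_per_min in (("uncompressed", 200), ("compressed", 11)):
    minutes = SSD_GB / gb_per_min
    print(f"{label}: {gb_per_min}GB/min fills a 1TB SSD in {minutes:.0f} min")

# uncompressed: 200GB/min fills a 1TB SSD in 5 min
# compressed: 11GB/min fills a 1TB SSD in 91 min
```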

The actual stitched image, laid out flat, looks like a distorted projection. But when viewed in a stereoscopic viewer it appears as a natural image of the world around the viewer, giving an immersive experience. At any point in time the viewer does not see all of the image, only the restricted region they are looking at directly, as shown by the red box in the figure below.

The full 360-degree image can be quite high resolution, but the resolution inside the region being viewed at any point in time will be much less than the resolution of the overall scene, unless special steps are taken.

The image below shows that for a 4K 360-degree video, the resolution in the field of view (FOV) may be only 1K — far less resolution, and quite perceptible to the human eye.
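The arithmetic behind that figure is straightforward — the viewport spans only a fraction of the equirectangular frame. A sketch, assuming a 90-degree horizontal FOV:

```python
# Sketch: how many horizontal pixels of an equirectangular 360 frame
# actually land inside the viewport. A 90-degree horizontal FOV is
# assumed for illustration.
def fov_pixels(full_width, fov_degrees=90):
    return full_width * fov_degrees / 360

for width in (3840, 7680, 15360):  # "4K", "8K", "16K" frame widths
    print(f"{width}px frame -> {fov_pixels(width):.0f}px across the FOV")

# A 4K (3840px) frame leaves only ~960px -- roughly "1K" -- in view.
```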

In order to provide a better viewing experience in the FOV, either the resolution of the entire view must be higher (e.g., the high-resolution Jaunt One version delivers 8K per eye, and thus 16K total displayed resolution) or there must be a way to increase the resolution in the most significant FOV in a video, so that at least in that FOV the resolution leads to a greater feeling of reality.

Virtual reality, augmented reality and mixed reality create new ways of interacting with the world around us and will drive consumer technologies and the need for 360-degree video. New tools and stitching software, much of this cloud-based, will enable these workflows for folks who want to participate in this revolution in content. The role of a director is as important as ever as new methods are needed to tell stories and guide the viewer to engage in this story.

2017 Creative Storage Conference
You can learn more about the growth in VR content in professional video and how this will drive new digital storage demand and technologies to support the high data rates needed for captured content and cloud-based VR services at the 2017 Creative Storage Conference — taking place May 24, 2017 in Culver City.


Thomas M. Coughlin of Coughlin Associates is a storage analyst and consultant. He has over 30 years in the data storage industry and is the author of Digital Storage in Consumer Electronics: The Essential Guide.


Rick & Morty co-creator Justin Roiland to keynote VRLA

Justin Roiland, co-creator of Rick & Morty from Cartoon Network’s Adult Swim, will be delivering VRLA’s Saturday keynote. The expo, which takes place April 14 and 15 at the LA Convention Center, will include demos, educational sessions, experimental work and presentations.

The exhibit floor will feature hardware and software developers, content creators and prototype technology that can only be seen at VRLA. Registration is currently open, with the business-focused two-day “Pro” pass at $299 and a one-day pass for Saturday priced at $40.

Roiland, who is also the newly minted founder of the VR studio Squanchtendo, aims to dive into the surreally funny possibilities of the medium in his keynote, remarking, “What does the future of VR hold? Will there be more wizard games? Are grandmas real? What is a wizard really? Are there wizard grandmas? How does this factor into VR? Please come to my incredible keynote address on the state of VR.”

VRLA is currently accepting applications for its Indie Zone, which offers complimentary exhibition space to small teams who have raised less than $500,000 in venture capital funding or generated less than that amount in revenue. Click here to apply.


Chris Hill & Sami Tahari

Imaginary Forces expands with EP Chris Hill and director of biz dev Sami Tahari

Imaginary Forces has added executive producer Chris Hill and director of business development Sami Tahari to its Los Angeles studio. The additions come at a time when the creative studio is looking to further expand their cross-platform presence with projects that mix VR/AR/360 with traditional, digital and social media.

Celebrating 20 years in business this year, the independently owned Imaginary Forces is a creative company specializing in brand strategy and visual storytelling encompassing many disciplines, including full-service design, production and post production. Being successful for that long in this business means they are regularly innovating and moving where the industry takes them. This led to the hiring of Hill and Tahari, whose diverse backgrounds will help strengthen the company’s long-standing relationships, as well as its continuous expansion into emerging markets.

Recent work of note includes main titles for Netflix’s beloved Stranger Things, the logo reveal for Michael Bay’s Transformers: The Last Knight and an immersive experience for the Empire State Building.

Hill’s diverse production experience includes commercials, experience design, entertainment marketing and branding for such clients as HBO Sports, Google, A&E and the Jacksonville Jaguars, among others. He joins Imaginary Forces after recently presiding over the broadcast division of marketing agency BPG.

Tahari brings extensive marketing, business and product development experience spanning the tech and entertainment spaces. His resume includes time at Lionsgate and Google, where he was an instrumental leader in the creative development and marketing of Google Glass.

“Imaginary Forces has a proven ability to use design and storytelling across any medium or industry,” adds Hill. “We can expand that ability to new markets, whether it’s emerging technologies, original content or sports franchises. When you consider, for example, the investment in massive screens and new technologies in stadiums across the country, it demands [that] same high level of brand strategy and visual storytelling.”

Our Main Image: L-R: Chris Hill and Sami Tahari.

HPA Tech Retreat takes on VR/AR at Tech Retreat Extra

The long-standing HPA Tech Retreat is always a popular destination for tech-focused post pros, and while they have touched on virtual reality and augmented reality in the past, this year they are dedicating an entire day to the topic — February 20, the day before the official Retreat begins. TR-X (Tech Retreat Extra) will feature VR experts and storytellers sharing their knowledge and experiences. The traditional HPA Tech Retreat runs from February 21-24 in Indian Wells, California.

TR-X VR/AR is co-chaired by Lucas Wilson (Founder/Executive Producer at SuperSphereVR) and Marcie Jastrow (Senior VP, Immersive Media & Head of Technicolor Experience Center), who will lead a discussion focused on the changing VR/AR landscape in the context of rapidly growing integration into entertainment and applications.

Marcie Jastrow

Experts and creative panelists will tackle questions such as: What do you need to understand to enable VR in your environment? How do you adapt? What are the workflows? Storytellers, technologists and industry leaders will provide an overview of the technology and discuss how to harness emerging technologies in the service of the artistic vision. A series of diverse case studies and creative explorations — from NASA to the NFL — will examine how to engage the audience.

The TR-X program, along with the complete HPA Tech Retreat program, is available here. Additional sessions and speakers will be announced.

TR-X VR/AR Speakers and Panel Overview
Monday, February 20

Opening and Introductions
Seth Hallen, HPA President

Technical Introduction: 360/VR/AR/MR
Lucas Wilson

Panel Discussion: The VR/AR Market
Marcie Jastrow
David Moretti, Director of Corporate Development, Jaunt
Catherine Day, Head of VR/AR, Missing Pieces
Phil Lelyveld, VR/AR Initiative Program Lead, Entertainment Technology Center at USC

Acquisition Technology
Koji Gardiner, VP, Hardware, Jaunt

Live 360 Production Case Study
Andrew McGovern, VP of VR/AR Productions, Digital Domain

Live 360 Production Case Study
Michael Mansouri, Founder, Radiant Images

Interactive VR Production Case Study
Tim Dillon, Head of VR & Immersive Content, MPC Advertising USA

Immersive Audio Production Case Study
Kyle Schember, CEO, Subtractive

Panel Discussion: The Future
Alan Lasky, Director of Studio Product Development, 8i
Ben Grossmann, CEO, Magnopus
Scott Squires, CTO, Creative Director, Pixvana
Moderator: Lucas Wilson
Jen Dennis, EP of Branded Content, RSA

Panel Discussion: New Voices: Young Professionals in VR
Anne Jimkes, Sound Designer and Composer, Ecco VR
Jyotsna Kadimi, USC Graduate
Sho Schrock, Chapman University Student
Brian Handy, USC Student

TR-X also includes an ATSC 3.0 seminar, focusing on the next-generation television broadcast standard, which is nearing completion and offers a wide range of new content delivery options to the TV production community. This session will explore the expanding possibilities that the new standard provides in video, audio, interactivity and more. Presenters and panelists will also discuss the complex next-gen television distribution ecosystem that content must traverse, and the technologies that will bring the content to life in consumers’ homes.

Early registration is highly recommended for TR-X and the HPA Tech Retreat, which is a perennially sold-out event. Attendees can sign up for TR-X VR/AR, TR-X ATSC or the HPA Tech Retreat.

Main Image: Lucas Wilson.