Tag Archives: virtual reality

Behind the Title: Start VR Producer Ela Topcuoglu

NAME: Ela Topcuoglu

COMPANY: Start VR (@Start_VR)

CAN YOU DESCRIBE YOUR COMPANY?
Start VR is a full-service production studio (with offices in Sydney, Australia and Marina Del Rey, California) specializing in immersive and interactive cinematic entertainment. The studio combines expertise in entertainment and technology, pairing feature-film-quality visuals with interactive content to create original and branded narrative experiences in VR.

WHAT’S YOUR JOB TITLE?
Development Executive and Producer

WHAT DOES THAT ENTAIL?
I am in charge of expanding Start VR’s business in North America. That entails developing strategic partnerships and increasing business development in the entertainment, film and technology sectors.

I am also responsible for finding partners for our original content slate as well as seeking existing IP that would fit perfectly in VR. I also develop relationships with brands and advertising agencies to create branded content. Beyond business development, I also help produce the projects that we move forward with.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The title comes with the responsibility of convincing people to invest in something that is constantly evolving, which is the biggest challenge. My job also requires me to be very creative in developing a language native to this new medium. I have to wear many hats to ensure that we create the best experiences out there.

WHAT’S YOUR FAVORITE PART OF THE JOB?
My favorite part of the job is that I get to wear lots of different hats. Being in the emerging field of VR, every day is different. I don’t have a traditional 9-to-5 office job and I am constantly moving and hustling to set up business meetings and stay updated on the latest industry trends.

Also, being in the ever-evolving technology field, I learn something new almost every day, which is essential to my professional growth.

WHAT’S YOUR LEAST FAVORITE?
Convincing people to invest in virtual reality before they have seen its incredible potential. That usually changes once they experience truly immersive VR, but regardless, selling the future is difficult.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
My favorite part of the day is the morning. I start my day with a much-needed shot of Nespresso, get caught up on emails, take a look at my schedule and take a quick breather before I jump right into the madness.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
If I wasn’t working in VR, I would be investing my time in learning more about artificial intelligence (AI) and using it to advance medicine/health and education.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I loved entertaining people from a very young age, and I was always looking for an outlet to do that, so the entertainment business was the perfect fit. There is nothing like watching someone’s reaction to a great piece of content. Virtual reality is the ultimate entertainment outlet and I knew that I wanted to create experiences that left people with the same awe reaction that I had the moment I experienced it.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I worked in the business and legal affairs department at Media Rights Capital and had the opportunity to work on amazing projects, including House of Cards, Baby Driver and Ozark.

Awake: First Contact

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
The project that I am most proud of to date is the project that I am currently producing at Start VR. It’s called Awake: First Contact. It was a project I read about and said, “I want to work on that.”

I am incredibly proud that I get to work on a virtual reality project that is pushing the boundaries of the medium both technically and creatively.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My phone, laptop and speakers.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Twitter, Facebook and LinkedIn

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Yes, especially if I’m working on a pitch deck. It really keeps me in the moment. I usually listen to my favorite DJ mixes on Soundcloud. It really depends on my vibe that day.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I have recently started surfing, so that is my outlet at the moment. I also meditate regularly. It’s also important for me to make sure that I am always learning something new and unrelated to my industry.

Tackling VR storytelling challenges with spatial audio

By Matthew Bobb

From virtual reality experiences for brands to top film franchises, VR is making a big splash in entertainment and evolving the way creators tell stories. But, as with any medium and its production, bringing a narrative to life is no easy feat, especially when it’s immersive. VR comes with its own set of challenges unique to the platform’s capacity to completely transport viewers into another world and replicate reality.

Making high-quality immersive experiences, especially for a film franchise, is extremely challenging. Creators must place the viewer into a storyline crafted by the studios and properly guide them through the experience in a way that allows them to fully grasp the narrative. One emerging strategy is to emphasize audio — specifically, 360 spatial audio. VR offers a sense of presence no other medium today can offer. Spatial audio offers an auditory presence that augments a VR experience, amplifying its emotional effects.

My background as audio director for VR experiences includes work on top film franchises, with projects such as Warner Bros. and New Line Cinema’s IT: Float — A Cinematic VR Experience, The Conjuring 2 — Experience Enfield VR 360, Annabelle: Creation VR — Bee’s Room, and the upcoming Greatest Showman VR experience for 20th Century Fox. In the emerging world of VR, I have seen production teams encounter numerous challenges that call for creative solutions. For some of the most critical storytelling moments, it’s crucial for creators to understand the power of spatial audio and its potential to solve some of the most prevalent challenges that arise in VR production.

Most content creators — even some of those involved in VR filmmaking — don’t fully know what 360 spatial audio is or how its implementation within VR can elevate an experience. With any new medium, there are early adopters who are passionate about the process. As the next wave of VR filmmakers emerge, they will need to be informed about the benefits of spatial audio.

Guiding Viewers
Spatial audio is an incredible tool that helps make a VR experience feel believable. It can present sound from several locations, which allows viewers to identify their position within a virtual space in relation to the surrounding environment. With the ability to provide location-based sound from any direction and distance, spatial audio can then be used to produce directional auditory cues that grab the viewer’s attention and prompt them to look in a certain direction.

VR is still unfamiliar territory for a lot of people, and the viewing process isn’t as straightforward as a 2D film or game, so dropping viewers into an experience can leave them feeling lost and overwhelmed. Inexperienced viewers are also more apprehensive and rarely move around or turn their heads while in a headset. Spatial audio cues prompting them to move or look in a specific direction are critical, steering them to instinctively react and move naturally. On Annabelle: Creation VR — Bee’s Room, viewers go into the experience knowing it’s from the horror genre and may be hesitant to look around. We strategically used audio cues, such as footsteps, slamming doors and a record player that mysteriously turns on and off, to encourage viewers to turn their head toward the sound and the chilling visuals that await.

Lacking Footage
Spatial audio can also be a solution for challenging scene transitions, or when there is a dearth of visuals to work with in a sequence. Well-crafted aural cues can paint a picture in a viewer’s mind without bombarding the experience with visuals that are often unnecessary.

A big challenge when creating VR experiences for beloved film franchises is the need for the VR production team to work in tandem with the film’s production team, making recording time extremely limited. When working on IT: Float, we faced a tight time constraint for shooting Pennywise the Clown. Consequently, there was not an abundance of footage of him to place in the promotional VR experience. Beyond the lack of footage, the studio also didn’t want to give away the notorious clown’s much-anticipated appearance before the film’s theatrical release. The solution to that production challenge was spatial audio. Pennywise’s voice was strategically used to lead the experience and guide viewers through the sewer tunnels, heightening the suspense while also creating the illusion that he was surrounding the viewer.

Avoiding Visual Overkill
Similar to film and video games, sound is half of the experience in VR. With the unique perspective the medium offers, creators no longer have to fully rely on a visually-heavy narrative, which can overwhelm the viewer. Instead, audio can take on a bigger role in the production process and make the project a well-rounded sensory experience. In VR, it’s important for creators to leverage sensory stimulation beyond visuals to guide viewers through a story and authentically replicate reality.

As VR storytellers, we are reimagining ways to immerse viewers in new worlds. It is crucial for us to leverage the power of audio to smooth out bumps in the road and deliver a vivid sense of physical presence unique to this medium.


Matthew Bobb is the CEO of the full-service audio company Spacewalk Sound. He is a spatial audio expert whose work can be seen in top VR experiences for major film franchises.

Making the jump to 360 Video (Part 1)

By Mike McCarthy

VR headsets have been available for over a year now, and more content is constantly being developed for them. We should expect that rate to increase as new headset models are being released from established technology companies, prompted in part by the new VR features expected in Microsoft’s next update to Windows 10. As the potential customer base increases, the software continues to mature, and the content offerings broaden. And with the advances in graphics processing technology, we are finally getting to a point where it is feasible to edit videos in VR, on a laptop.

While a full VR experience requires true 3D content, in order to render a custom perspective based on the position of the viewer’s head, there is a “video” version of VR, which is called 360 Video. The difference between “Full VR” and “360 Video” is that while both allow you to look around in every direction, 360 Video is pre-recorded from a particular point, and you are limited to the view from that spot. You can’t move your head to see around behind something, like you can in true VR. But 360 video can still offer a very immersive experience and arguably better visuals, since the images aren’t being rendered on the fly. 360 video can be recorded in stereoscopic or flat, depending on the capabilities of the cameras used.

Stereoscopic is obviously more immersive, less of a video dome and inherently supported by the nature of VR HMDs (Head Mounted Displays). I expect that stereoscopic content will be much more popular in 360 Video than it ever was for flat screen content. Basically the viewer is already wearing the 3D glasses, so there is no downside, besides needing twice as much source imagery to work with, similar to flat screen stereoscopic.

There are a variety of options for recording 360 video, from a single ultra-wide fisheye lens on the Fly360, to dual 180-degree lens options like the Gear 360, Nikon KeyMission, and Garmin Virb. GoPro is releasing the Fusion, which will fall into this category as well. The next step up is more lenses, with cameras like the Orah 4i or the Insta360 Pro. Beyond that, you are stepping into the much more expensive rigs with lots of lenses and lots of stitching, but usually much higher final image quality, like the GoPro Omni or the Nokia Ozo. There are also countless rigs that use an array of standard cameras to capture 360 degrees, but these solutions are much less integrated than the all-in-one products that are now entering the market. Regardless of the camera you use, you are going to be recording one or more files in a pixel format unique to that camera that will need to be processed before it can be used in the later stages of the post workflow.

Affordable cameras

The simplest and cheapest 360 camera option I have found is the Samsung Gear 360. There are two totally different models with the same name, usually differentiated by the year of their release. I am using the older 2016 model, which has a higher resolution sensor, but records UHD instead of the slightly larger full 4K video of the newer 2017 model.

The Gear 360 records two fisheye views that are just over 180 degrees, from cameras situated back to back in a 2.5-inch sphere. Both captured image circles are recorded onto a single frame, side by side, resulting in a 2:1 aspect ratio file. These are encoded as JPEG (7776×3888 stills) or HEVC (3840×1920 video) at 30Mb/s and saved onto a MicroSD card. The camera is remarkably simple to use, with only three buttons and a tiny UI screen to select recording mode and resolution. If you have a Samsung Galaxy phone, a variety of other functions become available, like remote control and streaming the output to the phone as a viewfinder. Even without a Galaxy phone, the camera did everything I needed to generate 360 footage to stitch and edit with, but it was cool to have a remote viewfinder for the driving shots.
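Because both image circles land side by side on one 2:1 frame, separating them for stitching is just a crop down the middle. A minimal sketch with NumPy, using the 3840×1920 video frame size mentioned above (the function name is illustrative, not from any camera SDK):

```python
import numpy as np

def split_dual_fisheye(frame: np.ndarray):
    """Split a side-by-side dual-fisheye frame (H x W x 3, W == 2*H)
    into the two per-lens fisheye images."""
    h, w = frame.shape[:2]
    assert w == 2 * h, "expected a 2:1 side-by-side frame"
    front = frame[:, : w // 2]  # left image circle (one lens)
    back = frame[:, w // 2 :]   # right image circle (other lens)
    return front, back

# Example with a dummy 3840x1920 RGB frame
frame = np.zeros((1920, 3840, 3), dtype=np.uint8)
front, back = split_dual_fisheye(frame)
print(front.shape, back.shape)  # (1920, 1920, 3) (1920, 1920, 3)
```

Real stitchers then de-fisheye each half and blend the overlap region, but the crop is always the first step.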

Pricier cameras

One of the big challenges of shooting with any 360 camera is how to avoid getting gear and rigging in the shot, since the camera records everything around it. Even the tiny integrated tripod on the Gear 360 is visible in the shots, and putting it on the plate of my regular DSLR tripod fills the bottom of the footage. My solution was to use the thinnest support I could to keep the rest of the rigging as far from the camera as possible, and therefore smaller from its perspective. I created a couple of options to shoot with, which are pictured below. The results are much less intrusive in the recorded images. Besides the camera support, there is the issue of everything else in the shot, including the operator. Since most 360 videos are locked off, an operator may not be needed, but there is no “behind the camera” for hiding gear or anything else. Your set needs to be considered in every direction, since it will all be visible to your viewer. If you can see the camera, it can see you.

There are many different approaches to storing 360 images, which are inherently spherical, as a video file, which is inherently flat. This is the same issue that cartographers have faced for hundreds of years — creating flat paper maps of a planet that is inherently curved. While there are sphere map, cube map and pyramid projection options (among others) based on the way VR headsets work, the equirectangular format has emerged as the standard for editing and distribution encoding, while other projections are occasionally used for certain effects processing or other playback options.
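The equirectangular format itself is easy to reason about: longitude maps linearly to the horizontal axis and latitude to the vertical, which is exactly the cartographer’s plate carrée map. A small sketch (conventions vary between tools; this one assumes longitude −180° to 180° running left to right and latitude 90° to −90° top to bottom):

```python
import math

def equirect_pixel(lon_deg: float, lat_deg: float, width: int, height: int):
    """Map a viewing direction (longitude, latitude in degrees) to pixel
    coordinates in an equirectangular frame of size width x height."""
    x = (lon_deg + 180.0) / 360.0 * width   # longitude -> horizontal
    y = (90.0 - lat_deg) / 180.0 * height   # latitude -> vertical
    return x, y

# In a 3840x1920 (2:1) frame: straight ahead lands at the center,
# and the zenith (straight up) lands on the top row.
print(equirect_pixel(0, 0, 3840, 1920))   # (1920.0, 960.0)
print(equirect_pixel(0, 90, 3840, 1920))  # (1920.0, 0.0)
```

The 2:1 aspect ratio falls directly out of this mapping: 360 degrees of longitude against 180 degrees of latitude.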

Usually the objective of the stitching process is to get the images from all of your lenses combined into a single frame with the least amount of distortion and the fewest visible seams. There are a number of software solutions that do this, from After Effects plugins, to dedicated stitching applications like Kolor AVP and Orah VideoStitch-Studio to unique utilities for certain cameras. Once you have your 360 video footage in the equirectangular format, most of the other steps of the workflow are similar to their flat counterparts, besides VFX. You can cut, fade, title and mix your footage in an NLE and then encode it in the standard H.264 or H.265 formats with a few changes to the metadata.

Technically, the only thing you need to add to an existing 4K editing workflow in order to make the jump to 360 video is a 360 camera. Everything else could be done in software, but the other thing you will want is a VR headset or HMD. It is possible to edit 360 video without an HMD, but it is a lot like grading a film using scopes but no monitor. The data and tools you need are all right there, but without being able to see the results, you can’t be confident of what the final product will be like. You can scroll around the 360 video in the view window, or see the whole projected image all distorted, but it won’t have the same feel as experiencing it in a VR headset.

360 Video is not as processing intensive as true 3D VR, but it still requires a substantial amount of power to provide a good editing experience. I am using a Thinkpad P71 with an Nvidia Quadro P5000 GPU to get smooth performance during all these tests.

Stay tuned for Part 2 where we focus on editing 360 Video.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been working on new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Assimilate and Z Cam offer second integrated VR workflow bundle

Z Cam and Assimilate are offering their second VR integrated workflow bundle, which features the Z Cam S1 Pro VR camera and the Assimilate Scratch VR Z post tools. The new Z Cam S1 Pro offers a higher level of image quality, including better low-light handling and dynamic range, with detailed, well-saturated, noise-free video. In addition to the new camera, this streamlined pro workflow combines Z Cam’s WonderStitch optical-flow stitch feature and the end-to-end Scratch VR Z tools.

Z Cam and Assimilate have designed their combined technologies to ensure as simple a workflow as possible, including making it easy to switch back and forth between the S1 Pro functions and the Scratch VR Z tools. Users can also employ Scratch VR Z to do live camera preview, prior to shooting with the S1 Pro. Once the shoot begins with the S1 Pro, Scratch VR Z is then used for dailies and data management, including metadata. You don’t have to remove the SD cards and copy files; there’s a direct connection from the camera to the PC via a high-speed Ethernet port. Stitching of the imagery is then done in Z Cam’s WonderStitch — now integrated into Scratch VR Z — as well as traditional editing, color grading, compositing, support for multichannel audio from the S1 or external ambisonic sound, finishing and publishing (to all final online or standalone 360 platforms).

Z Cam S1 Pro/Scratch VR Z bundle highlights include:
• Low-light sensitivity and dynamic range – 4/3-inch CMOS image sensor
• Premium 220-degree MFT fisheye lens, f/2.8~11
• Coordinated AE (automatic exposure) and AWB (automatic white balance)
• Full integration with built-in Z Cam Sync
• 6K 30fps resolution (post stitching) output
• Gig-E port (video stream & setting control)
• WonderStitch optical-flow-based stitching
• Live streaming to Facebook, YouTube or a private server, including text overlays and green/composite layers for a virtual set
• Scratch VR Z, a single, streamlined, end-to-end, integrated VR post workflow

“We’ve already developed a few VR projects with the S1 Pro VR camera and the entire Neotopy team is awed by its image quality and performance,” says Alex Regeffe, VR post production manager at Neotopy Studio in Paris. “Together with the Scratch VR Z tools, we see this integrated workflow as a game changer in creating VR experiences, because our focus is now all on the creativity and storytelling rather than configuring multiple, costly tools and workflows.”

The Z Cam S1 Pro/Scratch VR Z bundle is available within 30 days of ordering. Priced at $11,999 (US), the bundle includes the following:
– Z Cam S1 Pro camera main unit, Z Cam S1 Pro battery unit (without battery cells), AC/DC power adapter unit and power connection cables (US, UK, EU).
– A Z Cam WonderStitch license, an optical-flow-based stitching feature that performs offline stitching of files from the Z Cam S1 Pro. Z Cam WonderStitch requires a valid software license associated with a designated Z Cam S1 Pro, and is nontransferable.
– A Scratch VR Z permanent license: a pro VR end-to-end post workflow with an all-inclusive, realtime toolset for data management, dailies, conform, color grading, compositing, multichannel and ambisonic sound, and finishing, all integrated with the Z Cam S1 Pro camera. Includes one year of support/updates.

The companies are offering a tutorial about the bundle.

Red’s Hydrogen One: new 3D-enabled smartphone

In their always subtle way, Red has stated that “the future of personal communication, information gathering, holographic multi-view, 2D, 3D, AR/VR/MR and image capture just changed forever” with the introduction of Hydrogen One, a pocket-sized, glasses-free “holographic media machine.”

Hydrogen One is a standalone, full-featured, unlocked multi-band smartphone, operating on Android OS, that promises “look around depth in the palm of your hand” without the need for separate glasses or headsets. The device features a 5.7-inch professional hydrogen holographic display that switches between traditional 2D content, holographic multi-view content, 3D content and interactive games, and it supports both landscape and portrait modes. Red has also embedded a proprietary H30 algorithm in the OS system that will convert stereo sound into multi-dimensional audio.

The Hydrogen system incorporates a high-speed data bus to enable a comprehensive and expandable modular component system, including future attachments for shooting high-quality motion, still and holographic images. It will also integrate into the professional Red camera program, working together with Scarlet, Epic and Weapon as a user interface and monitor.

Future-users are already talking about this “nifty smartphone with glasses-free 3D,” and one has gone so far as to describe the announcement as “the day 360-video became Betamax, and AR won the race.” Others are more tempered in their enthusiasm, viewing this as a really expensive smartphone with a holographic screen that may or may not kill 360 video. Time will tell.

Initially priced between $1,195 and $1,595, the Hydrogen One is targeted to ship in Q1 of 2018.

Dell partners with Sony on Spider-Man film, showcases VR experience

By Jay Choi

Sony Pictures Imageworks used Dell technology during the creation of Spider-Man: Homecoming. To celebrate, Dell and Sony held a press junket in New York City that included tech demos and details on the film, as well as the Spider-Man: Homecoming Virtual Reality Experience. While I’m a huge Spider-Man fan, I am not biased in saying it was spectacular.

To begin the VR demo, users are given the same suit Tony Stark designs for Peter Parker in Captain America: Civil War and Spider-Man: Homecoming. The first action you perform is grabbing the mask and putting on the costume. You then jump into a tutorial that teaches you how to use the web-shooter mechanics (which map intuitively to your VR controllers).

You are then tasked with stopping the villainous Vulture from attacking you and the city of New York. Admittedly, I didn’t get too far into the demo. I was a bit confused as to where to progress, but also absolutely stunned by the mechanics and details. Along with pulling triggers to fire webs, each button accessed a different type of web cartridge in your web shooter. So, like Spidey, I had to be both strategic and adaptive to each changing scenario. I actually felt like I was shooting webs and pulling large crates around… I honestly spent most of my time seeing how far the webs could go and what they could stick to — it was amazing!

The Tech
With the power of thousands of workstations, servers and over a petabyte of storage from Dell, Sony Pictures Imageworks and other studios, such as MPC and Method, were able to create the visual effects for the Spider-Man: Homecoming film. The Virtual Reality Experience actually pulled the same models, assets and details used in the film, giving users a truly awesome and immersive experience.

When I asked what this particular VR experience would cost your typical consumer, I was told that when developing the game, Dell researched major VR consoles and workstations and set a benchmark to strive for, so most consumers should be able to experience the game without much of a difference.

Along with the VR game, Dell also showcased its new gaming laptop: the Inspiron 15 7000. With a quad-core 7th-gen Intel Core H-class processor and Nvidia GeForce GTX 1050/1050 Ti graphics, the laptop is marketed for hardcore gaming. It has a tough-yet-sleek design that’s appealing to the eye. However, I was more impressed with its power and potential. The junket had one of these new Inspiron laptops running the recently rebooted Killer Instinct fighting game (which, ironically, was my very first video game on the Super Nintendo… I guess violent video games did an okay job raising me). As a fighting-game fanatic and occasional competitor, I have to say the game ran very smoothly. I couldn’t spot latency between inputs from the USB-connected Xbox One controllers or any frame skipping. It does what it says it can do!

The Inspiron 15 7000 was also featured in the Spider-Man: Homecoming film and was used by Jacob Batalon’s character, Ned, to help aid Peter Parker in his web-tastic mission.

I was also lucky enough to try out Sony Future Lab Program’s projector-based interactive Find Spider-Man game, where the game’s “screen” is projected on a table from a depth-perceiving projector lamp. A blank board was used as a scroll to maneuver a map of New York City, while piles of movable blocks were used to recognize buildings and individual floors. Sometimes Spidey was found sitting on the roof, while other times he was hiding inside on one of the floors.

All in all, Dell and Sony Pictures Imageworks’ partnership provided some sensational insight into what being Spider-Man is like with their technology and innovation, and I hope to see it evolve even further alongside more Spider-Man: Homecoming films.

The Spider-Man: Homecoming Virtual Reality Experience arrives on June 30th for all major VR platforms. Marvel’s Spider-Man: Homecoming releases in theaters on July 7th.


Jay Choi is a Korean-American screenwriter, who has an odd fascination with Lego minifigures, a big heart for his cat Sula, and an obsession with all things Spider-Man. He is currently developing an animated television pitch he sold to Nickelodeon and resides in Brooklyn.

SGO’s Mistika VR is now available

SGO’s Mistika VR software app is now available. This solution has been developed using the company’s established Mistika technology and offers advanced realtime stitching capabilities, combined with a new intuitive interface, raw format support and incredible speed.

Using Mistika Optical Flow Technology, the new VR solution takes camera position information and image sequences, then stitches the images together using extensive, intelligent presets. Its unique stitching algorithms help with the many challenges facing post teams and allow for the highest image quality.

Mistika VR was developed to encompass and work with as many existing VR camera formats as possible, and SGO is creating custom pre-sets for productions where teams are building the rigs themselves.

The Mistika VR solution is part of SGO’s new natively integrated workflow concept. SGO has been dissecting its current turnkey offering “Mistika Ultima” to develop advanced workflow applications aimed at specific tasks.

Mistika VR runs on Mac and Windows and is available as a personal or professional (with SGO customer support) edition license. License costs are:

– 30-day license (no automatic renewals): Evaluation Version: free; Personal Edition: $78; Professional Edition: $110

– Monthly subscription: Personal Edition: $55 per month; Professional Edition: $78 per month

– Annual subscription: Personal Edition: $556 per year; Professional Edition: $779 per year

VR audio terms: Gaze Activation v. Focus

By Claudio Santos

Virtual reality brings a lot of new terminology to the post process, and we’re all having a hard time agreeing on the meaning of everything. It’s tricky because clients and technicians sometimes have different understandings of the same term, which is a guaranteed recipe for headaches in post.

Two terms that I’ve seen confused a few times in the spatial audio realm are Gaze Activation and Focus. They are similar enough to be put in the same category, but at the same time different enough that most of the time you have to choose completely different tools and distribution platforms depending on which technology you want to use.

Field of view

Focus
Focus is what the Facebook Spatial Workstation calls this technology, but it is a tricky one to name. As you may know, ambisonics represents a full sphere of audio around the listener. Players like YouTube and Facebook (which uses ambisonics inside its own proprietary .tbe format) can dynamically rotate this sphere so the relative positions of the audio elements are accurate to the direction the audience is looking at. But the sounds don’t change noticeably in level depending on where you are looking.
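That rotation can be sketched for first-order ambisonics. This is a hedged illustration assuming FuMa-style B-format channels (W omni, X front, Y left, Z up); YouTube and Facebook use their own channel orderings and formats, but the principle is the same: a yaw turn is a 2D rotation of the X/Y pair, and no source gets louder or quieter — only its apparent direction changes:

```python
import math

def rotate_foa_yaw(w, x, y, z, alpha):
    """Rotate a first-order ambisonic (FuMa-style B-format) sample by yaw
    angle alpha (radians, counterclockwise). W and Z are unchanged."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    return w, x * ca - y * sa, x * sa + y * ca, z

# A unit plane wave dead ahead encodes as W=1/sqrt(2), X=1, Y=0, Z=0.
# Rotating the field 90 degrees moves it to the listener's left (Y axis),
# while the omnidirectional W level stays exactly the same.
w, x, y, z = rotate_foa_yaw(1 / math.sqrt(2), 1.0, 0.0, 0.0, math.pi / 2)
print(round(x, 6), round(y, 6))  # 0.0 1.0
```

This is why a plain ambisonic mix gives accurate directions but no gaze-dependent emphasis: the rotation is energy-preserving by construction.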

If we take a step back and think about “surround sound” in the real world, it actually makes perfect sense. A hair clipper isn’t particularly louder when it’s in front of our eyes as opposed to when it’s trimming the back of our head. Nor can we ignore the annoying person who is loudly talking on their phone on the bus by simply looking away.

But for narrative construction, it can be very effective to emphasize what your audience is looking at. That opens up possibilities, such as presenting the viewer with simultaneous yet completely unrelated situations and letting them choose which one to pay attention to simply by looking in the direction of the chosen event. Keep in mind that in this case, all events are happening simultaneously and will carry on even if the viewer never looks at them.

This technology is not currently supported by YouTube, but it is possible in the Facebook Spatial Workstation with the use of high Focus Values.

Gaze Activation
When we talk about focus, the key thing to keep in mind is that all the events happen regardless of the viewer looking at them or not. If instead you want a certain sound to only happen when the viewer looks at a certain prop, regardless of the time, then you are looking for Gaze Activation.

This concept is much more akin to game audio than to film sound because of the interactivity element it presents. Essentially, you are using the direction of the gaze, and potentially the length of the gaze (if you want your viewer to look in a direction for x amount of seconds before something happens), as a trigger for sound/video playback.
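A dwell-based trigger of this kind is straightforward to sketch outside any particular engine. Everything here (the class name, the 10-degree cone, the two-second dwell) is illustrative, not an API of InstaVR or any game engine: track the angle between the gaze direction and the direction to the prop, accumulate time while the gaze stays inside a cone, and fire once when the dwell time is reached:

```python
import math

class GazeTrigger:
    """Fire once after the gaze stays within cone_deg of the target
    direction for dwell_s seconds. Directions are unit 3D vectors."""
    def __init__(self, target_dir, cone_deg=10.0, dwell_s=2.0):
        self.target = target_dir
        self.cos_cone = math.cos(math.radians(cone_deg))
        self.dwell_s = dwell_s
        self.elapsed = 0.0
        self.fired = False

    def update(self, gaze_dir, dt):
        """Call once per frame with the current gaze direction and the
        frame time dt; returns True only on the frame the trigger fires."""
        if self.fired:
            return False
        dot = sum(g * t for g, t in zip(gaze_dir, self.target))
        # Inside the cone: accumulate dwell time; outside: reset it.
        self.elapsed = self.elapsed + dt if dot >= self.cos_cone else 0.0
        if self.elapsed >= self.dwell_s:
            self.fired = True
            return True
        return False

# Target straight ahead (-Z); 90 frames at 30fps staring at it (3 seconds)
# crosses the 2-second dwell threshold and fires exactly once.
trig = GazeTrigger((0.0, 0.0, -1.0), dwell_s=2.0)
fires = sum(trig.update((0.0, 0.0, -1.0), 1 / 30) for _ in range(90))
print(fires)  # 1
```

The reset-on-look-away behavior is a design choice; some experiences instead decay the timer gradually so brief glances away don’t restart the wait.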

This is very useful if you want to make it impossible for your audience to miss something because they were looking in the “wrong” direction. Think of a jump scare in a horror experience. It’s not very scary if you’re looking in the opposite direction, is it?

This is currently only supported if you build your experience in a game engine or as an independent app with tools such as InstaVR.

Both concepts are very closely related and I expect many implementations will make use of both. We should all keep an eye on the VR content distribution platforms to see how these tools will be supported and make the best use of them in order to make 360 videos even more immersive.


Claudio Santos is a sound editor and spatial audio mixer at Silver Sound. Slightly too interested in technology and workflow hacks, he spends most of his waking hours tweaking, fiddling and tinkering away on his computer.

VR Workflows: The Studio | B&H panel during NAB

At this year’s NAB Show in Las Vegas, The Studio B&H hosted a series of panels at their booth. One of those panels addressed workflows for virtual reality, including shooting, posting, best practices, hiccups and trends.

The panel, moderated by postPerspective editor-in-chief Randi Altman, was made up of SuperSphere’s Lucas Wilson, ReDesign’s Greg Ciaccio, Local Hero Post’s Steve Bannerman and Jaunt’s Koji Gardner.

While the panel was streamed live, it also lives on YouTube. Enjoy…

New AMD Radeon Pro Duo graphics card for pro workflows

AMD was at NAB this year with its dual-GPU graphics card designed for pros — the Polaris-architecture-based Radeon Pro Duo. Built on the capabilities of the Radeon Pro WX 7100, the Radeon Pro Duo graphics card is designed for media and entertainment, broadcast and design workflows.

The Radeon Pro Duo is equipped with 32GB of ultra-fast GDDR5 memory to handle larger data sets, more intricate 3D models, higher-resolution videos and complex assemblies. Operating at a max power of 250W, the Radeon Pro Duo uses a total of 72 compute units (4,608 stream processors) for a combined performance of up to 11.45 TFLOPS of single-precision compute performance on one board, and twice the geometry throughput of the Radeon Pro WX 7100.
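Those numbers are internally consistent, which makes a useful sanity check: single-precision throughput for a GPU is stream processors × 2 FLOPs per clock (one fused multiply-add) × clock rate, so the quoted 11.45 TFLOPS implies a clock of roughly 1.24GHz across both GPUs. The clock itself isn’t quoted in the announcement, so this is a derived figure:

```python
# Derive the implied clock from the quoted specs:
# TFLOPS = stream_processors * 2 FLOPs/clock (one FMA) * clock_hz
stream_processors = 4608  # 72 compute units x 64 SPs each
tflops = 11.45
clock_ghz = tflops * 1e12 / (stream_processors * 2) / 1e9
print(round(clock_ghz, 2))  # 1.24
```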

The Radeon Pro Duo enables pros to work on up to four 4K monitors at 60Hz, drive the latest 8K single monitor display at 30Hz using a single cable or drive an 8K display at 60Hz using a dual cable solution.
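The single-cable/dual-cable split follows from raw bandwidth. Assuming 8K means 7680×4320 and 24 bits per pixel uncompressed (neither is specified in the announcement), 30Hz already needs about 24Gbit/s, and 60Hz roughly doubles that, which is why the higher refresh rate is split across two cables:

```python
# Uncompressed video bandwidth: width * height * bits-per-pixel * fps
def gbits_per_s(width, height, bpp, fps):
    return width * height * bpp * fps / 1e9

print(round(gbits_per_s(7680, 4320, 24, 30), 1))  # 23.9
print(round(gbits_per_s(7680, 4320, 24, 60), 1))  # 47.8
```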

The Radeon Pro Duo’s distinct dual-GPU design allows pros the flexibility to divide their workloads, enabling smooth multi-tasking between applications by committing GPU resources to each. This will allow users to focus on their creativity and get more done faster, allowing for a greater number of design iterations in the same time.

On select pro apps (including DaVinci Resolve, Nuke/Cara VR, Blender Cycles and VRed), the Radeon Pro Duo offers up to two times faster performance compared with the Radeon Pro WX 7100.

For those working in VR, the Radeon Pro Duo graphics card uses the power of two GPUs to render out separate images for each eye, increasing VR performance over single GPU solutions by up to 50% in the SteamVR test. AMD’s LiquidVR technologies are also supported by the industry’s leading realtime engines, including Unity and Unreal, to help ensure smooth, comfortable and responsive VR experiences on Radeon Pro Duo.

The Radeon Pro Duo’s planned availability is the end of May at an expected price of US $999.