
Timecode and GoPro partner to make posting VR easier

Timecode Systems and GoPro’s Kolor team recently worked together to create a new timecode sync feature for Kolor’s Autopano Video Pro stitching software. By combining their technologies, the two companies have developed a VR workflow solution that brings the efficiency benefits of professional-standard timecode synchronization to VR and 360 filming.

Time-aligning files from the multiple cameras in a 360° VR rig can be a manual and time-consuming process if there is no easy synchronization point, especially when synchronizing with separate audio. Visually timecode-slating cameras is a disruptive manual process, and using the clap of a slate (or another visual or audio cue) as a sync marker can be unreliable when it comes to the edit process.

The new sync feature, included in the Version 3.0 update to Autopano Video Pro, incorporates full support for MP4 timecode generated by Timecode Systems’ products. The solution is compatible with a range of custom, multi-camera VR rigs, including rigs using GoPro’s Hero 4 cameras with SyncBac Pro for timecode, as well as other camera models using alternative Timecode Systems products. This allows VR filmmakers to focus on the creative and not worry about whether every camera in the rig is shooting in frame-level synchronization. Whether filming with a two-camera GoPro Hero 4 rig or 24 cameras in a 360° array creating resolutions as high as 32K, the solution syncs with the same efficiency. The end result is media files that can be automatically timecode-aligned in Autopano Video Pro with the push of a button.
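To make the alignment step concrete, here is a minimal sketch of how clips from a multi-camera rig can be auto-aligned once every file carries timecode: each start timecode is converted to a frame count and every clip is offset against the latest-starting camera. This is an illustration, not Autopano Video Pro’s actual implementation; the rig dictionary, timecodes and frame rate are hypothetical, and drop-frame timecode is ignored.

```python
# Minimal sketch of timecode-based clip alignment (illustrative only).
# Assumes every camera records at the same, non-drop-frame rate.

def timecode_to_frames(tc: str, fps: int) -> int:
    """Convert 'HH:MM:SS:FF' to an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def align_offsets(start_timecodes: dict, fps: int) -> dict:
    """Return, per camera, how many frames to trim from the head of its clip
    so that every clip lines up with the latest-starting camera."""
    frames = {cam: timecode_to_frames(tc, fps) for cam, tc in start_timecodes.items()}
    sync_point = max(frames.values())  # latest start = common sync point
    return {cam: sync_point - f for cam, f in frames.items()}

# Hypothetical start timecodes read from each camera's MP4 metadata.
rig = {"cam1": "10:15:20:12", "cam2": "10:15:19:28", "cam3": "10:15:21:02"}
print(align_offsets(rig, fps=30))  # {'cam1': 20, 'cam2': 34, 'cam3': 0}
```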

“We’re giving VR camera operators the confidence that they can start and stop recording all day long without the hassle of having to disturb filming to manually slate cameras; that’s the understated benefit of timecode,” says Paul Bannister, chief science officer of Timecode Systems.

“To create high-quality VR output, using multiple cameras to capture high-quality spherical video isn’t enough; the footage that is captured needs to be stitched together as simply as possible — with ease, speed and accuracy, whatever the camera rig,” explains Alexandre Jenny, senior director of Immersive Media Solutions at GoPro. “Anyone who has produced 360 video will understand the difficulties involved in relying on a clap or visual cue to mark when all the cameras start recording to match up video for stitching. To solve that issue, either you use an integrated solution like GoPro Omni with pixel-level synchronization, or now you have the alternative to use accurate timecode metadata from SyncBac Pro in a custom, scalable multicamera rig. It makes the workflow much easier for professional VR content producers.”

Hobo’s Howard Bowler and Jon Mackey on embracing full-service VR

By Randi Altman

New York-based audio post house Hobo, which offers sound design, original music composition and audio mixing, recently embraced virtual reality by launching a 360 VR division. Wanting to offer clients a full-service solution, they partnered with New York production/post production studios East Coast Digital and Hidden Content, allowing them to provide concepting through production, post, music and final audio mix in an immersive 360 format.

The studio is already working on some VR projects, using their “object-oriented audio mix” skills to enhance the 360 viewing experience.

We touched base with Hobo’s founder/president, Howard Bowler, and post production producer Jon Mackey to get more info on their foray into VR.

Why was now the right time to embrace 360 VR?
Bowler: We saw the opportunity stemming from the advancement of the technology not only in the headsets but also in the tools necessary to mix and sound design in a 360-degree environment. The great thing about VR is that we have many innovative companies trying to establish what the workflow norm will be in the years to come. We want to be on the cusp of those discoveries to test and deploy these tools as the ecosystem of VR expands.

As an audio shop you could have just offered audio-for-VR services only, but instead aligned with two other companies to provide a full-service experience. Why was that important?
Bowler: This partnership provides our clients with added security when venturing out into VR production. Since the medium is relatively new in the advertising and film world, partnering with experienced production companies gives us the opportunity to better understand the nuances of filming in VR.

How does that relationship work? Will you be collaborating remotely? Same location?
Bowler: Thankfully, we are all based in West Midtown, so the collaboration will be seamless.

Can you talk a bit about object-based audio mixing and its challenges?
Mackey: The challenge of object-based mixing is not only mixing in a 360-degree environment or converting traditional audio into something that moves with the viewer, but also determining which objects, through their sound cues, will lead the viewer into another part of the environment.

Bowler: It’s the creative challenge that inspires us in our sound design. With traditional 2D film, the editor controls what you see with their cuts. With VR, the partnership between sight and sound becomes much more important.

Howard Bowler pictured embracing VR.

How different is your workflow — traditional broadcast or spot work versus VR/360?
Mackey: The VR/360 workflow isn’t much different than traditional spot work. It’s the testing and review that is a game changer. Things generally can’t be reviewed live unless you have a custom rig that runs its own headset. It’s a lot of trial and error in checking the mixes, sound design and spatial mixes. You also have to take into account the extra time and instruction for your clients to review a project.

What has surprised you the most about working in this new realm?
Bowler: The great thing about the VR/360 space is the amount of opportunity there is. What surprised us the most is the passion of all the companies that are venturing into this area. It’s different than talking about conventional film or advertising; there’s a new spark, and it’s fueling the rise of the industry and allowing larger companies to connect with smaller ones to create an atmosphere where passion is the only thing that counts.

What tools are you using for this type of work?
Mackey: The audio tools we use are the ones that best fit into our Avid ProTools workflow. This includes plug-ins from G-Audio and others that we are experimenting with.

Can you talk about some recent projects?
Bowler: We’ve completed projects for Samsung with East Coast Digital, and there are more on the way.

Main Image: Howard Bowler and Jon Mackey


The importance of audio in VR

By Anne Jimkes

While some might not be aware, sound is 50 percent of the experience in VR, as well as in film, television and games. Because we can’t physically see the audio, it might not get as much attention as the visual side of the medium. But the balance and collaboration between visual and aural is what creates the most effective, immersive and successful experience.

More specifically, sound in VR can be used to ease people into the experience, what we also call “onboarding.” It can be used subtly and subconsciously to guide viewers by motivating them to look in a specific direction of the virtual world, which completely surrounds them.

In every production process, it is important to discuss how sound can be used to benefit the storytelling and the overall experience of the final project. In VR, especially the many low-budget independent projects, it is crucial to keep the importance and use of audio in mind from the start to save time and money in the end. Oftentimes, there are no real opportunities or means to record ADR after a live-action VR shoot, so it is important to give the production mixer ample opportunity to capture the best production sound possible.

Anne Jimkes at work.

This involves capturing wild lines, making sure there is time to plant and check the mics, and recording room tone. These steps are already required, albeit not always granted, on regular shoots, but they are even more important on a set where a boom operator cannot be used because of the camera’s 360-degree view. The post process is also very similar to that for TV or film up to the point of actual spatialization. We come across similar issues of having to clean up dialogue and fill in the world through sound. What producers must be aware of, however, is that after all the necessary elements of the soundtrack have been prepared, we have to manually and meticulously place and move all the “audio objects” and various audio sources throughout the space. Whenever people decide to re-orient the video — meaning when they change what is considered the initial point of facing forward, or “north” — we have to rewrite all the information that establishes the location and movement of the sound, which takes time.
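The re-orientation Jimkes describes corresponds, for a first-order ambisonic bed, to a rotation of the sound field about the vertical axis. The sketch below is a simplified illustration of that yaw rotation, assuming ACN channel order (W, Y, Z, X) and ignoring pitch, roll and higher ambisonic orders; production spatializers handle the full case.

```python
import numpy as np

def rotate_foa_yaw(wyzx: np.ndarray, yaw_deg: float) -> np.ndarray:
    """Rotate a first-order ambisonic (B-format) bed about the vertical axis.

    wyzx: array of shape (num_samples, 4) in ACN channel order (W, Y, Z, X).
    yaw_deg: how far the scene's "north" is being re-oriented, in degrees.
    The omni (W) and height (Z) channels are unchanged; the horizontal
    dipoles X and Y rotate like a 2D vector.
    """
    phi = np.radians(yaw_deg)
    w, y, z, x = wyzx.T
    x_rot = x * np.cos(phi) - y * np.sin(phi)
    y_rot = x * np.sin(phi) + y * np.cos(phi)
    return np.stack([w, y_rot, z, x_rot], axis=1)

# Hypothetical example: a one-second, 48kHz bed re-oriented by 90 degrees.
bed = np.random.randn(48000, 4).astype(np.float32)
rotated = rotate_foa_yaw(bed, 90.0)
```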

Capturing Audio for VR
To capture audio for virtual reality, we have learned a lot about planting and hiding mics as efficiently as possible. Unlike regular productions, it is not possible to use a boom mic, which tends to be the primary and most natural-sounding microphone. Aside from the more common lavalier mics, we also use ambisonic mics, which capture a full sphere of audio that matches the 360 picture — if the mic is placed correctly on axis with the camera. Most of the time we work with Sennheiser and use their Ambeo microphone to capture 360 audio on set, after which we add the rest of the spatialized audio during post production. Playing back the spatialized audio has become easier lately, because more and more platforms and VR apps accept some form of 360 audio playback. There is still a difference between the file formats to which we can encode our audio outputs, meaning that some are more precise and others a little blurrier in their spatialization. With VR, there is not yet a standard for deliverables and specs, unlike the film/television workflow.

What matters most in the end is that people are aware of how the creative use of sound can enhance their experience, and how important it is to spend time on capturing good dialogue on set.


Anne Jimkes is a composer, sound designer, scholar and visual artist from the Netherlands. Her work includes VR sound design at EccoVR and work with the IMAX VR Centre. With a Master’s Degree from Chapman University, Jimkes previously served as a sound intern for the Academy of Television Arts & Sciences.


Assimilate’s Scratch VR Suite 8.6 now available

Back in February, Assimilate announced the beta version of its Scratch VR Suite 8.6. Well, now the company is back with the final version of the product, which incorporates user-requested features and functions.

Scratch VR Suite 8.6 is a realtime post solution and workflow for VR/360 content. With added GPU stitching of 360 video and ambisonic audio support, as well as live streaming, the Scratch VR Suite 8.6 gives VR content creators — DPs, DITs, post artists — a streamlined, end-to-end workflow for VR/360 content.

The Scratch VR Suite 8.6 workflow automatically includes all the basic post tools: dailies, color grading, compositing, playback, cloud-based reviews, finishing and mastering.

New features and updates include:
• 360 stitching functionality: Load the source media of multiple shots from your 360 cameras into Scratch VR and easily wrap them into a stitch node that combines the sources into an equirectangular image (see the sketch after this list).
• Support for various stitch template formats, such as AutoPano, Hugin, PTGui and PTStitch scripts.
• Either render out the equirectangular format first or just continue to edit, grade and composite on top of the stitched nodes and render the final result.
• Ambisonic audio: Load, set and playback ambisonic audio files to complete the 360 immersive experience.
• Video with 360 sound can be published directly to YouTube 360.
• Additional overlay handles to the existing 2D-equirectangular feature for more easily positioning 2D elements in a 360 scene.
• Support for Oculus Rift, Samsung Gear VR, HTC Vive and Google Cardboard.
• Several new features and functions make working in HDR just as easy as SDR.
• Increased Format Support – Added support for all the latest formats for even greater efficiency in the DIT and post production processes.
• Simplified DIT reporting function – Added features and functions enable even greater efficiency in a single, streamlined workflow.
• User Interface: Numerous updates have been made to enhance and simplify the UI for content creators, such as for the log-in screen, matrix layout, swipe sensitivity, Player stack, tool bar and tool tips.
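For context on what wrapping sources into an equirectangular image involves, the sketch below shows the core mapping a stitcher relies on: every pixel of the equirectangular canvas corresponds to a viewing direction, which is then sampled from whichever camera covers it. This is an illustrative outline under simple assumptions, not Scratch VR’s implementation, and the canvas size is hypothetical.

```python
import numpy as np

def equirect_pixels_to_directions(u, v, width, height):
    """Map equirectangular pixel coordinates to unit view directions.

    u, v: pixel column/row indices (scalars or numpy arrays).
    Longitude spans [-pi, pi) across the width; latitude runs from +pi/2 at
    the top row to -pi/2 at the bottom. A stitcher samples each direction
    from whichever source camera covers it (blending in the overlaps).
    """
    lon = (u + 0.5) / width * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)

# Hypothetical 7680x3840 output canvas: view directions for one row of pixels.
dirs = equirect_pixels_to_directions(np.arange(7680), np.full(7680, 1920), 7680, 3840)
print(dirs.shape)  # (7680, 3)
```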


SMPTE’s ETCA conference takes on OTT, cloud, AR/VR, more

SMPTE has shared program details for its Entertainment Technology in the Connected Age (ETCA) conference, taking place in Mountain View, California, May 8-9 at the Microsoft Silicon Valley Campus.

Called “Redefining the Entertainment Experience,” this year’s conference will explore emerging technologies’ impact on current and future delivery of compelling connected entertainment experiences.

Bob DeHaven, GM of worldwide communications & media at Microsoft Azure, will present the first conference keynote, titled “At the Edge: The Future of Entertainment Carriage.” The growth of on-demand programming and mobile applications, the proliferation of the cloud and the advent of the “Internet of things” demand that video content be available closer to the end user to improve both availability and the quality of the experience.

DeHaven will discuss the relationships taking shape to embrace these new requirements and will explore the roles network providers, content delivery networks (CDNs), network optimization technologies and cloud platforms will play in achieving the industry’s evolving needs.

Hanno Basse, chief technical officer at Twentieth Century Fox Film, will present “Next-Generation Entertainment: A View From the Fox.” Fox distributes content via multiple outlets, ranging from cinema to Blu-ray, over-the-top (OTT) and even VR. Basse will share his views on the technical challenges of enabling next-generation entertainment in a connected age and how Fox plans to address them.

The first conference session, “Rethinking Content Creation and Monetization in a Connected Age,” will focus on multiplatform production and monetization using the latest creation, analytics and search technologies. The session “Is There a JND in It for Me?” will take a second angle, exploring what new content creation, delivery and display technology innovations will mean for the viewer. Panelists will discuss the parameters required to achieve original artistic intent while maintaining a just noticeable difference (JND) quality level for the consumer viewing experience.

“Video Compression: What’s Beyond HEVC?” will explore emerging techniques and innovations, outlining evolving video coding techniques and their ability to handle new types of source material, including HDR and wide color gamut content, as well as video for VR/AR.

Moving from content creation and compression into delivery, “Linear Playout: From Cable to the Cloud” will discuss the current distribution landscape, looking at the consumer apps, smart TV apps, and content aggregators/curators that are enabling cord-cutters to watch linear television, as well as the new business models and opportunities shaping services and the consumer experience. The session will explore tools for digital ad insertion, audience measurement and monetization while considering the future of cloud workflows.

“Would the Internet Crash If Everyone Watched the Super Bowl Online?” will shift the discussion to live streaming, examining the technologies that enable today’s services as well as how technologies such as transparent caching, multicast streaming, peer-assisted delivery and User Datagram Protocol (UDP) streaming might enable live streaming at a traditional broadcast scale and beyond.

“Adaptive Streaming Technology: Entertainment Plumbing for the Web” will focus specifically on innovative technologies and standards that will enable the industry to overcome inconsistencies of the bitrate quality of the Internet.

“IP and Thee: What’s New in 2017?” will delve into the upgrade to Internet Protocol infrastructure and the impact of next-generation systems such as the ATSC 3.0 digital television broadcast system, the Digital Video Broadcast (DVB) suite of internationally accepted open standards for digital television, and fifth-generation mobile networks (5G wireless) on Internet-delivered entertainment services.

Moving into the cloud, “Weather Forecast: Clouds and Partly Scattered Fog in Your Future” examines how local networking topologies, dubbed “the fog,” are complementing the cloud by enabling content delivery and streaming via less traditional — and often wireless — communication channels such as 5G.

“Giving Voice to Video Discovery” will highlight the ways in which voice is being added to pay television and OTT platforms to simplify searches.

In a session that explores new consumption models, “VR From Fiction to Fact” will examine current experimentation with VR technology, emerging use cases across mobile devices and high-end headsets, and strategies for addressing the technical demands of this immersive format.

You can register for the conference here.


Quick Chat: Scott Gershin from The Sound Lab at Technicolor

By Randi Altman

Veteran sound designer and feature film supervising sound editor Scott Gershin is leading the charge at the recently launched The Sound Lab at Technicolor, which, in addition to film and television work, focuses on immersive storytelling.

Gershin has more than 100 films to his credit, including American Beauty (which earned him a BAFTA nomination), Guillermo del Toro’s Pacific Rim and Dan Gilroy’s Nightcrawler. But film isn’t the only medium Gershin has tackled — in addition to television work (he has an Emmy nom for the TV series Beauty and the Beast), this audio post pro has created the sound for game titles such as Resident Evil, Gears of War and Fable. One of his most recent projects was contributing to id Software’s Doom.

We recently reached out to Gershin to find out more about his workflow and this new Burbank-based audio entity.

Can you talk about what makes this facility different than what Technicolor has at Paramount? 
The Sound Lab at Technicolor works in concert with our other audio facilities, tackling film, broadcast and gaming projects. In doing so we are able to use Technicolor’s world-class dubbing, ADR and Foley stages.

One of the focuses of The Sound Lab is to identify and use cutting-edge technologies and workflows not only in traditional mediums, but in those new forms of entertainment such as VR, AR, 360 video/films, as well as dedicated installations using mixed reality. The Sound Lab at Technicolor is made up of audio artists from multiple industries who create a “brain trust” for our clients.

Scott Gershin and The Sound Lab team.

As an audio industry veteran, how has the world changed since you started?
I was one of the first sound people to use computers in the film industry. When I moved from the music industry into film post production, I brought that knowledge and experience with me. It gave me access to a huge number of tools that helped me tell better stories with audio. The same happened when I expanded into the game industry.

Learning the interactive tools of gaming is now helping me navigate into these new immersive industries, combining my film experience to tell stories and my gaming experience using new technologies to create interactive experiences.

One of the biggest changes I’ve seen is that there are so many opportunities for the audience to ingest entertainment — creating competition for their time — whether it’s traveling to a theatre, watching TV (broadcast, cable and streaming) on a new 60- or 70-inch TV, or playing video games alone on a phone or with friends on a console.

There are so many choices, which means that the creators and publishers of content have to share a smaller piece of the pie. This forces budgets to be smaller since the potential audience size is smaller for that specific project. We need to be smarter with the time that we have on projects and we need to use the technology to help speed up certain processes — allowing us more time to be creative.

Can you talk about your favorite tools?
There are so many great technologies out there. Each one adds a different color to my work and provides me with information that is crucial to my sound design and mix. For example, Nugen has great metering and loudness tools that help me zero in on my clients’ LKFS requirements. With each client having their own loudness spec, the tools allow me to stay creative and still meet their requirements.
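As an illustration of the kind of loudness check Gershin describes, here is a minimal sketch of verifying a mix against an LKFS delivery target using the open-source soundfile and pyloudnorm libraries; these are not the Nugen tools he mentions, and the -24 LKFS target and file name are assumptions for the example. (LKFS and LUFS refer to the same ITU-R BS.1770 measurement.)

```python
import soundfile as sf     # pip install soundfile
import pyloudnorm as pyln  # pip install pyloudnorm

def check_delivery_loudness(path: str, target_lkfs: float, tolerance_lu: float = 2.0) -> bool:
    """Measure integrated loudness per ITU-R BS.1770 and compare it to a target."""
    audio, rate = sf.read(path)                  # (samples, channels) float array
    meter = pyln.Meter(rate)                     # K-weighted BS.1770 meter
    loudness = meter.integrated_loudness(audio)
    print(f"{path}: {loudness:.1f} LKFS (target {target_lkfs} +/- {tolerance_lu})")
    return abs(loudness - target_lkfs) <= tolerance_lu

# Hypothetical delivery check against a -24 LKFS broadcast-style spec.
# check_delivery_loudness("final_mix.wav", target_lkfs=-24.0)
```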

Audi’s The Duel

What are some recent projects you’ve worked on?
I’ve been working on a huge variety of projects lately. Recently, I finished a commercial for Audi called The Duel, a VR piece called My Brother’s Keeper, 10 Webisodes of The Strain and a VR music piece for Pentatonix. Each one had a different requirement.

What is your typical workflow like?
When I get a job in, I look at what the project is trying to accomplish. What is the story or the experience about? I ask myself, how can I use my craft, shaping audio, to better enhance the experience? Once I understand how I am going to approach the project creatively, I look at what the release platform will be. What are the technical challenges, and what frequencies and spatial options are open to me — whether that means a film in Dolby Atmos or a VR project on the Rift? Once I understand both the creative and technical challenges, I start working within the schedule allotted to me.

Speed and flow are essential… the tools need to be like musical instruments to me, where it goes from brain to fingers. I have a bunch of monitors in front of me, each one supplying me with different and crucial information. It’s one of my favorite places to be — flying the audio starship and exploring the never-ending vista of the imagination. (Yeah, I know it’s corny, but I love what I do!)


Deluxe VFX

Craig Zerouni joins Deluxe VFX as head of technology

Deluxe has named Craig Zerouni as head of technology for Deluxe Visual Effects. In this role, he will focus on continuing to unify software development and systems architecture across Deluxe’s Method studios in Los Angeles, Vancouver, New York and India, and its Iloura studios in Sydney and Melbourne, as well as LA’s Deluxe VR.

Based in LA and reporting to president/GM of Deluxe VFX and VR Ed Ulbrich, Zerouni will lead VFX and VR R&D and software development teams and systems worldwide, working closely with technology teams across Deluxe’s Creative division.

Zerouni has been working in media technology and production for nearly three decades, joining Deluxe most recently from DreamWorks, where he was director of technology at its Bangalore, India-based facility, overseeing all technology. Prior to that he spent nine years at Digital Domain, where he was first head of R&D, responsible for software strategy and teams in five locations across three countries, then senior director of technology, overseeing software, systems, production technology, technical directors and media systems. He has also directed engineering, products and teams at software/tech companies Silicon Grail, Side Effects Software and Critical Path. In addition, he was co-founder of London-based computer animation company CFX.

Zerouni’s work has contributed to features including Tron: Legacy, Iron Man 3, Maleficent, X-Men: Days of Future Past, Ender’s Game and more than 400 commercials and TV IDs and titles. He is a member of BAFTA, ACM/SIGGRAPH, IEEE and the VES. He has served on the AMPAS Digital Imaging Technology Subcommittee and is the author of the technical reference book “Houdini on the Spot.”

Says Ulbrich on the new hire: “Our VFX work serves both the features world, which is increasingly global, and the advertising community, which is increasingly local. Behind the curtain at Method, Iloura, and Deluxe, in general, we have been working to integrate our studios to give clients the ability to tap into integrated global capacity, technology and talent anywhere in the world, while offering a high-quality local experience. Craig’s experience leading global technology organizations and distributed development teams, and building and integrating pipelines is right in line with our focus.”


Assimilate Scratch and Scratch VR Suite upgraded to V.8.6

Assimilate is now offering an open beta for Scratch 8.6 and the Scratch VR Suite 8.6, the latest versions of its realtime post tools and workflow for VR/360 and 2D/3D content, from dailies to conform, grading, compositing and finishing. Expanded HDR functions are featured throughout the product line, including in Scratch VR, which now offers stitching capabilities.

Both open beta versions give pros the opportunity to actively use the full suite of Scratch and Scratch VR tools while evaluating and submitting requests and recommendations for additional features or updates.

Scratch Web for cloud-based, realtime review and collaboration, and Scratch Play for immediate review and playback, are also included in the ecosystem updates. Both products support VR/360 and 2D/3D content.

Current users of the Scratch VR Suite 8.5 and Scratch Finishing 8.5 can download the Scratch 8.6 open beta. Scratch 8.6 open beta and the Scratch VR Suite open beta are available now.

“V8.6 is a major update for both Scratch and the Scratch VR Suite with significant enhancements to the HDR and ACES workflows. We’ve added stitching to the VR toolset so that creators have a complete and streamlined end-to-end VR workflow,” says Jeff Edson, CEO at Assimilate. “The open Beta helps us to continue developing the best and most useful post production features and techniques all artists need to perfect their creativity in color grading and finishing. We act on all input, much of it immediately and some in regular updates.”

Here are some details of the update:

HDR
• PQ and HLG transfer functions are now an integral part of Scratch color management.
• Scopes automatically switch to HDR mode if needed, show levels on a nit scale and highlight any reference level that you set.
• At the project level, define the HDR mastering metadata: color space, color primaries and white levels, luminance levels and more. The metadata is automatically included in the Video HDMI interface (AJA, BMD, Bluefish444) for display.
• In addition to static metadata, there is an option to calculate content-dependent luminance metadata such as MaxCLL and MaxFALL (a simplified calculation is sketched after this list).
• HDR footage can be published directly to YouTube with HDR metadata.
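MaxCLL is the brightest light level reached by any pixel in the program and MaxFALL is the highest frame-average light level, both in cd/m² (nits). The sketch below illustrates the idea on frames already converted to display-referred luminance; it is a simplification rather than Scratch’s implementation, and the CTA-861.3 definition actually operates on the maximum of the R, G and B components per pixel.

```python
import numpy as np

def max_cll_fall(frames_nits):
    """Compute MaxCLL / MaxFALL from an iterable of per-frame luminance arrays.

    frames_nits: iterable of 2D numpy arrays holding display-referred light
    levels per pixel in cd/m^2 (nits). Simplified: real tools first decode
    from the transfer function (e.g. PQ) and use max(R, G, B) per pixel.
    """
    max_cll = 0.0   # brightest single pixel anywhere in the program
    max_fall = 0.0  # highest frame-average light level
    for frame in frames_nits:
        max_cll = max(max_cll, float(frame.max()))
        max_fall = max(max_fall, float(frame.mean()))
    return max_cll, max_fall

# Hypothetical example: three synthetic UHD frames with values up to 1,000 nits.
frames = (np.random.uniform(0.0, 1000.0, size=(2160, 3840)) for _ in range(3))
print(max_cll_fall(frames))
```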

VR/360 – Scratch VR Suite
• 360 stitching functionality: Load all your source media from your 360 cameras into Scratch VR and combine them into a single equirectangular image. Support for camera stitch templates: AutoPano projects, Hugin and PTStitch scripts.
• Ambisonic Audio: Scratch VR can load, set and playback ambisonic audio files to complete the 360 immersive experience.
• Video with 360 sound can be published directly to YouTube 360.
• Additional overlay handles to the existing 2D-equirectangular feature for more easily positioning 2D elements in a 360 scene.

DIT Reporting Function
• Create a report of all clips of either a timeline, a project or just a selection of shots.
• Reports include metadata, such as a thumbnail, clip-name, timecode, scene, take, comments and any metadata attached to a clip.
• Choose from predefined templates or create your own.


Rick & Morty co-creator Justin Roiland to keynote VRLA

Justin Roiland, co-creator of Rick & Morty from Cartoon Network’s Adult Swim, will be delivering VRLA’s Saturday keynote. The expo, which takes place April 14 and 15 at the LA Convention Center, will include demos, educational sessions, experimental work and presentations.

The exhibit floor will feature hardware and software developers, content creators and prototype technology that can only be seen at VRLA. Registration is currently open, with the business-focused two-day “Pro” pass at $299 and a one-day pass for Saturday priced at $40.

Roiland, who is also the newly minted founder of the VR studio Squanchtendo, aims to dive into the surreally funny possibilities of the medium in his keynote, remarking, “What does the future of VR hold? Will there be more wizard games? Are grandmas real? What is a wizard really? Are there wizard grandmas? How does this factor into VR? Please come to my incredible keynote address on the state of VR.”

VRLA is currently accepting applications for its Indie Zone, which offers complimentary exhibition space to small teams who have raised less than $500,000 in venture capital funding or generated less than that amount in revenue. Click here to apply.

One of Lenovo’s new mobile workstations is VR-ready

Lenovo Workstations launched three new mobile workstations at Solidworks World 2017 — the Lenovo ThinkPad P51s and P51, as well as its VR-ready ThinkPad P71.

ThinkPad P51s

The ThinkPad P51s features a new chassis, Intel’s seventh-generation Core i7 processors and the latest Nvidia Quadro workstation graphics, as well as a 4K UHD IPS display with optional IR camera. With all its new features, the ThinkPad P51s still boasts a lightweight, Ultrabook build, shaving off over half a pound from the previous generation. In fact, the P51s is the lightest and thinnest ThinkPad mobile workstation. It also offers Intel Thunderbolt 3 technology with a docking solution, providing users with ultra-fast connectivity and the ability to move massive files quickly.

Also new are the ThinkPad P51 — including a 4K IPS display with 100 percent color gamut and an X-Rite Pantone color calibrator — and the VR-ready ThinkPad P71. These mobile workstations are MIL-SPEC tested and offer a dual-fan cooling system so users can push their systems harder in the field. Both feature 2400MHz DDR4 memory, along with massive storage; the ThinkPad P71 handles up to four storage devices. They also feature the latest Intel Xeon processors for mobile workstations and are ISV-certified.

Taking on VR
The VR-ready ThinkPad P71 (our main image) features Nvidia Pascal-based Quadro GPUs and comes equipped with full Oculus and HTC certifications, along with Nvidia’s VR-ready certification.

SuperSphere, a creative VR company, is using the P71. “To create high-quality work on the go, our company requires Lenovo’s industry-leading mobile workstations that allow us to put the performance of a tower in our backpacks,” says SuperSphere partner/director Jason Diamond. “Our company’s focus on VR requires us to travel to a number of locations, and the ThinkPad P71 lets us achieve the same level of work on location as we can in the office, with the same functionality.”

The Lenovo P51s will be available in March, starting at $1,049, while the P51 and P71 will be available in April, starting at $1,399 and $1,849, respectively.