Tag Archives: VR

Assimilate and Z Cam offer second integrated VR workflow bundle

Z Cam and Assimilate are offering their second integrated VR workflow bundle, which features the Z Cam S1 Pro VR camera and the Assimilate Scratch VR Z post tools. The new Z Cam S1 Pro offers a higher level of image quality, including better low-light handling and dynamic range with detailed, well-saturated, noise-free video. In addition to the new camera, this streamlined pro workflow combines Z Cam’s WonderStitch optical-flow stitching feature with the end-to-end Scratch VR Z tools.

Z Cam and Assimilate have designed their combined technologies to ensure as simple a workflow as possible, including making it easy to switch back and forth between the S1 Pro functions and the Scratch VR Z tools. Users can also employ Scratch VR Z for live camera preview prior to shooting with the S1 Pro. Once the shoot begins, Scratch VR Z is used for dailies and data management, including metadata. There is no need to remove the SD cards and copy files; the camera connects directly to the PC via a high-speed Ethernet port. Stitching is then done with Z Cam’s WonderStitch, now integrated into Scratch VR Z, which also handles traditional editing, color grading, compositing, support for multichannel audio from the S1 or external ambisonic sound, finishing and publishing (to online or standalone 360 platforms).

Z Cam S1 Pro/Scratch VR Z bundle highlights include:
• Improved low-light sensitivity and dynamic range – 4/3-inch CMOS image sensor
• Premium 220 degree MFT fisheye lens, f/2.8~11
• Coordinated AE (automatic exposure) and AWB (automatic white balance)
• Full integration with built-in Z Cam Sync
• 6K 30fps resolution (post stitching) output
• Gig-E port (video stream & setting control)
• WonderStitch optical-flow-based stitching
• Live Streaming to Facebook, YouTube or a private server, including text overlays and green/composite layers for a virtual set
• Scratch VR Z single license, a streamlined, end-to-end, integrated VR post workflow

“We’ve already developed a few VR projects with the S1 Pro VR camera and the entire Neotopy team is awed by its image quality and performance,” says Alex Regeffe, VR post production manager at Neotopy Studio in Paris. “Together with the Scratch VR Z tools, we see this integrated workflow as a game changer in creating VR experiences, because our focus is now all on the creativity and storytelling rather than configuring multiple, costly tools and workflows.”

The Z Cam S1 Pro/Scratch VR Z bundle is available within 30 days of ordering. Priced at $11,999 (US), the bundle includes the following:
– Z Cam S1 Pro camera main unit, Z Cam S1 Pro battery unit (w/o battery cells), AC/DC power adapter unit and power connection cables (US, UK, EU).
– A Z Cam WonderStitch license, which is an optical flow-based stitching feature that performs offline stitching of files from Z Cam S1 Pro. Z Cam WonderStitch requires a valid software license associated with a designated Z Cam S1 Pro, and is nontransferable.
– A Scratch VR Z permanent license: a pro end-to-end VR post workflow with an all-inclusive, realtime toolset for data management, dailies, conform, color grading, compositing, multichannel and ambisonic sound, and finishing, all integrated with the Z Cam S1 Pro camera. Includes one year of support/updates.

The companies are offering a tutorial about the bundle.

Nugen adds 3D Immersive Extension to Halo Upmix

Nugen Audio has updated its Halo Upmix with a new 3D Immersive Extension, adding further options beyond the existing Dolby Atmos bed track capability. The 3D Immersive Extension now provides ambisonic-compatible output as an alternative to channel-based output for VR, game and other immersive applications. This makes it possible to upmix, re-purpose or convert channel-based audio for an ambisonic workflow.
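To make the idea concrete, converting a channel bed to an ambisonic mix amounts to treating each speaker feed as a virtual source at its nominal angle and encoding it into the spherical-harmonic components. The short Python sketch below is a generic first-order (AmbiX/SN3D) encode, not Nugen’s upmix algorithm, and the speaker angles are illustrative assumptions.

import numpy as np

# Nominal speaker angles in degrees (azimuth, elevation) for a 5.0 bed -- illustrative only
SPEAKER_ANGLES = {"L": (30, 0), "R": (-30, 0), "C": (0, 0), "Ls": (110, 0), "Rs": (-110, 0)}

def encode_foa(channels):
    """Encode {name: mono numpy signal} into first-order ambisonics
    (AmbiX channel order W, Y, Z, X with SN3D gains)."""
    n = len(next(iter(channels.values())))
    wyzx = np.zeros((4, n))
    for name, signal in channels.items():
        az, el = np.radians(SPEAKER_ANGLES[name])
        wyzx[0] += signal                            # W (omni)
        wyzx[1] += signal * np.sin(az) * np.cos(el)  # Y (left/right)
        wyzx[2] += signal * np.sin(el)               # Z (up/down)
        wyzx[3] += signal * np.cos(az) * np.cos(el)  # X (front/back)
    return wyzx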

With this 3D Immersive Extension, Halo fully supports Avid’s newly announced Pro Tools 12.8, now with native 7.1.2 stems for Dolby Atmos mixing. The combination of Pro Tools 12.8 and the Halo 3D Immersive Extension can provide a more fluid workflow for audio post pros handling multichannel and object-based audio formats.

Halo Upmix is available immediately at a list price of $499 for both OS X and Windows, with support for Avid AAX, AudioSuite, VST2, VST3 and AU formats. The new 3D Immersive Extension replaces the Halo 9.1 Extension and can be purchased for $199. Owners of the existing Halo 9.1 Extension can upgrade to the 3D Immersive Extension at no additional cost. Support for native 7.1.2 stems in Avid Pro Tools 12.8 is available at launch.

Red’s Hydrogen One: new 3D-enabled smartphone

In its always subtle way, Red has stated that “the future of personal communication, information gathering, holographic multi-view, 2D, 3D, AR/VR/MR and image capture just changed forever” with the introduction of Hydrogen One, a pocket-sized, glasses-free “holographic media machine.”

Hydrogen One is a standalone, full-featured, unlocked multi-band smartphone, operating on Android OS, that promises “look around depth in the palm of your hand” without the need for separate glasses or headsets. The device features a 5.7-inch professional Hydrogen holographic display that switches between traditional 2D content, holographic multi-view content, 3D content and interactive games, and it supports both landscape and portrait modes. Red has also embedded a proprietary H30 algorithm in the OS that will convert stereo sound into multi-dimensional audio.

The Hydrogen system incorporates a high-speed data bus to enable a comprehensive and expandable modular component system, including future attachments for shooting high-quality motion, still and holographic images. It will also integrate into the professional Red camera program, working together with Scarlet, Epic and Weapon as a user interface and monitor.

Future users are already talking about this “nifty smartphone with glasses-free 3D,” and one has gone so far as to describe the announcement as “the day 360-video became Betamax, and AR won the race.” Others are more tempered in their enthusiasm, viewing this as a really expensive smartphone with a holographic screen that may or may not kill 360 video. Time will tell.

Initially priced between $1,195 and $1,595, the Hydrogen One is targeted to ship in Q1 of 2018.

Dell partners with Sony on Spider-Man film, showcases VR experience

By Jay Choi

Sony Pictures Imageworks used Dell technology during the creation of Spider-Man: Homecoming. To celebrate, Dell and Sony held a press junket in New York City that included tech demos and details on the film, as well as the Spider-Man: Homecoming Virtual Reality Experience. While I’m a huge Spider-Man fan, I am not biased in saying it was spectacular.

To begin the VR demo, users are given the same suit Tony Stark designs for Peter Parker in Captain America: Civil War and Spider-Man: Homecoming. The first action you perform is grabbing the mask and putting on the costume. You then jump into a tutorial that teaches you the web-shooter mechanics (which map intuitively to your VR controllers).

Users are then tasked with stopping the villainous Vulture from attacking you and the city of New York. Admittedly, I didn’t get too far into the demo. I was a bit confused as to where to progress, but also absolutely stunned by the mechanics and details. Along with pulling triggers to fire webs, each button accessed a different type of web cartridge in your web shooter. So, like Spidey, I had to be both strategic and adaptive to each changing scenario. I actually felt like I was shooting webs and pulling large crates around… I honestly spent most of my time seeing how far the webs could go and what they could stick to — it was amazing!

The Tech
With the power of thousands of workstations, servers and over a petabyte of storage from Dell, Sony Pictures Imageworks and other studios, such as MPC and Method, were able to create the visual effects for the Spider-Man: Homecoming film. The Virtual Reality Experience actually pulled the same models, assets and details used in the film, giving users a truly awesome and immersive experience.

When I asked what this particular VR experience would cost your typical consumer, I was told that when developing the game, Dell researched major VR consoles and workstations and set a performance benchmark to strive for, so most consumers should be able to experience the game without much of a difference.

Along with the VR game, Dell also showcased its new gaming laptop: the Inspiron 15 7000. With a quad-core 7th-generation Intel Core H-class processor and Nvidia GeForce GTX 1050/1050 Ti graphics, the laptop is marketed for hardcore gaming. It has a tough-yet-sleek design that’s appealing to the eye. However, I was more impressed with its power and potential. The junket had one of these new Inspiron laptops running the recently rebooted Killer Instinct fighting game (which, ironically, was my very first video game on the Super Nintendo… I guess violent video games did an okay job raising me). As a fighting game fanatic and occasional competitor, I have to say the game ran very smoothly. I couldn’t spot any input latency from the USB-connected Xbox One controllers or any frame skipping. It does what it says it can do!

The Inspiron 15 7000 was also featured in the Spider-Man: Homecoming film, used by Jacob Batalon’s character, Ned, to aid Peter Parker in his web-tastic mission.

I was also lucky enough to try out Sony Future Lab Program’s projector-based interactive Find Spider-Man game, in which the game’s “screen” is projected onto a table from a depth-perceiving projector lamp. A blank board was used as a scroll to maneuver a map of New York City, while piles of movable blocks were recognized as buildings with individual floors. Sometimes Spidey was found sitting on the roof, while other times he was hiding inside on one of the floors.

All in all, Dell and Sony Pictures Imageworks’ partnership provided some sensational insight into what being Spider-Man is like with their technology and innovation, and I hope to see it evolve even further alongside more Spider-Man: Homecoming films.

The Spider-Man: Homecoming Virtual Reality Experience arrives on June 30th for all major VR platforms. Marvel’s Spider-Man: Homecoming releases in theaters on July 7th.


Jay Choi is a Korean-American screenwriter, who has an odd fascination with Lego minifigures, a big heart for his cat Sula, and an obsession with all things Spider-Man. He is currently developing an animated television pitch he sold to Nickelodeon and resides in Brooklyn.

SGO’s Mistika VR is now available

 

SGO’s Mistika VR software app is now available. This solution has been developed using the company’s established Mistika technology and offers advanced realtime stitching capabilities, combined with a new intuitive interface, raw format support and impressive speed.

Using Mistika Optical Flow Technology, the new VR solution takes camera position information and sequences, then stitches the images together using extensive and intelligent presets. Its stitching algorithms address many of the challenges facing post teams while allowing for the highest possible image quality.
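For readers new to stitching, the geometry underneath any 360 stitcher is fairly simple before optical flow refines the seams: each output pixel on the equirectangular canvas maps to a ray, which is rotated into a source camera’s frame and projected through that camera’s lens model. The Python sketch below illustrates that general principle only; it is not SGO’s implementation, and the equidistant fisheye model is an assumption.

import numpy as np

def equirect_to_ray(u, v, width, height):
    """Map an output pixel (u, v) on an equirectangular canvas to a unit
    direction vector in world space."""
    lon = (u / width) * 2 * np.pi - np.pi       # longitude, -pi..pi
    lat = np.pi / 2 - (v / height) * np.pi      # latitude, pi/2..-pi/2
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

def sample_camera(ray, cam_rotation, focal, cx, cy):
    """Rotate the ray into one camera's frame (orientation comes from the rig
    preset) and project it with a simple equidistant fisheye model (r = focal * theta)."""
    d = cam_rotation @ ray
    theta = np.arccos(np.clip(d[2], -1, 1))     # angle from the optical axis
    phi = np.arctan2(d[1], d[0])
    r = focal * theta
    return cx + r * np.cos(phi), cy + r * np.sin(phi)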

Mistika VR was developed to encompass and work with as many existing VR camera formats as possible, and SGO is creating custom pre-sets for productions where teams are building the rigs themselves.

The Mistika VR solution is part of SGO’s new natively integrated workflow concept. SGO has been dissecting its current turnkey offering “Mistika Ultima” to develop advanced workflow applications aimed at specific tasks.

Mistika VR runs on Mac and Windows and is available as a personal or professional (with SGO customer support) edition license. License costs are:

– 30-day license (with no automatic renewals): Evaluation Version is free; Personal Edition $78; Professional Edition $110

– Monthly subscription: Personal Edition $55 per month; Professional Edition $78 per month

– Annual subscription: Personal Edition $556 per year; Professional Edition $779 per year

What was new at GTC 2017

By Mike McCarthy

I, once again, had the opportunity to attend Nvidia’s GPU Technology Conference (GTC) in San Jose last week. The event has become much more focused on AI supercomputing and deep learning as those industries mature, but there was also a concentration on VR for those of us from the visual world.

The big news was that Nvidia released the details of its next-generation GPU architecture, code-named Volta. The flagship chip will be the Tesla V100, with 5,120 CUDA cores and 15 teraflops of computing power. It is a huge 815mm² chip, created with a 12nm manufacturing process for better energy efficiency. Most of its unique architectural improvements are focused on AI and deep learning, with specialized execution units for Tensor calculations, which are foundational to those processes.

Tesla V100

Similar to last year’s GP100, the new Volta chip will initially be available in Nvidia’s SXM2 form factor for dedicated GPU servers like their DGX-1, which uses the NVLink bus, now running at 300GB/s. The new GPUs will be a direct swap-in replacement for the current Pascal-based GP100 chips. There will also be a 150W version of the chip on a PCIe card similar to their existing Tesla lineup, but requiring only a single half-length slot.

Assuming that Nvidia puts similar processing cores into their next generation of graphics cards, we should be looking at a 33% increase in maximum performance at the top end. The intermediate stages are more difficult to predict, since that depends on how they choose to tier their cards. But the increased efficiency should allow more significant increases in performance for laptops, within existing thermal limitations.
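For the curious, here is the back-of-the-envelope arithmetic behind those figures, assuming FP32, two FLOPs per core per clock and roughly unchanged clock speeds; the Pascal core count used for comparison is the 3,840 of a full GP100/GP102.

# Rough math only -- not an official Nvidia comparison
volta_cores, pascal_cores = 5120, 3840
print(volta_cores / pascal_cores)             # ~1.33, i.e. ~33% more peak throughput
clock_ghz = 15e12 / (volta_cores * 2) / 1e9   # clock implied by the quoted 15 TFLOPS
print(round(clock_ghz, 2))                    # ~1.46 GHz boost clock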

Nvidia is continuing its pursuit of GPU-enabled autonomous cars with its Drive PX2 and Xavier systems for vehicles. The newest version will have a 512-core Volta GPU and a dedicated deep learning accelerator chip that they are going to open source for other devices. They are targeting larger vehicles now, specifically the trucking industry this year, with an AI-enabled semi-truck in their booth.

They also had a tractor showing off Blue River’s AI-enabled spraying rig, targeting individual plants for fertilizer or herbicide. It seems like farm equipment would be an optimal place to implement autonomous driving, allowing perfectly straight rows and smooth grades, all in a flat controlled environment with few pedestrians or other dynamic obstructions to be concerned about (think Interstellar). But I didn’t see any reference to them looking in that direction, even with a giant tractor in their AI booth.

On the software and application front, SAP showed an interesting implementation of deep learning that analyzes broadcast footage and other content to identify logos and branding, in order to provide quantifiable measurements of the effectiveness of various forms of brand advertising. I expect we will continue to see more machine learning implementations of video analysis, for things like automated captioning and descriptive video tracks, as AI becomes more mature.

Nvidia also released an “AI-enabled” version of Iray that uses image prediction to increase the speed of interactive ray-traced renders. I am hopeful that similar technology could be used to effectively increase the resolution of video footage as well. Basically, a computer sees a low-res image of a car and says, “I know what that car should look like,” and fills in the rest of the visual data. The possibilities are pretty incredible, especially in regard to VFX.

Iray AI

On the VR front, Nvidia announced a new SDK that allows live GPU-accelerated image stitching for stereoscopic VR processing and streaming. It scales from HD to 5K output, splitting the workload across one to four GPUs. The stereoscopic version is doing much more than basic stitching, processing for depth information and using that to filter the output to remove visual anomalies and improve the perception of depth. The output was much cleaner than any other live solution I have seen.
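Nvidia hasn’t published the SDK’s internals here, but the multi-GPU scaling is easy to picture: the output panorama is divided into bands, and each device warps and blends its share of the source frames. Below is a minimal Python sketch of that partitioning idea, with worker processes standing in for GPUs and a placeholder stub instead of the SDK’s actual API.

from concurrent.futures import ProcessPoolExecutor

OUT_WIDTH, NUM_GPUS = 5120, 4   # e.g. a 5K-wide equirectangular output split across four devices

def stitch_band(args):
    """Stitch one vertical band of the output panorama.
    In a real pipeline this work would be bound to one GPU (placeholder stub here)."""
    gpu_id, x_start, x_end = args
    # ... warp/blend source frames into output columns x_start..x_end on device gpu_id ...
    return gpu_id, x_start, x_end

bands = [(i, i * OUT_WIDTH // NUM_GPUS, (i + 1) * OUT_WIDTH // NUM_GPUS)
         for i in range(NUM_GPUS)]
with ProcessPoolExecutor(max_workers=NUM_GPUS) as pool:
    results = list(pool.map(stitch_band, bands))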

I also got to try my first VR experience recorded with a light field camera. This not only gives the user 360-degree stereo look-around capability, but also the ability to move their head to shift their perspective within a limited range (based on the size of the recording array). The project they were using to demo the technology didn’t highlight the amazing results until the very end of the piece, but when it did, it was the most impressive VR implementation I have had the opportunity to experience yet.
———-
Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been working on new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

HP offering new ZBooks and DreamColor displays

By Claudio Santos

I have to admit, even though I consider myself to be a very outdoorsy person, I have a very soft spot for technology. How could I not? Working in post means I often spend more time staring at a computer screen than I do sleeping. Since I don’t plan on changing careers, the least I can do is indulge myself and stare at a good computer screen.

I recently had the opportunity to meet with the team at HP for an early look at the new products they have brought to market: the ZBook mobile workstation series and two new DreamColor monitors.

DreamColor Displays
Since I spend most of my time working in audio post, I don’t really have a reasonable excuse to be so excited about these monitors, but they seem to have been made with such care that it is hard not to get excited. The DreamColor Z24x G2 and the DreamColor Z31x Studio are two new entries in the already well-reputed series of displays aimed at color professionals and post houses.

HP Z24x

They are both 10-bit displays with accessible options for color calibration and great color accuracy. The build of both monitors leaves nothing to be desired, and they should be able to perform for years before being replaced. The Z24x G2 is a 24-inch monitor with a 16:10 aspect ratio and a native resolution of 1920×1200. The Z31x Studio, on the other hand, is a 31.1-inch monitor with a native resolution of 4096×2160 (cinema 4K).

While the specs alone seem great, it is the workflow enhancements where the HP displays really shine. The Z31x Studio is aimed at post facilities, which often have an IT department responsible for maintaining and managing the hardware. The display makes this usually tedious task a breeze by allowing easy remote management over the network, scripting for profiles and user hotkeys, and an API that allows facilities to fully integrate the displays into their systems. It also boasts an integrated colorimeter embedded into the frame of the monitor, which can be scheduled to automatically calibrate the display during off-hours. To tackle the common nuisance of having to manage two different machines from the same desk, the monitor has a built-in KVM switch that makes sharing a keyboard and mouse between two systems absolutely painless.
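HP provides management tooling for the Z31x, but I haven’t scripted against it myself; the short Python sketch below is purely hypothetical, with made-up endpoint names, just to illustrate what scheduling the built-in colorimeter across a facility over the network might look like.

import requests  # hypothetical REST-style sketch; not HP's actual API

DISPLAYS = ["10.0.1.21", "10.0.1.22"]   # example display addresses on the facility network

def schedule_calibration(host, preset="DCI-P3", start="02:00"):
    """Ask a display (hypothetical endpoint) to run its built-in colorimeter
    against a named preset during off-hours."""
    r = requests.post(f"http://{host}/api/calibration/schedule",  # made-up endpoint
                      json={"preset": preset, "start": start}, timeout=5)
    r.raise_for_status()
    return r.json()

for ip in DISPLAYS:
    schedule_calibration(ip)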

ZBook Mobile Workstation Series
The ZBook workstations cover a range of sizes, starting with an ultra-portable 14-inch model and going all the way up to 17.3-inch machines. They are all very powerful machines that compete with the best high-spec machines on the market. Once again, what sets these apart is the attention to detail HP put into designing them.

ZBook 17-inch

The whole series supports biometric authentication, with the added safety that it operates at the BIOS level. This means that even if someone tries to tamper with the OS before it loads, they will still have to bypass the biometric system before gaining access to any of the hardware.

They also offer comforts such as tool-less access to the battery and hard drive, and easily expandable RAM, so upgrading/swapping parts shouldn’t be a whole-day ordeal. While I was demoing the laptops, I had the chance to try a VR experience that was being completely powered and rendered in realtime by one of the ZBooks on an HTC Vive. The experience was flawless, and there were no obvious corners cut in the geometry or lighting to make a “pretend” demo. I had the impression I could confidently rely on one of these machines to work on VR projects.


Claudio Santos is a sound editor and spatial audio mixer at Silver Sound. Slightly too interested in technology and workflow hacks, he spends most of his waking hours tweaking, fiddling and tinkering away on his computer.

VR Workflows: The Studio | B&H panel during NAB

At this year’s NAB Show in Las Vegas, The Studio B&H hosted a series of panels at their booth. One of those panels addressed workflows for virtual reality, including shooting, posting, best practices, hiccups and trends.

The panel, moderated by postPerspective editor-in-chief Randi Altman, was made up of SuperSphere’s Lucas Wilson, ReDesign’s Greg Ciaccio, Local Hero Post’s Steve Bannerman and Jaunt’s Koji Gardner.

While the panel was streamed live, it also lives on YouTube. Enjoy…

New AMD Radeon Pro Duo graphics card for pro workflows

AMD was at NAB this year with its dual-GPU graphics card designed for pros — the Polaris-architecture-based Radeon Pro Duo. Built on the capabilities of the Radeon Pro WX 7100, the Radeon Pro Duo graphics card is designed for media and entertainment, broadcast and design workflows.

The Radeon Pro Duo is equipped with 32GB of ultra-fast GDDR5 memory to handle larger data sets, more intricate 3D models, higher-resolution videos and complex assemblies. Operating at a max power of 250W, the Radeon Pro Duo uses a total of 72 compute units (4,608 stream processors) for a combined performance of up to 11.45 TFLOPS of single-precision compute performance on one board, and twice the geometry throughput of the Radeon Pro WX 7100.
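Those spec numbers hang together with simple arithmetic, assuming FP32 and two FLOPs per stream processor per clock (a rough sketch, not AMD’s official breakdown):

compute_units = 72
stream_processors = compute_units * 64            # 64 per GCN compute unit = 4,608
clock_ghz = 11.45e12 / (stream_processors * 2) / 1e9
print(stream_processors, round(clock_ghz, 2))     # 4608, ~1.24 GHz peak clock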

The Radeon Pro Duo enables pros to work on up to four 4K monitors at 60Hz, drive the latest 8K single monitor display at 30Hz using a single cable or drive an 8K display at 60Hz using a dual cable solution.

The Radeon Pro Duo’s distinct dual-GPU design allows pros the flexibility to divide their workloads, enabling smooth multitasking between applications by committing GPU resources to each. This lets users focus on their creativity and get more done faster, allowing for a greater number of design iterations in the same amount of time.

On select pro apps (including DaVinci Resolve, Nuke/Cara VR, Blender Cycles and VRed), the Radeon Pro Duo offers up to two times faster performance compared with the Radeon Pro WX 7100.

For those working in VR, the Radeon Pro Duo graphics card uses the power of two GPUs to render out separate images for each eye, increasing VR performance over single GPU solutions by up to 50% in the SteamVR test. AMD’s LiquidVR technologies are also supported by the industry’s leading realtime engines, including Unity and Unreal, to help ensure smooth, comfortable and responsive VR experiences on Radeon Pro Duo.

The Radeon Pro Duo’s planned availability is the end of May at an expected price of US $999.

Timecode and GoPro partner to make posting VR easier

Timecode Systems and GoPro’s Kolor team recently worked together to create a new timecode sync feature for Kolor’s Autopano Video Pro stitching software. By combining their technologies, the two companies have developed a VR workflow solution that offers the efficiency benefits of professional standard timecode synchronization to VR and 360 filming.

Time-aligning files from the multiple cameras in a 360° VR rig can be a manual and time-consuming process if there is no easy synchronization point, especially when synchronizing with separate audio. Visually timecode-slating cameras is a disruptive manual process, and using the clap of a slate (or another visual or audio cue) as a sync marker can be unreliable when it comes to the edit process.

The new sync feature, included in the Version 3.0 update to Autopano Video Pro, incorporates full support for MP4 timecode generated by Timecode’s products. The solution is compatible with a range of custom, multi-camera VR rigs, including rigs using GoPro’s Hero 4 cameras with SyncBac Pro for timecode and also other camera models using alternative Timecode Systems products. This allows VR filmmakers to focus on the creative and not worry about whether every camera in the rig is shooting in frame-level synchronization. Whether filming using a two-camera GoPro Hero 4 rig or 24 cameras in a 360° array creating resolutions as high as 32K, the solution syncs with the same efficiency. The end results are media files that can be automatically timecode-aligned in Autopano Video Pro with the push of a button.
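The principle behind that one-button alignment is straightforward, even though Autopano Video Pro’s implementation is not public: convert each camera’s start timecode to an absolute frame count and trim the difference from the head of each clip. A minimal Python illustration (non-drop-frame timecode, example values only):

def tc_to_frames(tc, fps=30):
    """Convert a non-drop SMPTE timecode string 'HH:MM:SS:FF' to a frame count."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def align_offsets(start_timecodes, fps=30):
    """Given each camera's start timecode, return how many frames to trim from
    the head of each clip so that all clips begin on the same frame."""
    frames = {cam: tc_to_frames(tc, fps) for cam, tc in start_timecodes.items()}
    latest = max(frames.values())                 # the last camera to start recording
    return {cam: latest - f for cam, f in frames.items()}

# Example: three cameras in a rig that started recording at slightly different times
print(align_offsets({"cam1": "01:00:00:00", "cam2": "01:00:00:07", "cam3": "01:00:00:02"}))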

“We’re giving VR camera operators the confidence that they can start and stop recording all day long without the hassle of having to disturb filming to manually slate cameras; that’s the understated benefit of timecode,” says Paul Bannister, chief science officer of Timecode Systems.

“To create high-quality VR output using multiple cameras to capture high-quality spherical video isn’t enough; the footage that is captured needs to be stitched together as simply as possible — with ease, speed and accuracy, whatever the camera rig,” explains Alexandre Jenny, senior director of Immersive Media Solutions at GoPro. “Anyone who has produced 360 video will understand the difficulties involved in relying on a clap or visual cue to mark when all the cameras start recording to match up video for stitching. To solve that issue, either you use an integrated solution like GoPro Omni with a pixel-level synchronization, or now you have the alternative to use accurate timecode metadata from SyncBac Pro in a custom, scalable multicamera rig. It makes the workflow much easier for professional VR content producers.”