
Dell EMC’s ‘Ready Solutions for AI’ now available

Dell EMC has made available its new Ready Solutions for AI, with specialized designs for Machine Learning with Hadoop and Deep Learning with Nvidia.

Dell EMC Ready Solutions for AI eliminate the need for organizations to individually source and piece together their own solutions. They offer a Dell EMC-designed and validated set of best-of-breed technologies — including AI frameworks and libraries — together with compute, networking and storage. Dell EMC’s portfolio of services includes consulting, deployment, support and education.

Dell EMC’s Data Science Provisioning Portal offers an intuitive GUI that provides self-service access to hardware resources and a comprehensive set of AI libraries and frameworks, such as Caffe and TensorFlow. This reduces the steps it takes to configure a data scientist’s workspace to five clicks. Ready Solutions for AI’s distributed, scalable architecture offers the capacity and throughput of Dell EMC Isilon’s All-Flash scale-out design, which can improve model accuracy with fast access to larger data sets.
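Dell EMC doesn’t publish the portal’s internals, but once a workspace is provisioned, the first thing a data scientist typically does is confirm the framework can see the allocated GPUs. A generic sanity check using the TensorFlow 1.x-era API of the day — not Dell-specific tooling — looks like this:

    # Minimal sanity check in a freshly provisioned workspace (TF 1.x-era
    # API): confirm TensorFlow has GPU support and can see the hardware.
    # Generic example, not Dell EMC tooling.
    import tensorflow as tf
    from tensorflow.python.client import device_lib

    print(tf.test.is_gpu_available())  # True if a CUDA GPU is visible
    print([d.name for d in device_lib.list_local_devices()])  # e.g. '/device:GPU:0'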

Dell EMC Ready Solutions for AI: Deep Learning with Nvidia solutions are built around Dell EMC PowerEdge servers with Nvidia Tesla V100 Tensor Core GPUs. Key features include Dell EMC PowerEdge R740xd and C4140 servers with four Nvidia Tesla V100 SXM2 Tensor Core GPUs; Dell EMC Isilon F800 All-Flash Scale-out NAS storage; and Bright Cluster Manager for Data Science in combination with the Dell EMC Data Science Provisioning Portal.

Dell EMC Ready Solutions for AI: Machine Learning with Hadoop includes an optimized solution stack, along with data science and framework optimization to get up and running quickly, and it allows expansion of existing Hadoop environments for machine learning.

Key features include Dell EMC PowerEdge R640 and R740xd servers; Cloudera Data Science Workbench for self-service data science for the enterprise; the Apache Spark open source unified data analytics engine; and the Dell EMC Data Science Provisioning Engine, which provides preconfigured containers that give data scientists access to the Intel BigDL distributed deep learning library on the Spark framework.
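As a hedged illustration of what those preconfigured containers enable — a generic sketch following the BigDL 0.x-era Python API, not the actual contents of Dell EMC’s containers — a small BigDL-on-Spark training job looks roughly like this:

    # Generic BigDL-on-Spark sketch (BigDL 0.x-era Python API). Assumes a
    # pyspark shell launched with BigDL's Spark conf, so `sc` already exists.
    import numpy as np
    from bigdl.util.common import init_engine, Sample
    from bigdl.nn.layer import Sequential, Linear, ReLU
    from bigdl.nn.criterion import MSECriterion
    from bigdl.optim.optimizer import Optimizer, SGD, MaxEpoch

    init_engine()  # wires BigDL into the running SparkContext

    # Toy regression data, distributed as an RDD of BigDL Samples
    rows = sc.parallelize(np.random.rand(1000, 4))
    samples = rows.map(lambda r: Sample.from_ndarray(r[:3], r[3:]))

    model = Sequential().add(Linear(3, 8)).add(ReLU()).add(Linear(8, 1))
    Optimizer(model=model, training_rdd=samples, criterion=MSECriterion(),
              optim_method=SGD(learningrate=0.01),
              end_trigger=MaxEpoch(5), batch_size=32).optimize()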

New Dell EMC Consulting services are available to help customers implement and operationalize the Ready Solution technologies and AI libraries, and scale their data engineering and data science capabilities. Dell EMC Education Services offers courses and certifications on data science and advanced analytics and workshops on machine learning in collaboration with Nvidia.

Composer and sound mixer Rob Ballingall joins Sonic Union

NYC-based audio studio Sonic Union has added composer/experiential sound designer/mixer Rob Ballingall to its team. He will be working out of both Sonic Union’s Bryant Park and Union Square locations. Ballingall brings with him experience in music and audio post, with an emphasis on the creation of audio for emerging technology projects, including experiential and VR.

Ballingall recently created audio for an experiential in-theatre commercial for Mercedes-Benz Canada, using Dolby Atmos, D-Box and 4DX technologies. In addition, for National Geographic’s One Strange Rock VR experience, directed by Darren Aronofsky, Ballingall created audio for custom VR headsets designed in the style of astronaut helmets, which contained a pinhole projector to display visuals on the inside of the helmet’s visor.

Formerly at Nylon Studios, Ballingall also composed music for brand campaigns for clients such as Ford, Kellogg’s and Walmart, and provided sound design/engineering on projects for the Ad Council and on Resistance Radio for Amazon Studios’ The Man in the High Castle, which collectively won multiple Cannes Lions, Clio and One Show awards, as well as garnering two Emmy nominations.

Born in London, Ballingall immigrated to the US eight years ago to seek a job as a mixer, assisting numerous Grammy Award-winning engineers at NYC’s Magic Shop recording studio. Having studied music composition and engineering from high school to college in England, he soon found his niche offering compositional and arranging counterpoints to sound design, mix and audio post for the commercial world. Following stints at other studios, including Nylon Studios in NYC, he transitioned to Sonic Union to service agencies, brands and production companies.


HP intros new entry-level HP Z lineup

HP is offering new entry-level workstations in its HP Z lineup, which is designed to help accelerate performance and secure pros’ workflows.

The HP Z2 Mini, HP Z2 Small Form Factor and HP Z2 Tower, as well as the HP EliteDesk 800 Workstation Edition, feature built-in end-to-end HP security services, providing protection from evolving malware threats with self-healing BIOS and an HP endpoint security controller. Users get protection from hardware-enforced security solutions, including HP Sure Start Gen4 and HP Sure Run, which help keep critical processes running, even if malware tries to stop them. Additionally, HP’s Manageability Kit Gen 2 manages multiple devices.

All HP Z2 workstations can now connect with Thunderbolt for fast device connections and offer an array of certifications for the apps pros are using in their day-to-day work lives. HP Performance Advisor is available to optimize software and drivers, and users can deploy Intel Xeon processors and ECC memory for added reliability. The customization, expandability, performance upgradeability and I/O options help future-proof HP Z workstation purchases.

Here are some details about the fourth-generation entry HP Z workstation family:

The HP Z2 Mini G4 workstation features what HP calls “next-level performance” in a small form factor (2.7 liters in total volume). Compared to the previous-generation HP Z2 Mini, it offers twice the graphics power. Users can choose either the Nvidia Quadro P600 or Nvidia Quadro P1000 GPU, or opt for AMD Radeon Pro WX4150 graphics.

Thanks to its size, users can mount it under a desk, behind a display or in a rack — up to 56 HP Z2 Mini workstations will fit in a standard 42U rack with the custom rackmount bracket accessory. With its flexible I/O, users can configure the system for legacy serial-port connectivity, as well as support for up to six displays. The HP Z2 Mini G4 comes with six-core Intel Xeon processors.

The HP Z2 Small Form Factor (SFF) G4 workstation offers 50 percent more processing power than the previous generation in the same compact size. The six-core CPU provides significant performance boosts. The HP Z2 SFF takes customization to the next level with flexible I/O options that free up valuable PCIe slots while accommodating legacy or specialized equipment and changing display needs.

The HP Z2 G4 SFF ships with four PCIe slots and dual M.2 storage slots. Its flexible I/O option enables users to customize networking, I/O or display needs without taking up PCIe slots or adding external adapters.

The HP Z2 Tower G4 workstation is designed for complex workloads like rendering with up to Ultra 3D graphics and the latest Intel Core or Intel Xeon processors. The HP Z2 tower can handle demanding 3D projects with over 60 percent more graphics power than the previous generation. With high clock speeds, users can get full, unthrottled performance, even with heavy workloads.

The HP EliteDesk 800 Workstation Edition targets users who want to upgrade to a workstation-class desktop with an integrated, ISV-certified application experience.

Designed for 2D/3D design, it is also out-of-the box optimized for leading VR engines and features the Nvidia GeForce GTX 1080.

All four systems are expected to be available later this month: the HP Z2 Mini at a starting price of $799, the HP Z2 Small Form Factor at $749, the HP Z2 Tower at $769, and the HP EliteDesk 800 at $642, including Nvidia Quadro P400 graphics.


Lenovo intros 15-inch VR-ready ThinkPad P52

Lenovo’s new ThinkPad P52 is a 15-inch, VR-ready and ISV-certified mobile workstation featuring an Nvidia Quadro P3200 GPU. The all-new hexa-core Intel Xeon CPU doubles the memory capacity to 128GB and increases PCIe storage. Lenovo says the ThinkPad excels at animation and visual effects work, the creation of large models and datasets, and realtime playback.

“More and more, M&E artists have the need to create on-the-go,” reports Lenovo senior worldwide industry manager for M&E Rob Hoffmann. “Having desktop-like capabilities in a 15-inch mobile workstation allows artists to remain creative anytime, anywhere.”

The workstation targets traditional ISV workflows, as well as AR and VR content creation or deployment of mobile AI. Lenovo points to Virtalis, a VR and advanced visualization company, as an example of who might take advantage of the workstation.

“Our virtual reality solutions help clients better understand data and interact with it. Being able to take these solutions mobile with the ThinkPad P52 gives us expanded flexibility to bring the technology to life for clients in their unique environments,” says Steve Carpenter, head of solutions development for Virtalis. “The ThinkPad P52 powering our Virtalis Visionary Render software is perfect for engineering and design professionals looking for a portable solution to take their first steps into the endless possibilities of VR.”

The P52 will also feature a 4K UHD display with 400 nits of brightness, 100% Adobe RGB color gamut and 10-bit color depth. Dual USB-C Thunderbolt ports support the display of 8K video, allowing users to take advantage of the ThinkPad Thunderbolt Workstation Dock.

The ThinkPad P52 will be available later this month.


Combining 3D and 360 VR for The Cabiri: Anubis film

Both 360 VR and 3D allow audiences to feel in on the action and emotion of a film narrative or performance, but combine the two and you can create a highly immersive experience that brings the audience directly into the “reality” of the scenes.

This is exactly what film producers and directors Fred Beahm and Bogdan Darev have done in The Cabiri: Anubis, a 3D/360VR performance art film showing at the Seattle International Film Festival’s (SIFF) VR Zone from May 18 through June 10.

The Cabiri is a Seattle-based performance art group that creates stylistic and athletic dance and entertainment routines at theater venues throughout North America. The 3D/360VR film can now be streamed from the Pixvana app to the new Oculus Go headset, which is specifically designed for 3D and 360 streaming and viewing.

“As a director working in cinema to create worlds where reality is presented in highly stylized stories, VR seemed the perfect medium to explore. What took me by complete surprise was the emotional impact, the intimacy and immediacy the immersive experience allows,” says Darev. “VR is truly a medium that highlights our collective responsibility to create original and diverse content through the power of emerging technologies that foster curiosity and the imagination.”

“Other than a live show, 3D/360VR is the ideal medium for viewers to experience the rhythmic movement in The Cabiri’s performances. Because they have the feeling of being within the scene, the viewers become so engaged in the experience that they feel the emotional and dramatic impact,” explains Beahm, who is also the cinematographer, editor and post talent for The Cabiri film.

Beahm has a long list of credits to his name, and a strong affinity for the post process that requires a keen sense of the look and feel a director or producer is striving to achieve in a film. “The artistic and technical functions of the post process take a film from raw footage to a good result, and with the right post artist and software tools to a great film,” he says. “This is why I put a strong emphasis on the post process, because along with a great story and cinematography, it’s a key component of creating a noteworthy film. VR and 3D require several complex steps, and you want to use tools that simplify the process so you can save time, create high-quality results and stay within budget.”

For The Cabiri film, he used the Kandao Obsidian S camera, filming in 6K 3D360, then SGO’s Mistika VR for its stereo 3D optical-flow stitching. He edited in Adobe’s Premiere Pro CC 2018 and finished in Assimilate’s Scratch VR, using its 3D/360VR painting, tracking and color grading tools. He then delivered in 4K 3D360 to Pixvana’s Spin Studio.
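None of those steps happen at the command line in Beahm’s pipeline, but for readers who want to experiment, the final deliverable step — downscaling a stereo top-bottom 6K equirectangular master to a 4K 3D360 file — can be approximated with ffmpeg. This is a generic sketch with hypothetical file names, not the actual Mistika VR/Scratch VR workflow:

    # Hypothetical 6K-to-4K downscale of a top-bottom stereo 360 master
    # with ffmpeg (must be installed); illustrative only, not the Mistika
    # VR / Scratch VR pipeline described above.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "master_6k_tb360.mov",  # hypothetical 6K 3D360 master
        "-vf", "scale=3840:3840",               # 4K top-bottom: 3840x1920 per eye
        "-c:v", "libx264", "-crf", "18",        # high-quality H.264 delivery
        "-c:a", "copy",                         # pass the audio through untouched
        "deliver_4k_tb360.mp4",
    ], check=True)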

“Scratch VR is fast. For example, with the VR transform-and-vector paint tools I can quickly paint out the nadir, or easily delete unwanted artifacts like portions of a camera rig and wires, or even a person. It’s also easy to add in graphics and visual effects with the built-in tracker and compositing tools. It’s also the only software I use that renders content in the background while you continue working on your project. Another advantage is that Scratch VR will automatically connect to an Oculus headset for viewing 3D and 360,” he continues. “During our color grading session, Bogdan would wear an Oculus Rift headset and give me suggestions about changes I should make, such as saturation and hues, and I could quickly do these on the fly and save the versions for comparison.”


Behind the Title: Spacewalk Sound’s Matthew Bobb

NAME: Matthew Bobb

COMPANY: Pasadena, California’s Spacewalk Sound

CAN YOU DESCRIBE YOUR COMPANY?
We are a full-service audio post facility specializing in commercials, trailers and spatial sound for virtual reality (VR). We have a heavy focus on branded content, with clients such as Panda Express and Biore and studios like Warner Bros., Universal and Netflix.

WHAT’S YOUR JOB TITLE?
Partner/Sound Supervisor/Composer

WHAT DOES THAT ENTAIL?
I’ve transitioned more into the sound supervisor role. We have a fantastic group of sound designers and mixers who work here, plus a support staff to keep us on track and on budget. Putting my faith in them has allowed me to step away from the small details and look at the bigger picture on every project.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
We’re still a small company, so while I mix and compose a little less than before, I find my days being filled with keeping the team moving forward. Most of what falls under my role is approving mixes, prepping for in-house clients the next day, sending out proposals and following up on new leads. A lot of our work is short form, so projects are in and out the door pretty fast — sometimes it’s all in one day. That means I always have to keep one eye on what’s coming around the corner.

The Greatest Showman 360

WHAT’S YOUR FAVORITE PART OF THE JOB?
Lately, it has been showing VR to people who have never tried it or who have had a bad first experience, which is unfortunate since it is a great medium. However, that all changes when you see someone come out of a headset exclaiming, “Wow, that is a game changer!”

We have been very fortunate to work on some well-known and loved properties, and having people get a whole new experience out of something familiar is exciting.

WHAT’S YOUR LEAST FAVORITE?
Dealing with sloppy edits. We have been pushing our clients to bring us into the fold as early as v1 to make suggestions on the flow of each project. I’ll keep my eye tuned to the timing of the dialog in relation to the music and effects, while making sure attention has been paid to the pacing of the edit to the music. I understand that the editor and director will have their attention elsewhere, so I’m trying to bring up potential issues they may miss early enough that they can be addressed.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I would say 3pm is pretty great most days. I should have accomplished something major by this point, and I’m moments away from that afternoon iced coffee.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I’d be crafting the ultimate sandwich, trying different combinations of meats, cheeses, spreads and veggies. I’d have a small shop, preferably somewhere tropical. We’d be open for breakfast and lunch, close around 4pm, and then I’d head to the beach to sip on Russell’s Reserve Small Batch Bourbon as the sun sets. Yes, I’ve given this some thought.

WHY DID YOU CHOOSE THIS PROFESSION?
I came from music but quickly burned out on the road. Studio life suited me much more, except all the music studios I worked at seemed to lack focus, or at least the clientele lacked focus. I fell into a few sound design gigs on the side and really enjoyed the creativity and reward of seeing my work out in the world.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We had a great year working alongside SunnyBoy Entertainment on VR content for the Hollywood studios, including IT: Float, The Greatest Showman 360, Annabelle Creation: Bee’s Room and Pacific Rim: Inside the Uprising 360. We also released our first piece of interactive content, IT: Escape from Pennywise, for Gear VR and iOS.

Most recently, I worked on Scoring The Last Jedi: A 360 VR Experience, which takes Star Wars fans on a VIP behind-the-scenes intergalactic expedition, giving them a virtual tour of The Last Jedi’s production and soundstages and dropping them face-to-face with Academy Award-winning film composer John Williams and director Rian Johnson.

Personally, I got to compose two Panda Express commercials, which was a real treat considering I sustained myself through college on a healthy diet of orange chicken.

IT: Float

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
IT: Float was very special. It was exciting to take an existing property that was not only created by Stephen King but also already loved by millions of people, and expand on it. The experience brought the viewer under the streets and into the sewers with Pennywise the clown. We were able to get very creative with spatial sound, using his voice to guide you through the experience without being able to see him. You never knew where he was lurking. The 360 audio really ramped up the terror! Plus, we had a great live activation at San Diego Comic-Con, where thousands of people came through and left pumped after getting a glimpse of the remake.
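For readers curious how a voice can steer attention in 360 audio, the core of it is ambisonic encoding: a mono source is written into a multichannel signal that captures its direction. Below is a minimal, textbook-style sketch of a first-order ambisonic (FuMa B-format) encode — a generic illustration, not Spacewalk Sound’s actual tooling:

    # Generic first-order ambisonic (FuMa B-format) encode of a mono source.
    # Positioning a voice this way is what lets it "guide" a listener in 360.
    import numpy as np

    def encode_foa(mono, azimuth_deg, elevation_deg=0.0):
        az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
        w = mono / np.sqrt(2.0)              # W: omnidirectional component
        x = mono * np.cos(az) * np.cos(el)   # X: front/back axis
        y = mono * np.sin(az) * np.cos(el)   # Y: left/right axis
        z = mono * np.sin(el)                # Z: up/down axis
        return np.stack([w, x, y, z])        # 4-channel W/X/Y/Z signal

    # A stand-in "voice" placed hard left and slightly above the listener
    voice = np.random.randn(48000)           # 1 second of noise at 48kHz
    bformat = encode_foa(voice, azimuth_deg=90, elevation_deg=20)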

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
It’s hard to imagine my life without these three: Spotify Premium, no ads! Philips Hue lights for those vibes. Lastly, Slack keeps our office running. It’s our not-so-secret weapon.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I treat social media as an escape. I’ll follow The Onion for a good laugh, or Anthony Bourdain to see some far-flung corner of the earth I didn’t know about.

DO YOU LISTEN TO MUSIC WHEN NOT MIXING OR EDITING?
If I’m doing busy work, I prefer something instrumental like Eric Prydz, Tycho, Bonobo — something with a melody and a groove that won’t make me fall asleep, but isn’t too distracting either.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
The best part about Los Angeles is how easy it is to escape Los Angeles. My family will hit the road for long weekends to Palm Springs, Big Bear or San Diego. We find a good mix of active (hiking) and inactive (2pm naps) things to do to recharge.


The-Artery embraces a VR workflow for Mercedes spots

The-Artery founder and director Vico Sharabani recently brought together an elite group of creative artists and skilled technologists to create a cross-continental VR production pipeline for Mercedes-Benz’s Masters tournament brand campaign called “What Makes Us.”

Emmy-nominated cinematographer Paul Cameron (Westworld) and VFX supervisor Rob Moggach co-directed the project, which features a series of six intense broadcast commercials — including two fully CGI spots that were “shot” in a completely virtual world.

The agency and The-Artery team, including Vico Sharabani (third from the right).

The two 30-second commercials, First and Can’t, are the first to be created using a novel realtime collaborative VR software application called Nu Design with Atom View technology. While in Los Angeles, Cameron worked within a virtual world, choosing camera bodies and lenses inside the space that allowed him to “shoot” POVs and angles that would have taken weeks to achieve in the real world.

The software enabled him to grab and move the camera while all artistic camera direction was recorded virtually and used for final renders. This allowed both Sharabani, who was in NYC, and Moggach, who was in Toronto, to interact live and in realtime as if they were standing together on a physical set.
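Nurulize has not published Nu Design’s internals, so as a purely conceptual illustration, “recording artistic camera direction for final renders” amounts to sampling the tracked camera’s pose every frame and exporting the take for the render pipeline to replay. All names in this sketch are invented:

    # Hypothetical sketch of recording a virtual-camera take for offline
    # rendering; Nu Design's real API is not public, so everything here
    # is invented for illustration.
    import json, time

    def mock_pose_source():
        """Stand-in for the VR runtime's tracked camera pose."""
        return [0.0, 1.6, 3.0], [0.0, 0.0, 0.0, 1.0]  # position, quaternion

    def record_take(get_pose, duration_s=2.0, fps=24):
        frames = []
        for i in range(int(duration_s * fps)):
            pos, rot = get_pose()
            frames.append({"frame": i, "position": pos, "rotation": rot})
            time.sleep(1.0 / fps)  # a real engine would be frame-locked
        return frames

    # Export the take so the final CG render can replay the exact move
    with open("take_001.json", "w") as f:
        json.dump({"fps": 24, "frames": record_take(mock_pose_source)}, f)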

We reached out to Sharabani, Cameron and Moggach for details on the VR workflow and how they see the technology impacting production and creativity.

How did you come to know about Nurulize and the Nu Design Atom View technology?
Vico Sharabani: Scott Metzger, co-founder of Nurulize, is a long-time friend, colleague and collaborator. We have all been supporting each other’s careers and initiatives, so as soon as the alpha version of Nu Design was operational, we jumped on the opportunity of deploying it in real production.

How does the ability to shoot in VR change the production paradigm moving forward?
Rob Moggach: From scout to pre-light to shoot, through to dailies and editorial, it allows us to collaborate on digital productions in a traditional filmmaking process with established roles and procedures that are known to work.

Instead of locking animated productions into a rigid board, previs, animation workflow, a director can make decisions on editorial and find unexpected moments in the capture that wouldn’t necessarily be boarded and animated otherwise. Being able to do all of this without geographical restriction and still feel like you’re together in the same room is remarkable.

What types of projects are ideal for this new production pipeline?
Sharabani: The really beautiful thing for The-Artery, as a first-time user of this technology, is to prove that this workflow can be used by companies like us on every project, and not only in films by Steven Spielberg and James Cameron. The obvious ideal fits are fully CGI productions; previs of big CGI environments that need to be considered in photography; virtual previs of scouted locations in remote or dangerous places; blocking of digital sets on pre-existing greenscreen or partially built stages; and multiple remote creative teams that need to share a vision and input.

What are the specific benefits?
Moggach: With a virtual pipeline, we are able to…
1) Work much faster than traditional previs to quickly capture multiple camera setups.
2) Visualize environments and CGI with a camera in-hand to find shots you didn’t know were there on screen.
3) Interact closely regardless of location and truly feel together in the same place.
4) Use known filmmaking processes, allowing us to capitalize on established wisdom and experience.

What impact will it have on creativity?
Paul Cameron: For me, the VR workflow had a great impact on the overall creative approach for both commercials. It enabled me to go into the environment and literally grab a camera, move around the car, be in the middle of the car, pull the camera over the car. Basically, it allowed me to put the camera in places I had always wanted to put the camera but that would take hours to reach with cranes or scaffolding.

The other fascinating thing is that you are able to scale the set up and down. For instance, I was able to scale the car down to 25% of its normal size and make a very drastic handheld camera move over the car with a VR camera, and with the combination of slowing the move down and smoothing it a bit, we were able to design camera moves that were very organic and natural.
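In code terms, the scale-then-smooth trick Cameron describes might look like the following conceptual numpy sketch — purely illustrative, with no relation to the actual Nu Design tooling:

    # Conceptual sketch of "shoot small, play back big": rescale a handheld
    # path recorded on a 25%-scale set, slow it down, then smooth it into
    # an organic full-scale move. Illustrative only.
    import numpy as np

    def rescale_and_smooth(path, set_scale=0.25, slow_factor=2.0, window=9):
        path = np.asarray(path, dtype=float) / set_scale  # undo the 25% set scale
        t_old = np.linspace(0.0, 1.0, len(path))
        t_new = np.linspace(0.0, 1.0, int(len(path) * slow_factor))  # slow down
        resampled = np.stack(
            [np.interp(t_new, t_old, path[:, k]) for k in range(3)], axis=1)
        kernel = np.ones(window) / window                 # moving-average filter
        return np.stack(
            [np.convolve(resampled[:, k], kernel, mode="same") for k in range(3)],
            axis=1)

    smoothed = rescale_and_smooth(np.random.rand(48, 3))  # 48 recorded positions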

I think it also allowed me to achieve a greater understanding of the set size and space, the geometry of the set and the relationship of the car to the set. In the past, it would be a process of going through a wireframe, waiting for the rendering — in this case, the car — and programming camera moves. It basically helps with conceptualization of camera moves and shot design in a new way for me.

Also, as a director of photography, it is very empowering to be able to grab the camera literally with a controller and move through that space. Again, it just takes a matter of seconds to make very dramatic camera moves, whereas even on set it could take upwards of an hour or two to move a technocrane and actually get a feel for that shot, so it is very empowering overall.

What does it now allow directors to achieve?
Cameron: One of the better features of the VR workflow is that you can actually just teleport yourself around the set while you are inside of it. So, basically, you picture yourself inside this set, and with a controller in each hand, you have the ability to kind of teleport yourself to different perspectives — in this case, around the automobile and the wireframe geometry of the set. It gives you a very good idea of the perspectives from different angles, and you can move around really quickly.

The other thing that I found fascinating was that not only can you move around this set, in this case, I was able to fly… upwards of about 150 feet and look down on the set. This was, while you are immersed in the VR world, quite intoxicating. You are literally flying and hovering above the set, and it kind of feels like you are standing on a beam with no room to move forward or backward without falling.

Paul Cameron

So the ability to move around in an endless set perspective-wise, and to teleport yourself around and above the set looking down, was amazing. In the case of the Can’t commercial, I was able to teleport to the other side of the wind turbine and look back at the automobile.

Although we had the 3D CADs of sets in the past, and we were able to travel around and look at camera positions, somehow the immediacy and the power of being in the VR environment with the two controllers was quite powerful. I think for one of the sessions I had the glasses on for almost four hours straight. We recorded multiple camera moves, and everybody was quite shocked that I was in the environment for that long. But for me, it was like being on a set, almost like a pre-pre-light or something, where I was able to have my space as a director and move around and get to see my angles and design my shots.

What other tools did you use?
Sharabani: Houdini for CG, Redshift (with support from GridMarkets) for rendering, Nuke for compositing, Flame for finishing, Resolve for color grading and Premiere for editing.


NextComputing, Z Cam, Assimilate team on turnkey VR studio

NextComputing, Z Cam and Assimilate have teamed up to create complete turnkey VR studios designed to cover all aspects of the immersive production process and help creatives be more creative.

According to Assimilate CEO Jeff Edson, “Partnering with Z Cam last year was an obvious opportunity to bring together the best of integrated 360 cameras with a seamless workflow for both live and post productions. The key is to continue to move the market from a technology focus to a creative focus. Integrated cameras took the discussions up a level of integration away from the pieces. There have been endless discussions regarding capable platforms for 360; the advantage we have is we work with just about every computer maker as well as the component companies, like CPU and GPU manufacturers. These are companies that are willing to create solutions. Again, this is all about trying to help the market focus on the creative as opposed to debates about the technology, and letting creative people create great experiences and content. Getting the technology out of their way and providing solutions that just work helps with this.”

The companies are offering two configurations: the Foundation VR Studio and the Power VR Studio.

The Foundation VR Studio, which costs $8,999 and is available now, includes:
• NextComputing Edge T100 workstation
o CPU: 6-core Intel Core i7-8700K 3.7GHz processor
o Memory: 16GB DDR4 2666MHz RAM
• Z Cam S1 6K professional VR camera
• Z Cam WonderStitch software for offline stitching and profile creation
• Assimilate Scratch VR Z post software and live streaming for Z Cam

Then there is the Power VR Studio, for $10,999, which is also available now. It includes:
• NextComputing Edge T100 workstation
o CPU: 10-core Intel Core i9-7900X 3.3GHz processor
o Memory: 32GB DDR4 2666MHz RAM
• Z Cam S1 6K professional VR camera
• Z Cam WonderStitch software for offline stitching and profile creation
• Assimilate Scratch VR Z post software and live streaming for Z Cam

These companies will be at NAB demoing the systems.


GTC embraces machine learning and AI

By Mike McCarthy

I had the opportunity to attend GTC 2018, Nvidia‘s 9th annual technology conference in San Jose this week. GTC stands for GPU Technology Conference, and GPU stands for graphics processing unit, but graphics makes up a relatively small portion of the show at this point. The majority of the sessions and exhibitors are focused on machine learning and artificial intelligence.

And the majority of the graphics developments are centered around analyzing imagery, not generating it. Whether that is classifying photos on Pinterest or giving autonomous vehicles machine vision, it is based on the capability of computers to understand the content of an image. Now DriveSim, Nvidia’s new simulator for virtually testing autonomous drive software, dynamically creates imagery for the other system in the Constellation pair of servers to analyze and respond to, but that is entirely machine-to-machine imagery communication.

The main exception to this non-visual usage trend is Nvidia RTX, which allows raytracing to be rendered in realtime on GPUs. RTX can be used through Nvidia’s OptiX API, as well as Microsoft’s DirectX RayTracing API, and eventually through the open source Vulkan cross-platform graphics solution. It integrates with Nvidia’s AI Denoiser to use predictive rendering to further accelerate performance, and can be used in VR applications as well.

Nvidia RTX was first announced at the Game Developers Conference last week, but the first hardware to run it was just announced here at GTC, in the form of the new Quadro GV100. This $9,000 card replaces the existing Pascal-based GP100 with a Volta-based solution. It retains the same PCIe form factor, the quad DisplayPort 1.4 outputs and the NV-Link bridge to pair two cards at 200GB/s, but it jumps the GPU RAM per card from 16GB to 32GB of HBM2 memory. The GP100 was the first Quadro offering since the K6000 to support double-precision compute processing at full speed, and the increase from 3,584 to 5,120 CUDA cores should provide a 40% increase in performance, before you even look at the benefits of the 640 Tensor Cores.
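As a quick back-of-the-envelope check on that estimate — a first-order calculation from core counts alone, ignoring clock changes and the Tensor Cores:

    # CUDA-core ratio of the GV100 vs. the GP100: a first-order estimate
    # that ignores clock-speed differences and the 640 Tensor Cores.
    gp100_cores, gv100_cores = 3584, 5120
    print(f"{gv100_cores / gp100_cores - 1:.0%}")  # -> 43%, roughly the quoted 40%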

Hopefully, we will see simpler versions of the Volta chip making their way into a broader array of more budget-conscious GPU options in the near future. The fact that the new Nvidia RTX technology is stated to require Volta-architecture GPUs leads me to believe that they must be right on the horizon.

Nvidia also announced a new all-in-one GPU supercomputer — the DGX-2 supports twice as many Tesla V100 GPUs (16) with twice as much RAM each (32GB) compared to the existing DGX-1. This provides 81,920 CUDA cores addressing 512GB of HBM2 memory over a fabric of new NV-Link switches, as well as dual Xeon CPUs, InfiniBand or 100GbE connectivity, and 32TB of SSD storage. This $400K supercomputer is marketed as the world’s largest GPU.

Nvidia and their partners had a number of cars and trucks on display throughout the show, showcasing various pieces of technology that are being developed to aid in the pursuit of autonomous vehicles.

Also on display in the category of “actually graphics related” was the new Max-Q version of the mobile Quadro P4000, which is integrated into PNY’s first mobile workstation, the Prevail Pro. Besides supporting professional VR applications, the HDMI and dual DisplayPort outputs allow a total of three external displays up to 4K each. It isn’t the smallest or lightest 15-inch laptop, but it is the only system under 17 inches I am aware of that supports the P4000, which is considered the minimum spec for professional VR implementation.

There are, of course, lots of other vendors exhibiting their products at GTC. I had the opportunity to watch 8K stereo 360 video playing off of a laptop with an external GPU. I also tried out the VRHero 5K Plus enterprise-level HMD, which brings the VR experience to a whole other level. Much more affordable is TPCast’s $300 wireless upgrade for Vive and Rift HMDs, the first of many untethered VR solutions. HTC has also recently announced the Vive Pro, which will be available in April for $800. It increases the resolution by a third in both dimensions, to 2880×1600 total, and moves from HDMI to DisplayPort 1.2 and USB-C. Besides VR products, there were also all sorts of robots in various forms on display.

Clearly the world of GPUs has extended far beyond the scope of accelerating computer graphics generation, and Nvidia is leading the way in bringing massive information processing to a variety of new and innovative applications. And if that leads us to hardware that can someday raytrace in realtime at 8K in VR, then I suppose everyone wins.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Supersphere offering flypacks for VR/360 streaming

Supersphere, a VR/360° production studio, will be at NAB this year debuting 12G glass-to-glass flypacks optimized for live VR/360° streaming. These multi-geometry (mesh/rectilinear/equirectangular) flypacks can handle 360°, 180°, 4K or HD production and seamlessly mix and match each geometry. They also include built-in VDN (video distribution network) encoding and delivery for live streaming to any platform or custom player.
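To give a sense of what switching among those geometries involves, here is a minimal numpy sketch that resamples an equirectangular frame into a rectilinear (flat) view — generic projection math, not Supersphere’s implementation:

    # Generic equirectangular-to-rectilinear resampling: for each output
    # pixel, cast a ray, convert it to spherical coordinates and look up
    # the matching equirect pixel. Not Supersphere's implementation.
    import numpy as np

    def equirect_to_rectilinear(eq, out_w=640, out_h=360, fov_deg=90.0):
        h, w = eq.shape[:2]
        f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)  # pinhole focal length
        x, y = np.meshgrid(np.arange(out_w) - out_w / 2,
                           np.arange(out_h) - out_h / 2)
        theta = np.arctan2(x, f)                 # longitude of each ray
        phi = np.arctan2(y, np.hypot(x, f))      # latitude of each ray
        u = ((theta / (2 * np.pi) + 0.5) * (w - 1)).astype(int)
        v = ((phi / np.pi + 0.5) * (h - 1)).astype(int)
        return eq[v, u]

    view = equirect_to_rectilinear(np.zeros((960, 1920, 3), dtype=np.uint8))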

“Live music, both in streaming and in ticket sales, has posted consistent growth in the US and worldwide. It’s a multibillion-dollar industry and only getting bigger. We are investing in the immersive streaming market because we see that trend reflected in our client requests,” explains Supersphere founder/EP Lucas Wilson. “Clients always want to provide audiences with the most engaging experience possible. An immersive environment is the way to do it.”

Each flypack comes standard with Z Cam K1 Pro 180° cameras and Z Cam S1 Pro 360° cameras, and is customizable to any camera as productions demand. The flypacks are also equipped with Blackmagic’s latest ATEM Production Studio 4K live production switchers to facilitate multi-camera live production across a range of video sources. The included Assimilate Scratch VR Z enables realtime geometry, stitching, color grading, finishing and ambisonic audio. The system also offers fully integrated transcoding and delivery — Teleos Media’s VDN (video distribution network) delivers immersive experiences to any device with an instant-start experience, sustained 16Mbps at high frame rates, and 4K + VR resolutions. This allows clients to easily build custom 360° video players on their websites or apps as a destination for live-streamed content, in addition to streaming directly to YouTube, Facebook and other popular platforms.

“These flypacks provide an incredibly robust workflow that takes the complexity out of immersive live production — capable of handling the data required for stunning high-resolution projects in one flexible end-to-end package,” says Wilson. “Plus with Teleos’ VDN capabilities, we make it easy for any client to live stream high-end content directly to whatever device or app best suits their customers’ needs, including the option to quickly build custom, fully integrated 360° live players.”