By Mike McCarthy
Last week, I had the opportunity to attend Nvidia’s GPU Technology Conference, GTC 2016. Five thousand people filled the San Jose Convention Center for nearly a week to learn about GPU technology and how to use it to change our world. GPUs were originally designed to process graphics (hence the name), but are now used to accelerate all sorts of other computational tasks.
The current focus of GPU computing falls into three areas:
Virtual reality is a logical extension of the original graphics processing design. VR requires high frame rates with low latency to keep up with the user's head movements; otherwise the lag causes motion sickness. This demands lots of processing power, and the imminent releases of the Oculus Rift and HTC Vive head-mounted displays are sure to sell many high-end graphics cards. The new Quadro M6000 24GB PCIe card and M5500 mobile GPU have been released to meet this need.
Autonomous vehicles are being developed that will gradually take over many or all of the driver's current roles in operating a vehicle. This requires processing large volumes of sensor data and making decisions in real time based on inferences drawn from that information. Nvidia has developed a number of hardware solutions to meet these needs, with the Drive PX and Drive PX 2 expected to be the hardware platform that many car manufacturers rely on for that processing.
Artificial intelligence has made significant leaps recently, and the need to process large data sets has grown exponentially. To that end, Nvidia has focused its newest chip development, at least initially, not on graphics but on a deep-learning supercomputer chip. The first Pascal-generation GPU, the Tesla P100, is a monster of a chip, with 15 billion 16nm transistors on a 600mm² die. It should be twice as fast as current options for most tasks, and even faster for double-precision work and/or large data sets. The chip is initially available in the new DGX-1 supercomputer for $129K, which includes eight of the new GPUs connected via NVLink. I am looking forward to seeing the same graphics processing technology on a PCIe-based Quadro card at some point in the future.
While those three applications for GPU computing all had dedicated hardware released for them, Nvidia has also been working to make sure that software gets developed to use the level of processing power it can now offer users. To that end, Nvidia has been releasing all sorts of SDKs and libraries to help developers harness the power of the hardware that is now available. For VR, there is Iray VR, a raytracing toolset for creating photorealistic VR experiences, and Iray VR Lite, which allows users to create still renderings to be previewed on HMD displays. There is also a broader VRWorks collection of tools for helping software developers adapt their work for VR experiences. For autonomous vehicles, Nvidia has developed libraries of tools for mapping and sensor image analysis, and a deep-learning decision-making neural net for driving called DaveNet. For AI computing, cuDNN accelerates emerging deep-learning neural networks running on GPU clusters and supercomputing systems like the new DGX-1.
What Does This Mean for Post Production?
So from a post perspective (ha!), what does this all mean for the future of post production? First, newer and faster GPUs are coming, even if they are not here yet. Much farther off, deep-learning networks may someday log and index all of your footage for you. But the biggest change coming down the pipeline is virtual reality, led by the upcoming commercially available head-mounted displays (HMDs). Gaming will drive HMDs into the hands of consumers, and HMDs in the hands of consumers will drive demand for a new type of experience for storytelling, advertising and expression.
As I see it, VR can be created in a series of increasingly immersive steps. The starting point is the HMD itself, placing the viewer into an isolated, large-feeling environment. Existing flat or stereoscopic content can be viewed without large screens, requiring only minimal processing to format the image for the HMD. The next step is a big jump: supporting head tracking, which allows the viewer to control the direction they are looking. This is where we begin to see changes required at every stage of the content production and post pipeline, because scenes need to be created and filmed in 360 degrees.
The cameras required to capture 360 degrees of imagery produce a series of video streams that must be stitched together into a single image, which then needs to be edited and processed. The entire image is made available to the viewer, who chooses which angle to view as it plays. This can be done as a flattened image sphere or, with more source data and processing, as a stereoscopic experience. The user can control the angle they view the scene from, but not the location they view it from, which is dictated by the physical placement of the 360-camera rig. Video-Stitch just released an all-in-one package for capturing, recording and streaming 360 video called the Orah 4i, which may make the format more accessible to consumers.
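To make the head-tracking step concrete, here is a minimal sketch of the math a 360 player performs: mapping the viewer's head angles to a spot in the stitched equirectangular frame. The function names and the yaw/pitch conventions are my own illustration, not any particular player's API.

```python
import math

def equirect_uv(yaw, pitch):
    """Map a view direction (radians) to normalized (u, v) coordinates
    in an equirectangular 360 frame.
    yaw: rotation around the vertical axis, in -pi..pi
    pitch: elevation above the horizon, in -pi/2..pi/2
    """
    u = (yaw + math.pi) / (2 * math.pi)   # 0..1 across the full 360 degrees
    v = (math.pi / 2 - pitch) / math.pi   # 0..1 from top edge to bottom edge
    return u, v

def uv_to_pixel(u, v, width, height):
    """Convert normalized coordinates to pixel indices in the stitched frame."""
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return x, y
```

Looking straight ahead (yaw and pitch both zero) lands exactly in the center of the frame, which is why the seam of a stitched 360 image is usually placed directly behind the default view.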
Allowing the user to fully control their perspective and move around within a scene is what makes true VR so unique, but it is also much more challenging to create content for. All viewed images must be rendered on the fly, based on the user's motion and position. These renders require all content to exist in 3D space for the perspective to be generated correctly. While this is nearly impossible for traditional camera footage, for animated content it is purely a render challenge: rendering that used to take weeks must now be done in real time, at much higher frame rates, to keep up with user movement.
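The real-time constraint is worth putting in numbers. This is just illustrative arithmetic, assuming a stereo HMD where both eye views are rendered within each display refresh:

```python
def frame_budget_ms(fps, eyes=2):
    """Per-eye render budget in milliseconds for a target display rate."""
    return 1000.0 / fps / eyes

# A 90Hz stereo HMD leaves roughly 5.6ms to render each eye's view,
# versus hours or days per frame for an offline animated feature.
```

That budget also has to cover sensor processing and lens-distortion correction, which is why the hardware recommendations discussed below start at high-end GPUs.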
For any camera image, depth information is required. Depth can be estimated with calculations based on motion, but not with the level of accuracy required. Instead, if many angles are recorded simultaneously, a 3D analysis of the combined footage can generate a 3D version of the scene. This is already being done in limited cases for advanced VFX work, but VR would take it to a whole new level. For static content, a 3D model can be created by processing many still images, but storytelling will require 3D motion within that environment. This all seems pretty far out there for a traditional post workflow, but there is one case that lends itself to this format.
Motion capture-based productions already have the 3D data required to render VR perspectives, because VR is the same basic concept as motion tracking cinematography, except that the viewer controls the “camera” instead of the director. We are already seeing photorealistic motion capture movies showing up in theaters, so these are probably the first types of productions that will make the shift to producing full VR content.
Viewing this content is still a challenge, and here again Nvidia GPUs are used on the consumer end. Any VR viewing requires sensor input to track the viewer, which must be processed, and the resulting image must be rendered, usually twice for stereo viewing. This requires a significant level of processing power, so Nvidia has created two tiers of hardware recommendations to ensure that users get a quality VR experience. For consumers, the VR Ready program covers complete systems based on GeForce 970 or higher GPUs, which meet the requirements for comfortable VR viewing. VR Ready for Professionals is a similar program for the Quadro line, covering M5000 and higher GPUs in complete systems from partner ISVs. Currently, MSI's new WT72 laptop with the new M5500 GPU is the only mobile platform certified VR Ready for Professionals. The new mobile Quadro M5500 has the same architecture as the desktop workstation Quadro M5000, with all 2048 CUDA cores and 8GB of RAM.
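Rendering "twice for stereo viewing" means generating the scene from two slightly offset camera positions, one per eye. Here is a minimal sketch of that offset, with a hypothetical helper name and a typical (assumed) interpupillary distance of 64mm:

```python
def eye_positions(head_pos, right_dir, ipd=0.064):
    """Offset the tracked head position by half the interpupillary
    distance along the head's right-facing unit vector, giving the
    two camera positions used for the left- and right-eye renders.
    head_pos, right_dir: 3-component sequences; ipd in meters.
    """
    half = ipd / 2.0
    left = [p - half * r for p, r in zip(head_pos, right_dir)]
    right = [p + half * r for p, r in zip(head_pos, right_dir)]
    return left, right
```

Each frame, the tracker updates `head_pos` and `right_dir`, and the scene is rendered once from each returned position, which is what doubles the GPU load relative to flat playback.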
While the new top-end Maxwell-based Quadro GPUs are exciting, I am really looking forward to seeing Nvidia’s Pascal technology used for graphics processing in the near future. In the meantime, we have enough performance with existing systems to start processing 360-degree videos and VR experiences.
Mike McCarthy is a freelance post engineer and media workflow consultant based in Northern California. He shares his 10 years of technology experience on www.hd4pc.com, and he can be reached at email@example.com.