Tag Archives: Nvidia

PNY’s PrevailPro mobile workstations feature 4K displays, are VR-capable

PNY has launched the PrevailPro P4000 and P3000, thin and light mobile workstations. Built with Nvidia's Max-Q design, these systems are designed from the Quadro GPU out.

“Our PrevailPro [has] the ability to drive up to four 4K UHD displays at once, or render vividly interactive VR experiences, without breaking backs or budgets,” says Steven Kaner, VP of commercial and OEM sales at PNY Technologies. “The increasing power efficiency of Nvidia Quadro graphics and our P4000-based P955 Nvidia Max-Q technology platform allows PNY to deliver professional performance and features in thin, light, cool and quiet form factors.”

P3000

PrevailPro pairs the Pascal-architecture P4000 and P3000 mobile GPUs with Intel Core i7-7700HQ CPUs and the HM175 Express chipset.

“Despite ever increasing mobility, creative professionals require workstation class performance and features from their mobile laptops to accomplish their best work, from any location,” says Bob Pette, VP, Nvidia Professional Visualization. “With our new Max-Q design and powered by Quadro P4000 and P3000 mobile GPUs, PNY’s new PrevailPro lineup offers incredibly light and thin, no-compromise, powerful and versatile mobile workstations.”

The PrevailPro systems feature either a 15.6-inch 4K UHD or FHD display – and the ability to drive three external displays (2x mDP 1.4 and HDMI 2.0 with HDCP), for a total of four simultaneously active displays. The P4000 version supports fully immersive VR, the Nvidia VRWorks software development kit and innovative immersive VR environments based on the Unreal or Unity engines.

With 8GB (P4000) or 6GB (P3000) of GDDR5 GPU memory, up to 32GB of DDR4 2400MHz DRAM, 512GB SSD availability, HDD options up to 2TB, a comprehensive array of I/O ports, and the latest Wi-Fi and Bluetooth implementations, PrevailPro is compatible with all commonly used peripherals and network environments — and provides pros with the interfaces and storage capacity needed to complete business-critical tasks. Depending on the use case, MobileMark 2014 projects that the embedded Li-polymer battery can reach five hours of runtime over a lifetime of 1,000 charge/discharge cycles.

PrevailPro’s thin and light form factor measures 14.96 x 9.8 x 0.73 inches (379 x 248 x 18mm) and weighs 4.8 lbs.

 

Choosing the right workstation set-up for the job

By Lance Holte

Like virtually everything in the world of filmmaking, the options for a perfect editorial workstation are almost infinite. The vast majority of systems can be greatly customized and expanded, whether by custom order, upgraded internal hardware or with expansion chassis and I/O boxes. In a time when many workstations are purchased, leased or upgraded for a specific project, the workstation buying process is largely determined by the project’s workflow and budget.

One of Harbor Picture Company’s online rooms.

In my experience, no two projects have identical workflows. Even if two projects are very similar, there are usually some slight differences — a different editor, a new camera, a shorter schedule, bigger storage requirements… the list goes on and on. The first step for choosing the optimal workstation(s) for a project is to ask a handful of broad questions that are good starters for workflow design. I generally start by requesting the delivery requirements, since they are a good indicator of the size and scope of the project.

Then I move on to questions like:

What are the camera/footage formats?
How long is the post production schedule?
Who is the editorial staff?

Often there aren’t concrete answers to these questions at the beginning of a project, but even rough answers point the way to follow-up questions. For instance, Q: What are the video delivery requirements? A: It’s a commercial campaign — HD and SD ProRes 4444 QTs.

Simple enough. Next question.

Christopher Lam from SF’s Double Fine Productions. Courtesy of Wacom.

Q: What is the camera format? A: Red Weapon 6K, because the director wants to be able to do optical effects and stabilize most of the shots. This answer makes it very clear that we’re going to be editing offline, since the commercial budget doesn’t allow for the purchase of a blazing system with a huge, fast storage array.

Q: What is the post schedule? A: Eight weeks. Great. This should allow enough time to transcode ProRes proxies for all the media, followed by offline and online editorial.
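As a quick sanity check on that decision, the proxy storage footprint can be roughed out in a few lines. The bitrate below is an assumption (1080p ProRes Proxy runs on the order of 45Mb/s), not a measured figure:

    # Rough proxy-storage estimate for an offline edit; all figures are assumptions.
    def proxy_storage_gb(hours_of_footage, proxy_mbps=45.0):
        # 1080p ProRes Proxy is on the order of 45 Mb/s
        seconds = hours_of_footage * 3600
        return seconds * proxy_mbps / 8 / 1000  # Mb -> MB -> GB

    # e.g., 20 hours of 6K dailies transcoded to 1080p ProRes Proxy:
    print(round(proxy_storage_gb(20)))  # ~405GB

Even with generous overage, the offline media fits on a single modest array, which is exactly why the transcode-to-proxy route works on a commercial budget.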

At this point, it’s looking like there’s no need for an insanely powerful workstation, and the schedule looks like we’ll only need one editor and an assistant. Q: Who is the editorial staff? A: The editor is an Adobe Premiere guy, and the ad agency wants to spend a ton of time in the bay with him. Now, we know that agency folks really hate technical slowdowns that can sometimes occur with equipment that is pushing the envelope, so this workstation just needs to be something that’s simple and reliable. Macs make agency guys comfortable, so let’s go with a Mac Pro for the editor. If possible, I prefer to connect the client monitor directly via HDMI, since there are no delay issues that can sometimes be caused by HDMI to SDI converters. Of course, since that will use up the Mac Pro’s single HDMI port, the desktop monitors and the audio I/O box will use up two or three Thunderbolt ports. If the assistant editor doesn’t need such a powerful system, a high-end iMac could suffice.

(And for those who don’t mind waiting until the new iMac Pro ships in December, Apple’s latest release of the all-in-one workstation seems to signal a committed return for the company to the professional creative world – and is an encouraging sign for the Mac Pro overhaul in 2018. The iMac Pro addresses its non-upgradability by futureproofing itself as the most powerful all-in-one machine ever released. The base model starts at a hefty $4,999, but boasts options for up to a 5K display, 18-core Xeon processor, 128GB of RAM, and AMD Radeon Vega GPU. As more and more applications add OpenCL acceleration (AMD GPUs), the iMac Pro should stay relevant for a number of years.)

Now, our workflow would be very different if the answer to the first question had instead been A: It’s a feature film. Technicolor will handle the final delivery, but we still want to be able to make in-house 4K DCPs for screenings, EXR and DPX sequences for the VFX vendors, Blu-ray screeners and review files, and create all the high-res deliverables for mastering.

Since this project is a feature film, likely with a much larger editorial staff, the workflow might be better suited to editorial in Avid (to use project sharing/bin locking/collaborative editing). And since it turns out that Technicolor is grading the film in Blackmagic Resolve, it makes sense to online the film in Resolve and then pass the project over to Technicolor. Resolve will also cover any in-house temp grading and DCP creation and can handle virtually any video file.

PCs
For the sake of comparison, let’s build out some workstations on the PC side that will cover our editors, assistants, online editors, VFX editors and artists, and temp colorist. PC vs. Mac will likely be a hotly debated topic in this industry for some time, but there is no denying that a PC will return more cost-effective power than a Mac with similar specs, at the expense of increased complexity (and the potential for more technical issues). I also appreciate the longer lifespan of machines that are easy to upgrade and expand without requiring expansion chassis or external GPU enclosures.

I’ve had excellent success with the HP Z line — using Z840s for serious finishing machines and Z440s and Z640s for offline editorial workstations. There are almost unlimited options for desktop PCs, but only certain workstations and components are certified for various post applications, so it pays to do certification research when building a workstation from the ground up.

The Molecule‘s artist row in NYC.

It’s also important to keep the workstation components balanced. A system is only as strong as its weakest link, so a workstation with an insanely powerful GPU but only a handful of CPU cores will be outperformed by one with 16-20 cores and a moderately high-end GPU. Make sure the CPU, GPU and RAM are similarly matched to get the best bang for your buck and a more stable workstation.

Relationships!
Finally, in terms of getting the best bang for your buck, there’s one trick that reigns supreme: build great relationships with hardware companies and vendors. Hardware companies are always looking for quality input, advice and real-world testing. They are often willing to lend (or give) new equipment in exchange for case studies, reviews, workflow demonstrations and press. Creating relationships is not only a great way to stay up to date with cutting-edge equipment; it also expands your support options and technical network, and it is the best opportunity to be directly involved with development. So go to trade shows, be active on forums, teach, write and generally be as involved as possible, and your equipment will thank you.

Our Main Image Courtesy of editor/compositor Fred Ruckel.

 


Lance Holte is an LA-based post production supervisor and producer. He has spoken and taught at such events as NAB, SMPTE, SIGGRAPH and Createasphere. You can email him at lance@lanceholte.com.

What was new at GTC 2017

By Mike McCarthy

I, once again, had the opportunity to attend Nvidia’s GPU Technology Conference (GTC) in San Jose last week. The event has become much more focused on AI supercomputing and deep learning as those industries mature, but there was also a concentration on VR for those of us from the visual world.

The big news was that Nvidia released the details of its next-generation GPU architecture, code-named Volta. The flagship chip will be the Tesla V100 with 5,120 CUDA cores and 15 teraflops of computing power. It is a huge 815mm² chip, created with a 12nm manufacturing process for better energy efficiency. Most of its unique architectural improvements are focused on AI and deep learning, with specialized execution units for Tensor calculations, which are foundational to those processes.

Tesla V100
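Those headline numbers hang together arithmetically: with each CUDA core retiring one fused multiply-add (two floating-point operations) per clock, 15 teraflops implies a boost clock of roughly 1.46GHz. Nvidia didn't quote a clock speed in the announcement, so treat this as a derived figure:

    # Implied V100 boost clock from the quoted specs (derived, not quoted by Nvidia)
    cuda_cores = 5120
    flops = 15e12                 # 15 teraflops single precision
    flops_per_core_per_clock = 2  # one fused multiply-add counts as 2 FLOPs
    clock_hz = flops / (cuda_cores * flops_per_core_per_clock)
    print("%.2f GHz" % (clock_hz / 1e9))  # ~1.46 GHz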

Similar to last year’s GP100, the new Volta chip will initially be available in Nvidia’s SXM2 form factor for dedicated GPU servers like their DGX-1, which uses the NVLink bus, now running at 300GB/s. The new GPUs will be a direct swap-in replacement for the current Pascal-based GP100 chips. There will also be a 150W version of the chip on a PCIe card similar to their existing Tesla lineup, but only requiring a single half-length slot.

Assuming that Nvidia puts similar processing cores into their next generation of graphics cards, we should be looking at a 33% increase in maximum performance at the top end (5,120 CUDA cores on the V100 versus 3,840 on the Pascal GP100). The intermediate stages are more difficult to predict, since that depends on how they choose to tier their cards. But the increased efficiency should allow more significant performance gains in laptops, within existing thermal limitations.

Nvidia is continuing its pursuit of GPU-enabled autonomous cars with its DrivePX2 and Xavier systems for vehicles. The newest version will have a 512 Core Volta GPU and a dedicated deep learning accelerator chip that they are going to open source for other devices. They are targeting larger vehicles now, specifically in the trucking industry this year, with an AI-enabled semi-truck in their booth.

They also had a tractor showing off Blue River’s AI-enabled spraying rig, targeting individual plants for fertilizer or herbicide. It seems like farm equipment would be an optimal place to implement autonomous driving, allowing perfectly straight rows and smooth grades, all in a flat controlled environment with few pedestrians or other dynamic obstructions to be concerned about (think Interstellar). But I didn’t see any reference to them looking in that direction, even with a giant tractor in their AI booth.

On the software and application front, software company SAP showed an interesting implementation of deep learning that analyzes broadcast footage and other content looking to identify logos and branding, in order to provide quantifiable measurements of the effectiveness of various forms of brand advertising. I expect we will continue to see more machine learning implementations of video analysis, for things like automated captioning and descriptive video tracks, as AI becomes more mature.

Nvidia also released an “AI-enabled” version of Iray that uses image prediction to increase the speed of interactive ray tracing renders. I am hopeful that similar technology could be used to effectively increase the resolution of video footage as well. Basically, a computer sees a low-res image of a car and says, “I know what that car should look like,” and fills in the rest of the visual data. The possibilities are pretty incredible, especially in regard to VFX.

Iray AI

On the VR front, Nvidia announced a new SDK that allows live GPU-accelerated image stitching for stereoscopic VR processing and streaming. It scales from HD to 5K output, splitting the workload across one to four GPUs. The stereoscopic version is doing much more than basic stitching, processing for depth information and using that to filter the output to remove visual anomalies and improve the perception of depth. The output was much cleaner than any other live solution I have seen.

I also got to try my first VR experience recorded with a Light Field camera. This not only gives the user 360-degree stereo look-around capability, but also the ability to move their head to shift their perspective within a limited range (based on the size of the recording array). The project they were using to demo the technology didn’t highlight the amazing results until the very end of the piece, but when it did, it was the most impressive VR implementation I have had the opportunity to experience yet.
———-
Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been working on new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Review: Nvidia’s new Pascal-based Quadro cards

By Mike McCarthy

Nvidia has announced a number of new professional graphics cards, filling out their entire Quadro line-up with models based on their newest Pascal architecture. At the absolute top end is the new Quadro GP100, a PCIe card implementation of their supercomputer chip. It has similar 32-bit (graphics) processing power to the existing Quadro P6000, but adds 16-bit (AI) and 64-bit (simulation) capabilities. It is intended to combine compute and visualization capabilities into a single solution. It has 16GB of new HBM2 (High Bandwidth Memory), and two cards can be paired together with NVLink at 80GB/sec to share a total of 32GB between them.

This powerhouse is followed by the existing P6000 and P5000 announced last July. The next addition to the line-up is the single-slot VR-ready Quadro P4000. With 1,792 CUDA cores running at 1200MHz, it should outperform a previous-generation M5000 for less than half the price. It is similar to its predecessor the M4000 in having 8GB RAM, four DisplayPort connectors, and running on a single six-pin power connector. The new P2000 follows next with 1024 cores at 1076MHz and 5GB of RAM, giving it similar performance to the K5000, which is nothing to scoff at. The P1000, P600 and P400 are all low-profile cards with Mini-DisplayPort connectors.

All of these cards run on PCIe Gen3 x16 and use DisplayPort 1.4, which adds support for HDR and DSC. They all support 4Kp60 output, with the higher-end cards allowing 5K and 4Kp120 displays. Nvidia also continues to push forward on high-resolution display walls, allowing up to 32 synchronized displays to be connected to a single system, provided you have enough slots for eight Quadro P4000 cards (four DisplayPort outputs each) and two Quadro Sync II boards.

Nvidia also announced a number of Pascal-based mobile Quadro GPUs last month, with the mobile P4000 having roughly comparable specifications to the desktop version. But you can read the paper specs for the new cards elsewhere on the Internet. More importantly, I have had the opportunity to test out some of these new cards over the last few weeks, to get a feel for how they operate in the real world.

DisplayPorts

Testing
I was able to run tests and benchmarks with the P6000, P4000 and P2000 against my current M6000 for comparison. All of these tests were done on a top-end Dell 7910 workstation, with a variety of display outputs, primarily using Adobe Premiere Pro, since I am a video editor after all.

I ran a full battery of benchmark tests on each of the cards using Premiere Pro 2017. I measured both playback performance and encoding speed, monitoring CPU and GPU use, as well as power usage throughout the tests. I had HD, 4K, and 6K source assets to pull from, and tested monitoring with an HD projector, a 4K LCD and a 6K array of TVs. I had assets that were RAW R3D files, compressed MOVs and DPX sequences. I wanted to see how each of the cards would perform at various levels of production quality and measure the differences between them to help editors and visual artists determine which option would best meet the needs of their individual workflow.
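For anyone reproducing this kind of test, GPU load and power draw can be sampled from the driver's nvidia-smi tool while an encode runs. A minimal monitoring sketch; the ffmpeg command is a stand-in for whatever encode job you are actually timing:

    import subprocess, time

    def sample_gpu():
        # Query utilization and power for GPU 0; nvidia-smi ships with the driver.
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=utilization.gpu,power.draw",
             "--format=csv,noheader,nounits", "--id=0"], text=True)
        util, watts = out.strip().split(", ")
        return float(util), float(watts)

    # Placeholder workload; substitute your own encode command line.
    encode = subprocess.Popen(["ffmpeg", "-y", "-i", "in.mov", "out.mp4"])
    samples = [sample_gpu()]
    while encode.poll() is None:
        samples.append(sample_gpu())
        time.sleep(1)

    print("peak GPU util: %d%%  avg power: %.1f W" %
          (max(u for u, _ in samples),
           sum(w for _, w in samples) / len(samples)))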

I started with the intuitive expectation that the P2000 would be sufficient for most HD work, but that a P4000 would be required to effectively handle 4K. I also assumed that a top-end card would be required to play back 6K files and split the image between my three Barco Escape formatted displays. And I was totally wrong.

Aside from the higher-end options within Premiere’s Lumetri-based color corrector, all of the cards were fully capable of every editing task I threw at them. To be fair, the P6000 usually renders out files about 30 percent faster than the P2000, but that is a minimal difference compared to the costs. Even the P2000 was able to play back my uncompressed 6K assets onto my array of Barco Escape displays without issue. It was only when I started making heavy color changes in Lumetri that I began to observe any performance differences at all.

Lumetri

Color correction is an inherently parallel, graphics-related computing task, so this is where GPU processing really shines. Premiere’s Lumetri color tools are based on SpeedGrade’s original CUDA processing engine, and they can really harness the power of the higher-end cards. The P2000 can make basic corrections to 6K footage, but it is possible to max out the P6000 with HD footage if I adjust enough different parameters. Fortunately, most people aren’t looking for footage more stylized than the film 300, so in this case my original assumptions seem to be accurate. The P2000 can handle reasonable corrections to HD footage, the P4000 is probably a good choice for VR and 4K footage, while the P6000 is the right tool for the job if you plan to do a lot of heavy color tweaking or are working with massive frame sizes.

The other way I expected to be able to measure a difference between the cards was in playback while rendering in Adobe Media Encoder. By default, Media Encoder pauses exports during timeline playback, but this behavior can be disabled by reopening Premiere after queuing your encode. Even with careful planning to avoid reading from the same disks the encoder was accessing, I was unable to get significantly better playback performance from the P6000 compared to the P2000. This says more about the software than it does about the cards.

P6000

The largest difference I was able to consistently measure across the board was power usage, with each card averaging about 30 watts more as I stepped up from the P2000 to the P4000 to the P6000. But they all are far more efficient than the previous M6000, which frequently sucked up an extra 100 watts in the same tests. While “watts” may not be a benchmark most editors worry too much about, among other things it does equate to money for electricity. Lower wattage also means less cooling is needed, which results in quieter systems that can be kept closer to the editor without distracting from the creative process or interfering with audio editing. It also allows these new cards to be installed in smaller systems with smaller power supplies, using up fewer power connectors. My HP Z420 workstation only has one six-pin PCIe power plug, so the P4000 is the ideal GPU solution for that system.
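To put a dollar figure on each of those 30-watt steps, a back-of-the-envelope calculation (working hours and utility rate are assumptions):

    # Annual electricity cost of one 30W step between cards; all inputs assumed
    watts_delta = 30
    hours_per_year = 8 * 250   # an eight-hour day, ~250 working days
    usd_per_kwh = 0.15         # assumed utility rate
    kwh = watts_delta * hours_per_year / 1000
    print("%d kWh/yr -> $%.2f/yr per 30W step" % (kwh, kwh * usd_per_kwh))

At roughly $9 a year per step, it isn't a budget line item, which is why the cooling, acoustics and power-connector implications matter more than the utility bill.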

Summing Up
It appears that we have once again reached a point where hardware processing capabilities have surpassed the software's capacity to use them, at least within Premiere Pro. This leads to the cards performing quite similarly to one another in most of my tests, but true 3D applications might reveal much greater differences in their performance. Further optimization of the CUDA implementation in Premiere Pro might also lead to better use of these higher-end GPUs in the future.


Mike McCarthy is an online editor and workflow consultant with 10 years of experience on feature films and commercials. He has been on the forefront of pioneering new solutions for tapeless workflows, DSLR filmmaking and now multiscreen and surround video experiences. If you want to see more specific details about performance numbers and benchmark tests for these Nvidia cards, check out techwithmikefirst.com.

Netflix’s ‘Unbreakable Kimmy Schmidt’ gets crisper look via UHD

NYC’s Technicolor PostWorks created a dedicated post workflow for the upgrade.

Having earned seven Emmy Award nominations in its debut season, Netflix’s Unbreakable Kimmy Schmidt returned in mid-April with 13 new episodes in a form that is, quite literally, bigger and better.

The sitcom, from co-creators Tina Fey and Robert Carlock, features the ever-cheerful and ever-hopeful Kimmy Schmidt, whose spirit refuses to be broken, even after being held captive during her formative years. This season the series has boosted its delivery format from standard HD to the crisper, clearer, more detailed look of Ultra High Definition (UHD).

L-R: Pat Kelleher and Roger Doran

As with the show’s first season, post finishing was done at Technicolor PostWorks New York. Online editor Pat Kelleher and colorist Roger Doran once again served as the finishing team, working under the direction of series producer Dara Schnapper, post supervisor Valerie Landesberg and director of photography John Inwood. Almost everything else, however, was different.

The first season had been shot by Inwood with the Arri Alexa, capturing in 1080p, and finished in ProRes 4444. The new episodes were shot with the Red Dragon, capturing in 5K, and needed to be finished in UHD. That meant the hardware and workflow used by Kelleher and Doran had to be retooled to efficiently manage UHD files roughly four times larger than the first season’s HD ProRes files, in step with UHD’s fourfold pixel count.

“It was an eye opener,” recalls Kelleher of the change. “Obviously, the amount of drive space needed for storage is huge. Everyone from our data manager through to the people who did the digital deliveries had to contend with the higher volume of data. The actual hands-on work is not that different from an HD show, but you need the horses to do it.”

Before post work began, engineers from Technicolor PostWorks’ in-house research unit, The Test Lab, analyzed the workflow requirements of UHD and began making changes. They built an entirely new hardware system for Kelleher to use, running Autodesk’s Flame Premium. It consisted of an HP Z820 workstation with Nvidia Quadro K6000 graphics, 64GB of RAM and dual Intel Xeon E5-2687W processors (20M cache, 3.10GHz, 8.00 GT/s Intel QPI). Kelleher described its performance in handling UHD media as “flawless.”

Doran’s color grading suite got a similar overhaul. For him, engineers built a Linux-based workstation to run Blackmagic’s DaVinci Resolve v11, and set up a dual monitoring system. That included a Panasonic 300 series display to view media in 1080p and a Samsung 9500 series curved LED to view UHD. Doran could then review color decisions in both formats (while maintaining a UHD signal throughout) and spot details or noise issues in UHD that might not be apparent at lower resolution.

While the extra firepower enabled Kelleher and Doran to work with UHD as efficiently as HD, they faced new challenges. “We do a lot of visual effects for this show,” notes Kelleher. “And now that we’re working in UHD, everything has to be much more precise. My mattes have to be tight because you can see so much more.”

Doran’s work in the color suite similarly required greater finesse. “You have to be very, very aware,” he says. “Cosmetically, it’s different. The lighting is different. You have to pay close attention to how the stars look.”

Doran is quick to add that, while grading UHD might require closer scrutiny, it’s justified by the results. “I like the increased range and greater detail,” he says. “I enjoy the extra control. Once you move up, you never want to go back.”

Both Doran and Kelleher credited the Technicolor PostWorks engineering team of Eric Horwitz, Corey Stewart and Randy Main for their ability to “move up” with a minimum of strain. “The engineers were amazing,” Kelleher insists. “They got the workflow to where all I had to think about was editing and compositing. The transition was so smooth, you almost forgot you were working in UHD, except for the image quality. That was amazing.”

Pixspan at NAB with 4K storage workflow solutions powered by Nvidia

During the NAB Show, Pixspan was demonstrating new storage workflows for full-quality 4K images powered by the Nvidia Quadro M6000. Addressing the challenges that higher resolutions and increasing amounts of data present for storage and network infrastructures, Pixspan is offering a solution that reduces storage requirements by 50-80 percent, in turn supporting 4K workflows on equipment designed for 2K while enabling data access times that are two to four times faster.

Pixspan software and the Nvidia Quadro M6000 GPU together deliver bit-accurate video decoding at up to 1.3GB per second — enough to handle 4K digital intermediates or 4K/6K camera RAW files in realtime. Pixspan’s solution is based on its bit-exact compression technology, where each image is compressed into a smaller data file while retaining all the information from the original image, demonstrating how the processing power of the Quadro M6000 can be put to new uses in imaging storage and networking to save time and help users meet tight deadlines.
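The operative word is "bit-exact": unlike a visually lossless codec, the decoded image must match the original byte for byte. Conceptually, the round-trip test looks like this sketch, with zlib standing in for Pixspan's proprietary codec:

    import zlib
    import numpy as np

    # Stand-in 4K 10-bit-style RGB frame held in 16-bit containers. Pixspan's
    # codec is proprietary; zlib here only illustrates the lossless round trip.
    frame = np.random.randint(0, 1024, (2160, 4096, 3), dtype=np.uint16)
    raw = frame.tobytes()

    packed = zlib.compress(raw, 1)
    restored = zlib.decompress(packed)

    assert restored == raw  # bit-exact: every code value survives unchanged
    print("ratio: %.2f:1" % (len(raw) / len(packed)))

Random noise compresses poorly; real camera frames have the spatial coherence that lets a lossless coder approach the 2:1 to 5:1 range implied by Pixspan's 50-80 percent figure.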

Nvidia’s GTC 2016: VR, A.I. and self driving cars, oh my!

By Mike McCarthy

Last week, I had the opportunity to attend Nvidia’s GPU Technology Conference, GTC 2016. Five thousand people filled the San Jose Convention Center for nearly a week to learn about GPU technology and how to use it to change our world. GPUs were originally designed to process graphics (hence the name), but are now used to accelerate all sorts of other computational tasks.

The current focus of GPU computing is in three areas:

Virtual reality is a logical extension of the original graphics processing design. VR requires high frame rates with low latency to keep up with the user’s head movements; otherwise the lag results in motion sickness. This requires lots of processing power, and the imminent releases of the Oculus Rift and HTC Vive head-mounted displays are sure to sell many high-end graphics cards. The new Quadro M6000 24GB PCIe card and M5500 mobile GPU have been released to meet this need.

Autonomous vehicles are being developed that will slowly replace many or all of the driver’s current roles in operating a vehicle. This requires processing lots of sensor input data and making decisions in realtime based on inferences made from that information. Nvidia has developed a number of hardware solutions to meet these needs, with the Drive PX and Drive PX2 expected to be the hardware platform that many car manufacturers rely on to meet those processing needs.

This author calls the Tesla P100 “a monster of a chip.”

Artificial intelligence has made significant leaps recently, and the need to process large data sets has grown exponentially. To that end, Nvidia has focused their newest chip development — not on graphics, at least initially — on a deep-learning supercomputer chip. The first Pascal-generation GPU, the Tesla P100, is a monster of a chip, with 15 billion 16nm transistors on a 600mm² die. It should be twice as fast as current options for most tasks, and even faster for double-precision work and/or large data sets. The chip is initially available in the new DGX-1 supercomputer for $129K, which includes eight of the new GPUs connected via NVLink. I am looking forward to seeing the same graphics processing technology on a PCIe-based Quadro card at some point in the future.

While those three applications for GPU computing all had dedicated hardware released for them, Nvidia has also been working to make sure software will be developed that uses the level of processing power they can now offer users. To that end, they have been releasing all sorts of SDKs and libraries to help developers harness the power of the hardware that is now available. For VR, they have Iray VR, a raytracing toolset for creating photorealistic VR experiences, and Iray VR Lite, which allows users to create still renderings to be previewed with HMD displays. They also have a broader VRWorks collection of tools for helping software developers adapt their work for VR experiences. For autonomous vehicles, they have developed libraries of tools for mapping, sensor image analysis, and a deep-learning decision-making neural net for driving called DaveNet. For AI computing, cuDNN accelerates emerging deep-learning neural networks, running on GPU clusters and supercomputing systems like the new DGX-1.

What Does This Mean for Post Production?
So from a post perspective (ha!), what does this all mean for the future of post production? First, newer and faster GPUs are coming, even if they are not here yet. Much farther off, deep-learning networks may someday log and index all of your footage for you. But the biggest change coming down the pipeline is virtual reality, led by the upcoming commercially available head-mounted displays (HMDs). Gaming will drive HMDs into the hands of consumers, and HMDs in the hands of consumers will drive demand for a new type of experience for storytelling, advertising and expression.

As I see it, VR can be created in a variety of continually more immersive steps. The starting point is the HMD, placing the viewer into an isolated and large-feeling environment. Existing flat video or stereoscopic content can be viewed without large screens, requiring only minimal processing to format the image for the HMD. The next step is a big jump — when we begin to support head tracking — allowing the viewer to control the direction they are viewing. This is where we begin to see changes required at all stages of the content production and post pipeline. Scenes need to be created and filmed in 360 degrees.

Shown at the conference: a high-fidelity VR simulation using scientifically accurate satellite imagery and data from NASA.

The cameras required to capture 360 degrees of imagery produce a series of video streams that need to be stitched together into a single image, and that image needs to be edited and processed. Then the entire image is made available to the viewer, who chooses which angle to view as it plays. This can be done as a flattened image sphere or, with more source data and processing, as a stereoscopic experience. The user can control the angle they view the scene from, but not the location they are viewing from, which is dictated by the physical placement of the 360-camera system. Video-Stitch just released a new all-in-one package for capturing, recording and streaming 360 video called the Orah 4i, which may make that format more accessible to consumers.
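The "choose your angle" step is straightforward in principle: the player maps the viewer's yaw and pitch into coordinates in the stitched equirectangular frame and resamples a viewport around that point. A toy version of the lookup (a real player warps a full window, not a single pixel):

    def equirect_pixel(yaw_deg, pitch_deg, width=3840, height=1920):
        # Map a view direction to (x, y) in an equirectangular 360 frame.
        # yaw: -180..180 degrees, pitch: -90..90 degrees.
        x = int((yaw_deg + 180.0) / 360.0 * width) % width
        y = min(int((90.0 - pitch_deg) / 180.0 * height), height - 1)
        return x, y

    print(equirect_pixel(0, 0))    # straight ahead: frame center (1920, 960)
    print(equirect_pixel(90, 30))  # looking right and up: (2880, 640)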

Allowing the user to fully control their perspective and move around within a scene is what makes true VR so unique, but is also much more challenging to create content for. All viewed images must be rendered on the fly, based on input from the user’s motion and position. These renders require all content to exist in 3D space, for the perspective to be generated correctly. While this is nearly impossible for traditional camera footage, it is purely a render challenge for animated content — rendering that used to take weeks must be done in realtime, and at much higher frame rates to keep up with user movement.

For any camera image, depth information is required. It is possible to estimate depth with calculations based on motion, but not with the level of accuracy required. Instead, if many angles are recorded simultaneously, a 3D analysis of the combination can generate a 3D version of the scene. This is already being done in limited cases for advanced VFX work, but it would require taking it to a whole new level. For static content, a 3D model can be created by processing lots of still images, but storytelling will require 3D motion within this environment. This all seems pretty far out there for a traditional post workflow, but there is one case that lends itself to this format.

Motion capture-based productions already have the 3D data required to render VR perspectives, because VR is the same basic concept as motion tracking cinematography, except that the viewer controls the “camera” instead of the director. We are already seeing photorealistic motion capture movies showing up in theaters, so these are probably the first types of productions that will make the shift to producing full VR content.

The Maxwell family of cards.

Viewing this content is still a challenge, and here again Nvidia GPUs are used on the consumer end. Any VR viewing requires sensor input to track the viewer, which must be processed, and the resulting image must be rendered, usually twice for stereo viewing. This requires a significant level of processing power, so Nvidia has created two tiers of hardware recommendations to ensure that users can get a quality VR experience. For consumers, the VR-Ready program includes complete systems based on the GeForce 970 or higher GPUs, which meet the requirements for comfortable VR viewing. VR-Ready for Professionals is a similar program for the Quadro line, including the M5000 and higher GPUs, included in complete systems from partner ISVs. Currently, MSI’s new WT72 laptop with the new M5500 GPU is the only mobile platform certified VR Ready for Pros. The new mobile Quadro M5500 has the same system architecture as the desktop workstation Quadro M5000, with all 2,048 CUDA cores and 8GB RAM.

While the new top-end Maxwell-based Quadro GPUs are exciting, I am really looking forward to seeing Nvidia’s Pascal technology used for graphics processing in the near future. In the meantime, we have enough performance with existing systems to start processing 360-degree videos and VR experiences.

Mike McCarthy is a freelance post engineer and media workflow consultant based in Northern California. He shares his 10 years of technology experience on www.hd4pc.com, and he can be reached at mike@hd4pc.com.

Dell embraces VR via Precision Towers

It’s going to be hard to walk the floor at NAB this year without being invited to demo some sort of virtual reality experience. More and more companies are diving in and offering technology that optimizes the creation and viewing of VR content. Dell is one of the latest to jump in.

Dell has been working closely on this topic with its hardware and software partners, and is formalizing its commitment to the future of VR by offering solutions that are optimized for VR consumption and creation alongside the mainstream professional ISV apps used by industry pros.

Dell has introduced new recommended minimum system hardware configurations to support an optimal VR experience for pro users with HTC Vive or Oculus Rift VR solutions. The VR-ready solutions must meet three criteria, whether users are consuming or creating VR content: minimum CPU, memory and graphics requirements to support VR viewing experiences; graphics drivers that are qualified to work with these solutions; and passing performance tests conducted by the company using test criteria based on HMD (head-mounted display) suppliers, ISVs or third-party benchmarks.

Dell has also made upgrades to their Dell Precision Tower, including increased performance, graphics and memory for VR content creation. The refreshed Dell Precision Tower 5810, 7810 and 7910 workstations and rack 7910 have been upgraded with new Intel Broadwell EP processors that have more cores and performance for multi-threaded applications that support professional modeling, analysis and calculations.

Additional upgrades include the latest pro graphics technology from AMD and Nvidia, Dell Precision Ultra-Speed PCIe drives with up to 4x faster performance than traditional SATA SSD storage, and up to 1TB of DDR4 memory running at 2400MHz.

Raytracing today and in the future

By Jon Peddie

More papers, patents and PhDs have been written and awarded on ray tracing than on any other computer graphics technique.

Ray tracing is a subset of the rendering market, and the rendering market is in turn a subset of software for larger markets, including media and entertainment (M&E), architecture, engineering and construction (AEC), computer-aided design (CAD), scientific computing, entertainment content creation and simulation-visualization. Not everyone whose products include rendering capabilities actually uses them. At the same time, there are products that have been developed solely as rendering tools, and there are products that combine 3D modeling, animation and rendering capabilities, which may be used primarily for rendering, primarily for modeling or primarily for animation.

Because ray tracing is so important, and at the same time computationally burdensome, individuals and organizations have spent years and millions of dollars trying to speed things up. A typical ray traced scene on an old-fashioned HD screen can tax a CPU so heavily that the image can only be updated maybe every second or two — certainly not the 33ms needed for realtime rendering.

GPUs can’t help as much as you might expect, because ray tracing has no frame-to-frame memory: every frame is a new frame, so the computational load cannot be amortized over time. Also, the branching that occurs in ray tracing defeats the power of a GPU’s SIMD architecture, which depends on adjacent threads executing the same instructions in lockstep.
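The divergence problem shows up in even a toy ray tracer: adjacent rays branch differently depending on what they hit, and each hit may spawn secondary rays, so SIMD lanes that want to run in lockstep end up idle. A minimal sketch of the branchy inner loop:

    # Toy ray/sphere shading: every ray branches on hit-vs-miss and may
    # recurse, which is what breaks lockstep execution on SIMD hardware.
    def hit_sphere(origin, direction, center, radius):
        oc = [o - c for o, c in zip(origin, center)]
        b = 2 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4 * c  # ray direction assumed normalized
        if disc < 0:
            return None
        t = (-b - disc ** 0.5) / 2
        return t if t > 0 else None

    def shade(origin, direction, depth=0):
        t = hit_sphere(origin, direction, (0, 0, -3), 1.0)
        if t is None:            # branch 1: miss, return background
            return 0.2
        if depth >= 2:           # branch 2: recursion cut-off
            return 1.0
        hit = [o + t * d for o, d in zip(origin, direction)]
        bounce = [-d for d in direction]            # toy "reflection"
        return 0.5 * shade(hit, bounce, depth + 1)  # branch 3: secondary ray

    # Two neighboring pixels, two completely different code paths:
    print(shade((0, 0, 0), (0.0, 0.0, -1.0)))   # hits the sphere
    print(shade((0, 0, 0), (0.9, 0.0, -0.44)))  # misses entirely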

Material Libraries Critical
Prior to 2015, all ray tracing engines came with their own materials libraries. Cataloging the characteristics of every type of material in the world is beyond any one company’s resources to develop and support, and the lack of standards has held back cooperative development in the industry. However, a few companies have agreed to work together and share their libraries.

I believe we will see an opening up of libraries and the ability of various ray tracing engines to avail themselves of a much larger pool of materials. Nvidia is developing a standard-like capability it calls the Material Definition Language (MDL), using it to allow various libraries to work with a wide range of ray tracing engines.

Rendering Becomes a Function of Price
In the near future, I expect to see 3D rendering become a capability offered as an online service. While it’s not altogether clear how this will affect the market, I think it will boost the use of ray tracing and lower the cost to an as-needed basis. It also offers the promise of being able to apply huge quantities of processing power, limited only by the amount of money the user is willing to pay. Ray tracing will resolve to time (to render a scene) divided by cost.

That will continue to bring down the time to generate a ray traced frame for an animation for example, but it probably won’t get us to realtime ray tracing at 4K or beyond.
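Concretely, a rented farm makes total cost roughly a function of frame count alone, while wall-clock time divides by however many nodes you pay for. With made-up but plausible numbers:

    # Cloud rendering: total cost stays ~flat, time divides by node count.
    frames = 2000                # a short animation
    minutes_per_frame = 12       # assumed single-node render time
    usd_per_node_hour = 0.70     # assumed instance price

    node_hours = frames * minutes_per_frame / 60
    for nodes in (1, 50, 400):
        print("%4d nodes: %7.1f hours, $%.0f total"
              % (nodes, node_hours / nodes, node_hours * usd_per_node_hour))

Four hundred node-hours costs the same $280 whether it takes 17 days on one machine or an hour on 400, which is the time-divided-by-cost equation described above.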

Shortcuts and Semiconductors
Work continues on finding clever ways to short-circuit the computational load by using intelligent algorithms that look at the scene and deterministically allocate which objects will be seen and which surfaces need to be considered.

Hybrid techniques, where only certain portions of a scene are ray traced, are also being improved and evolved. Objects in the distance, for example, don’t need to be ray traced, and flat, dull-colored objects don’t need it either.

Chaos Group says the use of variance-based adaptive sampling on this model of Christmas cookies from Autodesk 3ds Max provided a better final image in record time. (Source: Chaos Group)

Semiconductors are being developed specifically to accelerate ray tracing. Imagination Technologies, the company that designs Apple’s iPhone and iPad GPU, has a specific ray tracing engine that, when combined with the advanced techniques just described, can render an HD scene with partial ray traced elements several times a second. Siliconarts, a startup in Korea, has developed a ray tracing accelerator, and I have seen demonstrations of it generating images at 30fps. And Nvidia is working on ways to make a standard GPU more ray-tracing friendly.

All these ideas and developments will come together in the very near future and we will begin to realize realtime ray tracing.

Market Size
It is impossible to know how many users there are of ray tracing programs because the major 3D modeling and CAD programs, both commercial and free (e.g., Autodesk, Blender, etc.) have built-in ray tracing engines, as well as the ability to use pluggable add-on software programs for ray tracing.

The potentially available market vs. the totally available market (TAM).

Also, not all users make use of ray tracing on a regular basis — some use it every day, others maybe occasionally or once a project. Furthermore, some users will use multiple ray tracing programs in a project, depending upon their materials library, user interface, specific functional requirements or pipeline functionality.

Free vs. Commercial
A great deal of the ray tracing software available on the market is the result of university projects. Some of the developers of such programs have formed companies; others have chosen to stay in academia or work as independent programmers.

The number of new suppliers has not slowed down, indicating continued demand for ray tracing.

The non-commercial developers continue to offer their ray tracing rendering software as an open source and for free — and continue to support it, either individually or as part of a group.

Raytracing Engine Suppliers
The market for ray tracing is entering into a new phase. This is partially due to improved and readily available low-cost processors (thank you, Moore’s law), but more importantly it is because of the demand and need for accurate virtual prototyping and improved workflows.

Rendering in the cloud using GPUs (Source: OneRender).

As with any market, there is a 20/80 rule, where 20 percent of the suppliers represent 80 percent of the market. The ray tracing market may be even more unbalanced. There would appear to be too many suppliers in the market despite failures and merger and acquisition activities. At the same time many competing suppliers have been able to successfully coexist by offering features customized for their most important customers.

Conclusion
Ray tracing is to manufacturing what a storyboard is to film — the ability to visualize the product before it’s built. Movies couldn’t be made today with the quality they have without ray tracing. Think of how good the characters in Cars looked — that imagery made it possible for you to suspend disbelief and get into the story. It used to be: “Ray tracing — Who needs it?” Today it’s: “Ray tracing? Who doesn’t use it?”

Our Main Image: An example of different materials being applied to the same object (Source: Nvidia)

Dr. Jon Peddie is president of Jon Peddie Research, which just completed an in-depth market study on the ray tracing market. He is the former president of Siggraph Pioneers and serves on the advisory boards of several companies. In 2015, he was given the Lifetime Achievement award from the CAD Society. His most recent book is “The History of Visual Magic in Computers.”

Pixar to make Universal Scene Description open source

Pixar Animation Studios, whose latest feature film is Inside Out, will release Universal Scene Description software (USD) as an open-source project by summer 2016. USD addresses the growing need in the CG film and game industries for an effective way to describe, assemble, interchange and modify high-complexity virtual scenes between digital content creation tools employed by studios.

At the core of USD are Pixar’s techniques for composing and non-destructively editing graphics “scene graphs,” techniques that Pixar has been cultivating for close to 20 years, dating back to A Bug’s Life. These techniques, such as file-referencing, layered overrides, variation and inheritance, were completely overhauled into a robust and uniform design for Pixar’s next-generation animation system, Presto.

Although it is still under active development and optimization, USD has been in use for nearly a year in the making of Pixar’s production Finding Dory.

The open-source Alembic project brought standardization of cached geometry interchange to the VFX industry. USD hopes to build on Alembic’s success, taking the next step of standardizing the “algebra” by which assets are aggregated and refined in-context.

The USD distribution will include embeddable direct 3D visualization provided by Pixar’s modern GPU renderer, Hydra, as well as plug-ins for several key VFX DCCs, comprehensive documentation, tutorials and complete Python bindings.
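Pixar hasn't shipped the code as of this writing, but given the composition features described (file referencing plus layered, non-destructive overrides) and the promised Python bindings, usage is expected to look something like the sketch below. The module and call names are anticipatory and may change before release:

    from pxr import Usd, UsdGeom

    # Author a simple asset layer.
    asset = Usd.Stage.CreateNew("ball.usda")
    ball = UsdGeom.Sphere.Define(asset, "/Ball")
    ball.GetRadiusAttr().Set(1.0)
    asset.GetRootLayer().Save()

    # A shot references the asset and overrides it non-destructively;
    # ball.usda itself is never modified.
    shot = Usd.Stage.CreateNew("shot.usda")
    over = shot.OverridePrim("/World/Ball")
    over.GetReferences().AddReference("./ball.usda", "/Ball")
    UsdGeom.Sphere(over).GetRadiusAttr().Set(2.0)  # local opinion wins
    shot.GetRootLayer().Save()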

Pixar has already been sharing early USD snapshots with a number of industry vendors and studios for evaluation, feedback and advance incorporation. Among the vendors helping to evaluate USD are The Foundry and Fabric Software.

——

In related news, to accelerate production of its computer-animated feature films and short film content, Pixar Animation Studios is licensing a suite of Nvidia technologies related to image rendering.

The multiyear strategic licensing agreement gives Pixar access to Nvidia’s quasi-Monte Carlo (QMC) rendering methods. These methods can make rendering more efficient, especially when powered by GPUs and other massively parallel computing architectures.
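Quasi-Monte Carlo swaps pseudorandom samples for deterministic low-discrepancy sequences, which cover the sampling domain more evenly and therefore converge faster on the integrals rendering depends on. A small self-contained illustration, estimating pi by sampling a quarter circle with both kinds of points (the Halton sequence here is a generic stand-in, not necessarily the sequences Nvidia's licensed methods use):

    import random

    def halton(i, base):
        # i-th element of the Halton low-discrepancy sequence in a given base
        f, r = 1.0, 0.0
        while i > 0:
            f /= base
            r += f * (i % base)
            i //= base
        return r

    def estimate_pi(points):
        inside = sum(1 for x, y in points if x * x + y * y <= 1.0)
        return 4.0 * inside / len(points)

    n = 4096
    random.seed(1)
    mc = [(random.random(), random.random()) for _ in range(n)]
    qmc = [(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)]
    print("pseudorandom:", estimate_pi(mc))   # noisier estimate of pi
    print("Halton (QMC):", estimate_pi(qmc))  # typically much closer to 3.14159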

As part of the agreement, Nvidia will also contribute raytracing technology to Pixar’s OpenSubdiv Project, an open-source initiative to promote high-performance subdivision surface evaluation on massively parallel CPU and GPU architectures. The OpenSubdiv technology will enable rendering of complex Catmull-Clark subdivision surfaces in animation with incredible precision.