Tag Archives: AMD

Choosing the right workstation set-up for the job

By Lance Holte

Like virtually everything in the world of filmmaking, the options for a perfect editorial workstation are almost infinite. The vast majority of systems can be greatly customized and expanded, whether by custom order, upgraded internal hardware or with expansion chassis and I/O boxes. In a time when many workstations are purchased, leased or upgraded for a specific project, the buying process is largely determined by the project’s workflow and budget.

One of Harbor Picture Company’s online rooms.

In my experience, no two projects have identical workflows. Even if two projects are very similar, there are usually some slight differences — a different editor, a new camera, a shorter schedule, bigger storage requirements… the list goes on and on. The first step for choosing the optimal workstation(s) for a project is to ask a handful of broad questions that are good starters for workflow design. I generally start by requesting the delivery requirements, since they are a good indicator of the size and scope of the project.

Then I move on to questions like:

What are the camera/footage formats?
How long is the post production schedule?
Who is the editorial staff?

Often there aren’t concrete answers to these questions at the beginning of a project, but even rough answers point the way to follow-up questions. For instance, Q: What are the video delivery requirements? A: It’s a commercial campaign — HD and SD ProRes 4444 QTs.

Simple enough. Next question.

Christopher Lam from SF’s Double Fine Productions. Courtesy of Wacom.

Q: What is the camera format? A: Red Weapon 6K, because the director wants to be able to do optical effects and stabilize most of the shots. This answer makes it very clear that we’re going to be editing offline, since the commercial budget doesn’t allow for the purchase of a blazing system with a huge, fast storage array.

Q: What is the post schedule? A: Eight weeks. Great. This should allow enough time to transcode ProRes proxies for all the media, followed by offline and online editorial.

At this point, it’s looking like there’s no need for an insanely powerful workstation, and the schedule suggests we’ll only need one editor and an assistant. Q: Who is the editorial staff? A: The editor is an Adobe Premiere guy, and the ad agency wants to spend a ton of time in the bay with him. Now, we know that agency folks really hate the technical slowdowns that can sometimes occur with equipment that is pushing the envelope, so this workstation just needs to be simple and reliable. Macs make agency guys comfortable, so let’s go with a Mac Pro for the editor. If possible, I prefer to connect the client monitor directly via HDMI, since that avoids the delay issues HDMI-to-SDI converters can sometimes introduce. Of course, that occupies the Mac Pro’s single HDMI port, so the desktop monitors and the audio I/O box will take up two or three Thunderbolt ports. If the assistant editor doesn’t need such a powerful system, a high-end iMac could suffice.

(And for those who don’t mind waiting until the new iMac Pro ships in December, Apple’s latest release of the all-in-one workstation seems to signal a committed return for the company to the professional creative world — and is an encouraging sign for the Mac Pro overhaul in 2018. The iMac Pro addresses its non-upgradability by future-proofing itself as the most powerful all-in-one machine ever released. The base model starts at a hefty $4,999, but boasts a 5K display and options for up to an 18-core Xeon processor, 128GB of RAM and AMD Radeon Vega graphics. As more and more applications add OpenCL acceleration (which favors AMD GPUs), the iMac Pro should stay relevant for a number of years.)

Now, our workflow would be very different if the answer to the first question had instead been A: It’s a feature film. Technicolor will handle the final delivery, but we still want to be able to make in-house 4K DCPs for screenings, EXR and DPX sequences for the VFX vendors, Blu-ray screeners, as well as review files and create all the high-res deliverables for mastering.

Since this project is a feature film, likely with a much larger editorial staff, the workflow might be better suited to editorial in Avid (to use project sharing/bin locking/collaborative editing). And since it turns out that Technicolor is grading the film in Blackmagic Resolve, it makes sense to online the film in Resolve and then pass the project over to Technicolor. Resolve will also cover any in-house temp grading and DCP creation and can handle virtually any video file.

PCs
For the sake of comparison, let’s build out some workstations on the PC side that will cover our editors, assistants, online editors, VFX editors and artists, and temp colorist. PC vs. Mac will likely be a hotly debated topic in this industry for some time, but there is no denying that a PC will return more cost-effective power than a Mac with similar specs, at the expense of increased complexity (and the potential for more technical issues). I also appreciate the longer lifespan of machines that can be easily upgraded and expanded without requiring expansion chassis or external GPU enclosures.

I’ve had excellent success with the HP Z line — using Z840s as serious finishing machines and Z440s and Z640s as offline editorial workstations. There are almost unlimited options for desktop PCs, but only certain workstations and components are certified for various post applications, so it pays to do certification research when building a workstation from the ground up.

The Molecule‘s artist row in NYC.

It’s also important to keep the workstation components balanced. A system is only as strong as its weakest link, so a workstation with an insanely powerful GPU but only a handful of CPU cores will be outperformed by a workstation with 16-20 cores and a moderately high-end GPU. Make sure the CPU, GPU and RAM are similarly matched to get the best bang for your buck and a more stable workstation.

Relationships!
Finally, in terms of getting the best bang for your buck, there’s one trick that reigns supreme: build great relationships with hardware companies and vendors. Hardware companies are always looking for quality input, advice and real-world testing. They are often willing to lend (or give) new equipment in exchange for case studies, reviews, workflow demonstrations and press. Creating relationships is not only a great way to stay up to date with cutting-edge equipment, it expands your support options and technical network, and it’s the best opportunity to be directly involved with development. So go to trade shows, be active on forums, teach, write and generally be as involved as possible, and your equipment will thank you.

Our main image is courtesy of editor/compositor Fred Ruckel.

 


Lance Holte is an LA-based post production supervisor and producer. He has spoken and taught at such events as NAB, SMPTE, SIGGRAPH and Createasphere. You can email him at lance@lanceholte.com.

New AMD Radeon Pro Duo graphics card for pro workflows

AMD was at NAB this year with its dual-GPU graphics card designed for pros — the Polaris-architecture-based Radeon Pro Duo. Built on the capabilities of the Radeon Pro WX 7100, the Radeon Pro Duo graphics card is designed for media and entertainment, broadcast and design workflows.

The Radeon Pro Duo is equipped with 32GB of ultra-fast GDDR5 memory to handle larger data sets, more intricate 3D models, higher-resolution videos and complex assemblies. Operating at a max power of 250W, the Radeon Pro Duo uses a total of 72 compute units (4,608 stream processors) for a combined performance of up to 11.45 TFLOPS of single-precision compute performance on one board, and twice the geometry throughput of the Radeon Pro WX 7100.
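As a back-of-the-envelope check (assuming the standard two floating-point operations per stream processor per clock, a figure AMD doesn’t state here), those numbers imply a peak engine clock of roughly 1.24GHz:

$$f_{\text{clock}} \approx \frac{\text{FLOPS}_{\text{peak}}}{2 \times N_{\text{SP}}} = \frac{11.45 \times 10^{12}}{2 \times 4608} \approx 1.24\ \text{GHz}$$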

The Radeon Pro Duo enables pros to work on up to four 4K monitors at 60Hz, drive the latest 8K single monitor display at 30Hz using a single cable or drive an 8K display at 60Hz using a dual cable solution.
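Those display figures track with simple bandwidth math. As a rough estimate (assuming 8-bit-per-channel RGB and ignoring blanking overhead), an 8K image at 30Hz needs about 24Gbit/s, which just fits within the roughly 25.9Gbit/s of payload a single DisplayPort HBR3 link carries; 60Hz doubles the requirement, hence the dual-cable solution:

$$7680 \times 4320 \times 24\ \text{bit} \times 30\ \text{Hz} \approx 23.9\ \text{Gbit/s}$$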

The Radeon Pro Duo’s distinct dual-GPU design gives pros the flexibility to divide their workloads, enabling smooth multitasking between applications by committing GPU resources to each. Users can focus on their creativity and get more done faster, fitting a greater number of design iterations into the same amount of time.

On select pro apps (including DaVinci Resolve, Nuke/Cara VR, Blender Cycles and VRED), the Radeon Pro Duo offers up to two times faster performance compared with the Radeon Pro WX 7100.

For those working in VR, the Radeon Pro Duo graphics card uses the power of two GPUs to render out separate images for each eye, increasing VR performance over single GPU solutions by up to 50% in the SteamVR test. AMD’s LiquidVR technologies are also supported by the industry’s leading realtime engines, including Unity and Unreal, to help ensure smooth, comfortable and responsive VR experiences on Radeon Pro Duo.

The Radeon Pro Duo’s planned availability is the end of May at an expected price of US $999.

Virtual Reality Roundtable

By Randi Altman

Virtual reality is seemingly everywhere, especially this holiday season. Just one look at your favorite electronics store’s website and you will find VR headsets ranging from the inexpensive to the affordable to the “if I win the lottery” ones.

While there are many companies popping up to service all aspects of VR/AR/360 production, for the most part traditional post and production companies are starting to add these services to their menu, learning best practices as they go.

We reached out to a sampling of pros who are working in this area to talk about the problems and evolution of this burgeoning segment of the industry.

Nice Shoes Creative Studio: Creative director Tom Westerlin

What is the biggest issue with VR productions at the moment? Is it lack of standards?
A big misconception is that a VR production is like a standard 2D video/animation commercial production. There are some similarities, but it gets more complicated when we add interaction, different hardware options, realtime data and multiple distribution platforms. It actually takes a lot more time and man-hours to create a 360 video or VR experience than a comparable 2D video production.

Tom Westerlin

More development time needs to be scheduled for research, user experience and testing. We’re adding more stages to the overall production. None of this should discourage anyone from exploring a concept in virtual reality, but there is a lot of consideration and research that should be done in the early stages of a project. The lack of standards presents some creative challenges for brands and agencies considering a VR project. The hardware and software choices made for distribution can have an impact on the size of the audience you want to reach, as well as on the approach to building the experience.

The current landscape provides the following options:
• YouTube and Facebook can hit a ton of people with a 360 video, but have limited VR functionality.
• A WebVR experience works within certain browsers, like Chrome or Firefox, but not others, limiting your audience.
• A custom app or experimental installation using the Oculus Rift or HTC Vive allows for experiences with full interactivity, but presents the issue of audience limitations.

There is currently no one best way to create a VR experience. It’s still very much a time of discovery and experimentation.

What should clients ask of their production and post teams when embarking on their VR project?
We shouldn’t just apply what we’ve all learned from 2D filmmaking to the creation of a VR experience, so it is crucial to include the production, post and development teams in the design phase of a project.

Most clients come from a world of traditional production where standard constructs (quick camera moves or cuts, extreme close-ups) are routine; in VR, those same choices can have negative physiological implications (nausea, disorientation). The impact of seemingly simple creative or design decisions can have huge repercussions on complexity, time, cost and the user experience. It’s important for clients to be open to telling a story in a different manner than they’re used to.

What is the biggest misconception about VR — content, process or anything relating to VR?
The biggest misconception is clients thinking that 360 video and VR are the same. As we’ve started to introduce this technology to our clients, we’ve worked to explain the core differences between these extremely different experiences: VR is interactive and most of the time a full CG environment, while 360 is video and, although immersive, a more passive experience. Each has its own unique challenges and rewards, so as we think about the end user’s experience, we can determine what will work best.

There’s also the misconception that VR will make you sick. If executed poorly, VR can make a user sick, but the right creative ideas executed with the right equipment can result in an experience that’s quite enjoyable and nausea free.

Nice Shoes’ ‘Mio Garden’ 360 experience.

Another misconception is that VR is capable of anything. While many may confuse VR and 360 and think an experience is limited to passively looking around, there are others who have bought into the hype and inflated promises of a new storytelling medium. That’s why it’s so important to understand the limitations of different devices at the early stages of a concept, so that creative, production and post can all work together to deliver an experience that takes advantage of VR storytelling, rather than falling victim to the limitations of a specific device.

The advent of affordable systems that are capable of interactivity, like the Google Daydream, should lead to more popular apps that show off a higher level of interactivity. Even sharing video of people experiencing VR while interacting with their virtual worlds could have a huge impact on the understanding of the difference between passively watching and truly reaching out and touching.

How do we convince people this isn’t stereo 3D?
In one word: interactivity. By definition, VR is interactive, and giving the user the ability to manipulate the world and actually affect it is the magic of virtual reality.

Assimilate: CEO Jeff Edson

What is the biggest issue with VR productions at the moment? Is it lack of standards?
The biggest issue in VR is establishing straightforward workflows — from camera to delivery — and then, of course, delivery to what? Compared to a year ago, shooting 360/VR video today has taken big steps in ease of use because more people have experience doing it. But it is a LONG way from point and shoot. As more and more integrated 360/VR video cameras come to market, VR storytelling will become much more straightforward and creators will be able to focus more on the story.

Jeff Edson

And then delivery to what? There are many online platforms for 360/VR video playback today: Facebook, YouTube 360 and others for mobile headset viewing, and then there is delivery to a PC for non-mobile headset viewing. The viewing perspective is different for all of these, which means extra work to ensure continuity on all the platforms. To cover all possible viewers one needs to publish to all. This is not an optimal business model, which is really the crux of this issue.

Can standards help in this? Standards as we have known them in the video world — yes and no. The standards for 360/VR video are happening by default, such as equirectangular and cubic formats, and delivery formats like H.264, MOV and more. Standards would help, but they are not the limiting factor for growth. The market is not waiting on a defined set of formats because demand for VR is quickly moving forward. People are busy creating.
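For readers new to the jargon: an equirectangular frame is simply a latitude/longitude unwrap of the full sphere onto a 2:1 rectangle. Here is a minimal sketch of that mapping (my own illustration in Python, not code from any particular VR tool), converting a normalized pixel position into the 3D view direction a player would render:

```python
import math

def equirect_to_direction(u: float, v: float) -> tuple:
    """Map normalized equirectangular coordinates (u, v in [0, 1])
    to a unit view direction: u spans longitude, v spans latitude."""
    lon = (u - 0.5) * 2.0 * math.pi  # -pi at the left edge, +pi at the right
    lat = (0.5 - v) * math.pi        # +pi/2 at the top, -pi/2 at the bottom
    return (math.cos(lat) * math.sin(lon),  # x: right
            math.sin(lat),                  # y: up
            math.cos(lat) * math.cos(lon))  # z: forward

# The center of the frame looks straight ahead:
print(equirect_to_direction(0.5, 0.5))  # -> (0.0, 0.0, 1.0)
```

A cubic (cube map) format stores the same sphere as six perspective faces instead, which avoids the heavy pixel stretching equirectangular suffers at the poles.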

What should clients ask of their production and post teams when embarking on their VR project?
We hear from our customers that the best results come when the director, DP and post supervisor collaborate on the expectations for look and feel, as well as the possible creative challenges and resolutions. Experience and budget are big contributors, too. A key issue is: what camera/rig requirements are needed for your targeted platform(s)? For example, how many cameras and what type of cameras (4K, 6K, GoPro, etc.), as well as what lighting? And what about sound, which plays a key role in the viewer’s VR experience?

This Yael Naim mini-concert was posted in Scratch VR by Alex Regeffe at Neotopy.

What is the biggest misconception about VR — content, process or anything relating to VR?
I see two. One: the perception that VR is a flash in the pan, just a fad. What we see today is just the launch pad. The applications for VR are vast within entertainment alone, and then there is the extensive list of other markets like training and learning in such fields as medical, military, online universities, flight and manufacturing. Two: that VR post production is a difficult process with too many steps and tools. This definitely doesn’t need to be the case. Our Scratch VR customers are getting high-quality results within a single, simplified VR workflow.

How do we convince people this isn’t stereo 3D?
The main issue with stereo 3D is that it never really scaled beyond a theater experience, whereas with VR it may end up being just the opposite. It’s unclear whether VR can be a true theater experience beyond classical technologies like domes and simulators. 360/VR video in the near term is, in general, a short-form media play. It’s clear that sooner rather than later smartphones will be able to shoot 360/VR video as a standard feature, and usage will skyrocket overnight. When that happens, the younger demographic will never shoot anything that is not 360, so the Snapchat/Instagram kinds of platforms will be filled with 360 snippets. VR headsets based on mobile devices make the pure number of displays significant. The initial tethered devices are not insignificant in numbers, but with the next generation of higher-resolution, untethered devices, perhaps most significantly at a much lower price point, the numbers will become massive. None of this was ever the case with stereo 3D film/video.

Pixvana: Executive producer Aaron Rhodes

What is the biggest issue with VR productions at the moment? Is it lack of standards?
There are many issues with VR productions, many of them just growing pains: not being able to see a live stitch, how to direct without being in the shot, what to do about lighting. These are all part of the learning curve and the evolution of VR as a craft. Resolution and the management of big data are the biggest issues I see on the set. Pixvana is all about resolution — it plays a key role in better immersion. Many of the cameras out there only master at 4K, and that just doesn’t cut it. But when they do shoot 8K and above, the data management is extreme. Don’t underestimate the responsibility you are giving to your DIT!
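To put rough numbers on that (a hypothetical example: a single 8K sensor recording 10-bit raw at 30fps — real rates vary widely by sensor, bit depth and compression), the capture rate works out to well over a gigabyte per second, and a multi-camera 360 rig multiplies it by the camera count:

$$7680 \times 4320 \times 10\ \text{bit} \times 30\ \text{fps} \approx 9.95\ \text{Gbit/s} \approx 1.24\ \text{GB/s}$$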

Aaron Rhodes

The biggest issue is that these are early days for VR capture. We’re used to a century of 2D filmmaking and a decade of high-definition capture with an assortment of camera gear. All current VR camera rigs have compromises, and will until technology catches up. It’s too early for standards since we’re still learning and this space is changing rapidly. VR production and post also require different approaches. In some cases we have to unlearn what worked in standard 2D filmmaking.

What should clients ask of their production and post teams when embarking on their VR project?
Give me a schedule, and make it realistic. Stitching takes time, and unless you have a fleet of render nodes at your disposal, rendering your shot locally is going to take time — and everything you need to update or change it will take more time. VR post has lots in common with a non-VR spot, but the magnitude of data and rendering is much greater — make sure you plan for it.

Other questions to ask, because you really can’t ask enough:
• Why is this project being done as VR?
• Does the client have team members who understand the VR medium?
• If not, will they be willing to work with a production team to design and execute with VR in mind?
• Has this project been designed for VR rather than just a 2D project in VR?
• Where will this be distributed? (Headsets? Which ones? YouTube? Facebook? Etc.)
• Will this require an app or will it be distributed to headsets through other channels?
• If it is an app, who will build the app and submit it to the VR stores?
• Do they want to future-proof it by finishing at greater than 4K?
• Is this to be mono or stereo? (If it’s stereo it better be very good stereo)
• What quality level are they aiming for? (Seamless stitches? Good stereo?)
• Is there time and budget to accomplish the quality they want?
• Is this to have spatialized audio?

What is the biggest misconception about VR — content, process or anything relating to VR?
VR is a narrative component, just like any actor or plot line. It’s not something that should just be done to do it. It should be purposeful to shoot VR. It’s the same with stereo. Don’t shoot stereo just because you can — sure, you can experiment and play (we need to do that always), but don’t without purpose. The medium of VR is not for every situation.
Other misconceptions, because there are a lot out there:
• It’s as easy as shooting normal 2D.
• You need to have action going on constantly in 360 degrees.
• Everything has to be in stereo.
• There are fixed rules.
• You can simply shoot with a VR camera and it will be interesting, without any idea of specific placement, story or design.

How do we convince people this isn’t stereo 3D?
Education. There are tiers of immersion with VR, and stereo 3D is one of them. I see these tiers starting with the desktop experience and going up in immersion from there, and it’s important to understand the strengths and weaknesses of each:
• YouTube/Facebook on the desktop [low immersion]
• Cardboard, Gear VR, Daydream: 2D/3D, low resolution
• Headsets like the Rift and Vive: 2D/3D, six degrees of freedom [high immersion]
• Computer-generated experiences [high immersion]

Maxon US: President/CEO Paul Babb

Paul Babb

What is the biggest issue with VR productions at the moment? Is it lack of standards?
Project file size. Huge files. Lots of pixels. Telling a story. How do you get the viewer to look where you want them to look? How do you tell and drive a story in a 360 environment?

What should clients ask of their production and post teams when embarking on their VR project?
I think it’s more that production teams are going to have to ask the questions to focus what clients want out of their VR. Too many companies just want to get into VR (buzz!) without knowing what they want to do, what they should do and what the goal of the piece is.

What is the biggest misconception about VR — content, process or anything relating to VR? How do we convince people this isn’t stereo 3D?
Oh boy. Let me tell you, that’s a tough one. People don’t even know that “3D” is really “stereography.”

Experience 360°: CEO Ryan Moore

What is the biggest issue with VR productions at the moment? Is it lack of standards?
One of the biggest issues plaguing the current VR production landscape is the lack of true professionals in the field. While a vast majority of independent filmmakers are doing their best to adapt their current techniques, they have been unsuccessful in perceiving how films and VR experiences genuinely differ. This apparent lack of virtual understanding generally leads to poor UX creation in finalized VR products.

Given the novelty of virtual reality and 360 video, standards are only just being determined in terms of minimum quality and image specifications, and these are constantly changing. In order to keep a finger on the pulse, VR companies are encouraged to stay plugged into 360 video communities through social media platforms. It is through this essential interaction that they can keep up as VR production technology is continually reintroduced.

What should clients ask of their production and post teams when embarking on their VR project?
When first embarking on a VR project, it is highly beneficial to walk prospective clients through the entirety of the process, before production actually begins. This allows the client a full understanding of how the workflow is used, while also ensuring client satisfaction with the eventual partnership. It’s vital that production partners convey an ultimate understanding of VR and its use, and explain their tactics in “cutting” VR scenes in post — this can affect the user’s experience in a pronounced way.

‘The Backwoods Tennessee VR Experience’ via Experience 360.

What is the biggest misconception about VR — content, process or anything relating to VR? How do we convince people that this isn’t stereo 3D?
The biggest misconception about VR and 360 video is that they are an offshoot of traditional storytelling and can be used in ways similar to the cinematic and documentary worlds. The mistake a VR producer makes in equating the two is that it can limit the potential of the user’s experience to that of a voyeur only. Content producers need to think much further outside this box and begin to embrace pairing images with interaction and interactivity. It helps to keep in mind that the intended user will feel as if these VR experiences are very personal to them, because they are usually isolated in an HMD when viewing the final product.

VR is being met with appropriate skepticism and is still widely considered a “fad” within the media landscape. This is often because the critic has not actually had a chance to try a virtual reality experience firsthand and does not understand the wide-reaching potential of immersive media. Three years in, a majority of adults in the United States have never had a chance to try VR themselves, relying instead on what they glean from TV commercials and online reviews. One of the best ways to convince a doubtful viewer is to give them a chance to try a VR headset themselves.

Radeon Technologies Group at AMD: Head of VR James Knight

What is the biggest issue with VR productions at the moment? Is it lack of standards?
The biggest issue for us is (or was) probably stitching and the excessive amount of time it takes, but we’re tackling that head-on with Project Loom. We have realtime stitching with Loom, and you can already download an early version of it on GPUOpen.com. But you’re correct, there is a lack of standards in VR/360 production. It’s mainly because there are no really established common practices. That’s to be expected, though, when you’re shooting for a new medium. Hollywood and entertainment professionals are showing up to the space in a big way, so I suspect we’ll all be working out lots of the common practices on sets in 2017.

James Knight

What should clients ask of their production and post teams when embarking on their VR project?
Double-check that they have experience shooting 360, and ask them for a detailed post production pipeline outline. Occasionally, we hear horror stories of people awarding projects to companies that think they can shoot 360 without ever having personally explored 360 shooting and made mistakes. You want to use an experienced crew that’s made the mistakes and is cognizant of what works and what doesn’t. The caveat there, though, is that, again, there are no established rules necessarily, so people should be willing to try new things… sometimes it takes someone not knowing they shouldn’t do something to discover something great, if that makes sense.

What is the biggest misconception about VR — content, process or anything relating to VR? How do we convince people this isn’t stereo 3D?
That’s a fun question. The overarching misconception for me, honestly, is that, just as a cliché politician might make a fleeting judgment that video games are bad for society, people oftentimes assume that VR is for kids or 16-year-old boys at home in their boxer shorts. It isn’t. This young industry is really starting to build up a decent library of content, and the payoff is huge when you see well-produced content! It’s transformative, and you can genuinely envision the potential when you first put on a VR headset.

The biggest way to convince them this isn’t 3D is to convince a naysayer to put the headset on… let’s agree we all look rather silly with a VR headset on, and once you get over that, you’ll find out what’s inside. It’s magical. I had the CEO of BAFTA LA, Chantal Rickards, tell me upon seeing VR for the first time, “I remember when my father arrived home on Christmas Eve with a color TV set in the 1960s and the excitement that brought to me and my siblings. The thrill of seeing virtual reality for the first time was like seeing color TV for the first time, but times 100!”

Missing Pieces: Head of AR/VR/360 Catherine Day

Catherine Day

What is the biggest issue with VR productions at the moment?
The biggest issue with VR production today is the fact that everything keeps changing so quickly. Every day there’s a new camera, a new set of tools, a new proprietary technology and new formats to work with. It’s difficult to understand how all of these things work, and even harder to make them work together seamlessly in a deadline-driven production setting. So much of what is happening on the technology side of VR production is evolving very rapidly. Teams often reinvent the wheel from one project to the next as there are endless ways to tell stories in VR, and the workflows can differ wildly depending on the creative vision.

The lack of funding for creative content is also a huge issue. There’s ample funding to create in other mediums, and we need more great VR content to drive consumer adoption.

Is it lack of standards?
In any new medium and any pioneering phase of an industry, it’s dangerous to create standards too early. You don’t want to stifle people from trying new things. As an example, with our recent NBA VR project, we broke all of the conventional rules that exist around VR — there was a linear narrative, fast-cut edits, it was over 25 minutes long — yet it was still very well received. So it’s not a lack of standards, just a lack of bravery.

What should clients ask of their production and post teams when embarking on their VR project?
Ask to see what kind of work the team has done in the past. They should also delve in and find out exactly who completed the work and how much of it, if any, was outsourced. There is a curtain between the client and the production/post company that often closes once the work is awarded. Clients need to know exactly who is working on their project, as much of the legwork involved in creating a VR project — stitching, compositing, etc. — is outsourced.

It’s also important to work with a very experienced post supervisor — one with a very discerning eye. You want someone who really knows VR and can evaluate every aspect of what a facility will assemble. From stitching and compositing to editorial and color, the level of attention to detail and quality control for VR is paramount. This is key not only for current releases, but as technology evolves — and as new standards and formats are applied — you want your content to be as future-proofed as possible, so that if it requires a re-render to accommodate a new, higher-res format down the road, it will still hold up and look fantastic.

What is the biggest misconception about VR — content, process or anything relating to VR?
On the consumer level, the biggest misconception is that 360 video on YouTube or Facebook is VR. Another misconception is that regular filmmakers are the creative talents best suited to create VR content. Many of them are great at it, but traditional filmmakers have the luxury of being in control of everything, while in a VR production setting you have no box to work in and have to think about a billion moving parts at once. So it either requires a creative who is good with improvisation, or a complete control freak with eyes in the back of their head. It’s been said before, but film and theater are as different as film and VR. Another misconception is that you can take any story and tell it in VR — you should only embark on telling a story in VR if it can, in some way, be elevated by the medium.

How do we convince people this isn’t stereo 3D?
With stereo 3D, there was no simple, affordable path for consumer adoption. We’re still getting there with VR, but today there are a number of options for consumers and soon enough there will be a demand for room-scale VR and more advanced immersive technologies in the home.

AMD’s Radeon Pro WX series graphics cards shipping this month

AMD is getting ready to ship its Radeon Pro WX Series of graphics cards, the company’s new workstation graphics solutions targeting creative pros. The Radeon Pro WX Series is AMD’s answer to the rise of realtime game engines in professional settings, the emergence of virtual reality, the popularity of new low-overhead APIs (such as DirectX 12 and Vulkan) and the rise of open-source tools and applications.

The Radeon Pro WX Series takes advantage of Polaris-architecture-based GPUs featuring fourth-generation Graphics Core Next (GCN) technology and engineered on the 14nm FinFET process. The cards have future-proof monitor support, are able to run a 5K HDR display via DisplayPort 1.4, include state-of-the-art multimedia IP with support for HEVC encoding and decoding and TrueAudio Next for VR, and feature cool, quiet operation with an emphasis on energy efficiency. Each retail Radeon Pro WX graphics card comes with 24/7 VIP customer support, a three-year limited warranty and a free, optional seven-year extended limited warranty upon product and customer registration.

Available November 10 for $799, the Radeon Pro WX 7100 graphics card offers 5.7 TFLOPS of single-precision floating point performance in a single slot and is designed for professional VR content creators. Equipped with 8GB of GDDR5 memory and 36 compute units (2,304 stream processors), the Radeon Pro WX 7100 targets high-quality visualization workloads.

Also available on November 10, for $399, the Radeon Pro WX 4100 graphics card targets CAD professionals. The Pro WX 4100 breaks the 2 TFLOPS single-precision compute performance barrier. With 4GB of GDDR5 memory and 16 compute units (1,024 stream processors), users can drive four 4K monitors or a single 5K monitor at 60Hz, a feature that competing low-profile CAD-focused cards in its class can’t touch.

Available November 18 for $499, the Radeon Pro WX 5100 graphics card offers 3.9 TFLOPS of single-precision compute performance while using just 75 watts of power. The Radeon Pro WX 5100 features 8GB of GDDR5 memory and 28 compute units (1,792 stream processors), suited for high-resolution realtime visualization in industries such as automotive and architecture.

In addition, AMD recently introduced Radeon Pro Software Enterprise drivers, designed to combine AMD’s next-gen graphics with the specific needs of pro enterprise users. Radeon Pro Software Enterprise drivers offer predictable software release dates, with updates issued on the fourth Thursday of each calendar quarter, and feature prioritized support with AMD working with customers, ISVs and OEMs. The drivers are certified in numerous workstation applications covering the leading professional use cases.

AMD says it’s also committed to furthering open source software for content creators. Following news that later this year AMD plans to open source its physically-based rendering engine Radeon ProRender, the company recently announced that a future release of Maxon’s Cinema 4D application for 3D modeling, animation and rendering will support Radeon ProRender. Radeon ProRender plug-ins are available today for many popular 3D content creation apps, including Autodesk 3ds Max and Maya, and as beta plug-ins for Dassault Systèmes SolidWorks and Rhino. Radeon ProRender works across Windows, MacOS and Linux and supports AMD GPUs, CPUs and APUs as well as those of other vendors.

A look at the new AMD Radeon Pro SSG card

By Dariush Derakhshani

My first video card review, more than 14 years ago, covered the ATI FireGL 8800. It was one of the first video cards that could drive two monitors on its own, which to me was a revolution. Up until then I had to jam two 3DLabs Oxygen VX1 cards in my system (one AGP and the other PCI) and wrestle them to handle OpenGL with Maya 4.0 running on two screens. It was either that or sit in envy as my friends taunted me with their two-screen setups, like waving a cupcake in front of a fat kid (me).

Needless to say, two cards were not ideal, and the 128MB ATI FireGL 8800 was a huge shift in how I built my own systems from then on. Fourteen years later, I’m fatter, balder and have two 27-inch HP screens sitting on my desk (one at 4K) that are always hungry for new video cards. I run multiple applications at once, and I demand to push around a lot of geometry as fast as possible. And now I’m even rendering a fair amount on the GPU, so my video card is ever more the centerpiece of my home-built rigs.

So when I stopped by AMD’s booth at SIGGRAPH 2016 in Anaheim recently, I was quite interested in what AMD’s John Swinimer had to say about the announcements the company was making at the show. (AMD acquired ATI in 2006.)

First, I’m just going to jump right into what got me the most wide-eyed, and that is the announcement of the AMD Radeon Pro SSG. This professional card mates a 1TB SSD to the frame buffer of the video card, giving you a huge boost in how much the GPU system can load into memory. Keep in mind that professional-card frame buffers range from about 4GB in entry-level cards up to 24-32GB in super-high-end cards, so 1TB is a huge number to be sure.

One of the things that slows down GPU rendering the most is having to flush and reload textures in the frame buffer, so the idea of having a 1TB frame buffer is intriguing, to say the least (i.e. a lot of drooling). In its press release, AMD mentions that “8K raw video timeline scrubbing was accelerated from 17 frames per second to a stunning 90+ frames per second” in the first demonstration of the Radeon Pro SSG.

Details are still forthcoming, but two PCIe 3.0 m.2 slots on the SSG card can get us up to 1TB of frame buffer. But the question is, how fast will it be? In traditional SSD drives, m.2 enjoys a large bandwidth advantage over regular SATA drives as long as it can access the PCIe bus directly. Things are different if the SSG card is an island unto itself, with the storage bandwidth contained on the card, so it’s unclear how the m.2 bus on the SSG card will do in communicating with the GPU directly. I tend to doubt we’ll see the same bandwidth between GDDR5 memory and an on-board m.2 card, but only real-world testing will be able to suss that out.
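For rough context, here is the back-of-the-envelope comparison behind that doubt (assuming each m.2 slot rides a PCIe 3.0 x4 link and using typical WX 7100-class memory numbers; AMD hadn’t published final SSG specs):

$$\underbrace{2 \times 4 \times 0.985\ \text{GB/s}}_{\text{two PCIe 3.0 x4 m.2 slots}} \approx 7.9\ \text{GB/s} \qquad \text{vs.} \qquad \underbrace{256\ \text{bit} \times 7\ \text{Gbit/s} \div 8}_{\text{GDDR5, WX 7100-class}} = 224\ \text{GB/s}$$

That’s a gap of well over an order of magnitude, which is why the on-board SSD is better thought of as a huge, slower tier behind GDDR5 than as a replacement for it.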

But I believe we’ll immediately see great speed improvements in GPU rendering of huge datasets, since the SSG will circumvent the offloading and reloading times between the GPU and CPU memories, and it could potentially boost multi-frame GPU rendering of CG scenes as well. In cases where the graphics subsystem doesn’t need to load more than a dozen or so gigabytes of data, though, on-board GDDR5 memory will certainly still have an edge in communication speed with the GPU.

So, needless to say (but I’m going to say it anyway), I am very much looking forward to slapping one of these into my rig to test GPU render times, as well as interactivity with large datasets in Maya and 3ds Max. As long as the Radeon Pro SSG can avoid hitting up the CPU and main system memory, GPU render gains should be quite large on the whole.

Wait, There’s More
On to other AMD announcements at the show: the affordable Radeon Pro WX line-up (due in the fourth quarter of 2016), refreshing the FirePro-branded line. The Radeon Pro WX cards are based on AMD’s RX consumer cards (like the RX 480), but with higher-level professional driver support and certification with professional apps. The end goal of professional work is stability as well as performance, and AMD promises a great dedicated support system around the Radeon Pro line to give us professionals the warm and fuzzies we always need over consumer-level cards.

The top-of-the-line Radeon Pro WX 7100 features 8GB of 256-bit memory and workstation-class performance at less than $1,000, and I believe it replaces the FirePro W8100. This puts the four-simultaneous-display-capable WX 7100 in line to compete with the Nvidia Quadro M4000 in pricing at least, if not in specs as well. But it’s hard to say where the WX 7100 will sit on performance; I do hope it’s somewhere in between the Quadro M4000 and the $1,800 M5000. It’s difficult to judge from paper specs, as (OpenCL) compute unit counts and CUDA core counts are hard to compare.

The 8GB Radeon Pro WX 5100 and 4GB WX 4100 round out the new announcements from SIGGRAPH 2016, putting them in line to compete somewhere among the 8GB Quadro M4000, 4GB M2000 and K1200 cards in performance. It seems, though, that AMD’s top of the line will still be the $3,400+ FirePro W9100 with 16GB of memory, though a 32GB version is also available.

I have always thought AMD offered a really good price-to-performance ratio, and it seems like the Radeon Pro WX line will continue that tradition. I look forward to benchmarking these cards in real-world CG use.

Dariush Derakhshani is a professor and VFX supervisor in the Los Angeles area and author of Maya and 3ds Max books and videos. He is bald and has flat feet.

Today: AMD/Radeon event at SIGGRAPH introducing Capsaicin graphics tech

At the SIGGRAPH 2016 show, AMD will webcast a live showcase of new creative graphics solutions during its “Capsaicin” event for content creators. Taking place today at 6:30pm PDT, it’s hosted by Radeon Technologies Group SVP and chief architect Raja Koduri.

The Capsaicin event at SIGGRAPH will showcase advancements in rendering and interactive experiences. The event will feature:
▪ Guest speakers sharing updates on new technologies, tools and workflows.
▪ The latest in virtual reality with demonstrations and technology announcements.
▪ Next-gen graphics products and technologies for both content creation and consumption, powered by the Polaris architecture.

A realtime video webcast of the event will be accessible from the AMD channel on YouTube, where a replay of the webcast can be accessed a few hours after the conclusion of the live event. It will be available for one year after the event.

For more info on the Capsaicin event and live feed, click here.

AMD offering FireRender plug-in for 3ds Max

AMD, maker of the FirePro line of graphics cards and engines, has released a free software-based rendering plug-in, FireRender for Autodesk 3ds Max, designed for content creators with 4K workflows who are looking for photorealistic rendering. FireRender for Max offers physically accurate raytracing and comes with an extensive material library.

AMD FireRender is built on OpenCL 1.2, which means it can run on a wide range of hardware. It also provides a CPU backend, so FireRender can run on the GPU, the CPU, CPU+GPU, or a variety of combinations of multiple CPUs and GPUs. Within FireRender, integrated materials are editable as nodes in the 3ds Max Slate Material Editor. There is also ActiveShade viewport integration, which means you can work with FireRender in realtime and see your changes as you make them. Physically correct materials and lighting help with true design decisions via global illumination — including caustics. Emissive and photometric lighting, as well as light from HDRI environments, enable artists to blend a scene in with its surroundings.
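Because the renderer is built on OpenCL rather than a vendor-specific API, any device the system’s OpenCL runtime exposes is a candidate backend. A quick way to see what a given machine offers (a generic sketch using the third-party pyopencl package; this is an illustration, not part of FireRender itself):

```python
import pyopencl as cl  # pip install pyopencl

# Enumerate every OpenCL platform and device the system exposes.
# Each device (CPU or GPU) is a potential compute backend for an
# OpenCL-based renderer.
for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.version})")
    for device in platform.get_devices():
        kind = cl.device_type.to_string(device.type)
        print(f"  {kind}: {device.name} | "
              f"{device.max_compute_units} compute units | "
              f"{device.global_mem_size // (1024 ** 2)} MB global memory")
```

On a typical workstation this prints one platform per vendor driver, with the discrete GPU and the host CPU listed as separate devices — which is exactly what makes mixed CPU+GPU rendering possible.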

AMD says to keep an eye out for other upcoming free software plug-ins for other animation software, including Autodesk Maya and Rhino.

 

In other AMD news, at the NAB show last month, the company introduced the AMD FirePro W9100 32GB workstation graphics card designed for large asset workflows with creative applications. It will be available in Q2 of this year. The FirePro W9100 16GB is currently available.

NAB: AMD intros FirePro workstation graphics card, FireRender plug-in for 3ds Max

At the 2016 NAB Show, AMD has introduced the AMD FirePro W9100 32GB, a workstation graphics card with 32GB memory support for large asset workflows with creative applications. The company also introduced the AMD FireRender plug-in for Autodesk 3ds Max (shown), which enables VR storytellers to use enhanced 4K workflows and photorealistic rendering functionality.

Throughout the show, StudioXperience’s AMD FirePro GPU Zone will feature leading applications in demos of efficient content creation workloads with high visual quality, application responsiveness and compute performance. The zone showcases solutions from Adobe, Apple, Autodesk, Avid, Blackmagic Design, Dell, HP and Rhino, offering attendees a range of hands-on workflow experiences powered by AMD FirePro professional graphics. Demos include a VR production workflow, computer-aided engineering and visualization, and 4K workflows, among others.

Dell embraces VR via Precision Towers

It’s going to be hard to walk the floor at NAB this year without being invited to demo some sort of virtual reality experience. More and more companies are diving in and offering technology that optimizes the creation and viewing of VR content. Dell is one of the latest to jump in.

Dell has been working closely on this topic with its hardware and software partners, and is formalizing its commitment to the future of VR by offering solutions that are optimized for VR consumption and creation alongside the mainstream professional ISV apps used by industry pros.

Dell has introduced new recommended minimum system hardware configurations to support an optimal VR experience for pro users with HTC Vive or Oculus Rift VR solutions. Whether users are consuming or creating VR content, the VR-ready solutions meet three criteria: minimum CPU, memory and graphics requirements to support VR viewing experiences; graphics drivers that are qualified to work with these solutions; and performance tests conducted by the company using test criteria based on HMD (head-mounted display) suppliers, ISVs or third-party benchmarks.

Dell has also upgraded its Dell Precision Tower line, with increased performance, graphics and memory for VR content creation. The refreshed Dell Precision Tower 5810, 7810 and 7910 workstations and rack 7910 have been upgraded with new Intel Broadwell-EP processors that offer more cores and more performance for the multi-threaded applications that support professional modeling, analysis and calculations.

Additional upgrades include the latest pro graphics technology from AMD and Nvidia, Dell Precision Ultra-Speed PCIe drives with up to 4x faster performance than traditional SATA SSD storage, and up to 1TB of DDR4 memory running at 2400MHz.

NAB Day 1: Me, myself and Monday

By William Rogers

Let’s dive right into the craziness.

RED sat me down with the other members of the press in a comfortably dark theater and blasted my face with demo footage from its new Weapon cameras. There was a bit of awkwardness in the air between the RED representatives and the press members — RED admitted that it hadn’t done this sort of sleek, private reveal at NAB before.
