
An artist’s view of SIGGRAPH 2019

By Andy Brown

While I’ve been lucky enough to visit NAB and IBC several times over the years, this was my first SIGGRAPH. Of course, there are similarities. There are lots of booths, lots of demos, lots of branded T-shirts, lots of pairs of black jeans and a lot of beards. I fit right in. I know we’re not all the same, but we certainly looked like it. (The stats regarding women and diversity in VFX are pretty poor, but that’s another topic.)


You spend your whole career in one industry and I guess you all start to look more and more like each other. That’s partly the problem for the people selling stuff at SIGGRAPH.

There were plenty of compositing demos of all sorts of software. (Blackmagic was running a hands-on class for 20 people at a time.) I’m a Flame artist, so I think that Autodesk’s offering is best, obviously. Everyone’s compositing tool can play back large files and color correct, composite, edit, track and deliver, so in the midst of a buzzy trade show, the differences feel far fewer than the similarities.

Mocap
Take the world of tracking and motion capture as another example. There were more booths demonstrating tracking and motion capture than anything else in the main hall, and all that tech came in different shapes and sizes and an interesting mix of hardware and software.

The motion capture solution required for a Hollywood movie isn’t the same as the one to create a live avatar on your phone, however. That’s where it gets interesting. There are solutions that can capture and translate the movement of everything from your fingers to your entire body using hardware from an iPhone X to a full 360-camera array. Some solutions used tracking ball markers, some used strips in the bodysuit and some used tiny proximity sensors, but the results were all really impressive.

Vicon

Some tracking solution companies had different versions of their software and hardware. If you don’t need all of the cameras and all of the accuracy, then there’s a basic version for you. But if you need everything to be perfectly tracked in real time, then go for the full-on pro version with all the bells and whistles. I had a go at live-animating a monkey using just my hands, and apart from ending with him licking a banana in a highly inappropriate manner, I think it worked pretty well.

AR/VR
AR and VR were everywhere, too. You couldn’t throw a peanut across the room without hitting someone wearing a VR headset. They’d probably be able to bat it away whilst thinking they were Joe Root or Max Muncy (I had to Google him), with the real peanut being replaced with a red or white leather projectile. Haptic feedback made a few appearances, too, so expect to be able to feel those virtual objects very soon. Some of the biggest queues were at the North stand, where the company was showing glasses that looked like the glasses everyone was wearing already (like mine, obviously), except that they incorporated a head-up display. I have mixed feelings about this. Google Glass didn’t last very long for a reason, although I don’t think North’s glasses have a camera in them, which makes things feel a bit more comfortable.

Nvidia

Data
One of the central themes for me was data, data and even more data. Whether you are interested in how to capture it, store it, unravel it, play it back or distribute it, there was a stand for you. This mass of data was being managed by really intelligent components and software. I was expecting to be writing all about artificial intelligence and machine learning from the show, and it’s true that there was a lot of software that used machine learning and deep neural networks to create things that looked really cool. Environments created using simple tools looked fabulously realistic because of deep learning. Basic pen strokes could be translated into beautiful pictures because of the power of neural networks. But most of that machine learning is in the background; it’s just doing the work that needs to be done to create the images, lighting and physical reactions that go to make up convincing and realistic images.

The Experience Hall
The Experience Hall was really great because no one was trying to sell me anything. It felt much more like an art gallery than a trade show. There were long waits for some of the exhibits (although not for the golf swing improver that I tried), and it was all really fascinating. I didn’t want to take part in the experiment that recorded your retina scan and made some art out of it, because, well, you know, it’s my retina scan. I also felt a little reluctant to check out the booth that made light-based animated artwork derived from your date of birth, time of birth and location of birth. But maybe all of these worries are because I’ve just finished watching the Netflix documentary The Great Hack. I can’t help but think that a better source of the data might be something a little less sinister.

The walls of posters back in the main hall described research projects that hadn’t yet made it into full production and gave more insight into what the future might bring. It was all about refinement, creating better algorithms, creating more realistic results. These uses of deep learning and virtual reality were applied to subjects as diverse as translating verbal descriptions into character design, virtual reality therapy for post-stroke patients, relighting portraits and haptic feedback anesthesia training for dental students. The range of the projects was wide. Yet everyone started from the same place, analyzing vast datasets to give more useful results. That brings me back to where I started. We’re all the same, but we’re all different.

Main Image Credit: Mike Tosti


Andy Brown is a Flame artist and creative director of Jogger Studios, a visual effects studio with offices in Los Angeles, New York, San Francisco and London.

Nvidia at SIGGRAPH with new RTX Studio laptops, more

By Mike McCarthy

Nvidia made a number of new announcements at the SIGGRAPH conference in LA this week. While the company didn’t have any new GPU releases, Nvidia was showing off new implementations of its technology — combining AI image analysis with raytracing acceleration for an Apollo 11-themed interactive AR experience. Nvidia has a number of new 3D software partners supporting RTX raytracing through its OptiX raytracing engine. It allows programs like Blender Cycles, KeyShot, Substance and Flame to further implement GPU acceleration, using RT cores for raytracing and Tensor cores for AI de-noising.

Nvidia was also showing off a number of new RTX Studio laptop models from manufacturers like HP, Dell, Lenovo and Boxx. These laptops all support Nvidia’s new unified Studio Driver, which, now on its third release, offers full 10-bit color support for all cards, blurring the feature-set lines between the GeForce and Quadro products. Quadro variants still offer more frame buffer memory, but support for the Studio Driver makes the GeForce cards even more appealing to professionals on a tight budget.

Broader support for 10-bit color makes sense as we move toward more HDR content that requires the higher bit depth, even at the consumer level. And these new Studio Drivers also support both desktop and mobile GPUs, which will simplify eGPU solutions that utilize both on a single system. So if you are a professional with a modern Nvidia RTX GPU, you should definitely check out the new Studio Driver options.
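
As a back-of-the-envelope illustration of why that matters, this short Python sketch (ours, not Nvidia’s) compares the quantization step of 8-bit and 10-bit channels; the coarser 8-bit steps are what show up as banding in smooth HDR gradients:

```python
# Toy illustration: quantization step size at 8-bit vs. 10-bit.
# Wider HDR luminance ranges spread code values across more stops,
# so the coarser 8-bit steps are more likely to band visibly.

def step_per_level(bits: int) -> float:
    """Fraction of the full signal range covered by one code value."""
    return 1.0 / (2 ** bits - 1)

for bits in (8, 10):
    print(f"{bits}-bit: {2 ** bits} levels per channel, "
          f"step = {step_per_level(bits):.5f} of full range")
# 8-bit:  256 levels, step = 0.00392
# 10-bit: 1024 levels, step = 0.00098 (four times finer)
```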

Nvidia is also promoting its cloud-based AI image-generating program GauGAN, which you can check out for free here. It is a fun toy, and there are a few potential uses in the professional world, especially for previz backgrounds and concept art.


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.


Maxon intros Cinema 4D R21, consolidates versions into one offering

By Brady Betzel

At SIGGRAPH 2019, Maxon introduced the next release of its graphics software, Cinema 4D R21. Maxon also announced a subscription-based pricing structure as well as a very welcome consolidation of its Cinema 4D versions into a single version, aptly titled Cinema 4D.

That’s right, no more Studio, Broadcast or BodyPaint. It all comes in one package at one price, and that pricing will now be subscription-based — but don’t worry, the online anxiety over this change seems to have been misplaced.

The cost of Cinema 4D R21 has dropped substantially, kicking off what Maxon is calling its “3D for the Real World” initiative. Maxon wants it to be the tool you choose for your graphics needs.

If you plan on upgrading every year or two, the new subscription-based model seems to be a great deal:

– Cinema 4D subscription paid annually: $59.99/month
– Cinema 4D subscription paid monthly: $94.99/month
– Cinema 4D subscription with Redshift paid annually: $81.99/month
– Cinema 4D subscription with Redshift paid monthly: $116.99/month
– Cinema 4D perpetual pricing: $3,495 (upgradeable)

Maxon did mention that if you have previously purchased Cinema 4D, there will be subscription-based upgrade/crossgrade deals coming.

The Updates
Cinema 4D R21 includes some great updates that will be welcomed by many users, both new and experienced. The new Field Force dynamics object allows the use of dynamic forces in modeling and animation within the MoGraph toolset. Caps and bevels have an all-new system that not only allows the extrusion of 3D logos and text effects but also means caps and bevels are integrated on all spline-based objects.

Furthering Cinema 4D’s integration with third-party apps, there is an all-new Mixamo Control rig allowing you to easily control any Mixamo characters. (If you haven’t checked out the models from Mixamo, you should. It’s a great way to find character rigs fast.)

An all-new Intel Open Image Denoise integration has been added to R21 in what seems like part of a rendering revolution for Cinema 4D. From the acquisition of Redshift to this integration, Maxon is expanding its third-party reach and doesn’t seem scared.

There is a new Node Space, which shows what materials are compatible with chosen render engines, as well as a new API available to third-party developers that allows them to integrate render engines with the new material node system. R21 has overall speed and efficiency improvements, with Cinema 4D supporting the latest processor optimizations from both Intel and AMD.

All this being said, my favorite update — or map toward the future — was actually announced last week. Unreal Engine added Cinema 4D .c4d file support via the Datasmith plugin, which is featured in the free Unreal Studio beta.

Today, Maxon is also announcing its integration with yet another game engine: Unity. In my opinion, the future lies in this mix of real-time rendering alongside real-world television and film production as well as gaming. With Cinema 4D, Maxon is bringing all sides to the table with a mix of 3D modeling, motion-graphics-building support, motion tracking, integration with third-party apps like Adobe After Effects via Cineware, and now integration with real-time game engines like Unreal Engine. Now I just have to learn it all.

Cinema 4D R21 will be available on both Mac OS and Windows on Tuesday, Sept. 3. In the meantime, watch out for some great SIGGRAPH presentations, including one from my favorite, Mike Winkelmann, better known as Beeple. You can find some past presentations on how he uses Cinema 4D to cover his “Everydays.”


SIGGRAPH making-of sessions: Toy Story 4, GoT, more

The SIGGRAPH 2019 Production Sessions program offers attendees a behind-the-scenes look at the making of some of the year’s most impressive VFX films, shows, games and VR projects. The 11 production sessions will be held throughout the conference week of July 28 through August 1 at the Los Angeles Convention Center.

Attendees will hear from creators who worked on projects such as Toy Story 4, Game of Thrones, The Lion King and First Man.

Other highlights include:

Swing Into Another Dimension: The Making of Spider-Man: Into the Spider-Verse
This production session will explore the art and innovation behind the creation of the Academy Award-winning Spider-Man: Into the Spider-Verse. The filmmaking team behind the first-ever animated Spider-Man feature film took significant risks to develop an all-new visual style inspired by the graphic look of comic books.

Creating the Immersive World of BioWare’s Anthem
The savage world of Anthem is volatile, lush, expansive and full of unexpected characters. Bringing these aspects to life in a realtime, interactive environment presented a wealth of problems for BioWare’s technical artists and rendering engineers. This retrospective panel will highlight the team’s work, alongside reflections on innovation and the successes and challenges of creating a new IP.

The VFX of Netflix Series
From the tragic tales of orphans to a joint force of super siblings to sinister forces threatening 1980s Indiana, the VFX teams on Netflix series have delivered some of the year’s best visuals. Creatives behind A Series of Unfortunate Events, The Umbrella Academy and Stranger Things will present the work and techniques that brought these worlds and characters into being.

The Making of Marvel Studios’ Avengers: Endgame
The fourth installment in the Avengers saga is the culmination of 22 interconnected films and has drawn audiences to witness the turning point of this epic journey. SIGGRAPH 2019 keynote speaker Victoria Alonso will join Marvel Studios, Digital Domain, ILM and Weta Digital as they discuss how the diverse collection of heroes, environments, and visual effects were assembled into this ultimate, climactic final chapter.

Space Explorers — Filming VR in Microgravity
Felix & Paul Studios, along with collaborators from NASA and the ISS National Lab, share insights from one of the most ambitious VR projects ever undertaken. In this session, the team will discuss the background of how this partnership came to be before diving into the technical challenges of capturing cinematic virtual reality on the ISS.

Production Sessions are open to conference participants with Select Conference, Full Conference or Full Conference Platinum registrations. The Production Gallery can be accessed with an Experiences badge and above.


Marvel Studios’ Victoria Alonso to keynote SIGGRAPH 2019

Marvel Studios executive VP of production Victoria Alonso has been named keynote speaker for SIGGRAPH 2019, which will run from July 28 through August 1 in downtown Los Angeles. Registration is now open. The annual SIGGRAPH conference is a melting pot for researchers, artists and technologists, among other professionals.

“Victoria is the ultimate symbol of where the computer graphics industry is headed and a true visionary for inclusivity,” says SIGGRAPH 2019 conference chair Mikki Rose. “Her outlook reflects the future I envision for computer graphics and for SIGGRAPH. I am thrilled to have her keynote this summer’s conference and cannot wait to hear more of her story.”

One of the few women in Hollywood to hold such a prominent title, Alonso has long been admired for her dedication to the industry, which has earned her multiple awards and honors, including the 2015 New York Women in Film & Television Muse Award for Outstanding Vision and Achievement, the Advanced Imaging Society’s Harold Lloyd Award (as its first female recipient) and the 2017 VES Visionary Award (another female first). A native of Buenos Aires, she began her career in visual effects, including a four-year stint at Digital Domain.

Alonso’s film credits include productions such as Ridley Scott’s Kingdom of Heaven, Tim Burton’s Big Fish, Andrew Adamson’s Shrek, and numerous Marvel titles — Iron Man, Iron Man 2, Thor, Captain America: The First Avenger, Iron Man 3, Captain America: The Winter Soldier, Captain America: Civil War, Thor: The Dark World, Avengers: Age of Ultron, Ant-Man, Guardians of the Galaxy, Doctor Strange, Guardians of the Galaxy Vol. 2, Spider-Man: Homecoming, Thor: Ragnarok, Black Panther, Avengers: Infinity War, Ant-Man and the Wasp and, most recently, Captain Marvel.

“I’ve been attending SIGGRAPH since before there was a line at the ladies’ room,” says Alonso. “I’m very much looking forward to having a candid conversation about the state of visual effects, diversity and representation in our industry.”

She adds, “At Marvel Studios, we have always tried to push boundaries with both our storytelling and our visual effects. Bringing our work to SIGGRAPH each year offers us the opportunity to help shape the future of filmmaking.”

The 2019 keynote session will be presented as a fireside chat, allowing attendees the opportunity to hear Alonso discuss her life and career in an intimate setting.


Reallusion intros three tools for mocap, characters

Reallusion has launched three new motion capture and character creation products: Character Creator 3, a stand-alone character creation tool; Motion Live, a realtime motion capture solution; and 3D Face Motion Capture with Live Face for iPhone X. With these products Reallusion is offering a total solution to build, morph, animate and gamify 3D characters.

Character Creator 3 (CC3), the new generation of iClone Character Creator, has separated from iClone to become a professional stand-alone tool. With a new quad base, roundtrip editing with ZBrush and photorealistic rendering using Iray, Character Creator 3 is a full character-creation solution for generating optimized 3D characters that are ready for games or intensive artistic design.

CC3 provides a new game character base with topology optimized for mobile, game and AR/VR developers. The big breakthrough is the integration of InstaLOD’s model and material optimization technologies to generate game-ready characters that are animatable on the fly, covering the complete character pipeline: polygon reduction, material merging, texture baking, remeshing and LOD generation.
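
To picture how such a generated LOD chain gets used downstream, here is a small, purely illustrative Python selector. The triangle counts and switch distances are invented for the example; nothing here is Reallusion or InstaLOD API:

```python
# Hypothetical LOD chain for one character: LOD0 (hero) .. LOD3 (far).
LOD_TRIANGLES = [40000, 12000, 3000, 800]
LOD_DISTANCES = [5.0, 15.0, 40.0]  # switch points in meters (made up)

def pick_lod(distance_m: float) -> int:
    """Choose the lightest mesh that still looks good at this distance."""
    for lod, threshold in enumerate(LOD_DISTANCES):
        if distance_m < threshold:
            return lod
    return len(LOD_DISTANCES)  # beyond the last threshold: farthest LOD

for d in (2.0, 10.0, 25.0, 100.0):
    lod = pick_lod(d)
    print(f"{d:>5} m -> LOD{lod} ({LOD_TRIANGLES[lod]} triangles)")
```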

CC3 launches this month and is available now for preorder for $199. More details can be found here. iClone Motion Live, the multidevice motion capture system, connects industry-standard motion gear — including Rokoko, Leap Motion, Xsens, Faceware, OptiTrack, Noitom and iPhone X — into one solution.

Motion Live’s intuitive plug-and-play design makes connecting complicated mocap devices simple by animating custom imported characters or fully rigged 3D characters generated by Character Creator, Daz Studio or other industry-standard sources.

Reallusion has also debuted 3D Face Motion Capture for iPhone X, which pairs with the Live Face app for iClone. As a result, users can record instant facial motion capture on any 3D character with an iPhone X. Reallusion has expanded the technology behind Animoji and Memoji to lift iPhone X animation and motion capture to the next level for studios and independent creators. The solution combines the power of iPhone X mocap with iClone Motion Live to blend face motion capture with Xsens, Perception Neuron, Rokoko, OptiTrack and Leap Motion for a truly realtime live experience in full-body mocap.


Our SIGGRAPH 2018 video coverage

SIGGRAPH is always a great place to wander around and learn about new and future technology. You can see amazing visual effects reels and learn how the work was created by the artists themselves. You can get demos of new products, and you can immerse yourself in a completely digital environment. In short, SIGGRAPH is educational and fun.

If you weren’t able to make it this year, or attended but couldn’t see it all, we would like to invite you to watch our video coverage from the show.



postPerspective Impact Award winners from SIGGRAPH 2018

postPerspective has announced the winners of our Impact Awards from SIGGRAPH 2018 in Vancouver. Seeking to recognize debut products with real-world applications, the postPerspective Impact Awards are voted on by an anonymous judging body made up of respected industry artists and professionals. It’s working pros who are going to be using new tools — so we let them make the call.

The awards honor innovative products and technologies for the visual effects, post production and production industries that will influence the way people work. They celebrate companies that push the boundaries of technology to produce tools that accelerate artistry and actually make users’ working lives easier.

While SIGGRAPH’s focus is on VFX, animation, VR/AR, AI and the like, the types of gear they have on display vary. Some are suited for graphics and animation, while others have uses that slide into post production, which makes these SIGGRAPH Impact Awards doubly interesting.

The winners are as follows:

postPerspective Impact Award — SIGGRAPH 2018 MVP Winner:

Nvidia generated a lot of buzz at the show, as well as a lot of votes from our team of judges, so our MVP Impact Award goes to the company for its Quadro RTX raytracing GPU.

postPerspective Impact Awards — SIGGRAPH 2018 Winners:

  • Maxon for its Cinema 4D R20 3D design and animation software.
  • StarVR for its StarVR One headset with integrated eye tracking.

postPerspective Impact Awards — SIGGRAPH 2018 Horizon Winners:

This year we have started a new Impact Award category. Our Horizon Award celebrates the next wave of impactful products being previewed at a particular show. At SIGGRAPH, the winners were:

  • Allegorithmic for its Substance Alchemist tool powered by AI.
  • OTOY and Epic Games for their OctaneRender 2019 integration with Unreal Engine 4.

And while these products and companies didn’t win enough votes for an award, our voters believe they do deserve a mention and your attention: Wrnch, Google Lightfields, Microsoft Mixed Reality Capture and Microsoft Cognitive Services integration with PixStor.

 


DeepMotion’s Neuron cloud app trains digital characters using AI

DeepMotion has launched presales for DeepMotion Neuron, the first tool for completely procedural, physical character animation. The cloud application trains digital characters to develop physical intelligence using advanced artificial intelligence (AI), physics and deep learning. With guidance and practice, digital characters can now achieve adaptive motor control just as humans do, in turn allowing animators and developers to create more lifelike and responsive animations than those possible using traditional methods.

DeepMotion Neuron is a behavior-as-a-service platform that developers can use to upload and train their own 3D characters, choosing from hundreds of interactive motions available via an online library. Neuron will enable content creators to tell more immersive stories by adding responsive actors to games and experiences. By handling large portions of technical animation automatically, the service also will free up time for artists to focus on expressive details.

DeepMotion Neuron is built on techniques identified by researchers from DeepMotion and Carnegie Mellon University who studied the application of reinforcement learning to the growing domain of sports simulation, specifically basketball, where real-world human motor intelligence is at its peak. After training and optimization, the researchers’ characters were able to perform interactive ball-handling skills in real-time simulation. The same technology used to teach digital actors how to dribble can be applied to any physical movement using Neuron.
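
For readers curious about the mechanics, the core idea of reinforcement-learned motor control can be shown with a toy policy-gradient (REINFORCE) loop. The one-dimensional “reach the target” task and Gaussian policy below are hypothetical stand-ins for illustration; DeepMotion’s physics-based training is far more involved:

```python
import numpy as np

# Toy REINFORCE loop: learn a 1-D "motor command" close to a target.
rng = np.random.default_rng(0)
target = 0.7          # the ideal command the agent must discover
mu, sigma = 0.0, 0.2  # Gaussian policy parameters (sigma kept fixed)
lr = 0.05
baseline = 0.0        # running average reward, reduces gradient variance

for step in range(2000):
    action = rng.normal(mu, sigma)
    reward = -(action - target) ** 2        # closer to target is better
    baseline += 0.01 * (reward - baseline)
    # d/dmu of log N(action; mu, sigma) is (action - mu) / sigma^2
    grad_log_pi = (action - mu) / sigma ** 2
    mu += lr * (reward - baseline) * grad_log_pi

print(f"learned mu = {mu:.3f} (target {target})")
```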

DeepMotion Neuron’s cloud platform is slated for release in Q4 of 2018. During the DeepMotion Neuron prelaunch, developers and animators can register on the DeepMotion website for early access and discounts.

SIGGRAPH: Nvidia intros Quadro RTX raytracing GPU

At SIGGRAPH, Nvidia announced its first Turing architecture-based GPUs, which enable artists to render photorealistic scenes in realtime, add new AI-based capabilities to their workflows and experience fluid interactivity with complex models and scenes.

The Nvidia Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000 enable hardware-accelerated raytracing, AI, advanced shading and simulation. Also announced was the Quadro RTX Server, a reference architecture for highly configurable, on-demand rendering and virtual workstation solutions from the datacenter.

“Quadro RTX marks the launch of a new era for the global computer graphics industry,” says Bob Pette, VP of professional visualization at Nvidia. “Users can now enjoy powerful capabilities that weren’t expected to be available for at least five more years. Designers and artists can interact in realtime with their complex designs and visual effects in raytraced photo-realistic detail. And film studios and production houses can now realize increased throughput with their rendering workloads, leading to significant time and cost savings.”

Quadro RTX GPUs are designed for demanding visual computing workloads, such as those used in film and video content creation, automotive and architectural design and scientific visualization.

Quadro RTX Server

Features include:
• New RT cores to enable realtime raytracing of objects and environments with physically accurate shadows, reflections, refractions and global illumination.
• Turing Tensor Cores to accelerate deep neural network training and inference, which are critical to powering AI-enhanced rendering, products and services.
• New Turing Streaming Multiprocessor architecture, featuring up to 4,608 CUDA cores, that delivers up to 16 trillion floating point operations in parallel with 16 trillion integer operations per second to accelerate complex simulation of real-world physics.
• Advanced programmable shading technologies to improve the performance of complex visual effects and graphics-intensive experiences.
• First implementation of ultra-fast Samsung 16Gb GDDR6 memory to support more complex designs, massive architectural datasets, 8K movie content and more.
• Nvidia NVLink to combine two GPUs with a high-speed link to scale memory capacity up to 96GB and drive higher performance with up to 100GB/s of data transfer.
• Hardware support for USB Type-C and VirtualLink, a new open industry standard being developed to meet the power, display and bandwidth demands of next-generation VR headsets through a single USB-C connector.
• New and enhanced technologies to improve performance of VR applications, including Variable-Rate Shading, Multi-View Rendering and VRWorks Audio.

The Quadro RTX Server combines Quadro RTX GPUs with new Quadro Infinity software (available in the 1st quarter of 2019) to deliver a flexible architecture to meet the demands of creative pros. Quadro Infinity will enable multiple users to access a single GPU through virtual workstations, dramatically increasing the density of the datacenter. End-users can also easily provision render nodes and workstations based on their specific needs.

Quadro RTX GPUs will be available starting in the 4th quarter. Pricing is as follows:
Quadro RTX 8000 with 48GB memory: $10,000 estimated street price
Quadro RTX 6000 with 24GB memory: $6,300 ESP
Quadro RTX 5000 with 16GB memory: $2,300 ESP

Siggraph: StarVR One’s VR headset with integrated eye tracking

StarVR was at SIGGRAPH 2018 with the StarVR One, its next-generation VR headset built to deliver the most lifelike VR experience possible. Featuring advanced optics, VR-optimized displays, integrated eye tracking and a vendor-agnostic tracking architecture, StarVR One is built from the ground up to support use cases in the commercial and enterprise sectors.

The StarVR One VR head-mounted display provides a nearly 100 percent human viewing angle — a 210-degree horizontal and 130-degree vertical field-of-view — and supports a more expansive user experience. Approximating natural human peripheral vision, StarVR One can support rigorous and exacting VR experiences such as driving and flight simulations, as well as tasks such as identifying design issues in engineering applications.

StarVR’s custom AMOLED displays serve up 16 million subpixels at a refresh rate of 90 frames per second. The proprietary displays are designed specifically for VR with a unique full-RGB-per-pixel arrangement to provide a professional-grade color spectrum for real-life color. Coupled with StarVR’s custom Fresnel lenses, the result is a clear visual experience within the entire field of view.

StarVR One automatically measures interpupillary distance (IPD) and instantly provides the best image adjusted for every user. Integrated Tobii eye-tracking technology enables foveated rendering, a technology that concentrates high-quality rendering only where the eyes are focused. As a result, the headset pushes the highest-quality imagery to the eye-focus area while maintaining the right amount of peripheral image detail.
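
The compositing idea behind foveated rendering can be sketched in a few lines of Python: render the periphery cheaply, render full resolution only around the gaze point and paste the two together. This is an illustration of the concept only, not StarVR’s or Tobii’s implementation:

```python
import numpy as np

H, W, FOVEA = 1080, 1920, 256  # frame size and half-width of the fovea

def composite_foveated(low_res, high_res, gaze_x, gaze_y):
    """Paste the full-res foveal crop over an upscaled peripheral frame."""
    frame = np.repeat(np.repeat(low_res, 4, axis=0), 4, axis=1)[:H, :W]
    x0 = int(np.clip(gaze_x - FOVEA, 0, W - 2 * FOVEA))
    y0 = int(np.clip(gaze_y - FOVEA, 0, H - 2 * FOVEA))
    frame[y0:y0 + 2 * FOVEA, x0:x0 + 2 * FOVEA] = \
        high_res[y0:y0 + 2 * FOVEA, x0:x0 + 2 * FOVEA]
    return frame

low = np.zeros((H // 4, W // 4, 3), dtype=np.uint8)  # quarter-res periphery
high = np.full((H, W, 3), 255, dtype=np.uint8)       # full-res render
out = composite_foveated(low, high, gaze_x=960, gaze_y=540)
print(out.shape)  # (1080, 1920, 3)
```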

StarVR One eye-tracking thus opens up commercial possibilities that leverage user-intent data for content gaze analysis and improved interactivity, including heat maps.

Two products are available with two different integrated tracking systems. The StarVR One is ready out of the box for the SteamVR 2.0 tracking solution. Alternatively, StarVR One XT is embedded with active optical markers for compatibility with optical tracking systems for more demanding use cases. It is further enhanced with ready-to-use plugins for a variety of tracking systems and with additional customization tools.

The StarVR One headset weighs 450 grams, and its ergonomic headband design evenly distributes this weight to ensure comfort even during extended sessions.

The StarVR software development kit (SDK) simplifies the development of new content or the upgrade of an existing VR experience to StarVR’s premium wide-field-of-view platform. Developers also have the option of leveraging the StarVR One dual-input VR SLI mode, maximizing the rendering performance. The StarVR SDK API is designed to be familiar to developers working with existing industry standards.

The development effort that culminated in the launch of StarVR One involved extensive collaboration with StarVR technology partners, which include Intel, Nvidia and Epic Games.

Siggraph: Chaos Group releases the open beta for V-Ray for Houdini

With V-Ray for Houdini now in open beta, Chaos Group is ensuring that its rendering technology can be used in every part of the VFX pipeline. With V-Ray for Houdini, artists can apply high-performance raytracing to all of their creative projects, connecting standard applications like Autodesk’s 3ds Max and Maya, and Foundry’s Katana and Nuke.

“Adding V-Ray for Houdini streamlines so many aspects of our pipeline,” says Grant Miller, creative director at Ingenuity Studios. “Combined with V-Ray for Maya and Nuke, we have a complete rendering solution that allows look-dev on individual assets to be packaged and easily transferred between applications.” V-Ray for Houdini was used by Ingenuity on the Taylor Swift music video for Look What You Made Me Do. (See our main image.) 

V-Ray for Houdini uses the same smart rendering technology introduced in V-Ray Next, including powerful scene intelligence, fast adaptive lighting and production-ready GPU rendering. V-Ray for Houdini includes two rendering engines – V-Ray and V-Ray GPU – allowing visual effects artists to choose the one that best takes advantage of their hardware.

V-Ray for Houdini, Beta 1 features include:
• GPU & CPU Rendering – High-performance GPU & CPU rendering capabilities for high-speed look development and final frame rendering.
• Volume Rendering – Fast, accurate illumination and rendering of VDB volumes through the V-Ray Volume Grid. Support for Houdini volumes and Mac OS are coming soon.
• V-Ray Scene Support – Easily transfer and manipulate the properties of V-Ray scenes from applications such as Maya and 3ds Max.
• Alembic Support – Full support for Alembic workflows including transformations, instancing and per object material overrides.
• Physical Hair – New Physical Hair shader renders realistic-looking hair with accurate highlights. Only hair as SOP geometry is supported currently.
• Particles – Drive shader parameters such as color, alpha and particle size through custom, per-point attributes.
• Packed Primitives – Fast and efficient handling of Houdini’s native packed primitives at render time.
• Material Stylesheets – Full support for material overrides based on groups, bundles and attributes. VEX and per-primitive string overrides such as texture randomization are planned for launch.
• Instancing – Supports copying any object type (including volumes) using Packed Primitives, Instancer and “instancepath” attribute.
• Light Instances – Instancing of lights is supported, with options for per-instance overrides of the light parameters and constant storage of light link settings.

To join the beta, check out the Chaos Group website.

V-Ray for Houdini is currently available for Houdini and Houdini Indie 16.5.473 and later. V-Ray for Houdini supports Windows, Linux and Mac OS.

2nd-gen AMD Ryzen Threadripper processors

At the SIGGRAPH show, AMD announced the availability of its 2nd-generation AMD Ryzen Threadripper 2990WX processor with 32 cores and 64 threads. These new AMD Ryzen Threadripper processors are built using 12nm “Zen+” x86 processor architecture. Second-gen AMD Ryzen Threadripper processors support the most I/O and are compatible with existing AMD X399 chipset motherboards via a simple BIOS update, offering builders a broad choice for designing the ultimate high-end desktop or workstation PC.

The 32-core/64-thread Ryzen Threadripper 2990WX and the 24-core/48-thread Ryzen Threadripper 2970WX are purpose-built for prosumers who crave raw computational power to dispatch the heaviest workloads. The 2nd-gen AMD Ryzen Threadripper 2990WX offers up to 53 percent faster multithread performance and up to 47 percent more rendering performance for creators than Intel’s Core i9-7980XE.

This new AMD Ryzen Threadripper X series comes with higher base and boost clocks for users who need high performance. The 16 cores and 32 threads in the 2950X model offer up to 41 percent more multithreaded performance than the Core i9-7900X.

Additional performance and value come from:
• AMD StoreMI technology: All X399 platform users will now have free access to AMD StoreMI technology, enabling configured PCs to load files, games and applications from a high-capacity hard drive at SSD-like read speeds.
• Ryzen Master Utility: Like all AMD Ryzen processors, the 2nd-generation AMD Ryzen Threadripper CPUs are fully unlocked. With the updated AMD Ryzen Master Utility, AMD has added new features, such as fast core detection both on die and per CCX; advanced hardware controls; and simple, one-click workload optimizations.
• Precision Boost Overdrive (PBO): A new performance-enhancing feature that allows multithreaded boost limits to be raised by tapping into extra power delivery headroom in premium motherboards.

With a simple BIOS update, all 2nd-generation AMD Ryzen Threadripper CPUs are supported by a full ecosystem of new motherboards and all existing X399 platforms. Designs are available from top motherboard manufacturers, including ASRock, ASUS, Gigabyte and MSI.

The 32-core, 64-thread AMD Ryzen Threadripper 2990WX is available now from global retailers and system integrators. The 16-core, 32-thread AMD Ryzen Threadripper 2950X processor is expected to launch on August 31, and the AMD Ryzen Threadripper 2970WX and 2920X models are slated for launch in October.

Dell EMC’s ‘Ready Solutions for AI’ now available

Dell EMC has made available its new Ready Solutions for AI, with specialized designs for Machine Learning with Hadoop and Deep Learning with Nvidia.

Dell EMC Ready Solutions for AI eliminate the need for organizations to individually source and piece together their own solutions. They offer a Dell EMC-designed and validated set of best-of-breed technologies for software — including AI frameworks and libraries — with compute, networking and storage. Dell EMC’s portfolio of services includes consulting, deployment, support and education.

Dell EMC’s Data Science Provisioning Portal offers an intuitive GUI that provides self-service access to hardware resources and a comprehensive set of AI libraries and frameworks, such as Caffe and TensorFlow. This reduces the steps it takes to configure a data scientist’s workspace to five clicks. Ready Solutions for AI’s distributed, scalable architecture offers the capacity and throughput of Dell EMC Isilon’s All-Flash scale-out design, which can improve model accuracy with fast access to larger data sets.

Dell EMC Ready Solutions for AI: Deep Learning with Nvidia solutions are built around Dell EMC PowerEdge servers with Nvidia Tesla V100 Tensor Core GPUs. Key features include Dell EMC PowerEdge R740xd and C4140 servers with four Nvidia Tesla V100 SXM2 Tensor Core GPUs; Dell EMC Isilon F800 All-Flash Scale-out NAS storage; and Bright Cluster Manager for Data Science in combination with the Dell EMC Data Science Provisioning Portal.

Dell EMC Ready Solutions for AI: Machine Learning with Hadoop includes an optimized solution stack, along with data science and framework optimization to get up and running quickly, and it allows expansion of existing Hadoop environments for machine learning.

Key features include Dell EMC PowerEdge R640 and R740xd servers; Cloudera Data Science Workbench for self-service data science for the enterprise; the Apache Spark open source unified data analytics engine; and the Dell EMC Data Science Provisioning Engine, which provides preconfigured containers that give data scientists access to the Intel BigDL distributed deep learning library on the Spark framework.

New Dell EMC Consulting services are available to help customers implement and operationalize the Ready Solution technologies and AI libraries, and scale their data engineering and data science capabilities. Dell EMC Education Services offers courses and certifications on data science and advanced analytics and workshops on machine learning in collaboration with Nvidia.

Ziva VFX 1.4 adds real-world physics to character creation

Ziva Dynamics has launched Ziva VFX 1.4, a major update that gives the company’s character-creation technology five new tools for production artists. With this update, creators can apply real-world physics to even more of the character creation process — muscle growth, tissue tension and the effects of natural elements, such as heavy winds and water pressure — while removing difficult steps from the rigging process.

Ziva VFX 1.4 combines the effects of real-world physics with the rapid creation of soft-tissue materials like muscles, fat and skin. By mirroring the fundamental properties of nature, users can produce CG characters that move, flex and jiggle just as they would in real life.

With External Forces, users are able to accurately simulate how natural elements like wind and water interact with their characters. Making a character’s tissue flap or wrinkle in the wind, ripple and wave underwater, or even stretch toward or repel away from a magnetic field can all be done quickly, in a physically accurate way.

New Pressure and Surface Tension properties can be used to “fit” fat tissues around muscles, augmenting the standard Ziva VFX anatomy tools. These settings allow users to remove fascia from a Ziva simulation while still achieving the detailed wrinkling and sliding effects that make humans and creatures look real.

Muscle growth can rapidly increase the overall muscle definition of a character or body part without requiring the user to remodel the geometry. A new Rest Scale for Tissue feature lets users grow or shrink a tissue object equally in all directions. Together, these tools improve collaboration between modelers and riggers while increasing creative control for independent artists.

Ziva VFX 1.4 also now features Ziva Scene Panel, which allows artists working on complex builds to visualize their work more simply. Ziva Scene Panel’s tree-like structure shows all connections and relationships between an asset’s objects, functions and layers, making it easier to find specific items and nodes within an Autodesk Maya scene file.

Ziva VFX 1.4 is available now as a Maya plug-in for Windows and Linux users.

Allegorithmic’s Substance Painter adds subsurface scattering

Allegorithmic has released the latest additions to its Substance Painter tool, targeted at VFX and game studios and pros who are looking for ways to create realistic lighting effects. Substance Painter enhancements include subsurface scattering (SSS), new projections and fill tools, improvements to the UX and support for a range of new meshes.

Using Substance Painter’s newly updated shaders, artists will be able to add subsurface scattering as a default option. Artists can add a Scattering map to a texture set and activate the new SSS post-effect. Skin, organic surfaces, wax, jade and any other translucent materials that require extra care will now look more realistic, with redistributed light shining through from under the surface.
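
The intuition is Beer-Lambert-style attenuation: the farther light travels inside a translucent material, the less of it emerges, which is why thin ears and nostrils glow while thicker areas stay opaque. A toy Python model (not Substance Painter’s actual SSS post-effect) makes the falloff concrete:

```python
import numpy as np

sigma_t = 8.0  # extinction per cm of material (higher = more opaque)
for depth_mm in (0.5, 1.0, 2.0, 4.0):
    transmittance = np.exp(-sigma_t * depth_mm * 0.1)  # mm -> cm
    print(f"{depth_mm} mm thick: {transmittance:.1%} of light gets through")
```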

The release also includes updates to projection and fill tools, beginning with the user-requested addition of non-square projection. Images can be loaded in both the projection and stencil tool without altering the ratio or resolution. Those projection and stencil tools can also disable tiling in one or both axes. Fill layers can be manipulated directly in the viewport using new manipulator controls. Standard UV projections feature a 2D manipulator in the UV viewport. Triplanar Projection received a full 3D manipulator in the 3D viewport, and both can be translated, scaled and rotated directly in-scene.
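
Triplanar projection itself is a standard technique: sample the texture from the three axis-aligned planes and blend by how squarely the surface normal faces each axis. A generic numpy sketch of that blend (not Allegorithmic’s code) looks like this:

```python
import numpy as np

def triplanar_weights(normal, sharpness=4.0):
    """Blend weights for the three planar projections, summing to 1."""
    w = np.abs(np.asarray(normal, dtype=float)) ** sharpness
    return w / w.sum()

def sample_triplanar(tex, position, normal):
    """Blend three planar lookups of the same texture at a surface point."""
    wx, wy, wz = triplanar_weights(normal)
    px, py, pz = position
    return (wx * tex(py, pz) +   # X plane: project along X, sample (y, z)
            wy * tex(px, pz) +   # Y plane: project along Y, sample (x, z)
            wz * tex(px, py))    # Z plane: project along Z, sample (x, y)

# A procedural checker stands in for a real image lookup:
checker = lambda u, v: (int(u * 4) + int(v * 4)) % 2
print(sample_triplanar(checker, position=(0.3, 0.6, 0.1),
                       normal=(0.0, 0.9, 0.44)))
```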

Along with the improvements to the artist tools, Substance Painter includes several updates designed to improve the overall experience for users of all skill levels. Consistency between tools has been improved, and additions like exposed presets in Substance Designer and a revamped, universal UI guide make it easier for users to jump between tools.

Additional updates include:
• Alembic support — The Alembic file format is now supported by Substance Painter, starting with mesh and camera data. Full animation support will be added in a future update.
• Camera import and selection — Multiple cameras can be imported with a mesh, allowing users to switch between angles in the viewport; previews of the framed camera angle now appear as an overlay in the 3D viewport.
• Full glTF support — Substance Painter now automatically imports and applies textures when loading glTF meshes, removing the need to import or adapt mesh downloads from Sketchfab.
• ID map drag-and-drop — Both materials and smart materials can be taken from the shelf and dropped directly onto ID colors, automatically creating an ID mask.
• Improved Substance format support — Improved tweaking of Substance-made materials and effects thanks to visible-if and embedded presets.

Quick Chat: Joyce Cox talks VFX and budgeting

Veteran VFX producer Joyce Cox has a long and impressive list of credits to her name. She got her start producing effects shots for Titanic and from there went on to produce VFX for Harry Potter and the Sorcerer’s Stone, The Dark Knight and Avatar, among many others. Along the way, Cox perfected her process for budgeting VFX for films and became a go-to resource for many major studios. She realized that the practice of budgeting VFX could be done more efficiently if there was a standardized way to track all of the moving parts in the life cycle of a project’s VFX costs.

With a background in the finance industry, combined with extensive VFX production experience, she decided to apply her process and best practices into developing a solution for other filmmakers. That has evolved into a new web-based app called Curó, which targets visual effects budgeting from script to screen. It will be debuting at Siggraph in Vancouver this month.

Ahead of the show, we reached out to find out more about her VFX producer background and her path to becoming the maker of a product designed to make other VFX pros’ lives easier.

You got your big break in visual effects working on the film Titanic. Did you know that it would become such an iconic landmark film for this business while you were in the throes of production?
I recall thinking the rough cut I saw in the early stage was something special, but had no idea it would be such a massive success.

Were there contacts made on that film that helped kickstart your career in visual effects?
Absolutely. It was my introduction into the visual effects community and offered me opportunities to learn the landscape of digital production and develop relationships with many talented, inventive people. Many of them I continued to work with throughout my career as a VFX producer.

Did you face any challenges as a woman working in below-the-line production in those early days of digital VFX?
It is a bit tricky. Visual effects is still a primarily male-dominated arena, and it is a highly competitive environment. I think what helped me navigate the waters is my approach. My focus is always on what is best for the movie.

Was there anyone from those days that you would consider a professional mentor?
Yes. I credit Richard Hollander, a gifted VFX supervisor/producer, with exposing me to the technology and methodologies of visual effects: how to conceptualize a VFX project and understand all the moving parts. I worked with Richard on several projects, producing the visual effects within digital facilities. Those experiences served me well when I moved to working on the production side, navigating the balance between the creative agenda, the approved studio budgets and the facility resources available.

You’ve worked as a VFX producer on some of the most notable studio effects films of all time, including X-Men 2, The Dark Knight, Avatar and The Jungle Book. Was there a secret to your success or are you just really good at landing top gigs?
I’d say my skills lie more in doing the work than finding the work. I believe I continued to be offered great opportunities because those I’d worked for before understood that I facilitated their goals of making a great movie. And that I remain calm while managing the natural conflicts that arise between creative desire and financial reality.

Describe what a VFX producer does exactly on a film, and what the biggest challenges are of the job.
This is a tough question. During pre-production, working with the director, VFX supervisor and other department heads, the VFX producer breaks down the movie into the digital assets, i.e., creatures, environments, matte paintings, etc., that need to be created, estimates how many visual effects shots are needed to achieve the creative goals and determines the VFX production crew required to support the project. Since no one knows exactly what will be needed until the movie is shot and edited, it is all theory.

During production, the VFX producer oversees the buildout of the communications, data management and digital production schedule that are critical to success. Also during production, the VFX producer evaluates what is being shot and tries to forecast potential changes to the budget or schedule.

Starting in production and going through post, the focus is on getting the shots turned over to digital facilities to begin work. This is challenging in that creative or financial changes can delay moving forward with digital production, compressing the window of time within which to complete all the work for release. Once everything is turned over, the focus switches to getting all the shots completed and delivered for the final assembly.

What film did you see that made you want to work in visual effects?
Truthfully, I did not have my sights set on visual effects. I’ve always had a keen interest in movies and wanted to produce them. It was really just a series of unplanned events, and I suppose my skills at managing highly complex processes drew me further into the world of visual effects.

Did having a background in finance help in any particular way when you transitioned into VFX?
Yes, before I entered into production, I spent a few years working in the finance industry. That experience has been quite helpful and perhaps is something that gave me a bit of a leg up in understanding the finances of filmmaking and the ability to keep track of highly volatile budgets.

You pulled out of active production in 2016 to focus on a new company. Tell me about Curó.
Because of my background in finance and accounting, one of the first things I noticed when I began working in visual effects was, unlike production and post, the lack of any unified system for budgeting and managing the finances of the process.  So, I built an elaborate system of worksheets in Excel that I refined over the years. This design and process served as the basis for Curó’s development.

To this day, the entire visual effects community manages its finances, which can run to tens, if not hundreds, of millions of dollars in spend, with spreadsheets. Add to that the fact that everyone’s document designs are different, which makes the job of collaborating, interpreting and managing facility bids unwieldy, to say the least.

Why do you think the industry needs Curó, and why is now the right time? 
Visual effects is the fastest-growing segment of the film industry, as demonstrated in the screen credits of VFX-heavy films. The majority of studio projects are these tentpole films, which heavily use visual effects. The volatility of visual effects finances can be managed more efficiently with Curó, and the language of VFX financial management across the industry would benefit greatly from a unified system.

Who’s been beta testing Curó, and what’s in store for the future, after its Siggraph debut?
We’ve had a variety of beta users over the past year. In addition to Sony and Netflix, a number of freelance VFX producers and supervisors, as well as VFX facilities, have beta access.

The first phase of the Curó release focuses on the VFX producers and studio VFX departments, providing tools for initial breakdown and budgeting of digital and overhead production costs. After Siggraph we will be continuing our development, focusing on vendor bid packaging, bid comparison tools and management of a locked budget throughout production and post, including the accounting reports, change orders, etc.

We are also talking with visual effects facilities about developing a separate but connected module for their internal granular bidding of human and technical resources.

 

SIGGRAPH conference chair Roy C. Anthony: VR, AR, AI, VFX, more

By Randi Altman

Next month, SIGGRAPH returns to Vancouver after turns in Los Angeles and Anaheim. This gorgeous city, whose convention center offers a water view, is home to many visual effects studios providing work for film, television and spots.

As usual, SIGGRAPH will host many presentations, showcase artists’ work, display technology and offer a glimpse into what’s on the horizon for this segment of the market.

Roy C. Anthony

Leading up to the show — which takes place August 12-16 — we reached out to Roy C. Anthony, this year’s conference chair. For his day job, Anthony recently joined Ventuz Technology as VP, creative development. There, he leads initiatives to bring Ventuz’s realtime rendering technologies to creators of sets, stages and ProAV installations around the world.

SIGGRAPH is back in Vancouver this year. Can you talk about why it’s important for the industry?
There are 60-plus world-class VFX and animation studios in Vancouver. There are more than 20,000 film and TV jobs, and more than 8,000 VFX and animation jobs in the city.

So, Vancouver’s rich production-centric communities are leading the way in film and VFX production for television and onscreen films. They are also busy with new media content, games work and new workflows, including those for AR/VR/mixed reality.

How many exhibitors this year?
The conference and exhibition will play host to over 150 exhibitors on the show floor, showcasing the latest in computer graphics and interactive technologies, products and services. Due to the increase in the amount of new technology that has debuted in the computer graphics marketplace over this past year, almost one quarter of this year’s 150 exhibitors will be presenting at SIGGRAPH for the first time.

In addition to the traditional exhibit floor and conferences, what are some of the can’t-miss offerings this year?
We have increased the presence of virtual, augmented and mixed reality projects and experiences — and we are introducing our new Immersive Pavilion in the east convention center, which will be dedicated to this area. We’ve incorporated immersive tech into our computer animation festival with the inclusion of our VR Theater, back for its second year, as well as inviting a special, curated experience with New York University’s Ken Perlin — he’s a legendary computer graphics professor.

We’ll be kicking off the week in a big VR way with a special session following the opening ceremony featuring Ivan Sutherland, considered by many as “the father of computer graphics.” That 50-year retrospective will present the history and innovations that sparked our industry.

We have also brought Syd Mead, a legendary “visual futurist” (Blade Runner, Tron, Star Trek: The Motion Picture, Aliens, Timecop, Tomorrowland, Blade Runner 2049), who will display an arrangement of his art in a special collection called Progressions. This will be seen within our Production Gallery experience, which also returns for its second year. Progressions will exhibit more than 50 years of artwork by Syd, from his academic years to his most current work.

We will have an amazing array of guest speakers, including those featured within the Business Symposium, which is making a return to SIGGRAPH after an absence of a few years. Among these speakers are people from the Disney Technology Innovation Group, Unity and Georgia Tech.

On Tuesday, August 14, our SIGGRAPH Next series will present a keynote speaker each morning to kick off the day with an inspirational talk. These speakers are Tony DeRose, a senior scientist from Pixar; Daniel Szecket, VP of design for Quantitative Imaging Systems; and Bob Nicoll, dean of Blizzard Academy.

There will be a 25th anniversary showing of the original Jurassic Park movie, hosted by Steve “Spaz” Williams, a digital artist who worked on that film.

Can you talk about this year’s keynote and why he was chosen?
We’re thrilled to have Rob Bredow, ILM’s head and senior VP/executive creative director, deliver the keynote address this year. Rob is all about innovation — pushing through scary new directions while maintaining the leadership of artists and technologists.

Rob is the ultimate modern-day practitioner, a digital VFX supervisor who has been disrupting ‘the way it’s always been done’ to move to new ways. He truly reflects the spirit of ILM, which was founded in 1975 and is just one year younger than SIGGRAPH.

A large part of SIGGRAPH is its slant toward students and education. Can you discuss how this came about and why this is important?
SIGGRAPH supports education in all sub-disciplines of computer graphics and interactive techniques, and it promotes and improves the use of computer graphics in education. Our Education Committee sponsors a broad range of projects, such as curriculum studies, resources for educators and SIGGRAPH conference-related activities.

SIGGRAPH has always been a welcoming and diverse community, one that encourages mentorship, and acknowledges that art inspires science and science enables advances in the arts. SIGGRAPH was built upon a foundation of research and education.

How are the Computer Animation Festival films selected?
The Computer Animation Festival has two programs, the Electronic Theater and the VR Theater. Because of the large volume of submissions for the Electronic Theater (over 400), there is a triage committee for the first phase. The CAF chair then takes the high-scoring pieces to a jury of industry professionals, and the jury’s selections become the Electronic Theater show pieces.

The selections for the VR Theater are made by a smaller panel, made up mostly of subcommittee members, who watch each film in a VR headset and vote.

Can you talk more about how SIGGRAPH is tackling AR/VR/AI and machine learning?
Since SIGGRAPH 2018 is about the theme of “Generations,” we took a step back to look at how we got where we are today in terms of AR/VR, and where we are going with it. Much of what we know today wouldn’t have been possible without Ivan Sutherland’s research and his creation of the 1968 head-mounted display. We have a fantastic panel celebrating the 50-year anniversary of his HMD, which is widely considered the first VR HMD.

AI tools are newer, and we created a panel that focuses on trends and the future of AI tools in VFX, called “Future Artificial Intelligence and Deep Learning Tools for VFX.” This panel gains insight from experts embedded in both the AI and VFX industries and gives attendees a look at how different companies plan to further their technology development.

What is the process for making sure that all aspects of the industry are covered in terms of panels?
Every year, new ideas for panels and sessions are submitted by contributors from all over the globe. Those submissions are then reviewed by a jury of industry experts, and it is through this process that panelists and cross-industry coverage are determined.

Each year, the conference chair oversees the program chairs, and each of the program chairs becomes part of a jury process; this helps ensure the best program, with the most industries represented across all disciplines.

In the rare case that a program committee feels it is missing something key in the industry, it can try to curate a panel in, but we still require that panel to be reviewed by subject-matter experts before it is considered for final acceptance.

 

Maxon debuts Cinema 4D Release 19 at SIGGRAPH

Maxon was at this year’s SIGGRAPH in Los Angeles showing Cinema 4D Release 19 (R19). This next generation of Maxon’s pro 3D app offers a new viewport and a new Sound Effector, and additional features for Voronoi Fracturing have been added to the MoGraph toolset. It also boasts a new Spherical Camera, the integration of AMD’s ProRender technology and more. Designed to serve individual artists as well as large studio environments, Release 19 offers a streamlined workflow for general design, motion graphics, VFX, VR/AR and all types of visualization.

With Cinema 4D Release 19, Maxon also introduced a few re-engineered foundational technologies, which the company will continue to develop in future versions. These include core software modernization efforts, a new modeling core, integrated GPU rendering for Windows and Mac, and OpenGL capabilities in BodyPaint 3D, Maxon’s pro paint and texturing toolset.

More details on the offerings in R19:
Viewport Improvements provide artists with added support for screen-space reflections and OpenGL depth-of-field, in addition to the screen-space ambient occlusion and tessellation features (added in R18). Results are so close to final render that client previews can be output using the new native MP4 video support.

MoGraph enhancements expand on Cinema 4D’s toolset for motion graphics with faster results and added workflow capabilities in Voronoi Fracturing, such as the ability to break objects progressively, add displaced noise details for improved realism or glue multiple fracture pieces together more quickly for complex shape creation. An all-new Sound Effector in R19 allows artists to create audio-reactive animations based on multiple frequencies from a single sound file.

The new Spherical Camera allows artists to render stereoscopic 360° virtual reality videos and dome projections. Artists can specify a latitude and longitude range, and render in equirectangular, cubic string, cubic cross or 3×2 cubic format. The new spherical camera also includes stereo rendering with pole smoothing to minimize distortion.

New Polygon Reduction works as a generator, so it’s easy to reduce entire hierarchies. The reduction is pre-calculated, so adjusting the reduction strength or desired vertex count is extremely fast. The new Polygon Reduction preserves vertex maps, selection tags and UV coordinates, ensuring textures continue to map properly and providing control over areas where polygon detail is preserved.

Level of Detail (LOD) Object features a new interface element that lets customers define and manage settings to maximize viewport and render speed, create new types of animations or prepare optimized assets for game workflows. Level of Detail data exports via the FBX 3D file exchange format for use in popular game engines.

AMD’s Radeon ProRender technology is now seamlessly integrated into R19, providing artists with a cross-platform GPU rendering solution. Though just the first phase of integration, it offers a useful glimpse of the power ProRender will eventually deliver as more features and deeper Cinema 4D integration are added in future releases.

Modernization efforts in R19 reflect Maxon’s development legacy and offer the first glimpse into the company’s planned ‘under-the-hood’ future efforts to modernize the software, as follows:

  • Revamped Media Core gives Cinema 4D R19 users a completely rewritten software core that increases speed and memory efficiency for image, video and audio formats. Native support for MP4 video without QuickTime makes it easier to preview renders, incorporate video as textures or motion track footage. Export of production formats, such as OpenEXR and DDS, has also been improved.
  • Robust Modeling offers a new modeling core whose improved support for edges and N-gons can be seen in the Align and Reverse Normals commands. More modeling tools and generators will directly use this new core in future versions.
  • BodyPaint 3D now uses an OpenGL painting engine, giving R19 artists who paint color and add surface details in film, game design and other workflows a realtime display of reflections, alpha, bump or normal mapping, and even displacement, for improved visual feedback and texture painting. Redevelopment efforts to improve the UV editing toolset in Cinema 4D continue, with the first fruits of this work available in R19: faster and more efficient options to convert point and polygon selections, grow and shrink UV point selections, and more.

Dell intros new Precision workstations, Dell Canvas and more

To celebrate the 20th anniversary of Dell Precision workstations, Dell announced additions to its Dell Precision fixed workstation portfolio, a special anniversary edition of its Dell Precision 5520 mobile workstation and the official availability of Dell Canvas, the new workspace device for digital creation.

Dell is showcasing its next-generation, fixed workstations at SIGGRAPH, including the Dell Precision 5820 Tower, Precision 7820 Tower, Precision 7920 Tower and Precision 7920 Rack, completely redesigned inside and out.

The three new Dell Precision towers combine a brand-new flexible chassis with the latest Intel Xeon processors, next-generation Radeon Pro graphics and highest-performing Nvidia Quadro professional graphics cards. Certified for professional software applications, the new towers are configured to complete the most complex projects, including virtual reality. Dell’s Reliable Memory Technology (RMT) Pro ensures memory challenges don’t kill your workflow, and Dell Precision Optimizer (DPO) tailors performance for your unique hardware and software combination.

The fully-customizable configuration options deliver the flexibility to tackle virtually any workload, including:

  • AI: The latest Intel Xeon processors are an excellent choice for artificial intelligence (AI), with agile performance across a variety of workloads, including machine learning (ML) and deep learning (DL) inference and training. If you’re just starting AI workloads, the new Dell Precision tower workstations allow you to use software optimized to your existing Intel infrastructure.
  • VR: The Nvidia Quadro GP100 powers the development and deployment of cognitive technologies like DL and ML applications. Additional Nvidia Pascal GPU options, like HBM2 memory and NVLink technology, allow professional users to create complex designs in computer-aided engineering (CAE) and experience lifelike VR environments.
  • Editing and playback: Radeon Pro SSG Graphics with HBM2 memory and 2TB of SSD onboard allows real-time 8K video editing and playback, high-performance computing of massive datasets, and rendering of large projects.

The Dell Precision 7920 Rack is ideal for secure, remote workers and delivers the same power and scalability as the highest-performing tower workstation in a 2U form factor.  The Dell Precision 5820, 7820, 7920 towers and 7920 Rack will be available for order beginning October 3.

“Looking back at 20 years of Dell Precision workstations, you get a sense of how the capabilities of our workstations, combined with certified and optimized software and the creativity of our awesome customers, have achieved incredible things,” said Rahul Tikoo, vice president and general manager for Dell Precision workstations. “As great as those achievements are, this new lineup of Dell Precision workstations enables our customers to be ready for the next big technology revolution that is challenging business models and disrupting industries.”

Dell Canvas

Dell has also announced its highly anticipated Dell Canvas, available now. Dell Canvas is a new workspace designed to make digital creation more natural. It features a 27-inch QHD touchscreen that sits horizontally on your desk and can be powered by your current PC ecosystem and the latest Windows 10 Creators Update. Additionally, a digital pen provides precise tactile accuracy, and the totem offers diverse menu and shortcut interaction.

For the 20th anniversary of Dell Precision, Dell is introducing a limited-edition anniversary model of its award-winning mobile workstation, the Dell Precision 5520. The Dell Precision 5520 Anniversary Edition is Dell’s thinnest, lightest and smallest mobile workstation, available for a limited time in hard-anodized aluminum with a brushed metallic finish in a brand-new Abyss color and an anti-fingerprint coating. The device is available now with two high-end configuration options.

Blackmagic’s Fusion 9 is now VR-enabled

At SIGGRAPH, Blackmagic was showing Fusion 9, its newly upgraded visual effects, compositing, 3D and motion graphics software. Fusion 9 features new VR tools, an entirely new keyer technology, planar tracking, camera tracking, multi-user collaboration tools and more.

Fusion 9 is available now at a new price point: Blackmagic has lowered the price of the Studio version from $995 to $299. (Blackmagic is also offering a free version of Fusion.) The software works on Mac, PC and Linux.

Those working in VR get a full 360° true 3D workspace, along with a new panoramic viewer and support for popular VR headsets such as Oculus Rift and HTC Vive. Working in VR with Fusion is completely interactive. GPU acceleration makes it extremely fast so customers can wear a headset and interact with elements in a VR scene in realtime. Fusion 9 also supports stereoscopic VR. In addition, the new 360° spherical camera renders out complete VR scenes, all in a single pass and without the need for complex camera rigs.

The new planar tracker in Fusion 9 calculates motion planes for accurately compositing elements onto moving objects in a scene. For example, the new planar tracker can be used to replace signs or other flat objects as they move through a scene. Planar tracking data can also be used on rotoscope shapes. That means users don’t have to manually animate motion, perspective, position, scale or rotation of rotoscoped elements as the image changes.

Fusion 9 also features an entirely new camera tracker that analyzes the motion of a live-action camera in a scene and reconstructs the identical motion path in 3D space for use with cameras inside of Fusion. This lets users composite elements with precisely matched movement and perspective of the original. Fusion can also use lens metadata for proper framing, focal length and more.

The software’s new delta keyer features a complete set of matte finesse controls for creating clean keys while preserving fine image detail. There’s also a new clean plate tool that can smooth out subtle color variations on blue- and greenscreens in live action footage, making them easier to key.

For multi-user collaboration, Fusion 9 Studio includes Studio Player, a new app that features a playlist, storyboard and timeline for playing back shots. Studio Player can track version history, display annotation notes, has support for LUTs and more. The new Studio Player is suited to customers who need to see shots in a suite or theater for review and approval. Remote synchronization lets artists sync Studio Players in multiple locations.

In addition, Fusion 9 features a bin server so shared assets and tools don’t have to be copied onto each user’s local workstation.

PNY’s PrevailPro mobile workstations feature 4K displays, are VR-capable

PNY has launched the PNY PrevailPro P4000 and P3000, thin and light mobile workstations. With their Nvidia Max-Q design, these innovative systems are designed from the Quadro GPU out.

“Our PrevailPro [has] the ability to drive up to four 4K UHD displays at once, or render vividly interactive VR experiences, without breaking backs or budgets,” says Steven Kaner, VP of commercial and OEM sales at PNY Technologies. “The increasing power efficiency of Nvidia Quadro graphics and our P4000-based P955 Nvidia Max-Q technology platform allow PNY to deliver professional performance and features in thin, light, cool and quiet form factors.”

P3000

PrevailPro features the Pascal architecture within the P4000 and P3000 mobile GPUs, with Intel Core i7-7700HQ CPUs and the HM175 Express chipset.

“Despite ever increasing mobility, creative professionals require workstation class performance and features from their mobile laptops to accomplish their best work, from any location,” says Bob Pette, VP, Nvidia Professional Visualization. “With our new Max-Q design and powered by Quadro P4000 and P3000 mobile GPUs, PNY’s new PrevailPro lineup offers incredibly light and thin, no-compromise, powerful and versatile mobile workstations.”

The PrevailPro systems feature either a 15.6-inch 4K UHD or FHD display – and the ability to drive three external displays (2x mDP 1.4 and HDMI 2.0 with HDCP), for a total of four simultaneously active displays. The P4000 version supports fully immersive VR, the Nvidia VRWorks software development kit and innovative immersive VR environments based on the Unreal or Unity engines.

With 8GB (P4000) or 6GB (P3000) of GDDR5 GPU memory, up to 32GB of DDR4 2400MHz DRAM, 512GB SSD availability, HDD options up to 2TB, a comprehensive array of I/O ports, and the latest Wi-Fi and Bluetooth implementations, PrevailPro is compatible with all commonly used peripherals and network environments — and provides pros with the interfaces and storage capacity needed to complete business-critical tasks. Depending on the use case, MobileMark 2014 testing projects that the embedded Li-polymer battery can reach five hours, over a lifetime of 1,000 charge/discharge cycles.

PrevailPro’s thin and light form factor measures 14.96×9.8×0.73 inches (379mm x 248mm x 18mm) and weighs 4.8 lbs.

 

Foundry’s Nuke and Hiero 11.0 now available

Foundry has made available Nuke and Hiero 11.0, the next major release for the Nuke line of products, including Nuke, NukeX, Nuke Studio, Hiero and HieroPlayer. The Nuke family is being updated to VFX Platform 2017, which includes several major updates to key libraries used within Nuke, including Python, PySide and Qt.

The update also introduces a new type of group node, which offers a powerful new collaborative workflow for sharing work among artists. Live Groups referenced in other scripts automatically update when a script is loaded, without the need to render intermediate stages.

Nuke Studio’s intelligent background rendering is now available in Nuke and NukeX. The Frame Server takes advantage of available resources on your local machine, enabling you to continue working while rendering happens in the background. The LensDistortion node has been completely revamped, with added support for fisheye and wide-angle lenses and the ability to use multiple frames to produce better results. Nuke Studio now has new GPU-accelerated disk caching that allows users to cache part or all of a sequence to disk for smoother playback of more complex sequences.

 

 

Quick Chat: SIGGRAPH’S production sessions chair Emily Hsu

With SIGGRAPH 2017 happening in LA next week, we decided to reach out to Emily Hsu, this year’s production sessions chair, to find out more about the sessions and the process of picking what to focus on. You can check out this year’s sessions here. By the way, Hsu’s day job is production coordinator at Portland, Oregon’s Laika Studios, so she comes at this from an attendee’s perspective.

How did you decide what panels to offer?
When deciding the production sessions line-up, my team and I consider many factors. One of the first is a presentation’s appeal to a wide range of SIGGRAPH attendees, which means that it strikes a nice harmony between the technical and the artistic. In addition, we consider the line-up as a whole. While we retain strong VFX and animated feature favorites, we also want to round out the show with new additions in VR, gaming, television and more.

Ultimately, we are looking for work that stands out — will it inspire and excite attendees? Does it use technology that is groundbreaking or apply existing technologies in a groundbreaking way? Has it received worthy praise and accolades? Does it take risks? Does it tell a story in a unique way? Is it something that we’ve never seen within the production sessions program before? And, of course, does it epitomize the conference theme: “At the Heart of Computer Graphics & Interactive Techniques?”

These must be presentations that truly get to the heart of a project — not just the obvious successes, but also the obstacles, struggles and hard work that made it possible for it all to come together.

How do you make sure there is a balance between creative workflow and technology?
With the understanding that Production Sessions’ subject matter is targeted toward a broad SIGGRAPH audience, the studios and panelists are really able to determine that balance.

Production Session proposals are often accompanied by varied line-ups of speakers from either different areas of the companies or different companies altogether. What’s especially incredible is when studio executives or directors are present on a panel and can speak to over-arching visions and goals and how everything interacts in the bigger picture.

These presentations often showcase the cross-pollination and collaboration that is needed across different teams. The projects are major undertakings by mid-to-large size crews that have to work together in problem solving, developing new systems and tools, and innovating new ways to get to the finish line — so the workflow, technology and art all go hand-in-hand. It’s almost impossible to talk about one without talking about the other.

Can you talk more about the new Production Gallery?
The Production Gallery has been a very special project for the Production Sessions team this year. Over the years since Production Sessions began, we’ve had special appearances by Marvel costumes, props, Laika puppets, and an eight-foot tall Noisy Boy robot from Real Steel, but they have only been available for viewing in the presentation time slots.

In creating a new space that runs Sunday through Wednesday of the conference, we’re hoping to give attendees a true up-close-and-personal experience and also honor more studio work that may often go unnoticed or unseen.

When you go behind-the-scenes of a film set or on a studio tour, there are tens of thousands of elements involved – storyboards, concept artwork, maquettes, costumes, props, and more. This space focuses on those physical elements that are lovingly created for each project, beyond the final rendered piece you see in the movie theater. In peeling back the curtain, we’re trying to bring a bit of the studios straight to the attendees.

The Production Gallery is one of the accomplishments from this year that I’m most proud of, and I hope it grows in future SIGGRAPH conferences.

If someone has never been to SIGGRAPH before, what can you tell them to convince them it’s not a show to miss?
SIGGRAPH is a conference to be experienced, not to hear about later. It opens up worlds, inspires creativity, creates connections and surrounds you in genius. I always come out of it reinvigorated and excited for what’s to come.

At SIGGRAPH, you get a glimpse into the future right now — what non-attendees may only be able to see or experience in many years or even decades. If it’s a show you don’t attend, you’re not just missing — you’re missing out.

If they have been in the past, how is this year different and why should they come?
My first SIGGRAPH was 2011 in Vancouver, and I haven’t skipped a single conference since then. Technology changes and evolves in the blink of an eye and I’ve blinked a lot since last year. There’s always something new to be learned or something exciting to see.

The SIGGRAPH 2017 Committee has put an exceptional amount of effort into the attendee experience this year. There are hands-on must-see-it-to-believe-it kinds of experiences in VR Village, the Studio, E-Tech and the all-new VR Theater, as well as improvements to the overall SIGGRAPH experience to make the conference smoother, more fun, collaborative and interactive.

I won’t reveal any surprises here, but I can say that there will be quite a few that you’ll have to see for yourself! And on top of all that, a giraffe named Tiny at SIGGRAPH? That’s got to be one for the SIGGRAPH history books, so come join us in making history.

Disney Animation legend Floyd Norman keynoting SIGGRAPH 2017

Floyd Norman, the first African-American animator to work for Walt Disney Animation Studios, has been named SIGGRAPH 2017’s keynote speaker. The keynote session featuring Norman will be presented as a fireside chat, allowing attendees the opportunity to hear a Disney legend discuss his life and career in an intimate setting. SIGGRAPH 2017 will be held July 30-August 3 in Los Angeles.

Norman was the subject of a 2016 documentary called Floyd Norman: An Animated Life from filmmakers Michael Fiore and Erik Sharkey. The film covers Norman’s life story and includes interviews with voice actors and former colleagues.

Norman was hired as the first African-American animator at Walt Disney Studios in 1956 and was later hand-picked by Walt Disney himself to join the story team on The Jungle Book. After Walt’s death, Norman left Disney to start his own company, Vignette Films, and produce films on the subject of black history for high schools. He and his partners would later work with Hanna-Barbera to animate the original Fat Albert TV special, Hey, Hey, Hey, It’s Fat Albert, as well as the opening title sequence for the TV series Soul Train.

Norman returned to Disney in the 1980s to work in their publishing department, and in 1998 moved to the story department to work on Mulan. After all this, an invite to the Bay Area in the late ‘90s became a career highlight when Norman began working with leaders in the next wave of animation — Pixar and Steve Jobs — adding Toy Story 2 and Monsters, Inc. to his film credits.

Though he technically retired at the age of 65 in 2000, Norman is not one to quit and chose, instead, to occupy an open cubicle at Disney Publishing Worldwide for the last 15 years. As he puts it, “I just won’t leave.”

While not on staff, Norman’s proximity to other Disney personnel has led him to pick up freelance work and continue his impact on animation as both an artist and a mentor. As to his future plans, he says, “I plan to die at the drawing board!”

“I’ve been fascinated by computer graphics since I purchased my first computer. I began attending SIGGRAPH when a kiosk was all Pixar could afford,” he says. “Since then, I’ve had the pleasure of working for this fine company and being a part of this amazing technology as it continues to mature. I’ve also enjoyed sharing insights I’ve garnered over the years in this fantastic industry. In recent years, I’ve spoken at several universities and even Apple. Creative imagination and technological innovation have always been a part of my life, and I’m delighted to share my enthusiasm with the fans at SIGGRAPH this year.”

Images courtesy of Michael Fiore Films


Blue Sky Studios’ Mikki Rose named SIGGRAPH 2019 conference chair

Mikki Rose has been named conference chair of SIGGRAPH 2019. A fur technical director at Greenwich, Connecticut-based Blue Sky Studios, Rose chaired the Production Sessions at SIGGRAPH 2016 this past July in Anaheim and has been a longtime volunteer and active member of SIGGRAPH for the last 15 years.

Rose has worked on such films as The Peanuts Movie and Hotel Transylvania. She refers to herself as a “CG hairstylist” due to her specialization in fur at Blue Sky Studios — everything from hair to cloth to feathers and even vegetation. She studied general CG production at college and holds BS degrees in Computer Science and Digital Animation from Middle Tennessee State University, as well as an MFA in Digital Production Arts from Clemson University. Prior to Blue Sky, she lived in California and held positions at Rhythm & Hues Studios and Sony Pictures Imageworks.

“I have grown to rely on each SIGGRAPH as an opportunity for renewal of inspiration in both my professional and personal creative work. In taking on the role of chair, my goal is to provide an environment for those exact activities to others,” said Rose. “Our industries are changing and developing at an astounding rate. It is my task to incorporate new techniques while continuing to enrich our long-standing traditions.”

SIGGRAPH 2019 will take place in Los Angeles from July 29 to August 2, 2019.


Main Image: SIGGRAPH 2016 — Jim Hagarty Photography

A look at the new AMD Radeon Pro SSG card

By Dariush Derakhshani

My first video card review was on the ATI FireGL 8800 more than 14 years ago. It was one of the first video cards that could support two monitors with only one card, which to me was a revolution. Up until then I had to jam two 3DLabs Oxygen VX1 cards in my system (one AGP and the other PCI) and wrestle them to handle OpenGL with Maya 4.0 running on two screens. It was either that or sit in envy as my friends taunted me with their two screen setups, like waving a cupcake in front of a fat kid (me).

Needless to say, two cards were not ideal, and the 128MB ATI FireGL 8800 was a huge shift in how I built my own systems from then on. Fourteen years later, I’m fatter, balder and have two 27-inch HP screens sitting on my desk (one at 4K) that are always hungry for new video cards. I run multiple applications at once, and I demand to push around a lot of geometry as fast as possible. And now I’m even rendering a fair amount on the GPU, so my video card is ever more the centerpiece of my home-built rigs.

So when I stopped by AMD’s booth at SIGGRAPH 2016 in Anaheim recently, I was quite interested in what AMD’s John Swinimer had to say about the announcements the company was making at the show. (AMD acquired ATI in 2006.)

First, I’m just going to jump right into what got me the most wide-eyed, and that is the announcement of the AMD Radeon Pro SSG. This professional card mates a 1TB SSD to the frame buffer of the video card, giving you a huge boost in how much the GPU system can load into memory. Keep in mind that professional card frame buffers range from about 4GB in entry level cards up to 24-32GB in super high-end cards, so 1TB is a huge number to be sure.

One of the things that slows down GPU rendering the most is having to flush and reload textures from its frame buffer, so the idea of having a 1TB frame buffer is intriguing, to say the least (i.e. a lot of drooling). In their press release, AMD mentions that “8K raw video timeline scrubbing was accelerated from 17 frames per second to a stunning 90+ frames per second” in the first demonstration of the Radeon Pro SSG.

Details are still forthcoming, but two PCIe 3.0 M.2 slots on the SSG card can get us up to 1TB of frame buffer. But the question is, how fast will it be? In traditional SSDs, M.2 enjoys a large bandwidth advantage over regular SATA drives as long as it can access the PCIe bus directly. Things are different if the SSG card is an island unto itself, with the storage bandwidth contained on the card, so it’s unclear how well the M.2 bus on the SSG card will communicate with the GPU directly. I tend to doubt we’ll see the same bandwidth between GDDR5 memory and an on-board M.2 card, but only real-world testing will be able to suss that out.
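For a rough sense of that gap, here’s a back-of-envelope comparison in Python. These are typical published interface speeds, not AMD-announced figures for the SSG, so treat it as a sketch:

```python
# Rough bandwidth comparison (assumed, typical figures -- not AMD specs).
PCIE3_X4_GBPS = 3.94      # theoretical max of one PCIe 3.0 x4 M.2 slot, GB/s
M2_SLOTS = 2              # the SSG card carries two M.2 slots

# A 256-bit GDDR5 bus at an effective 7Gbps per pin, common on pro cards:
GDDR5_GBPS = 256 / 8 * 7  # = 224 GB/s

ssd_tier = PCIE3_X4_GBPS * M2_SLOTS  # ~7.9 GB/s best case, both slots striped
print(f"SSD tier ~{ssd_tier:.1f} GB/s vs. GDDR5 ~{GDDR5_GBPS:.0f} GB/s "
      f"({GDDR5_GBPS / ssd_tier:.0f}x difference)")
```

Even in that best case, the SSD tier sits well over an order of magnitude below on-board GDDR5.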

But I believe we’ll immediately see great speed improvements in GPU rendering of huge datasets, since the SSG will circumvent the offloading and reloading times between the GPU and CPU memories, and it could potentially boost multi-frame GPU rendering of CG scenes as well. In cases where the graphics subsystem doesn’t need to load more than a dozen or so GBs of data, though, on-board GDDR5 memory will certainly still have an edge in communication speed with the GPU.

So, needless to say, but I’m going to say it anyway, I am very much looking forward to slapping one of these into my rig to see GPU render times, as well as operability using large datasets in Maya and 3ds Max. And as long as the Radeon Pro SSG can avoid hitting up the CPU and main system memory, GPU render gains should be quite large on the whole.

Wait, There’s More
On to other AMD announcements at the show: the affordable Radeon Pro WX lineup (due in the fourth quarter of 2016), which refreshes the FirePro-branded line. The Radeon Pro WX cards are based on AMD’s RX consumer cards (like the RX 480), but with higher-level professional driver support and certification with professional apps. The end goal of professional work is stability as well as performance, and AMD promises a great dedicated support system around its Radeon Pro line to give us professionals the warm and fuzzies we always need over consumer-level cards.

The top-of-the-line Radeon Pro WX7100 features 8GB of 256-bit memory and workstation-class performance at less than $1,000; I believe it replaces the FirePro W8100. This puts the four-simultaneous-display-capable WX7100 in line to compete with the Nvidia Quadro M4000 in pricing at least, if not in specs as well. It’s hard to say where the WX7100 will sit in performance, though I hope it’s somewhere between the Quadro M4000 and the $1,800 M5000. It’s difficult to answer that based on paper specs, as OpenCL compute units and CUDA cores are hard to compare directly.

The 8GB Radeon Pro WX5100 and 4GB WX4100 round out the new announcements from SIGGRAPH 2016, putting them in line to compete somewhere between the 8GB Quadro M4000 and the 4GB M2000 and K1200 cards in performance. It seems, though, that AMD’s top of the line will still be the $3,400+ FirePro W9100 with 16GB of memory, though a 32GB version is also available.

I have always thought AMD offered a really good price-to-performance ratio, and it seems the Radeon Pro WX line will continue that tradition. I look forward to benchmarking these cards in real-world CG use.

Dariush Derakhshani is a professor and VFX supervisor in the Los Angeles area and author of Maya and 3ds Max books and videos. He is bald and has flat feet.

SIGGRAPH: Autodesk updates Maya, Maya LT, Shotgun, more

Autodesk was at SIGGRAPH with its latest design and animation solutions, updating Maya to 2017 and adding a plug-in for 3ds Max. Maya 2017 features integrated rendering with Arnold, new motion graphics tools and numerous other features and enhancements. Autodesk also announced that Solid Angle’s Arnold renderer will support 3ds Max via a plug-in called MAXtoA.

Maya 2017 includes a full set of 3D tools for creating motion graphics. The MASH procedural toolset, first introduced in Maya 2016 Extension 2, has been improved with new nodes and new capabilities that allow designers to quickly create unique animations and motion effects. Maya 2017 also features a more intuitive UI and improvements to 3D text tools that enable artists to work faster.
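For artists who want to drive MASH procedurally, the toolset is also scriptable. Here is a minimal sketch, assuming Maya 2017’s bundled MASH.api Python module; the node name MASH_Random and the exact createNetwork arguments may differ between versions:

```python
# Minimal MASH-from-Python sketch (assumes Maya 2017 with the bundled
# MASH.api module; run from Maya's Script Editor).
import maya.cmds as cmds
import MASH.api as mapi

# MASH builds its network from the current selection,
# so create and select some source geometry first.
cube = cmds.polyCube(name="mashSource")[0]
cmds.select(cube)

# Create a MASH network, then layer a Random node on top
# for per-instance position offsets.
network = mapi.Network()
network.createNetwork(name="demoMASH")
network.addNode("MASH_Random")
```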

Autodesk has also updated its 3D animation and modeling software for indie game developers, Autodesk Maya LT, to version 2017, and has released Autodesk Stingray 1.4, the newest version of its 3D game engine and realtime rendering offering.

One of the most significant updates for Maya LT is the time editor, a new tool that helps indie game developers streamline animation of complex characters. Additional updates include improvements to existing animation tools and a new way to organize the tools and user interface (UI) into customized workspaces.

Maya LT 2017 is available via subscription. Subscribers of Maya LT receive access to Stingray 1.4 as part of their subscription.

Pixar open sources Universal Scene Description for CG workflows

Pixar Animation Studios has released Universal Scene Description (USD) as an open source technology in order to help drive innovation in the industry. Used for the interchange of 3D graphics data through various digital content creation tools, USD provides a scalable solution for the complex workflows of CG film and game studios. 

With this initial release, Pixar is opening up its development process and providing code used internally at the studio.

“USD synthesizes years of engineering aimed at integrating collaborative production workflows that demand a constantly growing number of software packages,” says Guido Quaroni, VP of software research and development at Pixar.

USD provides a toolset for reading, writing, editing and rapidly previewing 3D scene data. With many of its features geared toward performance and large-scale collaboration among many artists, USD is ideal for the complexities of the modern pipeline. One such feature is Hydra, a high-performance preview renderer capable of interactively displaying large data sets.
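To give a flavor of that toolset, here is essentially the “hello world” from Pixar’s USD tutorials, written against the Python bindings that ship with the open source distribution (the file and prim paths are arbitrary):

```python
# Author a tiny USD scene with the pxr Python bindings.
from pxr import Usd, UsdGeom

# Create a new stage backed by a human-readable .usda file.
stage = Usd.Stage.CreateNew("hello.usda")

# Define a transform with a sphere prim underneath it.
xform = UsdGeom.Xform.Define(stage, "/hello")
sphere = UsdGeom.Sphere.Define(stage, "/hello/world")
sphere.GetRadiusAttr().Set(2.0)

# Save the scene description; usdview (built on Hydra)
# can open the result for interactive preview.
stage.GetRootLayer().Save()
```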

“With USD, Hydra, and OpenSubdiv, we’re sharing core technologies that can be used in filmmaking tools across the industry,” says George ElKoura, supervising lead software engineer at Pixar. “Our focus in developing these libraries is to provide high-quality, high-performance software that can be used reliably in demanding production scenarios.”

Along with USD and Hydra, the distribution ships with USD plug-ins for some common DCCs, such as Autodesk’s Maya and The Foundry’s Katana.

To prepare for open-sourcing its code, Pixar gathered feedback from various studios and vendors that conducted early testing. MPC, Double Negative, ILM and Animal Logic were among those who provided valuable input in preparation for this release.

SIGGRAPH: Maxon Cinema 4D updates to R18

By Brady Betzel

During SIGGRAPH 2016, Maxon announced an update to Cinema 4D: Release 18 (R18). The new release is scheduled to ship this September. While I am planning on doing a full review of R18 once it becomes available, I got a preview of the update from Maxon US president/CEO Paul Babb and Rick Barrett, VP of operations for Maxon US and a Maxon Cineversity tutorial staple. Once you hear Barrett’s voice you will know who I am talking about; he’s definitely given a lot of us some great tips and an awesome entry into working in Cinema 4D.

My three favorite updates, based on my preview, are the Voronoi Fracture Object, Object Motion Tracking and the Thin Film Shader (plus a bonus: the OpenGL viewport now displays previews of reflections, ambient occlusion and displacement mapping).

The Voronoi Fracture Object works in conjunction with dynamics and allows you to quickly break through a wall or even procedurally slice and dice vegetables, as Babb and Barrett showed, using spline or polygon shapes.

Building on Cinema 4D’s existing motion tracking, Object Motion Tracking allows the user to track models and other 3D objects into real-world footage with less back-and-forth round-tripping between Adobe After Effects and Cinema 4D via Cineware. In Maxon’s example, they used puff balls purchased from Jo-Ann Fabric and Craft Store as track points, measured the physical distance between them, tracked the objects in Cinema 4D R18, entered the distance between the puff balls and boom! A sweet Transformer-like helmet was tracked to the actor’s head movement, needing only minor adjustments.

While there are many other big updates, I was oddly entranced by the Thin Film Shader. If you ever have trouble building materials with that oil slick type of glisten or a bubble with the rainbow-like translucence, Cinema 4D R18 is your friend.

I can’t wait to see some of the presentations that Cinema 4D and the team from GreyScaleGorilla.com have in store, along with other 3D artists. Check out their lineup, and follow them on Twitter @gsg3d. With so many updates like the enhancements to the OpenGL viewport, it will be a long wait until Cinema 4D R18 is released to the public. Check out www.maxon.com for their updated website, Cineversity’s Cinema 4D R18 highlights video, and follow them on Twitter @maxon3d.

Brady Betzel is an online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com, and follow him on Twitter @allbetzroff. Earlier this year, Brady was nominated for an Emmy for his work on Disney’s Unforgettable Christmas Celebration.

Today: AMD/Radeon event at SIGGRAPH introducing Capsaicin graphics tech

At the SIGGRAPH 2016 show, AMD will webcast a live showcase of new creative graphics solutions during their “Capsaicin” event for content creators. Taking place today at 6:30pm PDT, it’s hosted by Radeon Technologies Group’s SVP and chief architect Raja Koduri.

The Capsaicin event at SIGGRAPH will showcase advancements in rendering and interactive experiences. The event will feature:
▪ Guest speakers sharing updates on new technologies, tools and workflows.
▪ The latest in virtual reality with demonstrations and technology announcements.
▪ Next-gen graphics products and technologies for both content creation and consumption, powered by the Polaris architecture.

A realtime video webcast of the event will be accessible from the AMD channel on YouTube, where a replay of the webcast can be accessed a few hours after the conclusion of the live event. It will be available for one year after the event.

For more info on the Capsaicin event and live feed, click here.

Vicon at SIGGRAPH with two new motion tracking cameras

Vicon, which makes precision motion tracking systems and match-moving software, will be at SIGGRAPH this year showing its two new camera families, Vero and Vue. The new offerings join Vicon’s flagship camera, Vantage.

Vero is a range of high-def, synchronized optical video cameras for providing realtime video footage and 3D overlay in motion capture. Designed as an economical system for many types of applications, the Vero range includes a custom 6-12 mm variable focus lens that delivers an optimized field of view, as well as 2.2 megapixel resolution at 330Hz.

With these features, users can capture fast sport movements and multiple actors, drones or robots with low latency. The range also includes a 1.3 megapixel camera. Vero is compatible with existing Vicon T-series, Bonita and Vantage cameras as well as Vicon’s Control app, which allows users to calibrate the system and make adjustments on the fly.

With HD resolution and variable focal lengths, the Vicon Vue camera incorporates a sharp video image into the motion capture volume. It also enables seamless calibration between optical and video volumes, ensuring the optical and video views are aligned to capture fine details.

The Foundry’s Katana now supports Windows

The Foundry has updated its look development and lighting tool, Katana, to version 2.5. This update includes support for Windows, allowing more artists working in VFX, broadcast and animation to take advantage of the tool. Additionally, the toolset is easier to install and supports a number of plug-ins, including RenderMan, V-Ray, Arnold and 3Delight.

Used in studios from Industrial Light & Magic and Pixar to Atomic Fiction, Katana allows artists — working in short- and long-form — to turn creative lighting setups into “recipes” that can be shared amongst the team, cutting down the time it takes to turn out complicated shots.

Katana 2.5 is available now as a beta release and will be shipping soon. Supported platforms are Linux (RHEL/CentOS 6) and Windows 7 64-bit.

“Katana has become the bedrock of our pipeline,” reports Kevin Baillie, co-founder at Atomic Fiction (The Walk, Flight). “Big scenes, experimental lighting set-ups, we can throw anything at it and it’ll give us production-ready results that we can share up and down the chain. With timelines getting shorter and shorter, you need tools like Katana around; there’s no other way to get the work done.”

To request a trial as part of the beta program, email sales@thefoundry.co.uk.

Review: Red Giant Trapcode Suite 13, Part 2

By Brady Betzel

In my recent Red Giant Trapcode Suite 13 for After Effects review, Part 1, I touched on updates to Particular, Shine, Lux and Starglow. In this installment, I am going to blaze through the remaining seven plug-ins that make up the Trapcode Suite: Form, Mir, Tao, 3D Stroke, Echospace, Sound Keys and Horizon. While Particular is the most well-known plug-in in the suite, the following seven are all incredibly useful and can help make you money.

Form 2.1
Trapcode Form 2.1 is best described as a particle system, much like Particular, but with particles that live forever and are arranged into forms like cubes and grids. If you’ve used Element 3D by Video Copilot, you probably know that you can load objects from Maxon Cinema 4D into your Adobe After Effects projects pretty easily and, for all intents and purposes, quickly. Form allows you to load these 3D OBJ files and alter them inside of After Effects.

When you load the OBJ file, Form applies particles at each vertex. The more vertices you have in your 3D object, the more detail you will have in your Form. It is a really cool way to create a techy look for a HUD (heads-up display) or a sweet motion graphics piece that needs that futuristic pointillism-type look. The original function of Form was to create particle grids that could be exploded or tightly wound and that would live on forever, as opposed to Particular, which creates particle systems with a birth and a death.

Form 2.1

A simple way to think of how Form works is to imagine the ability to take simple text and transform it into “particles” to create a sandy explosion, or to turn everyday objects into particles that live forever. From grids to strings and spheres to sprites, with enough practice you can create some of the most stunning backgrounds or motion graphics wizardry inside of Trapcode Form, all of it affected by After Effects lights and cameras in 3D space.

I was really surprised at how powerfully and smoothly Trapcode Form can run. I am working on a tablet with an Intel i7 processor, and I was able to get very reasonable performance, even with my camera depth-of-field turned on.

Mir 2.0
Trapcode Mir is an extremely useful plug-in for those wanting to create futuristic terrains or modern triangulated environments with tunnels and valleys. Mir is versatile and can go from creating smooth ocean floors to spiky mountain tops to extreme wireframe structures. Some of the newest updates in Mir 2.0 are the ability to add a spiral to the Mir landscape mesh you create (think galaxy); seamless looping under the fractal menu; the ability to choose between triangles and quads for your surfaces; the really cool ability to add a second-pass wireframe on top of your surface for that futuristic grid look; texture sampling, from smooth gradients to solid colors; control of the maximums and minimums under z-range (which basically allows for easier peaks and valleys); multi, smoothridge, multi-smoothridge and regular fractals for differing displacements on your textures; and improved VRAM management for speedy processing.

Mir 2.0

These days GIFs are all the rage, so I am really impressed with the seamless loop option. It might seem ridiculous but if you’ve seen what is popular on social media you will know it’s emojis and GIFs. If you want to prep your seamless loop, check out this quick video from Trapcode creator Peder Norrby (@trapcode_lab).

Simply, you create beginning and end keyframes, find the seamless loop options under the Fractal category, step back one frame from your end loop point, mark your end-of-work area, go to the loop point (which should be one frame past where you marked the end to your work area) and click Set End Keyframe. From there Trapcode Mir will fill in the rest of the details and create your seamless loop ready to be exported as a GIF and blasted on Twitter. It’s really that easy.

If you are looking for an animated GIF export setting, try exporting through Adobe Media Encoder and searching “GIF” in the presets. You will find an “Animated GIF” preset, which I resized to something more appropriate like 1280×720 but that still came out at 49MB — way over the 5MB Twitter upload limit. I tried a few times, first with 50% quality at 640×360, which got me to 13.7MB. I even changed the quality down to 5% in Media Encoder, but I kept getting 13.7MB until I brought the size down to 320×180. That got me just under 4MB, which is perfect! If you do a lot of GIF work, an easy way to compress them is to use http://ezgif.com/optimize and to fiddle with their optimization settings to get under 5MB. It’s quick and it all lives online.
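If you do this often, the squeeze can also be scripted rather than hand-tuned in Media Encoder. Here’s a sketch using ffmpeg from Python (a common alternative, not part of the Trapcode workflow); it assumes ffmpeg is installed and on your PATH, and the file names are placeholders:

```python
# Batch-resize a rendered loop to an animated GIF with ffmpeg.
# Assumes ffmpeg is installed and on PATH; file names are placeholders.
import subprocess

def to_gif(src, dest, width=320, fps=15):
    """Convert a video to an animated GIF scaled to `width` pixels wide."""
    subprocess.run([
        "ffmpeg", "-y", "-i", src,
        # Drop the frame rate and scale down; width is the main size lever.
        "-vf", f"fps={fps},scale={width}:-1:flags=lanczos",
        dest,
    ], check=True)

to_gif("mir_loop.mp4", "mir_loop.gif")
```

As in Media Encoder, frame size (and frame rate) move the file size far more than quality settings do, so shrink the width first when chasing an upload limit.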

As with all Trapcode Suite plug-ins (or anything for that matter), the only way to get good is to experiment and allow yourself to fail or succeed. This holds true for Mir. I was making garbage one minute and with a couple changes I made some motion graphics that made me see the potential of the plug-in and how I could actually make content that people would be blown away with.

3D Stroke
One plug-in that isn’t new, but leads into the next one, is Trapcode 3D Stroke. 3D Stroke takes the built-in After Effects Stroke plug-in to a new level. Traditional Stroke is an 8-bit plug-in, while Trapcode 3D Stroke can run in the color-burning 32-bits-per-channel mode. If you want to add a stroke along a path that interacts with your comp cameras in 3D space, Trapcode 3D Stroke is what you want. From creating masks of your text and applying a sweet 3D Stroke to them, to intricate 3D paths that zoom in between objects with an HDR-like glow, 3D Stroke is one of those tools to have in your After Effects toolbox.

When using it, I really fell in love with the repeater. Much like Element 3D’s particle arrays, the repeater can create multiple instances of your paths or text paths to create some interesting and infinitely adjustable objects.

Tao
Trapcode Tao is new to the Trapcode Suite of plug-ins. Tao gives us the ability to create 3D geometry along a path, and boy did people immediately fall in love with this tool when it was released. You can find tons of examples and tutorials for Tao from experts like VinhSon Nguyen, better known as @CreativeDojo on Twitter. Check out his tutorial on Vimeo, too. Tao is a tricky beast, and one way I learned about it in depth was to download Peder Norrby’s project files over at http://www.trapcode.com and dissect them as best I could.

Tao

If you remember Trapcode 3D Stroke from earlier, you know that it allows us to create awesome glows and strokes along paths in 3D space. Trapcode Tao operates in much the same way as 3D Stroke except that it uses particles like Mir to create organic flowing forms in 3D space that interact with After Effects’ cameras and lights.

Trapcode Tao is about as close as you can get to modeling 3D geometry inside of After Effects at realtime speeds with image-based lighting. The only other way to achieve this is with Video Copilot’s Element 3D or by using Cinema 4D via Cineware, which is sometimes a painstaking process.

Horizon 1.1
Another plug-in that surprised me was Trapcode Horizon 1.1. In the age of virtual reality and 360 video, you can never have too many ways to make your own worlds to pan cameras around in. With a quick spherical map search on Google, I found all the equirectangular maps I could handle. Once inside After Effects, you import and resize your map to your comp size, add a new solid and camera, throw Horizon on top of your solid and, under Image Map > Layer, choose the layer containing your spherical image, and BAM! You have a 360° world. You can then add elements like Trapcode Particular, 3D Stroke or Tao and pan and zoom around to make some pretty great opening titles, or even make your own B-roll!

Echospace 1.1
Trapcode Echospace 1.1 is a powerful part of the Trapcode Suite 13 plug-in library. It is one of those plug-ins where you watch the tutorials and wonder why people don’t talk about it more. In simple terms, Echospace replicates layers and creates interdependent parenting links to the original layer, allowing you to create complex repeated-element animations and layouts. In essence, it feels more like a complex script than a plug-in.

If you want to create offset animations of multiple shape layers in three-dimensional space, Echospace is your tool. It’s a little hard to use, and if you don’t Shy the replicated layers and nulls, it can be intimidating. When you create the repeated layers, Echospace automatically sets them to Shy if you enable Shy layers in your toolbar. A great Harry Frank (@graymachine) tutorial/Red Giant TV Live episode can be found on the Red Giant website: http://www.redgiant.com/tutorial/red-giant-tv-live-episode-8-motion-graphics-with-trapcode-echospace.

Sound Keys 1.3
The last plug-in in the massive Trapcode Suite v13 library is Sound Keys 1.3. Sound Keys analyzes audio files and can draw keyframes based on their rhythm. One reason I left this until the end of my review is that you can attach any of the parameters from the other Trapcode Suite 13 plug-ins to the outputs of the Sound Keys 1.3 keyframes via a pick whip. If I just lost you by saying pick whip, snap back into it.

If you learn one thing in the After Effects scripting world, it’s that you can attach one parameter to another: alt+click (Option+click on a Mac) the stopwatch of the parameter you want driven, then drag the curly pick whip icon onto the driving parameter. So, in the Sound Keys case, you can attach the scale of an object to the rhythm of a bass drum.

Sound Keys 1.3

What I really liked about Sound Keys is that it can not only create a dynamically driven piece of motion graphics, but you can also use the audio meters it draws to visualize the audio. You see this a lot in lyric music videos or YouTube videos that play music only but still want a touch of visual flair, and with Sound Keys 1.3 you can change the visual representation of the audio, including color, quantization (the little dots that you see on audio meters) and size.

Easily isolate an audio frequency with the onscreen controls, find the effect you want driven by the audio, and pick whip your way to a dynamic motion graphic. If I were the graphics designer I wish I was, I would take Sound Keys and something like Particular or Tao and create some stunning work. I bet I could even make some money making lyric videos… one day.

Summing Up
In the end, Trapcode Suite 13 is an epic and monumental release. The total cost of the package is $999, and while that is significantly more than After Effects itself, let me tell you: it has the ability to make you way more money with some time and effort. Even with just an hour or so of practice a day, I feel like my Trapcode game would go to the next level.

For those who have the Trapcode Suite and want to upgrade for $199, there are some huge benefits to the v13 update, including Trapcode Tao, GPU performance upgrades across the board and even things like the second-pass wireframe for Mir.

If you are a student, you can grab Trapcode Suite 13 for $499 with a little verification legwork. If you are worried about your system working efficiently with the Trapcode Suite, you can check the technical requirements here; I was working on an Intel i7 tablet with 8GB of memory and an Intel Iris 6100 graphics processor, and I found everything to be very speedy given those limitations. Tao was the only plug-in that wouldn’t display correctly, and rightly so, as you can read in the GPU requirements here.

If I were you and had a cool $999 burning a hole in my After Effects wallet, I would pick up Trapcode Suite 13 immediately.

Brady Betzel is an online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com, and follow him on Twitter @allbetzroff. Earlier this year, Brady was nominated for an Emmy for his work on Disney’s Unforgettable Christmas Celebration.

SIGGRAPH’s 43rd Computer Animation Festival winners

The winners of SIGGRAPH’s 43rd annual Computer Animation Festival have been announced. For 2016, submissions were evaluated by an expert jury of pros who span the visual effects, animation, research and development, games, advertising and education industries.

The 2016 award categories and winners are:

Best in Show
Borrowed Time (USA), directed by Andrew Coats and Lou Hamou-Lhadj, and produced by Amanda Jones. It runs seven minutes.

A weathered sheriff returns to the remains of an accident he has spent a lifetime trying to forget. With each step forward, the memories come flooding back. Faced with his mistake once again, he must find the strength to carry on.

Jury’s Choice
Cosmos Laundromat (Netherlands), submitted and produced by Ton Roosendaal.

In this short, Franck, a depressed sheep, sees only one way out of his boring life, until he meets the quirky salesman Victor, who offers him any life he ever wanted. The piece was created as a pilot for a feature film project that, if it happens, will be the first free, open source animated production.

Best Student Project
Le Crabe-Phare (France), directed by Mengjing Yang, Gaëtan Borde, Benjamin Lebourgeois, Claire Vandermeersch and Alexandre Veaux.

The Crabe-Phare is a legendary crustacean. He captures the boats of lost sailors to add them to his collection. But the crab is getting old, and it is more and more difficult for him to build his collection.

Le Crabe-Phare © 2016 AUTOUR DE MINUIT

The 2016 Computer Animation Festival comprises two programs: the Electronic Theater and Daytime Selects. An evening event, the Electronic Theater will contain over 20 primarily narrative-driven short films from around the globe, showcasing technical excellence, art and animation.

In addition to juried pieces, this year’s theater will feature curated works such as Disney Pixar’s Piper and Disney Animation Studios’ Inner Workings.

The Daytime Selects program has been revamped for 2016 and will offer four varied sessions. They will include:

·  Break it Down – A chance for attendees to get a behind-the-scenes look at how movie magic is created, featuring visual effects demonstrations from major studios and a glimpse at how standard techniques can be used in new ways. Participating studios include ILM, MPC, Framestore, Weta, Digital Domain, Pixar, Spin VFX, OLM, Mr. X and many more!
·  The Arcade – An audience experience that focuses on games, from concept art through technology to implementation in cinematics and realtime. The show touches on everything from look development to the accomplishments being made today with modern realtime engines.
·  Demoscene – A representation of an international computer art subculture that specializes in creating self-contained programs that produce audio-visual presentations. It is designed for computer scientists, GPU lovers, shader architects, and extreme realtime graphics artists who exhibit programming, artistic and musical skills within highly constrained limitations.
·  Winners Circle – A celebration of Computer Animation Festival award winners from the past seven years for attendees who wish to revisit some of their favorite winning content from Electronic Theaters.

Click here to view the trailer for the 2016 Computer Animation Festival. To learn more about the festival and this year’s selections, visit the conference website.

GPU-accelerated renderer Redshift now in v.2.0, integrates with 3ds Max

Redshift Rendering has updated its GPU-accelerated rendering software to Redshift 2.0. This new version includes new features and pipeline enhancements to the existing Maya and Softimage plug-ins. Redshift 2.0 also introduces integration with Autodesk 3ds Max. Integrations with Side Effects Houdini and Maxon Cinema 4D are currently in development and are expected later in 2016.

New features across all platforms include realistic volumetrics, enhanced subsurface scattering and a new PBR-based Redshift material, all of which deliver improved final render results. Starting July 5, Redshift is offering 20 percent off new Redshift licenses through July 19.

Age of Vultures

A closer look at Redshift 2.0’s new features:

● Volumetrics (OpenVDB) – Render clouds, smoke, fire and other volumetric effects with production-quality results (initial support for OpenVDB volume containers).

● Nested dielectrics – The ability to accurately simulate the intersection of transparent materials with realistic results and no visual artifacts.

● New BRDFs and linear glossiness response – Users can model a wider variety of metallic and reflective surfaces via the latest and greatest in surface shading technologies (GGX and Beckmann/Cook-Torrance BRDFs).

● New SSS models and single scattering – More realistic results with support for improved subsurface scattering models and single-scattering.

● Redshift material – The ability to use a more intuitive, PBR-based main material, featuring effects such as dispersion/chromatic aberration.

● Multiple dome lights – Users can combine multiple dome lights to create more compelling lighting.

● alSurface support – There is now full support for this popular Arnold shader without having to port settings.

● Baking – Users can save a lot of rendering time with baking for lighting and AOVs.

Users include Blizzard, Jim Henson’s Creature Shop, Glassworks and Blue Zoo.

Main Image: Rendering example from A Large Evil Corporation.

SIGGRAPH hosting character design contest

If you are a character designer and thinking about attending the SIGGRAPH conference on July 24-28 in Anaheim, this is your lucky week. The Spirit of SIGGRAPH winner will receive complimentary full conference registration for SIGGRAPH 2016. Travel and incidental expenses are not included. The winning design will also be featured in promotion of the conference through social media, and the designer will be credited.

Designs must be submitted by midnight on April 15. The winner will be announced on May 2. The contest is seeking character designs that “embody the spirit of SIGGRAPH” and are original creations. Submissions will be judged on any of the following criteria:
– Creativity
– Design
– Relevance to SIGGRAPH
– Suitability to 2D graphic design, 3D animatable design and 3D printable solid design
– Ability to be turned into a wearable costume
– Suitability for use in a variety of on-site promotions

The winner will be chosen by this year’s SIGGRAPH event chair, Mona Kasra; next year’s event chair, Jerome Solomon; and a board of judges composed of 2016 program chairs and experts across the animation and design industries.