Reallusion intros three tools for mocap, characters

Reallusion has launched three new motion capture and character creation products: Character Creator 3, a stand-alone character creation tool; Motion Live, a realtime motion capture solution; and 3D Face Motion Capture with Live Face for iPhone X. With these products Reallusion is offering a total solution to build, morph, animate and gamify 3D characters.

Character Creator 3 (CC3), the new generation of iClone Character Creator, has separated from iClone to become a professional stand-alone tool. With a new quad base, roundtrip editing with ZBrush and photorealistic rendering using Iray, Character Creator 3 is a full character-creation solution for generating optimized 3D characters that are ready for games or intensive artistic design.

CC3 provides a new game character base with topology optimized for mobile, game and AR/VR developers. The big breakthrough is the integration of InstaLOD’s model and material optimization technologies to generate game-ready characters that are animatable on the fly, covering the complete character pipeline: polygon reduction, material merging, texture baking, remeshing and LOD generation.
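
As a rough picture of what an automated optimization pass like this produces, here is a minimal Python sketch of a LOD chain that halves the triangle budget and baked-texture size at each level. Every name and number in it is hypothetical; this illustrates the general technique, not InstaLOD’s API.

# Conceptual sketch of an automatic LOD chain; all names and numbers
# here are invented for illustration, not InstaLOD's actual API.
from dataclasses import dataclass

@dataclass
class LODLevel:
    name: str
    triangle_budget: int    # target triangle count after decimation
    texture_size: int       # baked texture atlas resolution in pixels
    screen_coverage: float  # switch to this LOD below this screen fraction

def build_lod_chain(source_triangles: int, levels: int = 4) -> list[LODLevel]:
    """Halve the triangle budget and texture size at each successive LOD."""
    chain = []
    budget, tex, coverage = source_triangles, 4096, 1.0
    for i in range(levels):
        chain.append(LODLevel(f"LOD{i}", budget, tex, coverage))
        budget //= 2               # polygon-reduction target for the next level
        tex = max(tex // 2, 256)   # smaller baked textures for distant LODs
        coverage /= 2              # each level kicks in at half the coverage
    return chain

if __name__ == "__main__":
    for lod in build_lod_chain(80_000):
        print(lod)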

CC3 launches this month and is available now for preorder for $199.

iClone Motion Live, the multidevice motion capture system, connects industry-standard motion gear — including Rokoko, Leap Motion, Xsens, Faceware, OptiTrack, Noitom and iPhone X — into one solution.

Motion Live’s plug-and-play design makes connecting complicated mocap devices simple, and the captured performances can drive custom imported characters or fully rigged 3D characters generated by Character Creator, Daz Studio or other industry-standard sources.

Reallusion has also debuted 3D Face Motion Capture for iPhone X with the Live Face app for iClone. As a result, users can record instant facial motion capture on any 3D character with an iPhone X. Reallusion has built on the technology behind Animoji and Memoji to lift iPhone X animation and motion capture to the next level for studios and independent creators. The solution combines the power of iPhone X mocap with iClone Motion Live to blend face motion capture with Xsens, Perception Neuron, Rokoko, OptiTrack and Leap Motion for a truly realtime live experience in full-body mocap.
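
For context, Apple’s face tracking reports a set of normalized blendshape coefficients every frame (names such as jawOpen follow ARKit’s conventions), and retargeting them to a character amounts to a weighted sum of morph-target deltas. Here is a minimal Python sketch of that idea, with stand-in mesh data rather than anything from Reallusion:

# Minimal sketch of blendshape retargeting, the idea behind iPhone X facial
# mocap: each per-frame coefficient (e.g. "jawOpen": 0.0-1.0) scales a
# morph-target delta on the character mesh. The mesh data here is a tiny
# hypothetical stand-in; real rigs have thousands of vertices.
import numpy as np

neutral = np.zeros((4, 3))  # stand-in for the neutral face vertex positions
morph_deltas = {            # per-shape vertex offsets, sculpted by an artist
    "jawOpen":     np.array([[0, -1, 0]] * 4, dtype=float),
    "browInnerUp": np.array([[0, 0.2, 0]] * 4, dtype=float),
}

def apply_blendshapes(coeffs: dict[str, float]) -> np.ndarray:
    """Deform the neutral mesh by the weighted sum of morph-target deltas."""
    mesh = neutral.copy()
    for shape, weight in coeffs.items():
        if shape in morph_deltas:
            mesh += weight * morph_deltas[shape]
    return mesh

# One captured frame: coefficients as a face-tracking SDK would report them.
frame = {"jawOpen": 0.42, "browInnerUp": 0.10}
print(apply_blendshapes(frame))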

Our SIGGRAPH 2018 video coverage

SIGGRAPH is always a great place to wander around and learn about new and future technology. You can see amazing visual effects reels and hear from the artists themselves how the work was created. You can get demos of new products, and you can immerse yourself in a completely digital environment. In short, SIGGRAPH is educational and fun.

If you weren’t able to make it this year, or attended but couldn’t see it all, we would like to invite you to watch our video coverage from the show.


postPerspective Impact Award winners from SIGGRAPH 2018

postPerspective has announced the winners of our Impact Awards from SIGGRAPH 2018 in Vancouver. Seeking to recognize debut products with real-world applications, the postPerspective Impact Awards are voted on by an anonymous judging body made up of respected industry artists and professionals. It’s working pros who will be using these new tools, so we let them make the call.

The awards honor innovative products and technologies for the visual effects, post production and production industries that will influence the way people work. They celebrate companies that push the boundaries of technology to produce tools that accelerate artistry and actually make users’ working lives easier.

While SIGGRAPH’s focus is on VFX, animation, VR/AR, AI and the like, the gear on display varies widely. Some of it is suited to graphics and animation, while other tools slide into post production, which makes these SIGGRAPH Impact Awards doubly interesting.

The winners are as follows:

postPerspective Impact Award — SIGGRAPH 2018 MVP Winner:

Nvidia’s Quadro RTX raytracing GPU generated a lot of buzz at the show, as well as a lot of votes from our team of judges, so it takes our MVP Impact Award.

postPerspective Impact Awards — SIGGRAPH 2018 Winners:

  • Maxon for its Cinema 4D R20 3D design and animation software.
  • StarVR for its StarVR One headset with integrated eye tracking.

postPerspective Impact Awards — SIGGRAPH 2018 Horizon Winners:

This year we have started a new Impact Award category. Our Horizon Award celebrates the next wave of impactful products previewed at a particular show. At SIGGRAPH, the winners were:

  • Allegorithmic for its Substance Alchemist tool powered by AI.
  • OTOY and Epic Games for their OctaneRender 2019 integration with Unreal Engine 4.

And while these products and companies didn’t win enough votes for an award, our voters believe they do deserve a mention and your attention: Wrnch, Google Lightfields, Microsoft Mixed Reality Capture and Microsoft Cognitive Services integration with PixStor.

 


DeepMotion’s Neuron cloud app trains digital characters using AI

DeepMotion has launched presales for DeepMotion Neuron, the first tool for completely procedural, physical character animation. The cloud application trains digital characters to develop physical intelligence using advanced artificial intelligence (AI), physics and deep learning. With guidance and practice, digital characters can achieve adaptive motor control just as humans do, in turn allowing animators and developers to create more lifelike and responsive animations than those possible using traditional methods.

DeepMotion Neuron is a behavior-as-a-service platform that developers can use to upload and train their own 3D characters, choosing from hundreds of interactive motions available via an online library. Neuron will enable content creators to tell more immersive stories by adding responsive actors to games and experiences. By handling large portions of technical animation automatically, the service also will free up time for artists to focus on expressive details.

DeepMotion Neuron is built on techniques identified by researchers from DeepMotion and Carnegie Mellon University who studied the application of reinforcement learning to the growing domain of sports simulation, specifically basketball, where real-world human motor intelligence is at its peak. After training and optimization, the researchers’ characters were able to perform interactive ball-handling skills in real-time simulation. The same technology used to teach digital actors how to dribble can be applied to any physical movement using Neuron.
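
To give a flavor of the approach, the toy Python sketch below runs a bare-bones REINFORCE loop on an invented one-joint balance task: a stochastic policy proposes torques, a crude physics step applies them, and episodes that stay upright nudge the policy weights. It is a generic illustration of reinforcement learning for motor control, not DeepMotion’s training system.

# Generic REINFORCE on an invented balance task; every constant here is
# arbitrary, chosen only to keep the toy stable and fast.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(2)  # linear policy weights over the state [angle, velocity]

def physics_step(state, torque):
    """Crude inverted-pendulum dynamics: gravity destabilizes, torque corrects."""
    angle, vel = state
    vel += 0.05 * (np.sin(angle) + torque)
    return np.array([angle + 0.05 * vel, vel])

for episode in range(300):
    state = np.array([0.1, 0.0])
    grads, total_reward = [], 0.0
    for t in range(100):
        mean = w @ state
        torque = float(np.clip(rng.normal(mean, 0.5), -3.0, 3.0))
        grads.append((torque - mean) * state)  # score function, up to 1/sigma^2
        state = physics_step(state, torque)
        total_reward += -abs(state[0])         # reward staying near upright
    # REINFORCE: nudge the policy toward actions seen in high-reward episodes.
    w += 1e-4 * total_reward * np.sum(grads, axis=0)
    w = np.clip(w, -5.0, 5.0)                  # keep the toy numerically tame

print("learned policy weights:", w)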

DeepMotion Neuron’s cloud platform is slated for release in Q4 of 2018. During the DeepMotion Neuron prelaunch, developers and animators can register on the DeepMotion website for early access and discounts.


SIGGRAPH: Nvidia intros Quadro RTX raytracing GPU

At SIGGRAPH, Nvidia announced its first Turing architecture-based GPUs, which enable artists to render photorealistic scenes in realtime, add new AI-based capabilities to their workflows and experience fluid interactivity with complex models and scenes.

The Nvidia Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000 enable hardware-accelerated raytracing, AI, advanced shading and simulation. Also announced was the Quadro RTX Server, a reference architecture for highly configurable, on-demand rendering and virtual workstation solutions from the datacenter.

“Quadro RTX marks the launch of a new era for the global computer graphics industry,” says Bob Pette, VP of professional visualization at Nvidia. “Users can now enjoy powerful capabilities that weren’t expected to be available for at least five more years. Designers and artists can interact in realtime with their complex designs and visual effects in raytraced photo-realistic detail. And film studios and production houses can now realize increased throughput with their rendering workloads, leading to significant time and cost savings.”

Quadro RTX GPUs are designed for demanding visual computing workloads, such as those used in film and video content creation, automotive and architectural design and scientific visualization.

Quadro RTX GPU features include:
• New RT cores to enable realtime raytracing of objects and environments with physically accurate shadows, reflections, refractions and global illumination.
• Turing Tensor Cores to accelerate deep neural network training and inference, which are critical to powering AI-enhanced rendering, products and services.
• New Turing Streaming Multiprocessor architecture, featuring up to 4,608 CUDA cores, that delivers up to 16 trillion floating point operations in parallel with 16 trillion integer operations per second to accelerate complex simulation of real-world physics.
• Advanced programmable shading technologies to improve the performance of complex visual effects and graphics-intensive experiences.
• First implementation of ultra-fast Samsung 16Gb GDDR6 memory to support more complex designs, massive architectural datasets, 8K movie content and more.
• Nvidia NVLink to combine two GPUs with a high-speed link to scale memory capacity up to 96GB and drive higher performance with up to 100GB/s of data transfer.
• Hardware support for USB Type-C and VirtualLink, a new open industry standard being developed to meet the power, display and bandwidth demands of next-generation VR headsets through a single USB-C connector.
• New and enhanced technologies to improve performance of VR applications, including Variable-Rate Shading, Multi-View Rendering and VRWorks Audio.

The Quadro RTX Server combines Quadro RTX GPUs with new Quadro Infinity software (available in the 1st quarter of 2019) to deliver a flexible architecture to meet the demands of creative pros. Quadro Infinity will enable multiple users to access a single GPU through virtual workstations, dramatically increasing the density of the datacenter. End-users can also easily provision render nodes and workstations based on their specific needs.

Quadro RTX GPUs will be available starting in the 4th quarter. Pricing is as follows:
Quadro RTX 8000 with 48GB memory: $10,000 estimated street price
Quadro RTX 6000 with 24GB memory: $6,300 ESP
Quadro RTX 5000 with 16GB memory: $2,300 ESP


SIGGRAPH: StarVR One VR headset with integrated eye tracking

StarVR was at SIGGRAPH 2018 with the StarVR One, its next-generation VR headset built to deliver the most lifelike VR experience possible. Featuring advanced optics, VR-optimized displays, integrated eye tracking and a vendor-agnostic tracking architecture, StarVR One is built from the ground up to support use cases in the commercial and enterprise sectors.

The StarVR One VR head-mounted display provides a nearly 100 percent human viewing angle — a 210-degree horizontal and 130-degree vertical field-of-view — and supports a more expansive user experience. Approximating natural human peripheral vision, StarVR One can support rigorous and exacting VR experiences such as driving and flight simulations, as well as tasks such as identifying design issues in engineering applications.

StarVR’s custom AMOLED displays serve up 16 million subpixels at a 90Hz refresh rate. The proprietary displays are designed specifically for VR with a unique full-RGB-per-pixel arrangement to provide a professional-grade color spectrum for real-life color. Coupled with StarVR’s custom Fresnel lenses, the result is a clear visual experience within the entire field of view.

StarVR One automatically measures interpupillary distance (IPD) and instantly provides the best image adjusted for every user. Integrated Tobii eye-tracking technology enables foveated rendering, a technology that concentrates high-quality rendering only where the eyes are focused. As a result, the headset pushes the highest-quality imagery to the eye-focus area while maintaining the right amount of peripheral image detail.
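
To illustrate the idea, this short Python sketch computes a per-tile shading-rate map from a gaze point, keeping full rate in the fovea and dropping to coarser rates in the periphery. The tile grid, radii and rates are invented for the example and are not StarVR or Tobii parameters.

# Illustrative foveated-rendering map: full shading rate where the user is
# looking, progressively coarser toward the periphery.
import numpy as np

def shading_rate_map(width_tiles, height_tiles, gaze_x, gaze_y):
    """Samples per pixel for each screen tile, given a gaze point in
    normalized [0, 1] screen coordinates from the eye tracker."""
    ys, xs = np.mgrid[0:height_tiles, 0:width_tiles]
    # Distance of each tile center from the gaze point in normalized units.
    dist = np.hypot((xs + 0.5) / width_tiles - gaze_x,
                    (ys + 0.5) / height_tiles - gaze_y)
    rates = np.full(dist.shape, 1.0)   # foveal region: full shading rate
    rates[dist > 0.15] = 0.5           # near periphery: half rate
    rates[dist > 0.35] = 0.25          # far periphery: quarter rate
    return rates

print(shading_rate_map(8, 6, gaze_x=0.7, gaze_y=0.4))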

StarVR One eye-tracking thus opens up commercial possibilities that leverage user-intent data for content gaze analysis and improved interactivity, including heat maps.

Two products are available with two different integrated tracking systems. The StarVR One is ready out of the box for the SteamVR 2.0 tracking solution. Alternatively, StarVR One XT is embedded with active optical markers for compatibility with optical tracking systems for more demanding use cases. It is further enhanced with ready-to-use plugins for a variety of tracking systems and with additional customization tools.

The StarVR One headset weighs 450 grams, and its ergonomic headband design evenly distributes this weight to ensure comfort even during extended sessions.

The StarVR software development kit (SDK) simplifies the development of new content or the upgrade of an existing VR experience to StarVR’s premium wide-field-of-view platform. Developers also have the option of leveraging the StarVR One dual-input VR SLI mode, maximizing the rendering performance. The StarVR SDK API is designed to be familiar to developers working with existing industry standards.

The development effort that culminated in the launch of StarVR One involved extensive collaboration with StarVR technology partners, which include Intel, Nvidia and Epic Games.


SIGGRAPH: Chaos Group releases the open beta for V-Ray for Houdini

With V-Ray for Houdini now in open beta, Chaos Group is ensuring that its rendering technology can be used in every part of the VFX pipeline. Artists can now apply high-performance raytracing to all of their creative projects, with V-Ray connecting standard applications like Autodesk’s 3ds Max and Maya and Foundry’s Katana and Nuke.

“Adding V-Ray for Houdini streamlines so many aspects of our pipeline,” says Grant Miller, creative director at Ingenuity Studios. “Combined with V-Ray for Maya and Nuke, we have a complete rendering solution that allows look-dev on individual assets to be packaged and easily transferred between applications.” V-Ray for Houdini was used by Ingenuity on Taylor Swift’s “Look What You Made Me Do” music video.

V-Ray for Houdini uses the same smart rendering technology introduced in V-Ray Next, including powerful scene intelligence, fast adaptive lighting and production-ready GPU rendering. V-Ray for Houdini includes two rendering engines – V-Ray and V-Ray GPU – allowing visual effects artists to choose the one that best takes advantage of their hardware.

V-Ray for Houdini, Beta 1 features include:
• GPU & CPU Rendering – High-performance GPU & CPU rendering capabilities for high-speed look development and final frame rendering.
• Volume Rendering – Fast, accurate illumination and rendering of VDB volumes through the V-Ray Volume Grid. Support for Houdini volumes and Mac OS is coming soon.
• V-Ray Scene Support – Easily transfer and manipulate the properties of V-Ray scenes from applications such as Maya and 3ds Max.
• Alembic Support – Full support for Alembic workflows including transformations, instancing and per object material overrides.
• Physical Hair – New Physical Hair shader renders realistic-looking hair with accurate highlights. Only hair as SOP geometry is supported currently.
• Particles – Drive shader parameters such as color, alpha and particle size through custom, per-point attributes (see the sketch after this list).
• Packed Primitives – Fast and efficient handling of Houdini’s native packed primitives at render time.
• Material Stylesheets – Full support for material overrides based on groups, bundles and attributes. VEX and per-primitive string overrides such as texture randomization are planned for launch.
• Instancing – Supports copying any object type (including volumes) using Packed Primitives, Instancer and “instancepath” attribute.
• Light Instances – Instancing of lights is supported, with options for per-instance overrides of the light parameters and constant storage of light link settings.
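
Houdini also exposes a Python API (the hou module), so the per-point-attribute idea behind the Particles feature looks roughly like the sketch below, written for a Python SOP. The Cd, Alpha and pscale names are standard Houdini point-attribute conventions that renderers, V-Ray included, typically read; the height-based falloff is invented for the example, and none of this is Chaos Group code.

# Runs inside a Houdini Python SOP, where the hou module is available.
node = hou.pwd()
geo = node.geometry()

geo.addAttrib(hou.attribType.Point, "Cd", (1.0, 1.0, 1.0))  # color
geo.addAttrib(hou.attribType.Point, "Alpha", 1.0)           # opacity
geo.addAttrib(hou.attribType.Point, "pscale", 1.0)          # particle size

for pt in geo.points():
    # Fade and shrink particles with height, as a stand-in for age or speed.
    h = pt.position()[1]
    fade = max(0.0, 1.0 - 0.1 * h)
    pt.setAttribValue("Cd", (fade, 0.3, 1.0 - fade))
    pt.setAttribValue("Alpha", fade)
    pt.setAttribValue("pscale", 0.05 + 0.02 * h)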

To join the beta, check out the Chaos Group website.

V-Ray for Houdini is currently available for Houdini and Houdini Indie 16.5.473 and later. V-Ray for Houdini supports Windows, Linux and Mac OS.


2nd-gen AMD Ryzen Threadripper processors

At the SIGGRAPH show, AMD announced the availability of its 2nd-generation AMD Ryzen Threadripper 2990WX processor with 32 cores and 64 threads. The new Threadripper processors are built on the 12nm “Zen+” x86 architecture, offer class-leading I/O and are compatible with existing AMD X399 chipset motherboards via a simple BIOS update, giving builders a broad choice for designing the ultimate high-end desktop or workstation PC.

The 32-core/64-thread Ryzen Threadripper 2990WX and the 24-core/48-thread Ryzen Threadripper 2970WX are purpose-built for prosumers who crave raw compute power to dispatch the heaviest workloads. AMD says the 2990WX offers up to 53 percent faster multithreaded performance and up to 47 percent more rendering performance for creators than Intel’s Core i9-7980XE.

The new Ryzen Threadripper X-series chips come with higher base and boost clocks for users who need high performance. The 16 cores and 32 threads in the 2950X offer up to 41 percent more multithreaded performance than Intel’s Core i9-7900X.

Additional performance and value come from:
• AMD StoreMI technology: All X399 platform users will now have free access to AMD StoreMI technology, enabling configured PCs to load files, games and applications from a high-capacity hard drive at SSD-like read speeds.
• Ryzen Master Utility: Like all AMD Ryzen processors, the 2nd-generation AMD Ryzen Threadripper CPUs are fully unlocked. With the updated AMD Ryzen Master Utility, AMD has added new features, such as fast core detection both on die and per CCX; advanced hardware controls; and simple, one-click workload optimizations.
• Precision Boost Overdrive (PBO): A new performance-enhancing feature that allows multithreaded boost limits to be raised by tapping into extra power delivery headroom in premium motherboards.

With a simple BIOS update, all 2nd-generation AMD Ryzen Threadripper CPUs are supported by a full ecosystem of new motherboards and all existing X399 platforms. Designs are available from top motherboard manufacturers, including ASRock, ASUS, Gigabyte and MSI.

The 32-core, 64-thread AMD Ryzen Threadripper 2990WX is available now from global retailers and system integrators. The 16-core, 32-thread AMD Ryzen Threadripper 2950X processor is expected to launch on August 31, and the AMD Ryzen Threadripper 2970WX and 2920X models are slated for launch in October.


Dell EMC’s ‘Ready Solutions for AI’ now available

Dell EMC has made available its new Ready Solutions for AI, with specialized designs for Machine Learning with Hadoop and Deep Learning with Nvidia.

Dell EMC Ready Solutions for AI eliminate the need for organizations to individually source and piece together their own solutions. They offer a Dell EMC-designed and validated set of best-of-breed technologies for software — including AI frameworks and libraries — with compute, networking and storage. Dell EMC’s portfolio of services includes consulting, deployment, support and education.

Dell EMC’s Data Science Provisioning Portal offers an intuitive GUI that provides self-service access to hardware resources and a comprehensive set of AI libraries and frameworks, such as Caffe and TensorFlow. This reduces the steps it takes to configure a data scientist’s workspace to five clicks. Ready Solutions for AI’s distributed, scalable architecture offers the capacity and throughput of Dell EMC Isilon’s All-Flash scale-out design, which can improve model accuracy with fast access to larger data sets.

Dell EMC Ready Solutions for AI: Deep Learning with Nvidia solutions are built around Dell EMC PowerEdge servers with Nvidia Tesla V100 Tensor Core GPUs. Key features include Dell EMC PowerEdge R740xd and C4140 servers with four Nvidia Tesla V100 SXM2 Tensor Core GPUs; Dell EMC Isilon F800 All-Flash Scale-out NAS storage; and Bright Cluster Manager for Data Science in combination with the Dell EMC Data Science Provisioning Portal.

Dell EMC Ready Solutions for AI: Machine Learning with Hadoop includes an optimized solution stack, along with data science and framework optimization to get up and running quickly, and it allows expansion of existing Hadoop environments for machine learning.

Key features include Dell EMC PowerEdge R640 and R740xd servers; Cloudera Data Science Workbench for self-service data science for the enterprise; the Apache Spark open source unified data analytics engine; and the Dell EMC Data Science Provisioning Engine, which provides preconfigured containers that give data scientists access to the Intel BigDL distributed deep learning library on the Spark framework.

New Dell EMC Consulting services are available to help customers implement and operationalize the Ready Solution technologies and AI libraries, and scale their data engineering and data science capabilities. Dell EMC Education Services offers courses and certifications on data science and advanced analytics and workshops on machine learning in collaboration with Nvidia.

Ziva VFX 1.4 adds real-world physics to character creation

Ziva Dynamics has launched Ziva VFX 1.4, a major update that gives the company’s character-creation technology five new tools for production artists. With this update, creators can apply real-world physics to even more of the character creation process — muscle growth, tissue tension and the effects of natural elements, such as heavy winds and water pressure — while removing difficult steps from the rigging process.

Ziva VFX 1.4 combines the effects of real-world physics with the rapid creation of soft-tissue materials like muscles, fat and skin. By mirroring the fundamental properties of nature, users can produce CG characters that move, flex and jiggle just as they would in real life.

With External Forces, users are able to accurately simulate how natural elements like wind and water interact with their characters. Making a character’s tissue flap or wrinkle in the wind, ripple and wave underwater, or even stretch toward or repel away from a magnetic field can all be done quickly, in a physically accurate way.
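
As a rough picture of what an external force does inside a soft-tissue solve, the toy Python sketch below integrates a few vertices that are pulled back toward their rest shape while a constant wind pushes them sideways; the mesh settles where elasticity balances the wind. The constants are invented, and this is a generic mass-spring step, not Ziva’s solver.

# Toy soft-tissue step: each vertex is pulled toward its rest position
# (tissue elasticity) while a uniform wind force pushes it sideways.
import numpy as np

rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
pos = rest.copy()
vel = np.zeros_like(pos)

wind = np.array([2.0, 0.0, 0.0])   # external force, uniform over the tissue
stiffness, damping, mass, dt = 50.0, 0.98, 1.0, 1.0 / 60.0

for step in range(120):            # two seconds at 60 steps per second
    spring = stiffness * (rest - pos)      # pull back toward the rest shape
    accel = (spring + wind) / mass
    vel = damping * (vel + accel * dt)     # semi-implicit Euler integration
    pos = pos + vel * dt

print(pos - rest)  # steady-state offset: wind balanced by elasticity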

New Pressure and Surface Tension properties can be used to “fit” fat tissues around muscles, augmenting the standard Ziva VFX anatomy tools. These settings allow users to remove fascia from a Ziva simulation while still achieving the detailed wrinkling and sliding effects that make humans and creatures look real.

Muscle growth can rapidly increase the overall muscle definition of a character or body part without requiring the user to remodel the geometry. A new Rest Scale for Tissue feature lets users grow or shrink a tissue object equally in all directions. Together, these tools improve collaboration between modelers and riggers while increasing creative control for independent artists.
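
The rest-scale idea is simple enough to show directly: scale a tissue’s rest shape uniformly about its centroid, and the solver then treats the enlarged shape as the new equilibrium. A few illustrative lines, with an arbitrary 10 percent growth factor (again, not Ziva’s implementation):

# Sketch of uniform rest-shape scaling about the tissue's centroid.
import numpy as np

rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
centroid = rest.mean(axis=0)
rest_scale = 1.10                  # 10 percent uniform growth, arbitrary
new_rest = centroid + rest_scale * (rest - centroid)
print(new_rest)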

Ziva VFX 1.4 also features the new Ziva Scene Panel, which allows artists working on complex builds to visualize their work more simply. The Ziva Scene Panel’s tree-like structure shows all connections and relationships between an asset’s objects, functions and layers, making it easier to find specific items and nodes within an Autodesk Maya scene file.

Ziva VFX 1.4 is available now as a Maya plug-in for Windows and Linux users.