Category Archives: SIGGRAPH

Chaos Group releases the open beta for V-Ray for Houdini

With V-Ray for Houdini now in open beta, Chaos Group is ensuring that its rendering technology can be used in every part of the VFX pipeline. With V-Ray for Houdini, artists can apply high-performance raytracing to all of their creative projects, connecting Houdini with standard applications like Autodesk’s 3ds Max and Maya, and Foundry’s Katana and Nuke.

“Adding V-Ray for Houdini streamlines so many aspects of our pipeline,” says Grant Miller, creative director at Ingenuity Studios. “Combined with V-Ray for Maya and Nuke, we have a complete rendering solution that allows look-dev on individual assets to be packaged and easily transferred between applications.” V-Ray for Houdini was used by Ingenuity on the Taylor Swift music video for Look What You Made Me Do. (See our main image.) 

V-Ray for Houdini uses the same smart rendering technology introduced in V-Ray Next, including powerful scene intelligence, fast adaptive lighting and production-ready GPU rendering. V-Ray for Houdini includes two rendering engines – V-Ray and V-Ray GPU – allowing visual effects artists to choose the one that best takes advantage of their hardware.

V-Ray for Houdini, Beta 1 features include:
• GPU & CPU Rendering – High-performance GPU & CPU rendering capabilities for high-speed look development and final frame rendering.
• Volume Rendering – Fast, accurate illumination and rendering of VDB volumes through the V-Ray Volume Grid. Support for Houdini volumes and Mac OS is coming soon.
• V-Ray Scene Support – Easily transfer and manipulate the properties of V-Ray scenes from applications such as Maya and 3ds Max.
• Alembic Support – Full support for Alembic workflows including transformations, instancing and per object material overrides.
• Physical Hair – New Physical Hair shader renders realistic-looking hair with accurate highlights. Only hair as SOP geometry is supported currently.
• Particles – Drive shader parameters such as color, alpha and particle size through custom, per-point attributes.
• Packed Primitives – Fast and efficient handling of Houdini’s native packed primitives at render time.
• Material Stylesheets – Full support for material overrides based on groups, bundles and attributes. VEX and per-primitive string overrides such as texture randomization are planned for launch.
• Instancing – Supports copying any object type (including volumes) using Packed Primitives, Instancer and “instancepath” attribute.
• Light Instances – Instancing of lights is supported, with options for per-instance overrides of the light parameters and constant storage of light link settings.
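To illustrate the per-point attribute idea in the Particles bullet above, here is a minimal, tool-agnostic Python sketch. The attribute names (`Cd`, `Alpha`, `pscale`) follow Houdini convention, but the function itself is a hypothetical illustration, not part of V-Ray’s API:

```python
def resolve_point_params(points, shader_defaults):
    """For each point, start from the shader's default parameters and let
    any matching per-point attribute (e.g. color, alpha, size) override it.
    Hypothetical helper for illustration only."""
    resolved = []
    for point in points:
        params = dict(shader_defaults)   # copy the shader defaults
        for name, value in point.items():
            if name in params:           # only known shader parameters override
                params[name] = value
        resolved.append(params)
    return resolved

# Two particles: the first overrides color and size, the second carries no
# shader-relevant attributes and falls back to the defaults.
points = [{"Cd": (1.0, 0.0, 0.0), "pscale": 0.5}, {"id": 7}]
defaults = {"Cd": (1.0, 1.0, 1.0), "Alpha": 1.0, "pscale": 1.0}
```

The same pattern applies whether the attributes drive color, alpha or particle size: the point-level value wins wherever one exists, and the shader default fills in everywhere else.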

To join the beta, check out the Chaos Group website.

V-Ray for Houdini is currently available for Houdini and Houdini Indie 16.5.473 and later. V-Ray for Houdini supports Windows, Linux and Mac OS.

2nd-gen AMD Ryzen Threadripper processors

At the SIGGRAPH show, AMD announced the availability of its 2nd-generation AMD Ryzen Threadripper 2990WX processor with 32 cores and 64 threads. These new AMD Ryzen Threadripper processors are built using 12nm “Zen+” x86 processor architecture. Second-gen AMD Ryzen Threadripper processors support the most I/O and are compatible with existing AMD X399 chipset motherboards via a simple BIOS update, offering builders a broad choice for designing the ultimate high-end desktop or workstation PC.

The 32-core/64-thread Ryzen Threadripper 2990WX and the 24-core/48-thread Ryzen Threadripper 2970WX are purpose-built for prosumers who crave raw compute power to dispatch the heaviest workloads. The 2nd-gen AMD Ryzen Threadripper 2990WX offers up to 53 percent faster multithread performance and up to 47 percent more rendering performance for creators than the Core i9-7980XE.

This new AMD Ryzen Threadripper X series comes with higher base and boost clocks for users who need high performance. The 16 cores and 32 threads in the 2950X model offer up to 41 percent more multithreaded performance than the Core i9-7900X.

Additional performance and value come from:
• AMD StoreMI technology: All X399 platform users will now have free access to AMD StoreMI technology, enabling configured PCs to load files, games and applications from a high-capacity hard drive at SSD-like read speeds.
• Ryzen Master Utility: Like all AMD Ryzen processors, the 2nd-generation AMD Ryzen Threadripper CPUs are fully unlocked. With the updated AMD Ryzen Master Utility, AMD has added new features, such as fast core detection both on die and per CCX; advanced hardware controls; and simple, one-click workload optimizations.
• Precision Boost Overdrive (PBO): A new performance-enhancing feature that allows multithreaded boost limits to be raised by tapping into extra power delivery headroom in premium motherboards.

With a simple BIOS update, all 2nd-generation AMD Ryzen Threadripper CPUs are supported by a full ecosystem of new motherboards and all existing X399 platforms. Designs are available from top motherboard manufacturers, including ASRock, ASUS, Gigabyte and MSI.

The 32-core, 64-thread AMD Ryzen Threadripper 2990WX is available now from global retailers and system integrators. The 16-core, 32-thread AMD Ryzen Threadripper 2950X processor is expected to launch on August 31, and the AMD Ryzen Threadripper 2970WX and 2920X models are slated for launch in October.

DG 7.9.18

Dell EMC’s ‘Ready Solutions for AI’ now available

Dell EMC has made available its new Ready Solutions for AI, with specialized designs for Machine Learning with Hadoop and Deep Learning with Nvidia.

Dell EMC Ready Solutions for AI eliminate the need for organizations to individually source and piece together their own solutions. They offer a Dell EMC-designed and validated set of best-of-breed technologies for software — including AI frameworks and libraries — with compute, networking and storage. Dell EMC’s portfolio of services include consulting, deployment, support and education.

Dell EMC’s Data Science Provisioning Portal offers an intuitive GUI that provides self-service access to hardware resources and a comprehensive set of AI libraries and frameworks, such as Caffe and TensorFlow. This reduces the steps it takes to configure a data scientist’s workspace to five clicks. Ready Solutions for AI’s distributed, scalable architecture offers the capacity and throughput of Dell EMC Isilon’s All-Flash scale-out design, which can improve model accuracy with fast access to larger data sets.

Dell EMC Ready Solutions for AI: Deep Learning with Nvidia solutions are built around Dell EMC PowerEdge servers with Nvidia Tesla V100 Tensor Core GPUs. Key features include Dell EMC PowerEdge R740xd and C4140 servers with four Nvidia Tesla V100 SXM2 Tensor Core GPUs; Dell EMC Isilon F800 All-Flash Scale-out NAS storage; and Bright Cluster Manager for Data Science in combination with the Dell EMC Data Science Provisioning Portal.

Dell EMC Ready Solutions for AI: Machine Learning with Hadoop includes an optimized solution stack, along with data science and framework optimization to get up and running quickly, and it allows expansion of existing Hadoop environments for machine learning.

Key features include Dell EMC PowerEdge R640 and R740xd servers; Cloudera Data Science Workbench for self-service data science for the enterprise; the Apache Spark open source unified data analytics engine; and the Dell EMC Data Science Provisioning Engine, which provides preconfigured containers that give data scientists access to the Intel BigDL distributed deep learning library on the Spark framework.

New Dell EMC Consulting services are available to help customers implement and operationalize the Ready Solution technologies and AI libraries, and scale their data engineering and data science capabilities. Dell EMC Education Services offers courses and certifications on data science and advanced analytics and workshops on machine learning in collaboration with Nvidia.


Ziva VFX 1.4 adds real-world physics to character creation

Ziva Dynamics has launched Ziva VFX 1.4, a major update that gives the company’s character-creation technology five new tools for production artists. With this update, creators can apply real-world physics to even more of the character creation process — muscle growth, tissue tension and the effects of natural elements, such as heavy winds and water pressure — while removing difficult steps from the rigging process.

Ziva VFX 1.4 combines the effects of real-world physics with the rapid creation of soft-tissue materials like muscles, fat and skin. By mirroring the fundamental properties of nature, users can produce CG characters that move, flex and jiggle just as they would in real life.

With External Forces, users are able to accurately simulate how natural elements like wind and water interact with their characters. Making a character’s tissue flap or wrinkle in the wind, ripple and wave underwater, or even stretch toward or be repelled by a magnetic field can all be done quickly, in a physically accurate way.

New Pressure and Surface Tension properties can be used to “fit” fat tissues around muscles, augmenting the standard Ziva VFX anatomy tools. These settings allow users to remove fascia from a Ziva simulation while still achieving the detailed wrinkling and sliding effects that make humans and creatures look real.

Muscle growth can rapidly increase the overall muscle definition of a character or body part without requiring the user to remodel the geometry. A new Rest Scale for Tissue feature lets users grow or shrink a tissue object equally in all directions. Together, these tools improve collaboration between modelers and riggers while increasing creative control for independent artists.

Ziva VFX 1.4 also now features Ziva Scene Panel, which allows artists working on complex builds to visualize their work more simply. Ziva Scene Panel’s tree-like structure shows all connections and relationships between an asset’s objects, functions and layers, making it easier to find specific items and nodes within an Autodesk Maya scene file.

Ziva VFX 1.4 is available now as a Maya plug-in for Windows and Linux users.


Allegorithmic’s Substance Painter adds subsurface scattering

Allegorithmic has released the latest additions to its Substance Painter tool, targeted at VFX and game studios and pros who are looking for ways to create realistic lighting effects. Substance Painter enhancements include subsurface scattering (SSS), new projections and fill tools, improvements to the UX and support for a range of new meshes.

Using Substance Painter’s newly updated shaders, artists will be able to add subsurface scattering as a default option. Artists can add a Scattering map to a texture set and activate the new SSS post-effect. Skin, organic surfaces, wax, jade and any other translucent materials that require extra care will now look more realistic, with redistributed light shining through from under the surface.

The release also includes updates to projection and fill tools, beginning with the user-requested addition of non-square projection. Images can be loaded in both the projection and stencil tool without altering the ratio or resolution. Those projection and stencil tools can also disable tiling in one or both axes. Fill layers can be manipulated directly in the viewport using new manipulator controls. Standard UV projections feature a 2D manipulator in the UV viewport. Triplanar Projection received a full 3D manipulator in the 3D viewport, and both can be translated, scaled and rotated directly in-scene.

Along with the improvements to the artist tools, Substance Painter includes several updates designed to improve the overall experience for users of all skill levels. Consistency between tools has been improved, and additions like exposed presets in Substance Designer and a revamped, universal UI guide make it easier for users to jump between tools.

Additional updates include:
• Alembic support — The Alembic file format is now supported by Substance Painter, starting with mesh and camera data. Full animation support will be added in a future update.
• Camera import and selection — Multiple cameras can be imported with a mesh, allowing users to switch between angles in the viewport; previews of the framed camera angle now appear as an overlay in the 3D viewport.
• Full glTF support — Substance Painter now automatically imports and applies textures when loading glTF meshes, removing the need to import or adapt mesh downloads from Sketchfab.
• ID map drag-and-drop — Both materials and smart materials can be taken from the shelf and dropped directly onto ID colors, automatically creating an ID mask.
• Improved Substance format support — Improved tweaking of Substance-made materials and effects thanks to visible-if and embedded presets.


Quick Chat: Joyce Cox talks VFX and budgeting

Veteran VFX producer Joyce Cox has a long and impressive list of credits to her name. She got her start producing effects shots for Titanic and from there went on to produce VFX for Harry Potter and the Sorcerer’s Stone, The Dark Knight and Avatar, among many others. Along the way, Cox perfected her process for budgeting VFX for films and became a go-to resource for many major studios. She realized that the practice of budgeting VFX could be done more efficiently if there was a standardized way to track all of the moving parts in the life cycle of a project’s VFX costs.

With a background in the finance industry, combined with extensive VFX production experience, she decided to apply her process and best practices to developing a solution for other filmmakers. That has evolved into a new web-based app called Curó, which targets visual effects budgeting from script to screen. It will be debuting at SIGGRAPH in Vancouver this month.

Ahead of the show, we reached out to find out more about her VFX producer background and her path to becoming the maker of a product designed to make other VFX pros’ lives easier.

You got your big break in visual effects working on the film Titanic. Did you know that it would become such an iconic landmark film for this business while you were in the throes of production?
I recall thinking the rough cut I saw in the early stage was something special, but had no idea it would be such a massive success.

Were there contacts made on that film that helped kickstart your career in visual effects?
Absolutely. It was my introduction into the visual effects community and offered me opportunities to learn the landscape of digital production and develop relationships with many talented, inventive people. Many of them I continued to work with throughout my career as a VFX producer.

Did you face any challenges as a woman working in below-the-line production in those early days of digital VFX?
It is a bit tricky. Visual effects is still a primarily male-dominated arena, and it is a highly competitive environment. I think what helped me navigate the waters is my approach. My focus is always on what is best for the movie.

Was there anyone from those days that you would consider a professional mentor?
Yes. I credit Richard Hollander, a gifted VFX supervisor/producer, with exposing me to the technology and methodologies of visual effects: how to conceptualize a VFX project and understand all the moving parts. I worked with Richard on several projects producing the visual effects within digital facilities. Those experiences served me well when I moved to working on the production side, navigating the balance between the creative agenda, the approved studio budgets and the facility resources available.

You’ve worked as a VFX producer on some of the most notable studio effects films of all time, including X-Men 2, The Dark Knight, Avatar and The Jungle Book. Was there a secret to your success or are you just really good at landing top gigs?
I’d say my skills lie more in doing the work than finding the work. I believe I continued to be offered great opportunities because those I’d worked for before understood that I facilitated their goals of making a great movie. And that I remain calm while managing the natural conflicts that arise between creative desire and financial reality.

Describe what a VFX producer does exactly on a film, and what the biggest challenges are of the job.
This is a tough question. During pre-production, working with the director, VFX supervisor and other department heads, the VFX producer breaks down the movie into the digital assets (creatures, environments, matte paintings, etc.) that need to be created, and estimates how many visual effects shots are needed to achieve the creative goals, as well as the VFX production crew required to support the project. Since no one knows exactly what will be needed until the movie is shot and edited, it is all theory.

During production, the VFX producer oversees the buildout of the communications, data management and digital production schedule that are critical to success. Also, during production the VFX producer is evaluating what is being shot and tries to forecast potential changes to the budget or schedule.

Starting in production and going through post, the focus is on getting the shots turned over to digital facilities to begin work. This is challenging in that creative or financial changes can delay moving forward with digital production, compressing the window of time within which to complete all the work for release. Once everything is turned over that focus switches to getting all the shots completed and delivered for the final assembly.

What film did you see that made you want to work in visual effects?
Truthfully, I did not have my sights set on visual effects. I’ve always had a keen interest in movies and wanted to produce them. It was really just a series of unplanned events, and I suppose my skills at managing highly complex processes drew me further into the world of visual effects.

Did having a background in finance help in any particular way when you transitioned into VFX?
Yes, before I entered into production, I spent a few years working in the finance industry. That experience has been quite helpful and perhaps is something that gave me a bit of a leg up in understanding the finances of filmmaking and the ability to keep track of highly volatile budgets.

You pulled out of active production in 2016 to focus on a new company, tell me about Curó.
Because of my background in finance and accounting, one of the first things I noticed when I began working in visual effects was, unlike production and post, the lack of any unified system for budgeting and managing the finances of the process.  So, I built an elaborate system of worksheets in Excel that I refined over the years. This design and process served as the basis for Curó’s development.

To this day, the entire visual effects community manages its finances, which can run to tens, if not hundreds, of millions of dollars in spend, with spreadsheets. Add to that the fact that everyone’s document designs are different, which makes collaborating on, interpreting and managing facility bids unwieldy, to say the least.

Why do you think the industry needs Curó, and why is now the right time? 
Visual effects is the fastest-growing segment of the film industry, demonstrated in the screen credits of VFX-heavy films. The majority of studio projects are these tent-pole films, which make heavy use of visual effects. The volatility of visual effects finances can be managed more efficiently with Curó, and the language of VFX financial management across the industry would benefit greatly from a unified system.

Who’s been beta testing Curó, and what’s in store for the future, after its Siggraph debut?
We’ve had a variety of beta users over the past year. In addition to Sony and Netflix, a number of freelance VFX producers and supervisors, as well as VFX facilities, have beta access.

The first phase of the Curó release focuses on the VFX producers and studio VFX departments, providing tools for initial breakdown and budgeting of digital and overhead production costs. After Siggraph we will be continuing our development, focusing on vendor bid packaging, bid comparison tools and management of a locked budget throughout production and post, including the accounting reports, change orders, etc.

We are also talking with visual effects facilities about developing a separate but connected module for their internal granular bidding of human and technical resources.



SIGGRAPH conference chair Roy C. Anthony: VR, AR, AI, VFX, more

By Randi Altman

Next month, SIGGRAPH returns to Vancouver after turns in Los Angeles and Anaheim. This gorgeous city, whose convention center offers a water view, is home to many visual effects studios providing work for film, television and spots.

As usual, SIGGRAPH will host many presentations, showcase artists’ work, display technology and offer a glimpse into what’s on the horizon for this segment of the market.

Roy C. Anthony

Leading up to the show — which takes place August 12-16 — we reached out to Roy C. Anthony, this year’s conference chair. For his day job, Anthony recently joined Ventuz Technology as VP, creative development. There, he leads initiatives to bring Ventuz’s realtime rendering technologies to creators of sets, stages and ProAV installations around the world.

SIGGRAPH is back in Vancouver this year. Can you talk about why it’s important for the industry?
There are 60-plus world-class VFX and animation studios in Vancouver. There are more than 20,000 film and TV jobs, and more than 8,000 VFX and animation jobs in the city.

So, Vancouver’s rich production-centric communities are leading the way in film and VFX production for television and onscreen films. They are also busy with new media content, games work and new workflows, including those for AR/VR/mixed reality.

How many exhibitors this year?
The conference and exhibition will play host to over 150 exhibitors on the show floor, showcasing the latest in computer graphics and interactive technologies, products and services. Thanks to the amount of new technology that has debuted in the computer graphics marketplace over the past year, almost one quarter of this year’s 150 exhibitors will be presenting at SIGGRAPH for the first time.

In addition to the traditional exhibit floor and conferences, what are some of the can’t-miss offerings this year?
We have increased the presence of virtual, augmented and mixed reality projects and experiences — and we are introducing our new Immersive Pavilion in the east convention center, which will be dedicated to this area. We’ve incorporated immersive tech into our computer animation festival with the inclusion of our VR Theater, back for its second year, as well as inviting a special, curated experience with New York University’s Ken Perlin — he’s a legendary computer graphics professor.

We’ll be kicking off the week in a big VR way with a special session following the opening ceremony featuring Ivan Sutherland, considered by many as “the father of computer graphics.” That 50-year retrospective will present the history and innovations that sparked our industry.

We have also brought Syd Mead, a legendary “visual futurist” (Blade Runner, Tron, Star Trek: The Motion Picture, Aliens, Timecop, Tomorrowland, Blade Runner 2049), who will display an arrangement of his art in a special collection called Progressions. This will be seen within our Production Gallery experience, which also returns for its second year. Progressions will exhibit more than 50 years of artwork by Syd, from his academic years to his most current work.

We will have an amazing array of guest speakers, including those featured within the Business Symposium, which is making a return to SIGGRAPH after an absence of a few years. Among these speakers are people from the Disney Technology Innovation Group, Unity and Georgia Tech.

On Tuesday, August 14, our SIGGRAPH Next series will present a keynote speaker each morning to kick off the day with an inspirational talk. These speakers are Tony DeRose, a senior scientist from Pixar; Daniel Szecket, VP of design for Quantitative Imaging Systems; and Bob Nicoll, dean of Blizzard Academy.

There will be a 25th anniversary showing of the original Jurassic Park movie, hosted by Steve “Spaz” Williams, a digital artist who worked on that film.

Can you talk about this year’s keynote and why he was chosen?
We’re thrilled to have ILM head and senior VP/ECD Rob Bredow deliver the keynote address this year. Rob is all about innovation — pushing through scary new directions while maintaining the leadership of artists and technologists.

Rob is the ultimate modern-day practitioner, a digital VFX supervisor who has been disrupting ‘the way it’s always been done’ to move to new ways. He truly reflects the spirit of ILM, which was founded in 1975 and is just one year younger than SIGGRAPH.

A large part of SIGGRAPH is its slant toward students and education. Can you discuss how this came about and why this is important?
SIGGRAPH supports education in all sub-disciplines of computer graphics and interactive techniques, and it promotes and improves the use of computer graphics in education. Our Education Committee sponsors a broad range of projects, such as curriculum studies, resources for educators and SIGGRAPH conference-related activities.

SIGGRAPH has always been a welcoming and diverse community, one that encourages mentorship, and acknowledges that art inspires science and science enables advances in the arts. SIGGRAPH was built upon a foundation of research and education.

How are the Computer Animation Festival films selected?
The Computer Animation Festival has two programs, the Electronic Theater and the VR Theater. Because of the large volume of submissions for the Electronic Theater (over 400), there is a triage committee for the first phase. The CAF chair then takes the high-scoring pieces to a jury of industry professionals. The jury’s selections then become the Electronic Theater show pieces.

The selections for the VR Theater are made by a smaller panel, made up mostly of sub-committee members, who watch each film in a VR headset and vote.

Can you talk more about how SIGGRAPH is tackling AR/VR/AI and machine learning?
Since SIGGRAPH 2018 is about the theme of “Generations,” we took a step back to look at how we got where we are today in terms of AR/VR, and where we are going with it. Much of what we know today wouldn’t have been possible without the research and creation of Ivan Sutherland’s 1968 head-mounted display. We have a fantastic panel celebrating the 50-year anniversary of his HMD, which is widely considered to be the first VR HMD.

AI tools are newer, and we created a panel that focuses on trends and the future of AI tools in VFX, called “Future Artificial Intelligence and Deep Learning Tools for VFX.” This panel gains insight from experts embedded in both the AI and VFX industries and gives attendees a look at how different companies plan to further their technology development.

What is the process for making sure that all aspects of the industry are covered in terms of panels?
Every year, new ideas for panels and sessions are submitted by contributors from all over the globe. Those submissions are then reviewed by a jury of industry experts, and it is through this process that panelists and cross-industry coverage are determined.

Each year, the conference chair oversees the program chairs, then each of the program chairs become part of a jury process — this helps to ensure the best program with the most industries represented from across all disciplines.

In the rare case a program committee feels they are missing something key in the industry, they can try to curate a panel in, but we still require that that panel be reviewed by subject matter experts before it would be considered for final acceptance.



Maxon debuts Cinema 4D Release 19 at SIGGRAPH

Maxon was at this year’s SIGGRAPH in Los Angeles showing Cinema 4D Release 19 (R19). This next generation of Maxon’s pro 3D app offers a new viewport and a new Sound Effector, and additional features for Voronoi Fracturing have been added to the MoGraph toolset. It also boasts a new Spherical Camera, the integration of AMD’s ProRender technology and more. Designed to serve individual artists as well as large studio environments, Release 19 offers a streamlined workflow for general design, motion graphics, VFX, VR/AR and all types of visualization.

With Cinema 4D Release 19, Maxon also introduced a few re-engineered foundational technologies, which the company will continue to develop in future versions. These include core software modernization efforts, a new modeling core, integrated GPU rendering for Windows and Mac, and OpenGL capabilities in BodyPaint 3D, Maxon’s pro paint and texturing toolset.

More details on the offerings in R19:
Viewport Improvements provide artists with added support for screen-space reflections and OpenGL depth-of-field, in addition to the screen-space ambient occlusion and tessellation features (added in R18). Results are so close to final render that client previews can be output using the new native MP4 video support.

MoGraph enhancements expand on Cinema 4D’s toolset for motion graphics with faster results and added workflow capabilities in Voronoi Fracturing, such as the ability to break objects progressively, add displaced noise details for improved realism or glue multiple fracture pieces together more quickly for complex shape creation. An all-new Sound Effector in R19 allows artists to create audio-reactive animations based on multiple frequencies from a single sound file.

The new Spherical Camera allows artists to render stereoscopic 360° virtual reality videos and dome projections. Artists can specify a latitude and longitude range, and render in equirectangular, cubic string, cubic cross or 3×2 cubic format. The new spherical camera also includes stereo rendering with pole smoothing to minimize distortion.
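For readers curious what an equirectangular render involves, the core mapping from a normalized image coordinate to a view direction on the sphere can be sketched as below. This is a generic formulation, not Maxon’s code, and the axis convention (y-up, z-forward) is an assumption for illustration:

```python
import math

def equirect_dir(u, v,
                 lon_range=(-math.pi, math.pi),
                 lat_range=(-math.pi / 2, math.pi / 2)):
    """Map a normalized image coordinate (u, v) in [0, 1]^2 to a unit view
    direction, honoring a user-specified latitude/longitude range (as the
    Spherical Camera allows). Generic sketch, y-up / z-forward convention."""
    lon = lon_range[0] + u * (lon_range[1] - lon_range[0])
    lat = lat_range[1] - v * (lat_range[1] - lat_range[0])  # v grows downward
    return (math.cos(lat) * math.sin(lon),   # x: right
            math.sin(lat),                   # y: up
            math.cos(lat) * math.cos(lon))   # z: forward
```

A renderer evaluates this mapping once per pixel and traces the resulting ray; the image center looks straight ahead, the top row looks straight up, and the left/right edges wrap around behind the camera.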

New Polygon Reduction works as a generator, so it’s easy to reduce entire hierarchies. The reduction is pre-calculated, so adjusting the reduction strength or desired vertex count is extremely fast. The new Polygon Reduction preserves vertex maps, selection tags and UV coordinates, ensuring textures continue to map properly and providing control over areas where polygon detail is preserved.

Level of Detail (LOD) Object features a new interface element that lets customers define and manage settings to maximize viewport and render speed, create new types of animations or prepare optimized assets for game workflows. Level of Detail data exports via the FBX 3D file exchange format for use in popular game engines.
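The selection logic behind an LOD object can be sketched generically: given ascending camera-distance thresholds, pick which level of detail to display. This is a hypothetical helper for illustration, not Cinema 4D’s implementation (which can also switch on criteria such as screen size):

```python
def pick_lod(distance, thresholds):
    """Return the LOD index to display for a given camera distance.
    `thresholds` is an ascending list of switch distances; index 0 is the
    full-detail mesh, higher indices are progressively coarser variants."""
    if distance < 0:
        raise ValueError("distance must be non-negative")
    for level, limit in enumerate(thresholds):
        if distance < limit:
            return level
    return len(thresholds)  # beyond the last threshold: coarsest level

# Example: full detail within 10 units, coarsest beyond 200.
# pick_lod(5, [10, 50, 200]) selects level 0.
```

Exporting such level assignments via FBX, as described above, lets a game engine perform the equivalent switch at runtime.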

AMD’s Radeon ProRender technology is now seamlessly integrated into R19, providing artists a cross-platform GPU rendering solution. Though just the first phase of integration, it provides a useful glimpse into the power ProRender will eventually provide as more features and deeper Cinema 4D integration are added in future releases.

Modernization efforts in R19 reflect Maxon’s development legacy and offer the first glimpse into the company’s planned ‘under-the-hood’ future efforts to modernize the software, as follows:

  • Revamped Media Core gives Cinema 4D R19 users a completely rewritten software core that increases speed and memory efficiency for image, video and audio formats. Native support for MP4 video without QuickTime delivers advantages when previewing renders, incorporating video as textures or motion tracking footage for a more robust workflow. Export of production formats, such as OpenEXR and DDS, has also been improved.
  • Robust Modeling offers a new modeling core whose improved support for edges and N-gons can be seen in the Align and Reverse Normals commands. More modeling tools and generators will directly use this new core in future versions.
  • BodyPaint 3D now uses an OpenGL painting engine giving R19 artists painting color and adding surface details in film, game design and other workflows, a real-time display of reflections, alpha, bump or normal, and even displacement, for improved visual feedback and texture painting. Redevelopment efforts to improve the UV editing toolset in Cinema 4D continue with the first-fruits of this work available in R19 for faster and more efficient options to convert point and polygon selections, grow and shrink UV point selects, and more.

Dell intros new Precision workstations, Dell Canvas and more

To celebrate the 20th anniversary of Dell Precision workstations, Dell announced additions to its Dell Precision fixed workstation portfolio, a special anniversary edition of its Dell Precision 5520 mobile workstation and the official availability of Dell Canvas, the new workspace device for digital creation.

Dell is showcasing its next-generation, fixed workstations at SIGGRAPH, including the Dell Precision 5820 Tower, Precision 7820 Tower, Precision 7920 Tower and Precision 7920 Rack, completely redesigned inside and out.

The three new Dell Precision towers combine a brand-new flexible chassis with the latest Intel Xeon processors, next-generation Radeon Pro graphics and the highest-performing Nvidia Quadro professional graphics cards. Certified for professional software applications, the new towers are configured to complete the most complex projects, including virtual reality. Dell’s Reliable Memory Technology (RMT) Pro ensures memory challenges don’t kill your workflow, and Dell Precision Optimizer (DPO) tailors performance for your unique hardware and software combination.

The fully-customizable configuration options deliver the flexibility to tackle virtually any workload, including:

  • AI: The latest Intel Xeon processors are an excellent choice for artificial intelligence (AI), with agile performance across a variety of workloads, including machine learning (ML) and deep learning (DL) inference and training. If you’re just starting AI workloads, the new Dell Precision tower workstations allow you to use software optimized for your existing Intel infrastructure.
  • VR: The Nvidia Quadro GP100 powers the development and deployment of cognitive technologies like DL and ML applications. Additional Nvidia Pascal GPU options, with HBM2 memory and NVLink technology, allow professional users to create complex designs in computer-aided engineering (CAE) and experience lifelike VR environments.
  • Editing and playback: Radeon Pro SSG Graphics with HBM2 memory and 2TB of SSD onboard allows real-time 8K video editing and playback, high-performance computing of massive datasets, and rendering of large projects.

The Dell Precision 7920 Rack is ideal for secure, remote workers and delivers the same power and scalability as the highest-performing tower workstation in a 2U form factor. The Dell Precision 5820, 7820, 7920 towers and 7920 Rack will be available for order beginning October 3.

“Looking back at 20 years of Dell Precision workstations, you get a sense of how the capabilities of our workstations, combined with certified and optimized software and the creativity of our awesome customers, have achieved incredible things,” said Rahul Tikoo, vice president and general manager for Dell Precision workstations. “As great as those achievements are, this new lineup of Dell Precision workstations enables our customers to be ready for the next big technology revolution that is challenging business models and disrupting industries.”

Dell Canvas

Dell has also announced its highly-anticipated Dell Canvas, available now. Dell Canvas is a new workspace designed to make digital creation more natural. It features a 27” QHD touch screen that sits horizontally on your desk and can be powered by your current PC ecosystem and the latest Windows 10 Creators Update. Additionally, a digital pen provides precise tactile accuracy and the totem offers diverse menu and shortcut interaction.

For the 20th anniversary of Dell Precision, Dell is introducing a limited-edition anniversary model of its award-winning mobile workstation, the Dell Precision 5520. The Dell Precision 5520 Anniversary Edition is Dell’s thinnest, lightest, and smallest mobile workstation, available for a limited time in hard-anodized aluminum, with a brushed metallic finish in a brand-new Abyss color with anti-fingerprint coating. The device is available now with two high-end configuration options.

Blackmagic’s Fusion 9 is now VR-enabled

At SIGGRAPH, Blackmagic was showing Fusion 9, its newly upgraded visual effects, compositing, 3D and motion graphics software. Fusion 9 features new VR tools, an entirely new keyer technology, planar tracking, camera tracking, multi-user collaboration tools and more.

Fusion 9 is available now at a new price point — Blackmagic has lowered the price of the Studio version from $995 to $299. (Blackmagic is also offering a free version of Fusion.) The software now works on Mac, PC and Linux.

Those working in VR get a full 360° true 3D workspace, along with a new panoramic viewer and support for popular VR headsets such as Oculus Rift and HTC Vive. Working in VR with Fusion is completely interactive. GPU acceleration makes it extremely fast so customers can wear a headset and interact with elements in a VR scene in realtime. Fusion 9 also supports stereoscopic VR. In addition, the new 360° spherical camera renders out complete VR scenes, all in a single pass and without the need for complex camera rigs.

The new planar tracker in Fusion 9 calculates motion planes for accurately compositing elements onto moving objects in a scene. For example, the new planar tracker can be used to replace signs or other flat objects as they move through a scene. Planar tracking data can also be used on rotoscope shapes. That means users don’t have to manually animate motion, perspective, position, scale or rotation of rotoscoped elements as the image changes.
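Under the hood, a planar track amounts to solving a per-frame homography: a 3×3 matrix relating the tracked plane to its appearance in the image. Once the homography is known, the corners of a replacement sign (or a roto shape's control points) can be projected into each frame. A small illustrative sketch of applying a homography to a 2D point — not Fusion's implementation, just the underlying math:

```python
def apply_homography(H, pt):
    """Project a 2D point through a 3x3 homography (row-major nested lists).

    The point is lifted to homogeneous coordinates (x, y, 1), multiplied
    by H, then divided by the resulting w to return to 2D.
    """
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

The perspective divide by `w` is what lets a single matrix encode position, scale, rotation *and* perspective distortion — exactly the properties the article notes you no longer have to animate by hand.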

Fusion 9 also features an entirely new camera tracker that analyzes the motion of a live-action camera in a scene and reconstructs the identical motion path in 3D space for use with cameras inside Fusion. This lets users composite elements that precisely match the movement and perspective of the original. Fusion can also use lens metadata for proper framing, focal length and more.

The software’s new delta keyer features a complete set of matte finesse controls for creating clean keys while preserving fine image detail. There’s also a new clean plate tool that can smooth out subtle color variations on blue- and greenscreens in live action footage, making them easier to key.
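Conceptually, a difference-style keyer derives alpha from how far each pixel's color sits from the screen (or clean-plate) color, with a soft transition band in between. A deliberately simplified Python sketch — the parameters are hypothetical and this is nothing like the delta keyer's actual internals:

```python
def key_alpha(pixel, screen, tolerance=0.15, softness=0.1):
    """Toy difference key: alpha from the Euclidean color distance
    between a pixel and the screen (or clean-plate) color.

    `tolerance` and `softness` are hypothetical matte-finesse-style
    controls; colors are (r, g, b) tuples in the 0-1 range.
    """
    dist = sum((p - s) ** 2 for p, s in zip(pixel, screen)) ** 0.5
    if dist <= tolerance:
        return 0.0            # close to the screen color: transparent
    if dist >= tolerance + softness:
        return 1.0            # clearly foreground: fully opaque
    return (dist - tolerance) / softness  # soft edge in between
```

A clean-plate pass, as described above, effectively replaces the single `screen` color with a per-pixel reference image, so uneven lighting on the green screen no longer pushes pixels out of the tolerance band.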

For multi-user collaboration, Fusion 9 Studio includes Studio Player, a new app that features a playlist, storyboard and timeline for playing back shots. Studio Player tracks version history, displays annotation notes, supports LUTs and more. The new Studio Player is suited to customers who need to see shots in a suite or theater for review and approval. Remote synchronization lets artists sync Studio Players in multiple locations.

In addition, Fusion 9 features a bin server so shared assets and tools don’t have to be copied onto each user’s local workstation.