

Redshift integrates Cinema 4D noises, nodes and more

Maxon and Redshift Rendering Technologies have released Redshift 3.0.12, which has native support for Cinema 4D noises and deeper integration with Cinema 4D, including the option to define materials using Cinema 4D’s native node-based material system.

Cinema 4D noise effects have been in demand within other 3D software packages because of their flexibility, efficiency and look. Native support in Redshift means that users of other DCC applications can now access Cinema 4D noises by using Redshift as their rendering solution. Procedural noise allows artists to easily add surface detail and randomness to otherwise perfect surfaces. Cinema 4D offers 32 different types of noise and countless variations based on settings. Native support for Cinema 4D noises means Redshift can preserve GPU memory while delivering high-quality rendered results.

Redshift 3.0.12 provides content creators with deeper integration of Redshift within Cinema 4D. Redshift materials can now be defined using Cinema 4D’s nodal material framework, introduced in Release 20. Redshift materials can also use the Node Space system introduced in Release 21, which combines the native nodes of multiple render engines into a single material. Redshift is the first renderer to take advantage of the new API in Cinema 4D to implement its own Node Spaces. Users can now also use any Cinema 4D view panel as a Redshift IPR (interactive preview render) window, making it easier to work within compact layouts and interact with a scene while developing materials and lighting.

Redshift 3.0.12 is immediately available from the Redshift website.

Maxon acquired Redshift in April 2019.

Apple intros long-awaited new Mac Pro and Pro Display XDR

By Barry Goch

The Apple Worldwide Developers Conference (WWDC19) kicked off on Monday with a keynote from Apple CEO Tim Cook, where he announced the eagerly awaited new Mac Pro and Pro Display XDR.

Tim Cook’s keynote

In recent years, many working in M&E felt as if Apple had moved away from supporting creative pros in this industry. There was the fumbled rollout of FCPX and then the “trash can” Mac Pro with its limited upgrade path. Well, our patience has finally paid off and our faith in Apple has been restored. This week Apple delivered products beyond expectation.

This post pro, for one, is very happy that Apple is back making serious hardware for creative professionals. The tight integration of hardware and software, along with Apple’s build quality, makes its products unique in the market. There is a confidence and freedom in using Macs that creatives love, and the tower footprint is back!

The computer itself is a more than worthy successor to the original Mac Pro tower design. It’s the complete opposite of the current trash-can-shaped Mac Pro, with its closed design and limited upgradeability. The new Mac Pro’s motherboard is connected to a stainless steel space frame offering 360-degree access to the internals, which include 12 memory slots with up to 1.5TB of RAM capacity and eight PCIe slots, the most ever in a Mac — more than the venerable 9600 Power Mac. The innovative graphics architecture in the new Mac Pro is an expansion module, or MPX module, which allows the installation of two graphics cards tied together through the Infinity Fabric link, enabling data transfers between the GPUs up to five times faster than the PCIe bus.

Also new is the Apple Afterburner hardware accelerator card, a field programmable gate array (FPGA) card for accelerating ProRes and ProRes RAW workflows. Afterburner supports playback of up to three streams of 8K ProRes RAW or up to 12 streams of 4K ProRes RAW. Because an FPGA allows new instructions to be installed on the chip, the Mac Pro’s Afterburner card has a wealth of possibilities for future updates.

Plays Well With Others
Across the street from the San Jose Convention Center, where the keynote was held, Apple set up “The Studio” in the historic San Jose Civic. The venue was divided into areas of creative specialization: video, photography, music production, 3D and AR. It was really great to see complete workflows and to be able to interface with Apple creative pros. Oh, and Apple announced support from third-party developers such as Blackmagic, Avid, Adobe, Maxon, Foundry, Red, Epic Games, Unity, Pixar and more.

Metal is Apple’s replacement for OpenCL and OpenGL. It’s a low-level API for interfacing with GPUs. Working closely with AMD, Apple has ensured the new Mac Pro will offer native Metal rendering for Resolve, OTOY’s Octane, Maxon’s Cinema 4D and Red.

Blackmagic’s Grant Petty and Barry Goch at The Studio.

DaVinci Resolve is color correction and online editing software for high-end film and television work. “It was the first professional software to adopt Metal and now, with the new Mac Pro and Afterburner, we’re seeing full-quality 8K performance in realtime with color correction and effects, something we could never dream of doing before,” explains Blackmagic CEO Grant Petty. “DaVinci Resolve running on the new Mac Pro is the fastest way to edit, grade and finish movies and TV shows.”

According to Avid’s director of product management for audio, Francois Quereuil, “Avid’s Pro Tools team is blown away by the unprecedented processing power of the new Mac Pro, and thanks to its internal expansion capabilities, up to six Pro Tools HDX cards can be installed within the system — a first for Avid’s flagship audio workstation. We’re now able to deliver never-before-seen performance and capabilities for audio production in a single system and deliver a platform that professional users in music and post have been eagerly awaiting.”

“Apple continues to innovate for video professionals,” reports Adobe’s VP of digital video and audio, Steven Warner. “With the power offered by the new Mac Pro, editors will be able to work with 8K without the need for any proxy workflows in a future release of Premiere Pro.”

And from Apple? Expect versions of FCPX and Logic to be available with the release of the new Mac Pro, and rest assured they will fully use the new hardware.

The Cost
The price for a Mac Pro with an eight-core Xeon W processor, 32GB of RAM, an AMD Radeon Pro 580X GPU and a 256GB SSD is $5,999. The price for the fully loaded version, with the 28-core Xeon processor, Afterburner, two MPX modules holding AMD Radeon Pro Vega II Duo graphics cards (four GPUs in total) and 4TB of internal SSD storage, will come in around $20,000, give or take. It will be available this fall.

Pro Display XDR
The new Pro Display XDR is amazing. I was invited into a calibrated viewing environment that also housed HDR monitors from Dell and Eizo, along with the Sony BVM-X300 and BVM-HX310. We were shown the typical extremely bright and colorful animal footage used for monitor demos. Personally, I would have preferred to see more shots of people from a TV show or feature, and not the usual extreme footage used to show off how bright the monitor can get.

For example, it would have been cool to see the Jony Ive video that plays on the Apple site, in which he describes the design of the Mac Pro and the monitor, playing back on the display itself.

Anyway, the big hang-up with the monitor is the stand. A price tag of $1,000 for a monitor stand is a lot relative to the price of the monitor itself. When the price of the stand was announced during the keynote, there was a loud gasp, which unfortunately dampened the excitement and momentum of the new releases. It, too, will be available in the fall.

Display Specs
This Retina 6K 32-inch (diagonal) display offers 6016×3384 pixels (20.4 million pixels) at 218 pixels per inch. Brightness is 1,000 nits sustained (full screen) with a 1,600-nit peak, and the contrast ratio is 1,000,000:1. It works in the P3 wide color gamut with 10-bit depth for 1.073 billion colors. Available reference modes include HDR video (P3-ST 2084), Digital Cinema (P3-DCI), Digital Cinema (P3-D65) and HDTV video (BT.709-BT.1886). Supported HDR formats are HLG, HDR10 and Dolby Vision.

Portrait mode

The Cost
The standard glass version is $4,999. The nano-texture anti-glare glass version is $5,999. As mentioned, the Pro Stand is $999 and the VESA mount adapter is $199; both are sold separately. The display connects via Thunderbolt 3 only.

Pros and Cons
Mac Pro Pros: innovative design, expandability.
Cons: lack of Nvidia support, no Afterburner support for formats beyond ProRes and no optical audio output.

Pro Display XDR Pros: Ability to sustain 1,000 nits, beautiful design and execution.
Cons: Lack of Rec 2020 color space and ACES profile, plus the high cost of the display stand.

Summing Up
The pro is back for Apple and for third-party apps like Avid’s Pro Tools and Blackmagic’s Resolve. I really can’t wait to get my hands on the new Mac Pro and Pro Display XDR and put them through their paces.


Barry Goch is a finishing artist at LA’s The Foundation as well as an instructor of post production at UCLA Extension. You can follow him on Twitter at @Gochya.


Review: Maxon Cinema 4D Release 20

By Brady Betzel

Last August, Maxon made available its Cinema 4D Release 20. From the new node-based Material Editor to the all-new console used to debug and develop scripts, Maxon has really upped the ante.

At the recent NAB show, Maxon announced that it had acquired Redshift Rendering Technologies, the makers of the Redshift rendering engine. This acquisition will hopefully bring an industry-standard GPU-based rendering engine into Cinema 4D R20’s workflow and speed up rendering. For now, Redshift’s licensing fees are the same as before the acquisition: $500 for a node-locked license and $600 for a floating license.

Digging In
The first update to Cinema 4D R20 that I wanted to touch on is the new node-based Material Editor. If you are familiar with Blackmagic’s DaVinci Resolve or Foundry’s Nuke, then you have seen how nodes work. I love how nodes work, allowing the user to layer up effects or, in Cinema 4D R20’s case, drive a material with anything from diffusion to camera distance. There are over 150 nodes inside the Material Editor to build textures with.

One small change I noticed inside the updated Material Editor is the new gradient settings. When working with gradient knots, you can now select multiple knots at once, then right-click to double the selected knots, invert the knots, select different knot interpolations (including stepped, smooth, cubic, linear and blend) and even distribute the knots to clean up your pattern. A really nice and convenient update to gradient workflows.

In Cinema 4D R20, not only can you add new nodes from the search menu, but you can also click the node dots in the Basic properties window and route nodes through there. When you are happy with your materials made in the node editor, you can save them as assets in the scene file or even compress them in a .zip file to share with others.

In a related update, Cinema 4D Release 20 has introduced the Uber Material. In simple terms (and I mean really simple), the Uber Material is a node-based material that differs from standard or physical materials in that it can be edited inside the Attribute Manager or Material Editor while retaining the properties available in the Node Editor.

The Camera Tracking and 2D Camera View have also been updated. While the Camera Tracking mode has been improved, the new 2D Camera View mode combines the Film Move mode with the Film Zoom mode, adding the ability to use standard shortcuts to move around a scene instead of messing with the Film Offset or Focal Length in the Camera Object properties dialogue. For someone like me, who isn’t a certified pro in Cinema 4D, these little shortcuts really make me feel at home, much more like the apps I’m used to, such as Mocha Pro or After Effects. Maxon has also improved the 2D tracking algorithm for much tighter tracks and added virtual keyframes, which are an extreme help when you don’t have time for minute adjustments.

Volume Modeling
What seems to be one of the largest updates in Cinema 4D R20 is the addition of volume modeling with the OpenVDB-based Volume Builder. According to www.openvdb.org, “OpenVDB is an Academy Award-winning C++ library comprising a hierarchical data structure and a suite of tools for the efficient manipulation of sparse, time-varying, volumetric data discretized on three-dimensional grids.” It was developed by Ken Museth at DreamWorks Animation, and it uses 3D pixels called voxels instead of polygons. When using the Volume Builder, you can combine multiple polygon and primitive objects using Boolean operations: Union, Subtract or Intersect. Furthermore, you can smooth your volume using multiple techniques, including one that made me do some extra Google work: Laplacian Flow.
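To make the voxel idea concrete, here’s a minimal sketch of the same kind of operation performed directly with the open source OpenVDB C++ library (my own illustration, not Maxon’s Volume Builder code; the radii, centers and voxel size are arbitrary values): two sphere level sets combined with a Boolean union and then smoothed with Laplacian flow.

```cpp
#include <openvdb/openvdb.h>
#include <openvdb/tools/LevelSetSphere.h>
#include <openvdb/tools/Composite.h>
#include <openvdb/tools/LevelSetFilter.h>

int main() {
    openvdb::initialize();

    // Two overlapping spheres stored as sparse voxel level sets
    // (radius, world-space center, voxel size).
    openvdb::FloatGrid::Ptr a = openvdb::tools::createLevelSetSphere<openvdb::FloatGrid>(
        1.0f, openvdb::Vec3f(0.0f, 0.0f, 0.0f), 0.05f);
    openvdb::FloatGrid::Ptr b = openvdb::tools::createLevelSetSphere<openvdb::FloatGrid>(
        1.0f, openvdb::Vec3f(0.8f, 0.0f, 0.0f), 0.05f);

    // Boolean union, analogous to the Volume Builder's Union mode.
    // The result is written into *a; *b is emptied in the process.
    openvdb::tools::csgUnion(*a, *b);

    // Smooth the combined surface with Laplacian flow, one of the
    // smoothing techniques mentioned above.
    openvdb::tools::LevelSetFilter<openvdb::FloatGrid> filter(*a);
    filter.laplacian();

    // Write the result to disk for inspection.
    openvdb::GridPtrVec grids;
    grids.push_back(a);
    openvdb::io::File file("union_smoothed.vdb");
    file.write(grids);
    file.close();
    return 0;
}
```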

Fields
When going down the voxel rabbit hole in Cinema 4D R20, you will run into another new update: Fields. Prior to Cinema 4D R20, we would use Effectors to affect strength values of an object. You would stack and animate multiple effectors to achieve different results. In Cinema 4D R20, under the Falloff tab you will now see a Fields list along with the types of Field Objects to choose from.

Imagine you make a MoGraph object whose opacity you want controlled by a box object moving through it, while a capsule poking through physically modifies it. You can combine these different field objects by using compositing functions in the Fields list. In addition, you can animate or alter these new fields straight away in the Objects window.

Summing Up
Cinema 4D Release 20 has some amazing updates that will greatly improve the efficiency and quality of your work. From tracking updates to Fields, there are plenty of exciting tools to dive into. And if you are reading this as an After Effects user who isn’t sure about Cinema 4D, now is the time to dive in. Once you learn the basics, whether from YouTube tutorials or classes at www.cineversity.com, you will immediately see an increase in the quality of your work.

Combining Adobe After Effects, Element 3D and Cinema 4D R20 is the ultimate in 3D motion graphics and 2D compositing — accessible to almost everyone. And I didn’t even touch on the dozens of other updates in Cinema 4D R20, like the multitude of ProRender updates, FBX import/export options, new node materials and CAD import support for Catia, IGES, JT, SolidWorks and STEP formats. Check out Cinema 4D Release 20’s newest features on YouTube and on Maxon’s website.

And, finally, I think it’s safe to assume that Maxon’s acquisition of the Redshift renderer promises a bright future for Cinema 4D users.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.


Quantum offers new F-Series NVMe storage arrays

During the NAB show, Quantum introduced its new F-Series NVMe storage arrays designed for performance, availability and reliability. Using non-volatile memory express (NVMe) Flash drives for ultra-fast reads and writes, the series supports massive parallel processing and is intended for studio editing, rendering and other performance-intensive workloads using large unstructured datasets.

Incorporating the latest Remote Direct Memory Access (RDMA) networking technology, the F-Series provides direct access between workstations and the NVMe storage devices, resulting in predictable and fast network performance. By combining these hardware features with the new Quantum Cloud Storage Platform and the StorNext file system, the F-Series offers end-to-end storage capabilities for post houses, broadcasters and others working in rich media environments, such as visual effects rendering.

The first product in the F-Series is the Quantum F2000, a 2U dual-node server with two hot-swappable compute canisters and up to 24 dual-ported NVMe drives. Each compute canister can access all 24 NVMe drives and includes processing power, memory and connectivity specifically designed for high performance and availability.

The F-Series is based on the Quantum Cloud Storage Platform, a software-defined block storage stack tuned specifically for video and video-like data. The platform eliminates data services unrelated to video while enhancing data protection, offering networking flexibility and providing block interfaces.

According to Quantum, the F-Series is as much as five times faster than traditional Flash storage/networking, delivering extremely low latency and hundreds of thousands of IOPS per chassis. The series allows users to reduce infrastructure costs by moving from Fibre Channel to Ethernet IP-based infrastructures. Additionally, users leveraging a large number of HDDs or SSDs to meet their performance requirements can gain back racks of data center space.

The F-Series is the first product line based on the Quantum Cloud Storage Platform.


NAB 2019: Maxon acquires Redshift Rendering Technologies

Maxon, maker of Cinema 4D, has purchased Redshift Rendering Technologies, developer of the Redshift rendering engine. Redshift is a flexible GPU-accelerated renderer targeting high-end production, offering an extensive suite of features that makes rendering complicated 3D projects faster. Redshift is available as a plugin for Maxon’s Cinema 4D and other industry-standard 3D applications.

“Rendering can be the most time-consuming and demanding aspect of 3D content creation,” said David McGavran, CEO of Maxon. “Redshift’s speed and efficiency combined with Cinema 4D’s responsive workflow make it a perfect match for our portfolio.”

“We’ve always admired Maxon and the Cinema 4D community, and are thrilled to be a part of it,” said Nicolas Burtnyk, co-founder/CEO, Redshift. “We are looking forward to working closely with Maxon, collaborating on seamless integration of Redshift into Cinema 4D and continuing to push the boundaries of what’s possible with production-ready GPU rendering.”

Redshift is used by post companies, including Technicolor, Digital Domain, Encore Hollywood and Blizzard. Redshift has been used for VFX and motion graphics on projects such as Black Panther, Aquaman, Captain Marvel, Rampage, American Gods, Gotham, The Expanse and more.


Milk VFX provides 926 shots for YouTube’s Origin series

London’s Milk VFX, known for its visual effects work on Adrift, Annihilation and Altered Carbon, has just completed production on YouTube Premium’s new sci-fi thriller original series, Origin.

Milk created all of the 926 VFX shots for Origin in 4K, encompassing a wide range of VFX work, in a four-month timeframe. Milk executed rendering entirely in the cloud (via AWS), allowing the team to scale alongside its current roster of projects, which includes Amazon’s Good Omens and the feature film Four Kids and It.

VFX supervisor and Milk co-founder Nicolas Hernandez supervised the entire roster of VFX work on Origin. Milk also supervised the VFX shoot on location in South Africa.

“As we created all the VFX for the 10-episode series it was even more important for us to be on set,” says Hernandez. “As such, our VFX supervisor Murray Barber and onset production manager David Jones supervised the Origin VFX shoot, which meant being based at the South Africa shoot location for several months.”

The series is from Left Bank Pictures, Sony Pictures Television and Midnight Radio in association with China International Television Corporation (CiTVC). Created by Mika Watkins, Origin stars Tom Felton and Natalia Tena and will premiere on 14 November on YouTube Premium.

“The intense challenge of delivering and supervising a show on the scale of Origin — 900 4K shots in four months — was not only helped by our recent expansion and the use of the cloud for rendering, but was largely due to the passion and expertise of the Milk Origin team in collaboration with Left Bank Pictures,” says Milk CEO and co-founder Will Cohen.

In terms of tools, Milk used Autodesk Maya, Side Effects Houdini, Foundry’s Nuke and Mari, Shotgun, Photoshop, Deadline for renderfarm management and Arnold for rendering, plus a variety of in-house tools. Hardware included HP Z-series workstations with Nvidia graphics, and storage was Pixitmedia’s PixStor.

The series, from director Paul W.S. Anderson and the producers of The Crown and Lost, follows a group of outsiders who find themselves abandoned on a ship bound for a distant land. Now they must work together for survival, but quickly realize that one of them is far from who they claim to be.

 


Chaos Group to support Cinema 4D with two rendering products

At the Maxon Supermeet 2018 event, Chaos Group announced its plans to support the Maxon Cinema 4D community with two rendering products: V-Ray for Cinema 4D and Corona for Cinema 4D. Based on V-Ray’s Academy Award-winning raytracing technology, the development of V-Ray for Cinema 4D will be focused on production rendering for high-end visual effects and motion graphics. Corona for Cinema 4D will focus on artist-friendly design visualization.

Chaos Group, which acquired the V-Ray for Cinema 4D product from LAUBlab and will lead development on the product for the first time, will offer current customers free migration to a new update, V-Ray 3.7 for Cinema 4D. All users who move to the new version will receive a free V-Ray for Cinema 4D license, including all product updates, through January 15, 2020. Moving forward, Chaos Group will be providing all support, sales and product development in-house.

In addition to ongoing improvements to V-Ray for Cinema 4D, Chaos Group also released the Corona for Cinema 4D beta 2 at Supermeet, with the final product to follow in January 2019.

Main Image: Daniel Sian created Robots using V-Ray for Cinema 4D.


AMD Radeon Vega mobile graphics coming to MacBook Pro

New AMD Radeon Vega Mobile graphics processors — including the AMD Radeon Pro Vega 20 and Radeon Pro Vega 16 graphics — will be available as configuration options on Apple’s 15-inch MacBook Pro starting in late November.

AMD Radeon Vega Mobile graphics offers performance upgrades in 3D rendering, video editing and other creative applications, as well as 1080p HD gaming at ultra settings in the most-used AAA and eSports games.

Built around AMD’s Vega architecture, the new graphics processors were engineered to excel in notebooks for cool and quiet operation. In addition, the processor’s thin design features HBM2 memory (2nd-generation high-bandwidth memory), which takes up less space in a notebook compared to traditional GDDR5-based graphics processors.

 


DeepMotion’s Neuron cloud app trains digital characters using AI

DeepMotion has launched DeepMotion Neuron, the first tool for completely procedural, physical character animation, for presale. The cloud application trains digital characters to develop physical intelligence using advanced artificial intelligence (AI), physics and deep learning. With guidance and practice, digital characters can now achieve adaptive motor control just as humans do, in turn allowing animators and developers to create more lifelike and responsive animations than those possible using traditional methods.

DeepMotion Neuron is a behavior-as-a-service platform that developers can use to upload and train their own 3D characters, choosing from hundreds of interactive motions available via an online library. Neuron will enable content creators to tell more immersive stories by adding responsive actors to games and experiences. By handling large portions of technical animation automatically, the service also will free up time for artists to focus on expressive details.

DeepMotion Neuron is built on techniques identified by researchers from DeepMotion and Carnegie Mellon University who studied the application of reinforcement learning to the growing domain of sports simulation, specifically basketball, where real-world human motor intelligence is at its peak. After training and optimization, the researchers’ characters were able to perform interactive ball-handling skills in real-time simulation. The same technology used to teach digital actors how to dribble can be applied to any physical movement using Neuron.

DeepMotion Neuron’s cloud platform is slated for release in Q4 of 2018. During the DeepMotion Neuron prelaunch, developers and animators can register on the DeepMotion website for early access and discounts.

Siggraph: Chaos Group releases the open beta for V-Ray for Houdini

With V-Ray for Houdini now in open beta, Chaos Group is ensuring that its rendering technology can be used in each part of the VFX pipeline. With V-Ray for Houdini, artists can apply high-performance raytracing to all of their creative projects, connecting standard applications like Autodesk’s 3ds Max and Maya and Foundry’s Katana and Nuke.

“Adding V-Ray for Houdini streamlines so many aspects of our pipeline,” says Grant Miller, creative director at Ingenuity Studios. “Combined with V-Ray for Maya and Nuke, we have a complete rendering solution that allows look-dev on individual assets to be packaged and easily transferred between applications.” V-Ray for Houdini was used by Ingenuity on the Taylor Swift music video for Look What You Made Me Do. (See our main image.) 

V-Ray for Houdini uses the same smart rendering technology introduced in V-Ray Next, including powerful scene intelligence, fast adaptive lighting and production-ready GPU rendering. V-Ray for Houdini includes two rendering engines – V-Ray and V-Ray GPU – allowing visual effects artists to choose the one that best takes advantage of their hardware.

V-Ray for Houdini Beta 1 features include:
• GPU & CPU Rendering – High-performance GPU & CPU rendering capabilities for high-speed look development and final frame rendering.
• Volume Rendering – Fast, accurate illumination and rendering of VDB volumes through the V-Ray Volume Grid. Support for Houdini volumes and macOS is coming soon.
• V-Ray Scene Support – Easily transfer and manipulate the properties of V-Ray scenes from applications such as Maya and 3ds Max.
• Alembic Support – Full support for Alembic workflows including transformations, instancing and per object material overrides.
• Physical Hair – New Physical Hair shader renders realistic-looking hair with accurate highlights. Only hair as SOP geometry is supported currently.
• Particles – Drive shader parameters such as color, alpha and particle size through custom, per-point attributes.
• Packed Primitives – Fast and efficient handling of Houdini’s native packed primitives at render time.
• Material Stylesheets – Full support for material overrides based on groups, bundles and attributes. VEX and per-primitive string overrides such as texture randomization are planned for launch.
• Instancing – Supports copying any object type (including volumes) using Packed Primitives, Instancer and “instancepath” attribute.
• Light Instances – Instancing of lights is supported, with options for per-instance overrides of the light parameters and constant storage of light link settings.

To join the beta, check out the Chaos Group website.

V-Ray for Houdini is currently available for Houdini and Houdini Indie 16.5.473 and later, and it supports Windows, Linux and macOS.

Review: HP’s lower-cost DreamColor Z24x display

By Dariush Derakhshani

So, we all know how important a color-accurate monitor is in making professional-level graphics, right? Right?!? Even at the most basic level, when you’re stalking online for the perfect watch band for your holiday present of a smart watch, you want the orange band you see in the online ad to be what you get when it arrives a few days later. Even if your wife thinks orange doesn’t suit you, and makes you look like “you’re trying too hard.”

Especially as a content developer, you want to know that what you’re looking at is an accurate representation of the image. Ever walk into a Best Buy and see multiple screens showing the same content but with wildly different color? You can’t have that discrepancy working as a pro, especially in collaboration; you need color accuracy. In my own experience, that position has been filled by HP’s 10-bit DreamColor displays for many years now, but not everyone is awash in bitcoins, and a price tag of over $1,200 is sometimes hard to justify, even for a studio professional.

Enter HP’s DreamColor Z24x display at half the price, coming in around $550 online. Yes, DreamColor for half the cost. That’s pretty significant. For the record, I haven’t used a 24-inch monitor since the dark ages, when Lost was the hot TV show. I’ve been fortunate enough to be running 27-inch and larger displays, so there was a little shock when I started using the Z24x HP sent me for review, but it’s something I quickly got used to.

With my regular 32-inch 4K display still my primary — so I can fit loads of windows all over the place — I used this DreamColor screen as my secondary display, primarily to check output for my Adobe After Effects comps and Adobe Premiere Pro edits and to hold my render view window as I develop shaders and lighting in Autodesk Maya. I felt comfortable knowing the images I shared with my colleagues across town would be seen as I intended them, leveling the playing field when working collaboratively (as long as everyone is on the same LUT and color space). Speaking of color spaces, the Z24x hits 100% of sRGB, 99% of AdobeRGB and 96% of DCI-P3, just slightly under HP’s Z27x DreamColor. It is, however, slightly faster, with a 6ms response rate.

The Z24x has a 24-inch IPS panel from LG that exhibits color in 10-bit, like its bigger 27-inch Z27x sibling. This gives you over a billion colors, which I have personally verified by counting them all — that was one long weekend, I can tell you. Unlike the highest-end DreamColor screens, though, the Z24x dithers up from 8-bit to 10-bit (called 8-bit+FRC, for frame rate control). This means it’s better than an 8-bit color display, for sure, but not quite true 10-bit, making it color accurate but not color critical. HP’s implementation of dithering is quite good when subjectively compared to my full 10-bit main display. Frankly, a lot of screens that claim 10-bit may actually be 8-bit+FRC anyway!
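For the curious, here’s a tiny illustrative sketch of what 8-bit+FRC does (my own simplification; HP doesn’t publish its dithering implementation): a 10-bit level is approximated by alternating between the two nearest 8-bit levels over a repeating four-frame cycle, so the eye averages the flicker into an in-between shade.

```cpp
#include <cstdint>
#include <cstdio>

// Approximate a 10-bit level (0-1023) on an 8-bit panel by temporal
// dithering: show the next-higher 8-bit code on a fraction of frames.
uint8_t frcSample(uint16_t level10, int frame) {
    uint8_t lo = static_cast<uint8_t>(level10 >> 2); // nearest-lower 8-bit code
    uint8_t hi = (lo == 255) ? lo : lo + 1;          // clamp at the top of the range
    int extra = level10 & 0x3;                       // 0-3: how many of 4 frames get 'hi'
    return (frame % 4) < extra ? hi : lo;
}

int main() {
    uint16_t v = 514; // a 10-bit level halfway between 8-bit codes 128 and 129
    double sum = 0;
    for (int f = 0; f < 4; ++f) sum += frcSample(v, f);
    // Averages to 128.5, i.e. 514/4: an intermediate shade an 8-bit
    // panel cannot show in any single frame.
    printf("4-frame average: %.2f\n", sum / 4.0);
    return 0;
}
```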

While the Z27x gives you the 2560×1440 you expect of most 27-inch displays, if not full-on 4K, the Z24x sits at a comfortable 1920×1200, just enough for a full 1080p image and a little room for a slider or info bar. Being the res snob that I am, I wondered if that was just too low, but at 24 inches I don’t think you would want a higher resolution, even if you’re sitting only 14 inches away. And this is a sentiment echoed by the folks at HP, who consulted many of their professional clients to build this display. That gives a pixel density of about 94 PPI, a bit lower than the 109 PPI of the Z27x. This density is about the same as a 1080p HD display at 27 inches, so it’s still crisp and clean.
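Those density figures fall straight out of the resolution and the diagonal size; here’s a quick sketch of the arithmetic:

```cpp
#include <cmath>
#include <cstdio>

// Pixels per inch: diagonal resolution in pixels over diagonal size in inches.
double ppi(double widthPx, double heightPx, double diagonalIn) {
    return std::sqrt(widthPx * widthPx + heightPx * heightPx) / diagonalIn;
}

int main() {
    printf("Z24x (1920x1200 @ 24in): %.0f PPI\n", ppi(1920, 1200, 24)); // ~94
    printf("Z27x (2560x1440 @ 27in): %.0f PPI\n", ppi(2560, 1440, 27)); // ~109
    return 0;
}
```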

Viewing angles are good at about 178 degrees, and the screen is matte, with an anti-glare coating, making it easier to stare at without blinking for 10 hours at a clip, as digital artists usually do. This HP’s coating was more matte than my primary display’s and still gave me a richer black, which I liked to see.

Connection options are fairly standard, with two DisplayPorts, one HDMI and one dual-link DVI for anyone still living in the past. You also get four USB ports and an analog 3.5mm audio jack if you want to drive some speakers, since you can’t from your phone anymore (Apple, I’m looking at you).

Summing Up
So while 24 inches is a bit small for my taste in a display, I am seriously impressed by the street price of the Z24x, which brings the DreamColor accuracy HP offers to a lot more pros and semi-pros at half the price. While I wouldn’t recommend color grading a show on the Z24x, this DreamColor does a nice job of bringing a higher level of color confidence at an attractive price. As a secondary display, the Z24x is a nice addition for an artist with budget in mind — or who has a mean, orange-watch-band-hating spouse.


Dariush Derakhshani is a VFX supervisor and educator in Southern California. You can follow his random tweets at @koosh3d.

Saddington Baynes adds senior lighting artist Luis Cardoso

Creative production house Saddington Baynes has hired Luis Cardoso as a senior lighting artist, adding to the studio’s creative team with specialist CGI skills in luxury goods, beauty and cosmetics. He joins the team following a four-year stint at Burberry, where he worked on high-end CGI.

He specializes in Autodesk 3ds Max, Chaos Group’s V-Ray and Adobe Photoshop. Cardoso’s past work includes imagery for all Burberry fragrances, clothing and accessories, as well as social media assets for the Pinterest Cat Lashes campaign. He also has experience as a senior CG artist at Sectorlight and, later in his career, at Assembly Studios.

At Saddington Baynes, Cardoso will work on new cinematic motion sequences for online video, expanding the beauty, fragrance, fashion and beverage departments and taking their expertise further, particularly in regard to video lighting.

According to executive creative director James Digby-Jones, “It no longer matters whether elements are static or moving; whether the brief is for a 20,000-pixel image or 4K animation mixed with live action. We stretch creative and technical boundaries with fully integrated production that encompasses everything from CGI and motion to shoot production and VR capability.”

Chaos Group acquires Render Legion and its Corona Renderer

Chaos Group has purchased Prague-based Render Legion, creator of the Corona Renderer. With this new product and Chaos Group’s own V-Ray, the company is offering even more rendering solutions for M&E and the architectural visualization world.

Known for its ease of use, the Corona Renderer has become a popular choice for architectural visualization, but according to Chaos Group’s David Tracy, “There are a few benefits for M&E. Corona plans to implement some VFX-related features, such as hair and skin with the help of the V-Ray team. Also, Corona is sharing technology, like the way they optimize dome lights. That will definitely be a benefit for V-Ray users in the VFX space.”

The Render Legion team, including its founders and developers, will join Chaos Group as they continue to develop Corona using additional support and resources provided through the deal.

Chaos Group’s Academy Award-winning renderer, V-Ray, will continue to be a core component of the company’s portfolio. Both V-Ray and Corona will benefit from joint collaborations, bringing complementary features and optimizations to each product.

The Render Legion acquisition is Chaos Group’s largest investment to date, and its third investment in a visualization company in the last two years, following interactive presentation platform CL3VER and virtual reality pioneer Nurulize. According to Chaos Group, the computer graphics industry is expected to reach $112 billion in 2019, fueled by rising demand for 3D visuals. This, the company says, presents a prime opportunity for companies that make the creation of photorealistic imagery more accessible.

Main Image: (L-R) Chaos Group co-founder Vlado Koylazov and Render Legion CEO/co-founder Ondřej Karlík.

Maxon debuts Cinema 4D Release 19 at SIGGRAPH

Maxon was at this year’s SIGGRAPH in Los Angeles showing Cinema 4D Release 19 (R19). This next generation of Maxon’s pro 3D app offers a new viewport, a new Sound Effector and additional Voronoi Fracturing features in the MoGraph toolset. It also boasts a new Spherical Camera, the integration of AMD’s ProRender technology and more. Designed to serve individual artists as well as large studio environments, Release 19 offers a streamlined workflow for general design, motion graphics, VFX, VR/AR and all types of visualization.

With Cinema 4D Release 19, Maxon also introduced a few re-engineered foundational technologies, which the company will continue to develop in future versions. These include core software modernization efforts, a new modeling core, integrated GPU rendering for Windows and Mac, and OpenGL capabilities in BodyPaint 3D, Maxon’s pro paint and texturing toolset.

More details on the offerings in R19:
Viewport Improvements provide artists with added support for screen-space reflections and OpenGL depth-of-field, in addition to the screen-space ambient occlusion and tessellation features (added in R18). Results are so close to final render that client previews can be output using the new native MP4 video support.

MoGraph enhancements expand on Cinema 4D’s toolset for motion graphics with faster results and added workflow capabilities in Voronoi Fracturing, such as the ability to break objects progressively, add displaced noise details for improved realism or glue multiple fracture pieces together more quickly for complex shape creation. An all-new Sound Effector in R19 allows artists to create audio-reactive animations based on multiple frequencies from a single sound file.

The new Spherical Camera allows artists to render stereoscopic 360° virtual reality videos and dome projections. Artists can specify a latitude and longitude range, and render in equirectangular, cubic string, cubic cross or 3×2 cubic format. The new spherical camera also includes stereo rendering with pole smoothing to minimize distortion.
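For the technically curious, the equirectangular mapping itself is standard and renderer-agnostic. Here’s a minimal sketch (my own illustration, not Maxon’s code) of how an output pixel maps to a camera-space ray direction in that format:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

// Map a pixel of an equirectangular render target to a camera-space ray:
// the horizontal axis covers 360 degrees of longitude, the vertical axis
// covers 180 degrees of latitude.
Vec3 equirectRay(int px, int py, int width, int height) {
    double lon = ((px + 0.5) / width) * 2.0 * M_PI - M_PI;   // -pi..pi
    double lat = M_PI / 2.0 - ((py + 0.5) / height) * M_PI;  // +pi/2..-pi/2
    return { std::cos(lat) * std::sin(lon),   // x
             std::sin(lat),                   // y (up)
             std::cos(lat) * std::cos(lon) }; // z (forward)
}

int main() {
    // The center pixel looks (almost) straight down the +Z forward axis.
    Vec3 r = equirectRay(2048, 1024, 4096, 2048);
    printf("center ray: (%.3f, %.3f, %.3f)\n", r.x, r.y, r.z);
    return 0;
}
```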

New Polygon Reduction works as a generator, so it’s easy to reduce entire hierarchies. The reduction is pre-calculated, so adjusting the reduction strength or desired vertex count is extremely fast. The new Polygon Reduction preserves vertex maps, selection tags and UV coordinates, ensuring textures continue to map properly and providing control over areas where polygon detail is preserved.

Level of Detail (LOD) Object features a new interface element that lets customers define and manage settings to maximize viewport and render speed, create new types of animations or prepare optimized assets for game workflows. Level of Detail data exports via the FBX 3D file exchange format for use in popular game engines.

AMD’s Radeon ProRender technology is now seamlessly integrated into R19, providing artists a cross-platform GPU rendering solution. Though just the first phase of integration, it provides a useful glimpse into the power ProRender will eventually provide as more features and deeper Cinema 4D integration are added in future releases.

Modernization efforts in R19 reflect Maxon’s development legacy and offer the first glimpse into the company’s planned ‘under-the-hood’ future efforts to modernize the software, as follows:

  • Revamped Media Core gives Cinema 4D R19 users a completely rewritten software core that increases speed and memory efficiency for image, video and audio formats. Native support for MP4 video without QuickTime delivers advantages when previewing renders, incorporating video as textures or motion tracking footage, for a more robust workflow. Export of production formats, such as OpenEXR and DDS, has also been improved.
  • Robust Modeling offers a new modeling core whose improved support for edges and N-gons can be seen in the Align and Reverse Normals commands. More modeling tools and generators will directly use this new core in future versions.
  • BodyPaint 3D now uses an OpenGL painting engine, giving artists who paint color and surface details for film, game design and other workflows a realtime display of reflections, alpha, bump or normal mapping, and even displacement, for improved visual feedback while texture painting. Redevelopment efforts to improve the UV editing toolset in Cinema 4D continue, with the first fruits of this work available in R19: faster and more efficient options to convert point and polygon selections, grow and shrink UV point selections, and more.

Chaos Group and Adobe partner for photorealistic rendering in CC

Chaos Group’s V-Ray rendering technology is featured in Adobe’s Creative Cloud, allowing graphic designers to easily create photorealistic 3D rendered composites with Project Felix.

Available now, Project Felix is a public beta desktop app that helps users composite 3D assets like models, materials and lights with background images, resulting in an editable render they can continue to design in Photoshop CC. For example, users can turn a basic 3D model of a generic bottle into a realistic product shot that is fully lit and placed in a scene to create an ad, concept mock-up or even abstract art.

V-Ray acts as a virtual camera, letting users test angles, perspectives and placement of their model in the scene before generating a final high-res render. Using the preview window, Felix users get immediate visual feedback on how each edit affects the final rendered image.

By integrating V-Ray, Adobe has brought the same raytracing technology used by companies like Industrial Light & Magic to a much wider audience.

“We’re thrilled that Adobe has chosen V-Ray to be the core rendering engine for Project Felix, and to be a part of a new era for 3D in graphic design,” says Peter Mitev, CEO of Chaos Group. “Together we’re bringing the benefits of photoreal rendering, and a new design workflow, to millions of creatives worldwide.”

“Working with the amazing team at Chaos Group meant we could bring the power of the industry’s top rendering engine to our users,” adds Stefano Corazza, senior director of engineering at Adobe. “Our collaboration lets graphic designers design in a more natural flow. Each edit comes to life right before their eyes.”

GPU-accelerated renderer Redshift now in v.2.0, integrates with 3ds Max

Redshift Rendering has updated its GPU-accelerated rendering software to Redshift 2.0. This new version includes new features and pipeline enhancements to the existing Maya and Softimage plug-ins. Redshift 2.0 also introduces integration with Autodesk 3ds Max. Integrations with Side Effects Houdini and Maxon Cinema 4D are currently in development and are expected later in 2016.

New features across all platforms include realistic volumetrics, enhanced subsurface scattering and a new PBR-based Redshift material, all of which deliver improved final render results. Starting July 5, Redshift is offering 20 percent off new Redshift licenses through July 19.

Age of Vultures

A closer look at Redshift 2.0’s new features:

● Volumetrics (OpenVDB) – Render clouds, smoke, fire and other volumetric effects with production-quality results (initial support for OpenVDB volume containers).

● Nested dielectrics – The ability to accurately simulate the intersection of transparent materials with realistic results and no visual artifacts.

● New BRDFs and linear glossiness response – Users can model a wider variety of metallic and reflective surfaces via the latest in surface shading technologies (GGX and Beckmann/Cook-Torrance BRDFs).

● New SSS models and single scattering – More realistic results with support for improved subsurface scattering models and single-scattering.

● Redshift material – A more intuitive, PBR-based main material, featuring effects such as dispersion/chromatic aberration.

● Multiple dome lights – Users can combine multiple dome lights to create more compelling lighting.

● alSurface support – There is now full support for the Arnold shader without having to port settings.

● Baking – Users can save a lot of rendering time with baking for lighting and AOVs.

Users include Blizzard, Jim Henson’s Creature Shop, Glassworks and Blue Zoo.

Main Image: Rendering example from A Large Evil Corporation.

Sony Imageworks helps take ‘Alice Through the Looking Glass’

By Christine Holmes

Sony Imageworks VFX supervisor Jay Redd’s journey with Alice Through the Looking Glass began at the end of 2013, a full two and a half years before the US domestic release. A seasoned veteran in the visual effects world, Redd partnered closely with director James Bobin, Imageworks VFX supervisor Ken Ralston, production designer Dan Hennah and crew to bring this vibrant adventure to life.

Jay Redd

Time itself plays multiple roles in this new chapter of the Alice in Wonderland story. We see Alice return to Wonderland — or “Underland,” as it’s referred to in the film — to help her friend the Mad Hatter find out what happened to his family many years ago. She sets out on a solo quest to find the Chronosphere, a small object located in a castle at the center of a giant clock. The sphere, which controls time, is guarded by a new character played by Sacha Baron Cohen, called Time, a true personification of time. When taken from the clock and activated for time travel, the Chronosphere brings Alice to a new location where all the moments in Underland’s history are displayed within the waves of the Oceans of Time. Disrupting the Chronosphere ultimately has consequences, as Time, and time itself, begin to break down.

Redd was kind enough to talk to postPerspective about the challenges of representing Time the character and time the concept in this film.

Did it help your initial process to have had a visual language already established from Alice in Wonderland?
I would say it served as a foundation, but part of what was exciting about working with James Bobin on this one was that he wanted to make it feel really different. The time travel element allowed us to go back into Underland before things became sad — before the Jabberwocky attacked the village, before the Red Queen was in power, before all of these things. It allowed us to expand on the palette to be much more saturated and vibrant, and I think you can see that as compared to Alice in Wonderland.

How did you begin the challenge of representing time in both human form and in an entirely new time travel world?
The character Time is not in any of Lewis Carroll’s books, and in one of the early drafts of the script, time itself is just a concept. Then James Bobin brought the idea that time is a character. Personify time and create an actual character. The idea of the back of his head being clockwork came from James as well. He wanted Time to be part of the clock. There’s a moment in the film where Time says, “I am he and he is me.” The clock and Time are the same thing, so when we see Time open his chest, there’s a miniature version of the clock in there as well.

The idea of the Oceans of Time was just one line in the script: “Alice traveled through the Oceans of Time.” Then Ken Ralston, James, myself and our team had to figure out what that looked like. James was very adamant about wanting to include images. Pardon the pun, but over time, we experimented with a number of different looks and ended up with this ocean setting that surrounded you — the characters and the audience. You would go in and out of the ocean to enter moments and different times in history.

clock before

clock after

With the moments depicted within the ocean waves, it felt like there was a very painterly style employed there. What was the thought process behind that decision?
Those images actually came from original shots from the first movie. We started with footage — either completed shots from the first movie or footage from other scenes in this sequel. We couldn’t complete some of the Oceans of Time shots until we had finished the others, so that became a weird schedule for us. We processed the footage for moments to make them feel like they were in the water. We didn’t want the moments to feel like a drive-in theater, as if they were just projected on a wall or surface. We wanted them to feel volumetric — to have volume and feel thick and deep — 100 feet in the air. It’s a really interesting process that all had to start with 2D processing from our compositing department, which required slowing down the footage to make it feel bigger and more present.

To add scale?
Yes, exactly, scale. Sometimes a shot used for these moments would only be one second, but we needed three seconds. So we would slow it down, process it and use our own optical flow processes from our 2D workflow that would then feed into our water simulations. Then, in 3D simulations, those moving pieces of footage, using the vector data from the optical flow, would actually move the water and the wave spray around. When you see the Jabberwocky swing its head, it’s actually affecting the surface of the water. That’s why it has that painterly, or liquid, feel to it. I’m really impressed with what our team did in 3D. That’s the stuff you really can’t see as clearly in a 2D flat projection theatre when you’re there. In a 3D theatre, it really comes alive. I’m happy you picked up on it because it was a lot of work.

Those are some of my favorite moments, especially in 3D.
Awesome! Mine too. The Oceans of Time is so much bigger in 3D projection. That little Chronosphere you see, that’s the thing that tells you how big everything is. We can play with the size of the Chronosphere. You can make the Chronosphere bigger and everything feels smaller. Or you can make the Chronosphere smaller and animate it in a slightly different way and suddenly the Oceans of Time is huge. There’s a really interesting relationship between the size of the Chronosphere, how fast it’s moving, how fast we’re moving, and the scale of the world. That was something that took weeks to figure out in animation. If you move too quickly through a large environment, it doesn’t feel that huge. That’s something we wanted to play with and, of course, keep the pacing of the chase sequence to keep it exciting. Those are the kind of things that take months to figure out.

What about the creative evolution of the effect used to represent time breaking down in a tangible way in Underland?
We knew Time’s castle had to get completely rusted over, or frozen over. After a trip to the Los Angeles Natural History Museum, Ralston and I came upon obsidian rock with a kind of mineral growth on it. It was a bright orange and red mineral deposit that had started growing across these crystals. It probably took a million years to happen. We both looked at each other recognizing just how cool that was.

Fast forward a few months to meetings with the effects lead and the rest of our team about covering the entire world with rust. Ken and I were shooting in London and had been doing dailies with our team for a few weeks, and even while we were shooting we were creating this reference imagery. We gave all of this material to our team, led by Imageworks senior effects/simulation supervisor Joseph Pepper. A few weeks later he had put together a very rough test.

After the first viewing I said, “At some point we’ll want to get spikes and dust in there, but don’t do that right now, just keep it simple.” Well, Pepper and team went further with the next test. It’s Alice running down a hallway in the castle — granted this is all digital — and there’s this rust aggressively chasing her. Her hair is flopping all over and she looks back, and there is rust shooting up the walls, down the stairs, and through all the arches. A piece of stone breaks off and all this dust is falling and then the rust catches her by the ankle and freezes her in a second and a half, putting her in this physically impossible shape where she’s balancing on her toe reaching for a window. A lot of chaos and excitement! Ken and I looked at that with our jaws dropped. It really showed us what was possible with this idea.

The last week of shooting at Shepperton Studios was that moment where we knew rust was going to be something really cool. We made a couple small changes to the test and showed it to James Bobin. I was looking at his face when he watched it. It kind of sunk and I thought, “Oh crap.” Then he said, “Wow. That’s scary.” We knew we hit it! There was a kind of nervous chuckle and then he said something like, “Yeah, that’ll scare the kids.”

What was the most challenging shot in this film?
The toughest shot was the real movie version of the one I just described to you, when the rust comes over the clock cog and big gear, grabs Alice by the ankle, then climbs up her body and freezes her face all the way to her fingertips as she’s trying to drop the Chronosphere back into the holder. That was one of the most difficult shots of the film.

In fact, we reshot the live-action element of Alice a year later because we wanted to do a slightly better version of it to expose more of her ankle and her body. What we were doing was blending from a full live-action version of Alice to a fully photoreal digital double. That’s the kind of shot that every single department touched, from paint and stabilization in the 2D world to wire-rig removals, from full-on modeling to all the textures of the costumes.

Then there was the lighting, the castle — all the effects — very detailed timing and art direction and controlling of the animation of the rust coming around her face, crossing her eyes, nose and arms at a certain time. There was also the subtle affecting of the cloth so it added weight when the rust went over her arm. It’s incredibly dense and detailed digital work. Our team developed a lot of specialized and cool technology for that. A lot of new animation tools, rendering tools and FX tools to make that happen — definitely pushing boundaries for us at Sony Imageworks.

Christine Holmes is a freelance artist and manager of animated content. She has worked in the film industry for the last six years.

AMD offering FireRender plug-in for 3ds Max

AMD, maker of the FirePro line of graphics cards, has released a free software-based rendering plug-in, FireRender for Autodesk 3ds Max, designed for content creators with 4K workflows who are looking for photorealistic rendering. FireRender for Max offers physically accurate raytracing and comes with an extensive material library.

AMD FireRender is built on OpenCL 1.2, which means it can run on hardware from any vendor. It also provides a CPU backend, so FireRender can run on GPU, CPU, CPU+GPU or a variety of combinations of multiple CPUs and GPUs. Within FireRender, integrated materials are editable as nodes in the 3ds Max Slate Material Editor. There is also Active Shade viewport integration, which means you can work with FireRender in realtime and see your changes as you make them. Physically correct materials and lighting support true design decisions via global illumination, including caustics. Emissive and photometric lighting, as well as light from HDRI environments, lets artists blend a scene in with its surroundings.

AMD says to keep an eye out for other upcoming free software plug-ins for other animation software, including Autodesk Maya and Rhino.

 

In other AMD news, at the NAB show last month, the company introduced the AMD FirePro W9100 32GB workstation graphics card designed for large asset workflows with creative applications. It will be available in Q2 of this year. The FirePro W9100 16GB is currently available.

Thinkbox addresses usage-based licensing

At the beginning of May, Thinkbox Software launched Deadline 8, which introduced on-demand, per-minute licensing as an option for Thinkbox’s Deadline and Krakatoa, The Foundry’s Nuke and Katana, and Chaos Group’s V-Ray. The company also revealed it is offering free on-demand licensing for Deadline, Krakatoa, Nuke, Katana and V-Ray for the month of May.

Thinkbox founder/CEO Chris Bond explained, “As workflows increasingly incorporate cloud resources, on-demand licensing expands options for studios, making it easy to scale up production, whether temporarily or for a long-term basis. While standard permanent licenses are still the preferred choice for some VFX facilities, the on-demand model is an exciting option for companies that regularly expand and contract based on their project needs.”

Since the announcement, users have been reaching out to Thinkbox with questions about usage-based licensing. We reached out to Bond to help those with questions get a better understanding of what this model means for the creative community.

What is usage-based licensing?
Usage-based licensing is an additional option to permanent and temporary licenses and gives our clients the ability to easily scale up or scale down, without increasing their overhead, on a project-need basis. Instead of one license per render node, you can purchase minutes from the Thinkbox store (as pre-paid bundles of hours) that can be distributed among as many render nodes as you like. And, once you have an account with the Store, purchasing extra time only takes a few minutes and does not require interaction with our sales team.

Can users still purchase perpetual licenses of Deadline?
Yes! We offer both usage-based licensing and perpetual licenses, which can be used separately or together in the cloud or on-premise.

How is Deadline usage tracked?
Usage is tracked per minute. For example, if you have 10,000 hours of usage-based licensing, that can be used on a single node for 10,000 hours, 10,000 nodes for one hour or anything in between. Minutes are only consumed while the Deadline Slave application is rendering, so if it’s sitting idle, minutes won’t be used.
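To make the pooled model concrete, here’s a back-of-the-envelope sketch (our illustration; the node count and hours are made-up numbers):

```cpp
#include <cstdio>

// Usage-based licensing burn-down: a pre-paid pool of hours is shared
// across any number of render nodes, and only active rendering counts.
int main() {
    double poolHours = 10000.0;       // pre-paid bundle from the Thinkbox store
    int nodes = 250;                  // Slaves drawing on the pool tonight
    double activeHoursPerNode = 8.0;  // rendering time only; idle time is free
    double consumed = nodes * activeHoursPerNode;
    printf("consumed %.0f h, %.0f h remaining\n", consumed, poolHours - consumed);
    return 0;
}
```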

What types of renderfarms are compatible with usage-based licensing?
Usage-based licensing works with both local- and cloud-based renderfarms. It can be used exclusively or alongside existing permanent and temporary licenses. You configure the Deadline Client on each machine for usage-based or standard licensing. Alternatively, Deadline’s Auto-Configuration feature allows you to automatically assign the licensing mode to groups of Slaves in the case of machines that might be dynamically spawned via our Balancer application. It’s easy to do, but if anyone is confused they can send us an email and we’ll schedule a session to step you through the process.

Can people try it out?
Of course! For the month of May, we’re providing free licensing hours of Deadline, Krakatoa, Nuke, Katana and V-Ray. Free hours can be used for on-premise or cloud-based rendering, and users are responsible for compute resources. Hours are offered on a first-come, first-served basis and any unused time will expire at 12am PDT on June 1.