Tag Archives: Maxon

Maxon debuts Cinema 4D Release 19 at SIGGRAPH

Maxon was at this year’s SIGGRAPH in Los Angeles showing Cinema 4D Release 19 (R19). This next generation of Maxon’s pro 3D app offers a new viewport, a new Sound Effector and additional Voronoi Fracturing features for the MoGraph toolset. It also boasts a new Spherical Camera, the integration of AMD’s ProRender technology and more. Designed to serve individual artists as well as large studio environments, Release 19 offers a streamlined workflow for general design, motion graphics, VFX, VR/AR and all types of visualization.

With Cinema 4D Release 19, Maxon also introduced a few re-engineered foundational technologies, which the company will continue to develop in future versions. These include core software modernization efforts, a new modeling core, integrated GPU rendering for Windows and Mac, and OpenGL capabilities in BodyPaint 3D, Maxon’s pro paint and texturing toolset.

More details on the offerings in R19:
Viewport Improvements provide artists with added support for screen-space reflections and OpenGL depth-of-field, in addition to the screen-space ambient occlusion and tessellation features (added in R18). Results are so close to final render that client previews can be output using the new native MP4 video support.

MoGraph enhancements expand on Cinema 4D’s toolset for motion graphics with faster results and added workflow capabilities in Voronoi Fracturing, such as the ability to break objects progressively, add displaced noise details for improved realism or glue multiple fracture pieces together more quickly for complex shape creation. An all-new Sound Effector in R19 allows artists to create audio-reactive animations based on multiple frequencies from a single sound file.

The new Spherical Camera allows artists to render stereoscopic 360° virtual reality videos and dome projections. Artists can specify a latitude and longitude range, and render in equirectangular, cubic string, cubic cross or 3×2 cubic format. The new spherical camera also includes stereo rendering with pole smoothing to minimize distortion.
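For anyone new to these formats, equirectangular output is simply a latitude/longitude unwrap of the full view sphere. A minimal illustrative mapping in Python (a toy example of the projection itself, not Maxon’s code):

```python
import math

# Toy example (not Maxon's code): map a unit 3D view direction to
# equirectangular UV coordinates, the 360-degree format named above.

def direction_to_equirect_uv(x: float, y: float, z: float):
    """Map a unit view direction to (u, v) in [0, 1] x [0, 1]."""
    lon = math.atan2(x, z)                        # longitude, -pi..pi
    lat = math.asin(max(-1.0, min(1.0, y)))       # latitude, -pi/2..pi/2
    u = 0.5 + lon / (2.0 * math.pi)
    v = 0.5 - lat / math.pi
    return u, v

# The view straight ahead (+Z) lands in the center of the frame:
print(direction_to_equirect_uv(0.0, 0.0, 1.0))   # -> (0.5, 0.5)
```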

New Polygon Reduction works as a generator, so it’s easy to reduce entire hierarchies. The reduction is pre-calculated, so adjusting the reduction strength or desired vertex count is extremely fast. The new Polygon Reduction preserves vertex maps, selection tags and UV coordinates, ensuring textures continue to map properly and providing control over areas where polygon detail is preserved.
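The practical upshot of pre-calculation is that the expensive analysis happens once, so dragging the strength slider just replays more or fewer of the precomputed collapse steps. A conceptual sketch of that idea (a deliberate simplification, not Maxon’s implementation):

```python
# Conceptual sketch (not Maxon's implementation): pre-calculating a
# reduction means ordering candidate edge collapses once by cost; changing
# the target count afterwards just replays a prefix of that order. Real
# reducers also re-evaluate costs as the mesh changes; the point here is
# only why slider adjustments stay interactive.

def precompute_collapses(edges, cost):
    """Sort candidate edge collapses once, cheapest (least visible) first."""
    return sorted(edges, key=cost)

def reduce_to(target_verts, start_verts, ordered_collapses):
    """Replaying the first k collapses is O(k), so re-targeting is fast."""
    k = max(0, start_verts - target_verts)   # each collapse removes one vertex
    return ordered_collapses[:k]

# Toy usage: edges as (a, b) vertex pairs, cost = dummy per-edge error metric.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
order = precompute_collapses(edges, cost=lambda e: abs(e[0] - e[1]))
print(reduce_to(2, 4, order))  # first two collapses bring 4 verts down to 2
```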

Level of Detail (LOD) Object features a new interface element that lets customers define and manage settings to maximize viewport and render speed, create new types of animations or prepare optimized assets for game workflows. Level of Detail data exports via the FBX 3D file exchange format for use in popular game engines.

AMD’s Radeon ProRender technology is now seamlessly integrated into R19, providing artists a cross-platform GPU rendering solution. Though just the first phase of integration, it provides a useful glimpse into the power ProRender will eventually provide as more features and deeper Cinema 4D integration are added in future releases.

Modernization efforts in R19 reflect Maxon’s development legacy and offer the first glimpse into the company’s planned ‘under-the-hood’ future efforts to modernize the software, as follows:

  • Revamped Media Core gives Cinema 4D R19 users a completely rewritten software core to increase speed and memory efficiency for image, video and audio formats. Native support for MP4 video without QuickTime makes it easier to preview renders, incorporate video as textures or motion track footage, for a more robust workflow. Export to production formats, such as OpenEXR and DDS, has also been improved.
  • Robust Modeling offers a new modeling core with improved support for edges and N-gons, which can be seen in the Align and Reverse Normals commands. More modeling tools and generators will directly use this new core in future versions.
  • BodyPaint 3D now uses an OpenGL painting engine, giving R19 artists who paint color and add surface details in film, game design and other workflows a real-time display of reflections, alpha, bump or normal, and even displacement, for improved visual feedback while texture painting. Redevelopment efforts to improve the UV editing toolset in Cinema 4D continue, with the first fruits of this work available in R19: faster and more efficient options to convert point and polygon selections, grow and shrink UV point selections, and more.

Boxx offers two new workstations with Kaby Lake Intel processors

Boxx Technologies has introduced Apexx workstations featuring the new seventh-generation Kaby Lake Intel Core i7 processors. The integration of these processors provides the Apexx 1 1202 with a base clock speed of 4.2GHz and a turbo boost of 4.5GHz. The ultra-compact Apexx 1 also features advanced liquid cooling and professional graphics. Apexx 1 (pictured in our main image) is designed for users working in visualization, 3D animation, modeling and motion media.

Apexx 2

The latest Intel Core i7 processor is also included in the new, compact, liquid-cooled Apexx 2 2203 workstation. Featuring the same base clock speed of 4.2GHz (and 4.5GHz turbo boost), Apexx 2 2203 is configurable with up to two full-size, pro GPUs and is optimized for software such as Autodesk’s 3ds Max and Maya and Maxon’s Cinema 4D, as well as other CAD and 3D design applications.

“Because Boxx specializes in high-performance workstations, we know that for greater efficiency and productivity, organizations require the latest technology and innovation,” says VP of marketing and business development Shoaib Mohammad. “The integration of new Intel Kaby Lake processors coupled with our space-saving chassis, liquid cooling, professional GPUs and other features, provides architects, engineers and motion media pros with maximum performance.”

Pricing for these new models is not yet available. The company says both these units have non-overclocked processors and would typically be priced lower than models with overclocked processors.

 

Virtual Reality Roundtable

By Randi Altman

Virtual reality is seemingly everywhere, especially this holiday season. Just one look at your favorite electronics store’s website and you will find VR headsets from the inexpensive, to the affordable, to the “if I win the lottery” ones.

While there are many companies popping up to service all aspects of VR/AR/360 production, for the most part traditional post and production companies are starting to add these services to their menu, learning best practices as they go.

We reached out to a sampling of pros who are working in this area to talk about the problems and evolution of this burgeoning segment of the industry.

Nice Shoes Creative Studio: Creative director Tom Westerlin

What is the biggest issue with VR productions at the moment? Is it lack of standards?
A big misconception is that a VR production is like a standard 2D video/animation commercial production. There are some similarities, but it gets more complicated when we add interaction, different hardware options, realtime data and multiple distribution platforms. It actually takes a lot more time and man-hours to create a 360 video or VR experience than a 2D video production.


Tom Westerlin

More development time needs to be scheduled for research, user experience and testing. We’re adding more stages to the overall production. None of this should discourage anyone from exploring a concept in virtual reality, but there is a lot of consideration and research that should be done in the early stages of a project. The lack of standards presents some creative challenges for brands and agencies considering a VR project. The hardware and software choices made for distribution can have an impact on the size of the audience you want to reach, as well as on the approach to building the experience.

The current landscape provides the following options:
• YouTube and Facebook can reach a ton of people with a 360 video, but offer limited VR functionality.
• A WebVR experience works within certain browsers, like Chrome or Firefox, but not others, limiting your audience.
• A custom app or experimental installation using the Oculus or HTC Vive allows for experiences with full interactivity, but presents the issue of audience limitations.

There is currently no one best way to create a VR experience. It’s still very much a time of discovery and experimentation.

What should clients ask of their production and post teams when embarking on their VR project?
We shouldn’t just apply what we’ve all learned from 2D filmmaking to the creation of a VR experience, so it is crucial to include the production, post and development teams in the design phase of a project.

Most clients today are coming from a point of view where many standard constructs of traditional production (quick camera moves or cuts, extreme close-ups) have negative physiological implications in VR (nausea, disorientation). The impact of seemingly simple creative or design decisions can have huge repercussions on complexity, time, cost and the user experience. It’s important for clients to be open to telling a story in a different manner than they’re used to.

What is the biggest misconception about VR — content, process or anything relating to VR?
The biggest misconception is clients thinking that 360 video and VR are the same. As we’ve started to introduce this technology to our clients, we’ve worked to explain the core differences between these extremely different experiences: VR is interactive and most of the time a full CG environment, while 360 is video and, although immersive, a more passive experience. Each has its own unique challenges and rewards, so as we think about the end user’s experience, we can determine what will work best.

There’s also the misconception that VR will make you sick. If executed poorly, VR can make a user sick, but the right creative ideas executed with the right equipment can result in an experience that’s quite enjoyable and nausea free.

Nice Shoes’ ‘Mio Garden’ 360 experience.

Another misconception is that VR is capable of anything. While many may confuse VR and 360 and think an experience is limited to passively looking around, there are others who have bought into the hype and inflated promises of a new storytelling medium. That’s why it’s so important to understand the limitations of different devices at the early stages of a concept, so that creative, production and post can all work together to deliver an experience that takes advantage of VR storytelling, rather than falling victim to the limitations of a specific device.

The advent of affordable systems that are capable of interactivity, like the Google Daydream, should lead to more popular apps that show off a higher level of interactivity. Even sharing video of people experiencing VR while interacting with their virtual worlds could have a huge impact on the understanding of the difference between passively watching and truly reaching out and touching.

How do we convince people this isn’t stereo 3D?
In one word: interactivity. By definition, VR is interactive, and giving the user the ability to manipulate the world and actually affect it is the magic of virtual reality.

Assimilate: CEO Jeff Edson

What is the biggest issue with VR productions at the moment? Is it lack of standards?
The biggest issue in VR is straightforward workflows — from camera to delivery — and then, of course, delivery to what? Compared to a year ago, shooting 360/VR video today has made big steps in ease of use because more people have experience doing it. But it is a LONG way from point and shoot. As integrated 360/VR video cameras come to market more and more, VR storytelling will become much more straightforward and the creators can focus more on the story.

Jeff Edson

And then delivery to what? There are many online platforms for 360/VR video playback today: Facebook, YouTube 360 and others for mobile headset viewing, and then there is delivery to a PC for non-mobile headset viewing. The viewing perspective is different for all of these, which means extra work to ensure continuity on all the platforms. To cover all possible viewers one needs to publish to all. This is not an optimal business model, which is really the crux of this issue.

Can standards help in this? Standards as we have known them in the video world? Yes and no. The standards for 360/VR video are happening by default, such as equirectangular and cubic formats, and delivery formats like H.264, MOV and more. Standards would help, but they are not the limiting factor for growth. The market is not waiting on a defined set of formats because demand for VR is quickly moving forward. People are busy creating.

What should clients ask of their production and post teams when embarking on their VR project?
We hear from our customers that the best results come when the director, DP and post supervisor collaborate on the expectations for look and feel, as well as the possible creative challenges and resolutions. Experience and budget are big contributors. A key question is: what camera/rig requirements are needed for your targeted platform(s)? For example, how many cameras and what type of cameras (4K, 6K, GoPro, etc.), as well as what lighting? And what about sound, which plays a key role in the viewer’s VR experience?


This Yael Naim mini-concert was posted in Scratch VR by Alex Regeffe at Neotopy.

What is the biggest misconception about VR — content, process or anything relating to VR?
I see two. One: the perception that VR is a flash in the pan, just a fad. What we see today is just the launch pad. The applications for VR are vast within entertainment alone, and then there is the extensive list of other markets like training and learning in such fields as medical, military, online universities, flight and manufacturing. Two: that VR post production is a difficult process with too many steps and tools. This definitely doesn’t need to be the case. Our Scratch VR customers are getting high-quality results within a single, simplified VR workflow.

How do we convince people this isn’t stereo 3D?
The main issue with stereo 3D is that it never really scaled beyond a theater experience, whereas with VR it may end up being just the opposite. It’s unclear if VR can be a true theater experience outside of classical technologies like domes and simulators. 360/VR video in the near term is, in general, a short-form media play. It’s clear that sooner rather than later smartphones will be able to shoot 360/VR video as a standard feature, and usage will skyrocket overnight. When that happens, the younger demographic will never shoot anything that is not 360, so the Snapchat/Instagram kinds of platforms will be filled with 360 snippets. VR headsets based on mobile devices make the pure number of displays significant. The initial tethered devices are not insignificant in numbers, but with the next generation of higher-resolution and untethered devices, perhaps most significantly at a much lower price point, we will see the numbers become massive. None of this was ever the case with stereo 3D film/video.

Pixvana: Executive producer Aaron Rhodes

What is the biggest issue with VR productions at the moment? Is it lack of standards?
There are many issues with VR productions, and many of them are just growing pains: not being able to see a live stitch, how to direct without being in the shot, what to do about lighting. But these are all part of the learning curve and evolution of VR as a craft. Resolution and management around big data are the biggest issues I see on the set. Pixvana is all about resolution — it plays a key role in better immersion. Many of the cameras out there only master at 4K, and that just doesn’t cut it. But when they do shoot 8K and above, the data management is extreme. Don’t underestimate the responsibility you are giving to your DIT!


Aaron Rhodes

The biggest issue is that these are early days for VR capture. We’re used to a century of 2D filmmaking and a decade of high-definition capture with an assortment of camera gear. All current VR camera rigs have compromises, and will until technology catches up. It’s too early for standards since we’re still learning and this space is changing rapidly. VR production and post also require different approaches. In some cases we have to unlearn what worked in standard 2D filmmaking.

What should clients ask of their production and post teams when embarking on their VR project?
Give me a schedule, and make it realistic. Stitching takes time, and unless you have a fleet of render nodes at your disposal, rendering your shot locally is going to take time — and everything you need to update or change it will take more time. VR post has lots in common with a non-VR spot, but the magnitude of data and rendering is much greater — make sure you plan for it.

Other questions to ask, because you really can’t ask enough:
• Why is this project being done as VR?
• Does the client have team members who understand the VR medium?
• If not, will they be willing to work with a production team to design and execute with VR in mind?
• Has this project been designed for VR rather than just a 2D project in VR?
• Where will this be distributed? (Headsets? Which ones? YouTube? Facebook? Etc.)
• Will this require an app or will it be distributed to headsets through other channels?
• If it is an app, who will build the app and submit it to the VR stores?
• Do they want to future proof it by finishing greater than 4K?
• Is this to be mono or stereo? (If it’s stereo it better be very good stereo)
• What quality level are they aiming for? (Seamless stitches? Good stereo?)
• Is there time and budget to accomplish the quality they want?
• Is this to have spatialized audio?

What is the biggest misconception about VR — content, process or anything relating to VR?
VR is a narrative component, just like any actor or plot line. It’s not something that should be done just to do it. It should be purposeful to shoot VR. It’s the same with stereo. Don’t shoot stereo just because you can — sure, you can experiment and play (we always need to do that), but don’t shoot without purpose. The medium of VR is not for every situation.
Other misconceptions because there are a lot out there:
• it’s as easy as shooting normal 2D.
• you need to have action going on constantly in 360 degrees.
• everything has to be in stereo.
• there are fixed rules.
• you can simply shoot with a VR camera and it will be interesting, without any idea of specific placement, story or design.

How do we convince people this isn’t stereo 3D?
Education. There are tiers of immersion with VR, and stereo 3D is one of them. I see these tiers starting with the desktop experience and going up in immersion from there, and it’s important to understand the strengths and weaknesses of each:
• YouTube/Facebook on the desktop [low immersion]
• Cardboard, GearVR, Daydream 2D/3D low-resolution
• Headset Rift and Vive 2D/3D 6 degrees of freedom [high immersion]
• Computer generated experiences [high immersion]

Maxon US: President/CEO Paul Babb


Paul Babb

What is the biggest issue with VR productions at the moment? Is it lack of standards?
Project file size. Huge files. Lots of pixels. Telling a story. How do you get the viewer to look where you want them to look? How do you tell and drive a story in a 360 environment?

What should clients ask of their production and post teams when embarking on their VR project?
I think it’s more that production teams are going to have to ask the questions to focus what clients want out of their VR. Too many companies just want to get into VR (buzz!) without knowing what they want to do, what they should do and what the goal of the piece is.

What is the biggest misconception about VR — content, process or anything relating to VR? How do we convince people this isn’t stereo 3D?
Oh boy. Let me tell you, that’s a tough one. People don’t even know that “3D” is really “stereography.”

Experience 360°: CEO Ryan Moore

What is the biggest issue with VR productions at the moment? Is it lack of standards?
One of the biggest issues plaguing the current VR production landscape is the lack of true professionals in the field. While a vast majority of independent filmmakers are doing their best to adapt their current techniques, they have been unsuccessful in perceiving how films and VR experiences genuinely differ. This apparent lack of virtual understanding generally leads to poor UX creation in finalized VR products.

Given the novelty of virtual reality and 360 video, standards are only just being determined in terms of minimum quality and image specifications, and these are constantly changing. To keep a finger on the pulse, VR companies are encouraged to stay plugged into 360 video communities on social media platforms. It is through this essential interaction that VR companies can keep up with production technology as it evolves.

What should clients ask of their production and post teams when embarking on their VR project?
When first embarking on a VR project, it is highly beneficial to walk prospective clients through the entirety of the process, before production actually begins. This allows the client a full understanding of how the workflow is used, while also ensuring client satisfaction with the eventual partnership. It’s vital that production partners convey an ultimate understanding of VR and its use, and explain their tactics in “cutting” VR scenes in post — this can affect the user’s experience in a pronounced way.

‘The Backwoods Tennessee VR Experience’ via Experience 360.

What is the biggest misconception about VR — content, process or anything relating to VR? How do we convince people that this isn’t stereo 3D?
The biggest misconception about VR and 360 video is that it is an offshoot of traditional storytelling and can be used in ways similar to the cinematic and documentary worlds. The mistake VR producers make in equating the two is that it can often limit the potential of the user’s experience to that of a voyeur only. Content producers need to think much farther out of this box and begin to embrace pairing images with interaction and interactivity. It helps to keep in mind that the intended user will feel as if these VR experiences are very personal to them, because they are usually isolated in an HMD when viewing the final product.

VR is being met with appropriate skepticism and is still widely considered a “fad” within the media landscape. This is often because the critic has not actually had a chance to try a virtual reality experience firsthand, and does not understand the wide-reaching potential of immersive media. Three years in, a majority of adults in the United States have never had a chance to try VR themselves, relying on what they glean from TV commercials and online reviews. One of the best ways to convince a doubtful viewer is to give them a chance to try a VR headset themselves.

Radeon Technologies Group at AMD: Head of VR James Knight

What is the biggest issue with VR productions at the moment? Is it lack of standards?
The biggest issue for us is (or was) probably stitching and the excessive amount of time it takes, but we’re tackling that head-on with Project Loom. We have realtime stitching with Loom; you can already download an early version of it on GPUopen.com. But you’re correct, there is a lack of standards in VR/360 production, mainly because there are no really established common practices. That’s to be expected, though, when you’re shooting for a new medium. Hollywood and entertainment professionals are showing up to the space in a big way, so I suspect we’ll all be working out lots of the common practices on sets in 2017.

James Knight

What should clients ask of their production and post teams when embarking on their VR project?
Double-check that they have experience shooting 360, and ask them for a detailed post production pipeline outline. Occasionally, we hear horror stories of people awarding projects to companies that think they can shoot 360 without having personally explored 360 shooting themselves and made mistakes. You want to use an experienced crew that has made the mistakes and is cognizant of what works and what doesn’t. The caveat, though, is that again there are no established rules necessarily, so people should be willing to try new things… sometimes it takes someone not knowing they shouldn’t do something to discover something great, if that makes sense.

What is the biggest misconception about VR — content, process or anything relating to VR? How do we convince people this isn’t stereo 3D?
That’s a fun question. The overarching misconception for me, honestly, is that, just as a cliché politician might make a fleeting judgment that video games are bad for society, people oftentimes assume that VR is for kids or 16-year-old boys at home in their boxer shorts. It isn’t. This young industry is really starting to build up a decent library of content, and the payoff is huge when you see well-produced content! It’s transformative, and you can genuinely envision the potential when you first put on a VR headset.

The biggest way to convince them this isn’t 3D is to convince a naysayer to put the headset on… let’s agree we all look rather silly with a VR headset on, and once you get over that, you’ll find out what’s inside. It’s magical. I had the CEO of BAFTA LA, Chantal Rickards, tell me upon seeing VR for the first time, “I remember when my father arrived home on Christmas Eve with a color TV set in the 1960s and the excitement that brought to me and my siblings. The thrill of seeing virtual reality for the first time was like seeing color TV for the first time, but times 100!”

Missing Pieces: Head of AR/VR/360 Catherine Day

Catherine Day

What is the biggest issue with VR productions at the moment?
The biggest issue with VR production today is the fact that everything keeps changing so quickly. Every day there’s a new camera, a new set of tools, a new proprietary technology and new formats to work with. It’s difficult to understand how all of these things work, and even harder to make them work together seamlessly in a deadline-driven production setting. So much of what is happening on the technology side of VR production is evolving very rapidly. Teams often reinvent the wheel from one project to the next as there are endless ways to tell stories in VR, and the workflows can differ wildly depending on the creative vision.

The lack of funding for creative content is also a huge issue. There’s ample funding to create in other mediums, and we need more great VR content to drive consumer adoption.

Is it lack of standards?
In any new medium, and in any pioneering phase of an industry, it’s dangerous to create standards too early. You don’t want to stifle people from trying new things. As an example, with our recent NBA VR project we broke all of the conventional rules that exist around VR — there was a linear narrative, fast-cut edits, it was over 25 minutes long — yet it was still very well received. So it’s not a lack of standards, just a lack of bravery.

What should clients ask of their production and post teams when embarking on their VR project?
Ask to see what kind of work the team has done in the past. They should also delve in and find out exactly who completed the work and how much of it, if any, was outsourced. There is a curtain between the client and the production/post company that often closes once the work is awarded. Clients need to know exactly who is working on their project, as much of the legwork involved in creating a VR project — stitching, compositing etc. — is outsourced.

It’s also important to work with a very experienced post supervisor — one with a very discerning eye. You want someone who really knows VR and can evaluate every aspect of what a facility will assemble. For everything from stitching and compositing to editorial and color, the level of attention to detail and quality control for VR is paramount. This is key not only for current releases; as technology evolves — and as new standards and formats are applied — you want your produced content to be as future-proofed as possible, so that if it requires a re-render to accommodate a new, higher-res format in the future, it will still hold up and look fantastic.

What is the biggest misconception about VR — content, process or anything relating to VR?
On the consumer level, the biggest misconception is that 360 video on YouTube or Facebook is VR. Another misconception is that regular filmmakers are the creative talents best suited to create VR content. Many of them are great at it, but traditional filmmakers have the luxury of being in control of everything, while in a VR production setting you have no box to work in and you have to think about a billion moving parts at once. So it either requires a creative who is good with improvisation, or a complete control freak with eyes in the back of their head. It’s been said before, but film and theater are as different as film and VR. Another misconception is that you can take any story and tell it in VR — you actually should only embark on telling stories in VR if they can, in some way, be elevated through the medium.

How do we convince people this isn’t stereo 3D?
With stereo 3D, there was no simple, affordable path for consumer adoption. We’re still getting there with VR, but today there are a number of options for consumers and soon enough there will be a demand for room-scale VR and more advanced immersive technologies in the home.

Behind the Title: 3D artist Trevor Kerr

NAME: Trevor Kerr (@kerrmotion)

WHAT’S YOUR JOB TITLE?
I am a freelance 3D Generalist.

WHAT DOES THAT ENTAIL?
Most often a generalist like myself will tackle anything from layout to composite and everything in between. Lately, I’ve been focusing on environments and effects to ultimately specialize in one or the other.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I think it can be surprising how much one person can tackle on their own. I’ve finished some fairly intricate shots within a single-artist pipeline. My latest Star Wars short film was made almost completely by myself in under two months. Of course, working with a team has incredible multidisciplinary benefits as well.

HOW LONG HAVE YOU BEEN WORKING IN VFX?
I’ve been in 3D since 2012, and started pursuing visual effects in late 2014.

HOW HAS THE VFX/GRAPHICS INDUSTRY CHANGED IN THE TIME YOU’VE BEEN WORKING? 
One difference of note in day-to-day life, in my short experience, is the arrival of the IPR in many render solutions. I think learning 3D without an IPR forces you to think about efficiency, which is, in many ways, a good thing. Instant feedback and progressive rendering are massive time-savers, but I’m curious to see what long-term effects they have on the communal rendering psyche.

DID A PARTICULAR FILM INSPIRE YOU ALONG THIS PATH IN ENTERTAINMENT?
As a child I was most certainly inspired and motivated by Star Wars and Jurassic Park. I was very interested in figuring out how to take the audience on a journey in the same way that these films did.

DID YOU GO TO SCHOOL FOR VFX/GRAPHICS?
I went to school for music and art history, but I ended up taking a job at a studio before I finished my bachelor’s. My drive to work in entertainment and film has always motivated my personal learning and continues to do so every day!

WHAT’S YOUR FAVORITE PART OF THE JOB?
My favorite part of the job is seeing everything come together. I have a massive appreciation for each step of the process — from concepting and layout to assembly and composite. Seeing the final frames in motion is always a thrill.

WHAT’S YOUR LEAST FAVORITE?
I think it’s hard to really nail down a least favorite, per se, because of how double-sided so many aspects of this industry are. A good example of this would be at the start of a job — what looks like an impossible task staring you in the face also doubles as extreme excitement and motivation to get started. To me, the subject is too nuanced to simply say, “This part is no good.”

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
This is a fantastic question, because I really cannot see myself doing anything else. I dabbled in audio engineering for a little while, so maybe something along the way of sound design — but is that so dissimilar from what I do now? It would certainly be something film-related, I’m sure.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Well, I currently have the pleasure of working on a project for League of Legends. I was also recently at Siggraph presenting for both Maxon and Autodesk on my recent Star Wars personal project. Prior to that was a piece for Disney’s Jungle Book and presenting for Maxon at NAB.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Well, from an overall execution standpoint, I think I’m most proud of my recent Star Wars personal project. The timeline was a little under two months, so given that constraint I think it is my best work. The layout, shaders and composite could use much more work, but I’m still happy to have learned everything I did along the way.

WHAT TOOLS DO YOU USE DAY TO DAY?
I mostly use Cinema 4D and Houdini for 3D work. My preferred renderer is Arnold, but I am also versed in Octane. Compositing is typically handled in Nuke or After Effects. Lately, I’ve been learning Clarisse, as well as specializing further in Houdini.

WHERE DO YOU FIND INSPIRATION?
I hate to pull out some super-cliché answers here, but: my girlfriend, my three-year-old, and my love for feature films and the technology we’ve created over the past century to make them. I feel very strongly about good production design and story, especially when it comes to environments.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Well, I try to spend all of my time as efficiently as possible, but every now and then you have to just do nothing and unwind. I find that going back to the source of my inspiration can help remind me why I got into this work when things get hard. Sitting down in my living room and taking in a favorite film of mine will often put me at ease.

My first trip to IBC

By Sophia Kyriacou

When I was asked by the team at Maxon to present my work at their IBC stand this year, I jumped at the chance. I’m a London-based working professional with 20 years of experience as a designer and 3D artist, but I had never been to IBC. My first impression of the RAI convention center in Amsterdam was that it’s super huge and easy to get lost in for days. But once I found the halls relevant to my interests, the creative and technical buzz hit me like the heat in the face you get when disembarking from a plane in a hot, humid summer. It was immediate, and it felt so good!

The sounds and lights were intense. I was surrounded by booths, basslines of audio vibrating against the floor, changing as you walked along. It was a great atmosphere; so warm and friendly.

My first Maxon presentation was on day two of IBC — it was a show-and-tell of three award-winning and nominated sequences I created for the BBC in London and one for Noon Visual Creatives. As a Cinema 4D user, it was great to see the audience at the stand captivated by my work, and knowing it was streamed live to a large global audience made it even more exciting.

The great thing about IBC is that it’s not only about companies shouting about their new toys. I also saw how it brings passionate pros from all over the world together — people you would never meet in your usual day-to-day work life. I met people from all over the globe and made new friends. Everyone appeared to share the same or similar experiences, which was wonderful.

The great thing about having the first presentation of the day at the Maxon stand was that I could then take a breather and look around the show. I also sat in on a Dell Precision/Radeon Technologies roundtable one afternoon, which was a really interesting meeting. We were a group of pros from varied disciplines within the industry, and it was great to talk about what hardware works, what doesn’t and how it could all get better. I don’t work in a realtime area, but I do know what I would like to see as someone who works in 3D. It was incredibly interesting, and everyone was so welcoming. I thoroughly enjoyed it.

Sunday evening, I went over to the SuperMeet — such an energetic and friendly vibe. The stage demos were very interesting. I was particularly taken with the fayIN tracker plug-in for Adobe After Effects. It appears to be a very effective tool, and I will certainly look into purchasing it. The new Adobe Premiere features look fantastic as well.

Everything about my time at IBC was so enjoyable. I went back to London buzzing, and I’m already looking forward to next year’s show.

Sophia Kyriacou is a London-based broadcast designer and 3D artist who splits her time working as a freelancer and for the BBC.

Review: HP’s zBook 17 G3 mobile workstation

By Brady Betzel

Desktop workstations have long been considered the highest of the high end and the fastest of the fast. From the Windows-driven HP Z820 powerhouse to Apple’s ubiquitous Mac Pro, multimedia pros, video editors, VFX editors, sound engineers and others are constantly looking for ways to speed up their workflows.

Whether you feel that OS X is more stable than Windows 10, or you love the ability to use Nvidia’s Quadro line of graphics cards, one thing that pros need is a reliable system that can process monster DNxHR, ProRes 4444, even DPX files, and crunch them down to YouTube-sized gems and Twitter-sized GIFs in as little time as possible.

What if you need the ability to render a 4K composition in Adobe After Effects while simultaneously editing in Adobe Premiere on an airplane or train? You have a few options: Dell makes some pretty high-end mobile workstations, and Apple makes an outdated MacBook Pro that might hold up. What other options are there? Well, how about HP’s latest line — the HP zBook Generation 3? I’m focusing on the 17-inch for this review.

One of the fringe benefits of buying a workstation targeted at post pros is that they are tested with apps like Adobe’s Creative Cloud, Avid Media Composer and Autodesk’s suite of apps — better known as ISV certification (ISV = Independent Software Vendor). HP and selected software vendors spend tons of time making sure the apps most likely to be used by high-end zBook users are strenuously tested. Most of the time this means increased efficiency.

For example, being able to choose a graphics solution like the Nvidia Quadro M5000M with 8GB of RAM and 1,536 CUDA cores instead of the AMD FirePro W6150M with 4GB of RAM, because you want CUDA-enabled renders, is a choice you get because HP spent time testing the highest-end graphics cards for this system.

Here is a rundown of the specs in the zBook G3 I tested:
– Processor: Intel Xeon CPU E3-1535M v5 — four cores, eight threads, 2.9 GHz
– Memory: 32GB DDR4, 2133MHz
– NVMe SSD drive: NVMe Samsung MZVPV512 – 512GB
– Graphics card 1: HD graphics P530 1GB
– Graphics Card 2: Nvidia Quadro M5000M 8GB
– Screen: 17.3-inch diagonal FHD UWVA IPS anti-glare LED-backlit (1920×1080)
– Audio: Bang & Olufsen HD audio
– Built-In Battery: HP Long Life 6-cell 96 WHr Li-ion prismatic
– External Ports: four USB 3, Gigabit RJ-45, SD media, smart card reader, microphone/headphone port, two Thunderbolt 3, HDMI, VGA, power and security cable slot.
– Full-size spill resistant keyboard with numeric keypad
– Operating system: Windows 10
– Warranty: 3/3/3 – three years parts, labor and on-site (limited restrictions apply)

What Do I Really Think?
Some initial takeaways after using the zBook G3: it features very sturdy construction, it offers lightning-quick speed and connections, and it has amazing battery life given the power it harnesses. Obviously, the battery drains faster when really pushing the zBook G3 with power-hungry apps such as Maxon’s Cinema 4D and Adobe’s After Effects, Premiere or Media Encoder, but the now built-in battery is the longest lasting that I have experienced in a mobile workhorse.

I recently took this mobile workstation to San Francisco for the GoPro Developer Program announcement, and it lasted all day. That matters because the power supply is neither small nor light. I wish I had left it at home, but I was scared I would run out of battery power. When talking with the HP crew during this review process, they stressed how they had improved the battery life even as the machine’s speed and power increased, and they were not lying. Like I said, apps such as Adobe Media Encoder will drain your battery faster, but I could still get two to three hours while transcoding in Media Encoder, which is pretty great.

Stress Test
With powerful workstations like the HP zBook G3, I like to run Cinebench, a render and speed stress test made by Maxon and a standard benchmark in many reviews. I had some interesting results. In the OpenGL test it placed fifth, bested by some desktop graphics cards (the AMD Radeon HD 5770, Nvidia GTX 460 and Nvidia Quadro 4000) and one mobile card, the Nvidia Quadro K4000M. The Intel Xeon E3-1535M v5 also tested fifth in the multi-core CPU test, topped by three Intel i7s and one Xeon, all desktop processors. Surprisingly, in the single-core CPU test it ranked second, topped only by the Intel i7-4770K.

Practical Test
As an editor with a lot of experience in the prep and delivery of footage and final products, when I hear “workstation” I think encoding and transcoding beast. A typical task in my daily work is to transcode hour-long episodic QuickTimes from codecs like ProRes or DNxHD to something like an H.264 or an MP4. My first test was to compress a two-hour DNxHD 175 QuickTime to the YouTube 1080p setting in Adobe’s Media Encoder, which is a 1920×1080, 16Mbps MP4 — decent quality balanced with a low file size. It took 80 minutes (about 2/3 realtime), which is pretty good considering I’m working on a mobile workstation. On a high-end desktop workstation like the Mac Pro or z840, I might get that down to about 1/4 realtime, or roughly 30-40 minutes.

My next test was to transcode a 44-minute DNxHD QuickTime to the YouTube 1080p setting in Adobe’s Media Encoder. This file took 33 minutes to transcode, roughly 3/4 of realtime. I also tried compressing a 50-minute ProRes HQ QuickTime to the YouTube 1080p MP4 setting, and it took around 40 minutes. So all in all, you are transcoding a little faster than realtime; if you need more speed, you should probably be compressing on a desktop workstation.
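Those figures are easy to sanity-check, since each one is just encode time divided by source duration. A trivial helper script (mine, not part of any benchmark suite) reproduces them:

```python
# Trivial helper (mine, not a benchmark tool): express each transcode
# as a fraction of realtime, matching the figures quoted above.

def realtime_ratio(duration_min: float, encode_min: float) -> float:
    """Encode time as a fraction of source duration; < 1.0 is faster than realtime."""
    return encode_min / duration_min

tests = [
    ("DNxHD 175, 120 min", 120, 80),  # ~0.67, about 2/3 realtime
    ("DNxHD, 44 min", 44, 33),        # 0.75, about 3/4 realtime
    ("ProRes HQ, 50 min", 50, 40),    # 0.80
]
for name, duration, encode in tests:
    print(f"{name}: {realtime_ratio(duration, encode):.2f}x realtime")
```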

Other Observations
I really came to appreciate the large IPS screen, which is very bright and very clear. One thing I notice as I get older is that I need larger screens (yuck, I think I just fainted… definitely getting old). On mobile workstations it’s hard to get a large screen that is also easy to view for multiple hours, but this HP matte screen is great.

Another thing I really like is the branded speakers. Most laptops have half-decent speakers at best, but the zBook comes with Bang & Olufsen speakers that sound way better than other laptop speakers I’ve heard. I definitely plugged in headphones, but in a pinch these were more than good. I particularly liked the full-sized keyboard with numeric keypad (any editor who has to enter timecode knows how important the numeric keypad is for this).

In the End
I love HP’s line of z series workstations, from the super-high-end z840 to this zBook G3. If you are looking to transcode a 44-minute QuickTime in under 15 minutes, you are going to need a system like the HP z840 with 64GB of RAM and an SSD under the hood.

If you need power similar to the z840 but in a mobile powerhouse, the zBook G3 is for you. With peripherals like the HP Thunderbolt 3 dock, you can keep your Thunderbolt 3 RAID, the display ports for your UHD/4K monitors and even more USB 3 ports stationary at home, without having to hook up and unhook your peripherals every time you get home from the office. The 200W dock costs $249 and the 150W dock $229 (for the 17-inch G3 you will need the 200W version). The power supply that charges the zBook G3 is not small, so using the dock as a charging station and peripheral connector is definitely the way to go.

One issue I had with the zBook has to do with HP ditching the Thunderbolt 1/2 connectors. It’s kind of funny to see a VGA port next to HDMI and Thunderbolt 3 ports without a Thunderbolt 2 connection; at the least, I would have hoped HP would include an adapter with the zBook. I asked HP about this, and they said other companies were already tackling Thunderbolt 1/2-to-3 converters. While it’s not a huge issue, it’s interesting to see them ditch as recent an interface as Thunderbolt 2 (which was in the zBook G2) when I know their customers have recently invested in Thunderbolt 2 devices and there is no easy way to connect them to this zBook G3, other than buying a $100 adapter on top of the cost of the mobile workstation. Obviously I am nitpicking, but it stood out to me.

Moving on, the zBook G3 is one of the most solid mobile workstations I have touched. It’s not light, but it’s not meant to be. HP has other options for users looking for a Windows-based PC that rivals the MacBook Air. The zBook isn’t as powerful as its stationary workstation line, but it won’t let you down if you need something to encode QuickTimes on the go or create proxies for your Blackmagic Resolve 12.5 or Avid Media Composer 8.5 projects. It will even run Cinema 4D without skipping a beat.

If you have the money, the zBook G3 is at the top of my list for a workstation that fits in a backpack, lasts upwards of five hours on battery life, and can chew up and spit out media files.

Brady Betzel is an online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com, and follow him on Twitter @allbetzroff. Brady was recently nominated for an Emmy for his work on Disney’s Unforgettable Christmas Celebration.

Review: Maxon’s Cinema 4D R17

By Brady Betzel

Over the years, I have seen Maxon take Cinema 4D from something that only lived on the periphery of my workflow to an active tool alongside apps such as Adobe After Effects and Adobe Premiere and Avid’s Media Composer/Symphony.

What I have seen happen is the simplification of workflow and capabilities across Cinema 4D’s releases. This brings us to the latest release: Cinema 4D R17. It not only builds on the previous R16 release with things like improved Motion Tracking and Shaders; Maxon also continues to add new functionality such as the Take System, Color Chooser and Variation Shader.

Variation Shader

Because I work in television, I previously thought that I only needed Cinema 4D when creating titles — I couldn’t quite get that gravitas that I was looking for in apps like Media Composer’s Title Tool, After Effects or even Photoshop (i.e. raytracing or great reflections and ambient occlusion that Cinema 4D always conveyed to me). These days I am searching out tutorials and examples of new concepts and getting close to committing to designing one thing a day, much like @beeple or @gsg3d’s daily renders.

Doing a daily render is a great way to get really good at a tool like Cinema 4D. It feels like Maxon is shaping a tool that, much like After Effects, is becoming usable by almost anyone who can get their hands on it — which is a lot of people, especially if you subscribe to Adobe’s Creative Cloud with After Effects, because Cinema 4D Lite/Cineware is included.

Since I am no EJ Hassenfratz (@eyedesyn), I won’t be covering the minute details of Cinema 4D R17, but I do want to write about a few of my favorite updates in hopes that you’ll get excited and jump into the sculpting, modeling or compositing world inside of Cinema 4D R17.

The Take System
If you’ve ever had to render out multiple versions of the same scene, you will want to pay attention to the new Take System in Cinema 4D R17. Typically, you build many projects with duplicated scenes to create different versions of the same scene. Maybe you are modeling a kitchen and want to create five different animations, each with its own materials for the cabinet faces as well as unique camera moves. Thanks to Cinema 4D R17’s Take System, you can create different takes within the same project, saving tons of time and file size.

Take System

Under the Objects tab you will see a new Takes tab. From there you can generate new takes, enable Auto Take (much like auto keyframing, it records each take’s unique changes) and perform other take-specific functions like overrides. The Take System uses a hierarchical structure that allows child takes to inherit the properties of their parents. At the top is the main take, and any changes made there affect all of the children underneath.

Say you want your kitchen to have the same floor but different cabinet face materials. You would first create your scene as you want it to look overall, then in the Take menu add a take for each version you want, name it appropriately for easy navigation later, enable Auto Take, change any attributes for that specific take, save and render!
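For script-minded users, the Take System is also exposed to Python. Here is a minimal Script Manager sketch of the kitchen example, assuming R17’s Takes Python API (doc.GetTakeData / TakeData.AddTake); the take names are hypothetical:

```python
import c4d

# Minimal Script Manager sketch (take names are hypothetical), assuming
# R17's Takes Python API: create one take per cabinet-material variation.

def main():
    take_data = doc.GetTakeData()  # `doc` is predefined in Script Manager
    if take_data is None:
        return
    for name in ["Oak Cabinets", "Walnut Cabinets", "Painted Cabinets"]:
        # Parent under the Main take (None), without cloning another take.
        take = take_data.AddTake(name, None, None)
        if take is not None:
            take_data.SetCurrentTake(take)  # switch to the newly added take
    c4d.EventAdd()  # refresh the Cinema 4D UI

main()
```

With Auto Take enabled, any change made while a take is current is recorded as an override on that take rather than baked into the scene itself.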

In the Render Settings under Save > File… you can choose from the drop-down menu how you want your takes named upon export. There are a bunch of presets in there that will get you started. Technically, Maxon refers to this update as Variable Path and File Names, or “Tokens”: adding a token such as $take to the output filename, for example, names each rendered file after its take.

This is a gigantic addition to Cinema 4D’s powerful toolset that should bring a sigh of relief to anyone who has had to export multiple versions of the same scene. Now, instead of multiple projects, you can save all of your versions in one place.

Pen and Spline Tools
One of the hardest things to wrap my head around when I was diving into the world of 3D was how someone actually draws in 3D space. What the hell is a Z-Axis anyways? I barely know x and y! Well, after Googling what a Z-Axis is, you will also understand that technically, with a modern-day computing set-up, you can’t literally draw in 3D space without some special hardware.


However, in Cinema 4D you can draw on one plane (i.e., the front view), then place that shape inside a Lathe and bam! — you have drawn in 3D space, complete with x, y and z dimensions. So while that is a super-basic example, the new Pen tool and Spline tool options allow someone with little to no 3D experience to jump into Cinema 4D R17 and immediately get modeling.

For example, if you grab the Pen tool, draw some sort of geometry and then want to cut a hole in it, you can grab a new circle spline and place it where you want it to intersect the object you just drew. Highlight the spline that will do the cutting (if you use Spline Subtract), then hold Control on Windows or Command on Mac and click on the object you want to cut from. Then go into the Pen/Spline menu and click Spline Subtract, Spline Union, Spline And, Spline Or or Spline Intersect. You now have a permanent way to alter your geometry much more efficiently. Try it yourself; it’s a lot easier than reading about it.

I used this to create some — I’ll call them unique — shapes and was able to make intersection cuts easily and painlessly.

I also like the Spline Smooth tools. You’ve drawn your spline but want to add some flair: click on the Spline Smooth tool and, under the options, check off exactly what you want your brush to do to your spline (think of Spline Smooth like the Liquify tool in Photoshop, where you can bulge, flatten or even spiral your work). The options include Smooth, Flatten, Random, Pull, Spiral, Inflate and Project. The Spiral function is a great way to give some unique wave-like looks to your geometry.

Color Chooser
Another update to Cinema 4D R17 that I really love is the updated Color Chooser. While in theory it’s a small update, it’s a huge one for me. I really like to use different color harmonies when doing anything related to color and color correction. In Cinema 4D R17 you can choose from RGB, HSV or Kelvin color modes. In RGB there are presets to help guide you in making harmonious color choices: Monochromatic, Analogous, Complementary, Tetrad, Semi-Complementary and Equiangular. If you don’t have much experience in color theory, it might be a good time to run to your local library and find a book; it will really help you make conscious and appropriate color choices when you need to.
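Under the hood, harmony presets like these boil down to rotations around the hue wheel, which is easy to see in a few lines of generic Python (illustrative only, not the Cinema 4D API):

```python
import colorsys

# Illustrative only (not the Cinema 4D API): harmony presets as simple
# rotations around the HSV hue wheel. Hues are fractions in [0, 1).

def harmony(hue, offsets_deg):
    """Return the hues produced by rotating `hue` by each offset in degrees."""
    return [(hue + off / 360.0) % 1.0 for off in offsets_deg]

base = 0.58                              # a blue
complementary = harmony(base, [180])     # opposite side of the wheel
analogous = harmony(base, [-30, 30])     # neighbors of the base hue
tetrad = harmony(base, [90, 180, 270])   # four evenly spaced hues

# Convert the complementary hue to RGB for display:
r, g, b = colorsys.hsv_to_rgb(complementary[0], 0.8, 0.9)
print(f"complementary RGB: ({r:.2f}, {g:.2f}, {b:.2f})")
```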

Besides the updated color-theory-based layouts, you can import your own image and create a custom color swatch that can be saved. In addition, a personal favorite is the Color Mixer: you can choose two colors and use a slider to find a mix of the two. A lot of great experimentation can happen here.

Lens Distortion
When compositing objects, text or whatever else you can think of into a scene, it can get frustrating to deal with footage that has extreme lens curvature. In Cinema 4D R17 you can easily create a Lens Profile that can then be applied as either a shader or a post effect to your final render.

To do this you need to build a Lens Profile by going to the Tools menu and clicking Lens Distortion, then loading the image you want to use as reference. From there you need to tell Cinema 4D R17 what it should consider a straight line — like a sidewalk, which in theory should be horizontally straight, or a light pole, which should be vertically straight.

Lens Distortion

To do this, click Add N-Point Line and line it up against your “straight” object; you can add multiple points as necessary to follow changes in line angle. Choose a lens distortion model that you think should be close to your lens type (3D Standard Classic is a good one to start with), click Auto Solve and then save your profile to apply when you render your scene. To load the profile at render time, go to Render Settings > Effects > Lens Distortion and load it from there.
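To give a sense of what such a profile encodes, here is a generic radial distortion model (a common textbook formulation with hypothetical coefficients, not Maxon’s actual solver):

```python
# Generic radial distortion model (a textbook formulation, not Maxon's
# solver): k1 and k2 are the coefficients a tool like Auto Solve estimates
# from the straight-line annotations described above.

def distort(x: float, y: float, k1: float, k2: float):
    """Apply radial distortion to normalized coords centered on the image."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# Barrel distortion (negative k1) pulls a point near the edge inward:
print(distort(0.8, 0.0, k1=-0.15, k2=0.01))  # -> (~0.726, 0.0)
```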

Book Generator
I love that Maxon includes some shiny bells and whistles in its updates. Whether it’s the staircase from R16 or Grow Grass, I always love updates that make me say, “Wow, that’s cool.” Whether or not I use them a lot is another story.

In Cinema 4D R17, the Book Generator is the “wow” factor for me. Obviously it has a very niche use, but it’s still really cool. In your Content Browser, just search for Book Generator and throw it into your scene. To make the books land on a shelf, you need to first create the shelves, make them editable, then click “Add Selection as One Group” (or “Add Selection as Separate Groups” if you want to control them individually). Afterwards, under the Book Generator object you can click on the Selection, which holds the actual books. Under User Data you can customize things like overall book size, type of books or magazines, randomness, textures and bookends, and even make the books lean on each other if they are spaced out.


It’s pretty sweet once you understand how it works. If you want different pre-made textures for your magazines or books, you can search for “book” in the Content Browser. There are many different kinds of textures, including one for the inside pages.

Summing Up
I detailed just a few great updates to Maxon’s Cinema 4D R17, but there are tons more. The ability to import SketchUp files directly into Cinema 4D R17 is very handy, and the keyframe-handling updates and the Variation Shader open up endless possibilities.

If you aren’t ready to drop $3,695 on the Cinema 4D R17 Studio edition, $2,295 on the Visualize edition, $1,695 on the Broadcast edition or $995 on the Prime edition, make sure you check out the version that comes with Adobe After Effects CC (Cineware/Cinema 4D Lite). While it won’t be as robust as the other versions, it will give you a taste of what is possible and may even spark your imagination to try something new like modeling! Check out the different versions here: http://www.maxon.net/products/general-information/general-information/product-comparison.html.

Keep in mind that if you are new to the world of 3D modeling or Cinema 4D and want some great learning resources, check out Sean Frangella on YouTube (https://www.youtube.com/user/seanfrangella, @seanfrangella), Greyscalegorilla (www.greyscalegorilla.com/blog), Cineversity and Motionworks (www.motionworks.net, @motionworks). Cineversity even used my alma mater, California Lutheran University, in its tutorials!

Brady Betzel is an online editor at Margarita Mix in Hollywood. Previously, he was editing The Real World at Bunim-Murray Productions. You can email Brady at bradybetzel@gmail.com, and follow him on Twitter, @allbetzroff.

Review: Maxon Cinema 4D Studio R16

By Brady Betzel

It’s not every day that I need a full-fledged 3D application when editing in reality television, but when I do, I call on Maxon’s Cinema 4D. The Cinema 4D Studio R16 release is chock-full of features aimed at people like me who want to get in and out of their 3D app without pulling out all of their hair.

I previously reviewed Cinema 4D Studio R15, and that is when I began to fall in love with just how easy it was becoming to build raytraced titles or grow grass with the click of my Wacom stylus. Now we are seeing the evolution of not just a standard 3D app but a motion graphics powerhouse, one that can be used to craft a powerful set of opening credits or seamlessly composite a beautiful flower vase using the new motion tracker, all inside Cinema 4D Studio R16.

I’ve grown up with Cinema 4D, so I may be a little partial to it, but luckily for me the great…

Maxon intros next-gen Cinema 4D

Maxon has updated its 3D motion graphics, visual effects, visualization, painting and rendering software Cinema 4D to Release 16. Some of the new features in this newest version include a modeling PolyPen “super-tool,” a motion tracker for easily integrating 3D content within live footage and a Reflectance material channel that allows for multi-layered reflections and specularity.

The company will be at Siggraph next week with the new version, which is scheduled to ship in September.


Key highlights include:
Motion Tracker – This offers fast and seamless integration of 3D elements into real-world footage. Footage can be tracked automatically or manually, and aligned to the 3D environment using position, vector and planar constraints.

Interaction Tag – This gives users control over 3D objects and works with the new Tweak mode to provide information on object movement and highlighting. Suited for technical directors and character riggers, the tag reports all mouse interaction and allows object control via XPresso, COFFEE or Python.

PolyPen – With this tool users can paint polygons and polish points as well as easily move, clone, cut and weld points and edges of 3D models. You can even re-topologize complex meshes. Enable snapping for greater precision or to snap to a surface.

Bevel Deformer – The Bevel toolset in Cinema 4D can now be applied nondestructively to entire objects or specific selection sets. Users can also animate and adjust bevel attributes to create all new effects.

Sculpting – R16 offers many improvements and dynamic features to sculpt with precision and expand the overall modeling toolset. The new Select tool gives users access to powerful symmetry and fill options to define point and polygon selections on any editable object. Additional features give users more control and flexibility for sculpting details on parametric objects, creating curves, defining masks, stamps and stencils, as well as tools for users to create their own sculpt brushes and more.

Other modeling features in R16 include an all-new Cogwheel spline primitive to generate involute and ratchet gears; a new Mesh Check tool to evaluate the integrity of a polygonal mesh; Deformer Falloff options and Cap enhancements to easily add textures to the caps of MoText, Extrude, Loft, Lathe and Sweep objects.

Reflectance Channel (main image) – This provides more control over reflections and specularity within a single new channel. Features include the ability to build up multiple layers for complex surfaces, such as metallic car paint or woven cloth, and options to render separate multi-pass layers for each reflection layer to achieve higher-quality, more realistic imagery.

New Render Engine for Hair & Sketch – A completely new unified effects render engine allows artists to seamlessly raytrace Hair and Sketch lines within the same render pass, giving users higher-quality results in a fraction of the time.

Rendering


Team Render, introduced by Maxon in 2013, features many new enhancements, including a client-server architecture that allows users to control all the render jobs for a studio via a browser.

Other Workflow Features/Updates
Content Library – Completely reorganized and optimized for Release 16, the preset library contains custom-made solutions with specific target groups in mind. New house and stair generators, as well as modular doors and windows, have been added for architectural visualizers. Product and advertising designers can take advantage of a powerful tool to animate the folding of die-cut packaging, as well as modular bottles, tubes and boxes. Motion designers will enjoy the addition of high-quality models made for MoGraph, preset title animations and interactive chart templates.

Exchange/Pipeline Support – Users can now exchange assets throughout the production pipeline more reliably in R16 with support for the most current versions of FBX and Alembic.

Solo Button – Offers artists a production-friendly solution to isolate individual objects and hierarchies for refinement when modeling. Soloing also speeds up the viewport performance for improved workflow on massive scenes.

Annotations – Tag specific objects, clones or points in any scene with annotations that appear directly in view for a dependable solution to reference online pre-production materials, target areas of a scene for enhancement, and more.

UV Peeler – An effective means to quickly unwrap the UVs of cylindrical objects for optimized texturing.

NAB: Maxon’s Paul Babb checks in with partnership news

Las Vegas — At the NAB show, Maxon, makers of Cinema 4D, talked up partnerships with companies like Thinkbox and Vizrt. The company also hosted 20 CG artists on the main stage of its booth, taking people through their most recent projects and how Cinema 4D played a part.

Getting back to the partnership news, Thinkbox has released the Krakatoa particle renderer as a plug-in for Cinema 4D R14 and R15 users. Highlights include:

  • Point or voxel representation of particle data with various filter modes, motion blur and depth of field camera effects, and HDRI render passes output to OpenEXR files.
  • Concurrent support for additive and volumetric shading models, with per-particle control over data, including color, emission, absorption, density and more.
  • Integration with the native particle systems of Cinema 4D as well as with third-party…