Category Archives: NAB

NAB 2016 from an EP’s perspective

By Tara Holmes

Almost two weeks ago, I found myself at NAB for the first time. I am the executive producer of color and finishing at Nice Shoes, a post production studio in New York City. I am not an engineer and I am not an artist, so why would an EP go to NAB? I went because one of my main goals for 2016 is to make sure the studio remains at the forefront of technology. While I feel that our engineering team and artists represent us well in that respect, I wanted to make sure that our producers and I were fully educated on these emerging technologies.

One of our first priorities for NAB was to meet with the top monitor manufacturers in the hope of determining which UHD HDR monitors would meet our standards for professional client viewing. We came to the conclusion that the industry isn’t there yet, and that we have more research to do before we upgrade our studio viewing environments.

Everyone with me was in agreement: these monitors are not where they need to be. Most output only around 400-800 nits and exhibit luminance and contrast issues. None of this should stop the process of coloring for HDR. As the master monitor for the colorist, the Sony BVM-X300 OLED, which we are currently using, seems to be the ideal choice, as you can still work in traditional Rec 709 as well as Rec 2020 for HDR.

After checking out some monitors, we headed to the FilmLight booth to go over the 5.0 upgrades to Baselight. Our colorist Ron Sudul, along with Nice Shoes Creative Studio VFX supervisor Adrian Winter, sat with me and the FilmLight reps to discuss the upgrades, which included incredible new isolation tracking capabilities. These upgrades will reinvent what can be achieved in the color suite, from realtime comps to retouch being done in color. The possibilities are exciting.

I also spent time learning about the upgrades to FilmLight’s Flip, the company’s on-set color hardware. The Flip allows you to develop your color look on set, apply it during your edit process (with the Baselight plug-in for Avid) and refine it in final color, all without affecting your raw files. In addition to the Flip, FilmLight has developed Prelight, software that supports on-set look development and grading. I asked if these new technologies could enable us to do high-end work like sky replacements on set, and was told that the hardware within the Flip very well could.

We also visited our friends at DFT, the manufacturers of the Scanity film scanner, to catch up and discuss the business of archiving. With Scanity, Nice Shoes can scan at 4K where other scanners only go up to 2K resolution. This is vital not only for preserving past materials, but also for future-proofing film scans for emerging formats.

VR
On Sunday evening before the exhibits opened, we attended a panel on VR that was hosted by the Foundry. At this event we got to experience a few of the most talked about VR projects including Defrost, one of the first narrative VR films, from the director of Grease, Randal Kleiser, who was on the panel along with moderator Morris May (CEO/founder, Specular Theory), Bryn Mooser (co-founder, RYOT), Tim Dillon (executive producer, MPC) and Jake Black (head of VR, Create Advertising).

The Foundry’s VR panel.

The panel inspired me to delve deeper into the VR world, and on Wednesday I spent most of my last day exploring the Virtual & Augmented Reality Pavilion. In addition to seeing the newest VR camera rigs, experiencing a live VR feed and demoing the Samsung Gear, I explored viewing options for the color workflow. Some people I spoke to suggested that multiple Oculus set-ups attached to a single feed was the way to go for the color workflow, but another option we explored in a very preliminary way was a “dome,” which offers a focused 180-degree view so that everyone involved can comment on the same section of a VR scene. This would ensure that all involved are experiencing and viewing the same thing at the same time.

HDR Workflow
Another panel we attended was about HDR workflows. Nice Shoes has already had the opportunity to work on HDR material and has begun to develop workflows for this emerging medium. Most HDR deliverables are for episodic and long-form work for companies like Netflix and Hulu. It may be some time before commercial clients request an HDR deliverable, but the workflows will be much the same, so the development being done now is extremely valuable.

My biggest takeaway was that there are still no set standards. There’s Dolby Vision vs. HDR10 vs. PQ vs. others. But it appears that everyone agrees standards are not needed right now: we need to get tools into the hands of the artists and figure out what works best, and standards will come out of that. The good news is that we appear to be future-proofed should the standard change. For the most part, every camera we are shooting on captures what HDR needs, so if standards change — say from 1,000 nits to 10,000 nits — the footage and process are still there to go back in and color for the new request.

Summing Up
I truly believe my time spent at NAB has prepared me for the myriad of questions that will be put forth throughout the year and will help us develop our workflows to evolve the creative process of post. I’ll be sure to be there again next year in order to prepare myself for the questions of 2017 and beyond.

Our Main Image: The view walking into the South Hall Lower at the LVCC.

Digging Deeper: Dolby Vision at NAB 2016

By Jonathan Abrams

Dolby, founded over 50 years ago as an audio company, is elevating the experience of watching movies and TV content through new technologies in audio and video, the latter of which is a relatively new area for their offerings. This is being done with Dolby AC-4 and Dolby Atmos for audio, and Dolby Vision for video. You can read about Dolby AC-4 and Dolby Atmos here. In this post, the focus will be on Dolby Vision.

First, let’s consider quantization. All digital video signals are encoded as bits. When digitizing analog video, the analog-to-digital conversion process uses a quantizer, which maps each sample of the continuous signal to the nearest of a finite set of code values, each represented by a pattern of bits (1s and 0s). As the bit depth used to represent a finite range increases, the steps between adjacent code values get smaller, which directly reduces the quantization error. The number of possible values is 2^X, where X is the number of bits available, so a 10-bit signal has four times as many possible encoded values as an 8-bit signal. This difference in bit depth does not equate to dynamic range: it is the same range of values, quantized with an accuracy that increases as the number of bits increases.
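To make that concrete, here is a small Python sketch (my own illustration, not from the article) comparing the number of code values and the quantization step size for 8-, 10- and 12-bit encodings of the same range:

```python
# Rough illustration: code values and step size for different bit depths
# quantizing the SAME finite signal range.

def quantization_stats(bits, signal_range=1.0):
    levels = 2 ** bits                   # number of possible encoded values
    step = signal_range / (levels - 1)   # spacing between adjacent code values
    return levels, step

for bits in (8, 10, 12):
    levels, step = quantization_stats(bits)
    print(f"{bits}-bit: {levels} code values, step = {step:.6f} of full range")

# 10-bit yields 1024 values vs. 256 for 8-bit: four times as many codes
# over the same range, i.e. finer steps, not a wider dynamic range.
```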

Now, why is quantization relevant to Dolby Vision? In 2008, Dolby began work on a system specifically for this application that has been standardized as SMPTE ST-2084, which is SMPTE’s standard for an electro-optical transfer function (EOTF) and a perceptual quantizer (PQ). This work is based on work in the early 1990s by Peter G. J. Barten for medical imaging applications. The resulting PQ process allows for video to be encoded and displayed with a 10,000-nit range of brightness using 12 bits instead of 14. This is possible because Dolby Vision exploits a human visual characteristic where our eyes are less sensitive to changes in highlights than they are to changes in shadows.
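For readers who want to see the curve itself, below is a minimal Python sketch of the ST-2084 PQ EOTF using its published constants; it is illustrative only, not Dolby’s implementation:

```python
# Minimal sketch of the SMPTE ST-2084 (PQ) EOTF: normalized code value -> nits.
# Constants are the published ST-2084 values; this is not Dolby's code.

m1 = 2610 / 16384            # 0.1593017578125
m2 = 2523 / 4096 * 128       # 78.84375
c1 = 3424 / 4096             # 0.8359375
c2 = 2413 / 4096 * 32        # 18.8515625
c3 = 2392 / 4096 * 32        # 18.6875

def pq_eotf(code, peak_nits=10000.0):
    """Convert a normalized PQ code value (0..1) to absolute luminance in nits."""
    e = code ** (1.0 / m2)
    y = max(e - c1, 0.0) / (c2 - c3 * e)
    return peak_nits * (y ** (1.0 / m1))

# Equal code-value steps near black map to tiny luminance changes, while steps
# near the top map to large ones, matching the eye's sensitivity.
for code in (0.1, 0.25, 0.5, 0.75, 1.0):
    print(f"code {code:.2f} -> {pq_eotf(code):8.2f} nits")
```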

Previous display systems, referred to as SDR or Standard Dynamic Range, are usually 8 bits. Even at 10 bits, SD and HD video are specified to be displayed at a maximum output of 100 nits using a gamma curve. Dolby Vision has a nit range that is 100 times greater than what we have typically been seeing from a video display.

This brings us to the issue of backwards compatibility. What will be seen by those with SDR displays when they receive a Dolby Vision signal? Dolby is working on a system that will allow broadcasters to derive an SDR signal in their plant prior to transmission. At my NAB demo, there was a Grass Valley camera whose output image was shown on three displays. One display was PQ (Dolby Vision), the second display was SDR, and the third display was software-derived SDR from PQ. There was a perceptible improvement for the software-derived SDR image when compared to the SDR image. As for the HDR, I could definitely see details in the darker regions on their HDR display that were just dark areas on the SDR display. This software for deriving an SDR signal from PQ will eventually also make its way into some set-top boxes (STBs).

This backwards-compatible system works on the concept of layers. The base layer is SDR (based on Rec. 709), and the enhancement layer is HDR (Dolby Vision). This layered approach uses incrementally more bandwidth when compared to a signal that contains only SDR video.  For on-demand services, this dual-layer concept reduces the amount of storage required on cloud servers. Dolby Vision also offers a non-backwards compatible profile using a single-layer approach. In-band signaling over the HDMI connection between a display and the video source will be used to identify whether or not the TV you are using is capable of SDR, HDR10 or Dolby Vision.

Broadcasting live events in Dolby Vision is currently a challenge, and not only because existing HDTV plants cannot carry the different signal. The challenge is due to issues with adapting the Dolby Vision process for live broadcasting. Dolby is working on these issues, but it is not proposing an entirely new system for Dolby Vision at live events: some signal paths will be replaced, though the infrastructure, or physical layer, will remain the same.

At my NAB demo, I saw a Dolby Vision clip of Mad Max: Fury Road on a Vizio R65 series display. The red and orange colors were unlike anything I have seen on an SDR display.

Nearly a decade of R&D at Dolby has gone into Dolby Vision. While Dolby Vision has competition in the HDR war from Technicolor and Philips (Prime) and from the BBC and NHK (Hybrid Log-Gamma, or HLG), it has an advantage in that several TV models from both LG and Vizio are already Dolby Vision compatible. If Dolby’s continued investment in R&D for solving the issues related to live broadcast results in a solution that broadcasters can successfully implement, Dolby Vision may become the de facto standard for HDR video production.

Jonathan S. Abrams is the Chief Technical Engineer at Nutmeg, a creative marketing, production and post resource.


NAB 2016: VR/AR/MR and light field technology impressed

By Greg Ciaccio

The NAB 2016 schedule included its usual share of evolutionary developments, which are truly exciting (HDR, cloud hosting/rendering, etc.). One, however, was a game changer with reach far beyond media and entertainment.

This year’s NAB floor plan featured a Virtual Reality Pavilion in the North Hall. In addition, the ETC (USC’s Entertainment Technology Center) held a Virtual Reality Summit that featured many great panel discussions and opened quite a few minds. At least that’s what I gathered from the standing-room-only crowds that filled the suite. The ETC’s Ken Williams and Erik Weaver, among others, should be credited for delivering quite a program. While VR itself is not a new development, the availability of relatively inexpensive viewers (with Google Cardboard the most accessible) will put VR in the hands of practically everyone.

Programs included discussions on where VR/AR (Augmented Reality) and now MR (Mixed Reality) are heading, business cases and, not to be forgotten, audio. Keep in mind that with headset VR experiences, multi-channel directional sound must be perceivable with just our two ears.

The panels included experts from Dolby, DTS, Nokia, NextVR, Fox and CNN. In fact, Juan Santillian from Vantage.tv mentioned that Coachella was streaming live in VR. Concerts and other live events have a fixed audience size, and many fans can’t attend because of cost or sold-out shows. VR can allow a much more intimate and immersive experience than being almost anywhere but onstage.

One example, from Fox Sports’ Michael Davies, involved two friends in different cities virtually attending a football game in a third city. They sat next to each other and chatted during the game, with their audio correctly mapped to their seats. There are no limits to applications for VR/AR/MR, and, by all accounts, once you experience it, there is no doubt that this tech is here to stay.

I’ve heard many times this year that mobile will be the monetary driver for wide adoption of VR. Halsey Minor of Voxelus estimates that 85 percent of VR usage will be via a mobile device. Given that far more photos and videos are shot on our phones than on dedicated cameras, this is not surprising. Some of the latest mobile phones are not only fast, with displays offering high dynamic range and wide color gamut, but also feature high-end audio processing from Dolby and others. Plus, our reliance on our phones ensures that you’ll never forget to bring one with you.

Light Field Imaging
On both Sunday and Tuesday of NAB 2016, programs were devoted to light field imaging. I was already familiar with this truly revolutionary tech, and learned about Lytro, Inc. a few years ago from Internet ads for an early consumer camera. I was intrigued with the idea of controlling focus after shooting. I visited www.lytro.com and was impressed, but the resolution was low, so, for me, this was mainly a proof of concept. Fast forward three years, and Lytro now has a cinema camera!

Jon Karafin (pictured right), Lytro’s head of Light Field Imaging, not only unveiled the camera onstage, but debuted their short Life, produced in association with The Virtual Reality Company (VRC). Life takes us through a man’s life and is told with no dialog, letting us take in the moving images without distraction. Jon then took us through all the picture aspects using Nuke plug-ins, and minds started blowing. The short is directed by Academy Award-winner Robert Stromberg, and shot by veteran cinematographer David Stump, who is chief imaging scientist at VRC.

Many of us are familiar with camera raw capture and know that ISO, color temperature and other picture aspects can be changed after shooting. This has proven to be very valuable. However, things like focus, f-stop, shutter angle and many other parameters can now be changed as well, thanks to light field technology; think of it as an X-ray compared to an MRI. To keep a complicated technology relatively simple: sensors in the camera capture light not only in X and Y space, but also in two additional “angular” directions, forming what Lytro calls 4D space. The result is accurate depth mapping, which opens up so many options for filmmakers.
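To make the “4D” idea concrete, here is a toy Python sketch (my own illustration, emphatically not Lytro’s pipeline) of a light field stored as L(u, v, x, y) and the classic shift-and-add synthetic refocus that such data makes possible after the fact:

```python
import numpy as np

# Toy illustration: a light field sampled in two spatial directions (x, y)
# and two angular directions (u, v), i.e. "4D" data. Synthetic refocusing
# is the classic shift-and-add: shift each angular view in proportion to
# its (u, v) offset and average the results.

def refocus(lightfield, alpha):
    """lightfield: array of shape (U, V, H, W); alpha selects the focal plane."""
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            # np.roll keeps the example short; a real implementation would use
            # sub-pixel interpolation and proper border handling.
            out += np.roll(lightfield[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Random data standing in for a captured light field.
lf = np.random.rand(5, 5, 64, 64)
near_focus = refocus(lf, alpha=1.0)    # one focal plane
far_focus = refocus(lf, alpha=-1.0)    # another, chosen after "shooting"
```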


Lytro Cinema Camera

For those who may think that this opens up too many options in post, all parameters can be locked so only those who are granted access can make edits. Some of the parameters that can be changed in post include: Focus, F-Stop, Depth of Field, Shutter Speed, Camera Position, Shutter Angle, Shutter Blade Count, Aperture Aspect Ratio and Fine Control of Depth (for mattes/comps).

Yes, this camera generates a lot of data. The good news is that you can make changes anywhere with an Internet connection, thanks to proxy mode in Nuke and processing rendered in the cloud. Jon demoed this, and images were quickly processed using Google’s cloud.

The camera itself is very large, and Lytro knows it will need to reduce the size (from around seven feet long) to a more maneuverable form factor. However, this is a huge step in proving that a light field cinema camera and a powerful, manageable workflow are not only possible, but will no doubt prove valuable to filmmakers wanting the power and control offered by light field cinematography.

Greg Ciaccio is a technologist focused primarily on finding new technology and workflow solutions for motion picture and television clients. Ciaccio has served in technical management roles in the Creative Services divisions of both Deluxe and Technicolor.


Dolby Audio at NAB 2016

By Jonathan Abrams

Dolby, founded over 50 years ago as an audio company, is elevating the experience of watching movies and TV content through new technologies in audio and video, the latter of which is a relatively new area for the company’s offerings. This is being done with Dolby AC-4 and Dolby Atmos for audio, and Dolby Vision for video. In this post, the focus will be on Dolby’s audio technologies.

Why would Dolby create AC-4? Dolby AC-3 is over 20 years old, and as a function of its age, it does not do new things well. What are those new things and how will Dolby AC-4 elevate your audio experience?

First, let’s define some acronyms, as they are part of the past and present of Dolby audio in broadcasting. OTA stands for Over The Air, as in what you can receive with an antenna. ATSC stands for Advanced Television Systems Committee, a US-based organization that standardized HDTV (ATSC 1.0) 20 years ago and is working to standardize Ultra HDTV broadcasts as ATSC 3.0. Ultra HD is referred to as UHD.

Now, some math. Dolby AC-3, which is used with ATSC 1.0, uses up to 384 kbps for 5.1 audio. Dolby AC-4 needs only 128 kbps for 5.1 audio. That increased coding efficiency, along with a maximum bit rate of 640 kbps, leaves 512 kbps to work with. What can be done with that extra 512 kbps?
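As a quick back-of-the-envelope check using only the figures quoted above:

```python
# Bit-rate budget using the numbers cited in this article (kbps).
max_rate = 640    # maximum Dolby AC-4 bit rate
ac3_5_1 = 384     # Dolby AC-3 at 5.1 (ATSC 1.0)
ac4_5_1 = 128     # Dolby AC-4 at 5.1

headroom = max_rate - ac4_5_1
print(f"AC-4 leaves {headroom} kbps of headroom")              # 512 kbps
print(f"Savings vs. AC-3 for the same 5.1 mix: {ac3_5_1 - ac4_5_1} kbps")
```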

If you are watching sporting events, Dolby AC-4 allows broadcasters to provide you with the option to select which audio stream you are listening to. You can choose which team’s audio broadcast to listen to, listen to another language, hear what is happening on the field of play, or listen to the audio description of what is happening. This could be applicable to other types of broadcasts, though the demos I have heard, including one at this year’s NAB Show, have all been for sporting events.

Dolby AC-4 allows the viewer to select from three types of dialog enhancement: none, low and high. The dialog enhancement processing is done at the encoder, where it runs a sophisticated dialog identification algorithm and then creates a parametric description that is included as metadata in the Dolby AC-4 bit stream.

What if I told you that, after implementing everything described above in a Dolby AC-4 bit stream, there were still bits available for other audio content? It is true, and Dolby AC-4 is what allows Dolby Atmos, a next-generation, rich and complex object audio system, to live inside ATSC 3.0 audio streams in the US. At my NAB demo, I heard a clip of Mad Max: Fury Road, which was mixed in Dolby Atmos, from a Yamaha sound bar. I perceived elements of the mix coming from places other than the screen, even though the sound bar was where all of the sound waves originated. Whatever is being done with psychoacoustics to make surround sound from a sound bar possible is convincing.

The advancements in both the coding and presentation of audio have applications beyond broadcasting. The next challenge Dolby is taking on is mobile. Dolby’s audio codecs are being licensed for mobile applications, which allows them to be delivered via apps, which in turn removes the dependency on the mobile device’s OS. I heard a Dolby Atmos clip from a Samsung mobile device. While the device had to be centered in front of me to perceive surround sound, I did perceive it.

Years of R&D at Dolby have yielded efficiencies in coding and new ways of presenting audio that will elevate your experience, from home theater to mobile and, once broadcasters adopt ATSC 3.0, Ultra HDTV.

Check out my coverage of Dolby’s Dolby Vision offerings at NAB as well.

Jonathan S. Abrams is the Chief Technical Engineer at Nutmeg, a creative marketing, production and post resource.


Sony at NAB with new 4K OLED monitor, 4K, 8X Ultra HFR camera

At last year’s NAB, Sony introduced its first 4K OLED reference monitor for critical viewing — the BVM-X300. This year, Sony added a new monitor, the PVM-X550, a 55-inch OLED panel with 12-bit signal processing, perfect for client viewing. The Trimaster EL PVM-X550 supports HDR through various electro-optical transfer functions (EOTFs), such as S-Log3, SMPTE ST-2084 and Hybrid Log-Gamma, covering applications for both cinematography and broadcast. The PVM-X550 is a quad-view OLED monitor, which allows customized individual display settings across four distinct HD views. It is equipped with the same signal-processing engine as the BVM-X300, providing a 12-bit output signal for picture accuracy and consistency. It also supports industry-standard color spaces, including the wider ITU-R BT.2020 for Ultra High Definition.

HFR Camera
At NAB 2016, Sony displayed its newest camera system, the HDC-4800, which combines 4K resolution with enhanced high frame rate capabilities, capturing up to 8X at 4K and 16X in full HD. “This camera system can do a lot of everything — very high frame rate, very high resolution,” said Rob Willox, marketing manager for content creation, Sony Electronics.

The HDC-4800 uses a new Super 35mm 4K CMOS sensor, supporting a wide color space (both BT.2020 and BT.709), and provides an industry-standard PL lens mount, giving the system the capability of using the highest quality cinematic lenses for clear and crisp high-resolution images. The new sensor brings the system into the cinematic family of RED and Alexa, making it well suited as a competitor to today’s modern, high-end digital cinema solutions.

An added feature of the HDC-4800 is that it is specifically designed to integrate with Sony’s companion system, the HDC-4300, a 2/3-inch image sensor 4K/HD camera. Using matching colorimetry and deep-toolset camera adjustments, and with the ability to take advantage of existing build-up kits, remote control panels and master setup units, the two cameras can blend seamlessly.

Archive
Sony also showed the second generation of its Optical Disc Archive System, which adopts new high-capacity optical media rated for a 100-year shelf life, with double the transfer rate and double the capacity of a single cartridge, now 3.3TB. The Generation 2 Optical Disc Archive System also adds an 8-channel optical drive unit, doubling the read/write speeds of the previous generation and helping to meet the data needs of realtime 4K production.


Scale Logic at NAB with new Genesis HyperMDC

At the 2016 NAB Show, Scale Logic Inc. (SLI) demoed its Genesis HyperMDC, a high-performance metadata cluster supporting the HyperFS file system. The HyperMDC addresses enterprise storage needs by simplifying the implementation and administration of HyperFS.

The new Genesis HyperMDC features a purpose-built scale-out NAS and SAN metadata controller with additional storage options and/or open-architecture platform support. The HyperMDC is designed for media and entertainment workflows in post production, sports broadcast, visual effects and other specialized areas.

HyperMDC offers two configurations: the 100S and the 200D. As a metadata controller, the 100S provides high performance while allowing users to make their own storage choices. The 200D takes the performance of the 100S and adds high availability, with fully redundant, low-latency metadata leveraging dual-controller RAID and bonded network and fabric connections. Either choice provides the best in metadata controllers for HyperFS.

eMAM has worked with SLI’s application engineering team to qualify a very specific customer requirement that included SLI’s HyperMDC, MXF Server and SGL Archive Management, as well as the Empress Media Asset Manager. Empress has engaged with the SLI team for years, and its ability to use SLI’s interoperability lab enables it to qualify storage and networking solutions, as well as the interactions of the entire deployment with many creative applications. Hitachi Data Systems also has partnered with SLI, and the companies have jointly deployed shared storage solutions worldwide.


NAB: The making of Jon Favreau’s ‘The Jungle Book’

By Bob Hoffman

While crowds lined up above the south hall at NAB to experience the unveiling of the new Lytro camera, across the hall a packed theater conference room geeked out as the curtain was pulled back slightly during a panel on the making of director Jon Favreau’s cinematic wonder, The Jungle Book. Moderated by ICG Magazine editor David Geffner, Oscar-winning VFX supervisor Rob Legato, ASC, along with Jungle Book producer Brigham Taylor and Technicolor master colorist Mike Sowa, enchanted the packed room with stories of the making and finishing of the hit film.

Legato first started developing his concepts for “virtual production” techniques on Martin Scorsese’s The Aviator and, shortly thereafter, with James Cameron on his hit Avatar. During the panel, Legato took the audience through a set of short demo clips of various scenes in the film while providing background on the production processes used by cinematographer Bill Pope, ASC, and Favreau to capture the live-action component of the film. Legato pointedly explained that his process is informed by a very traditional analog approach. His thinking is based on a commitment to giving the filmmaking team tools and methodologies that allow them to work within their own comfort zones.

They may be working in a virtual environment, but Favreau’s wonderful touch is brilliantly demonstrated by the performance of 12-year-old Neel Sethi in his theatrical debut. Geffner noted more than once that “the emotional stakes are so well done you get involved emotionally” — without any notion of the technical complexity underlying the narrative. Another area noted by Legato and Sowa was the myriad of theatrical HDR deliverables for The Jungle Book, including up to 14 foot-lamberts for the 3D presentation. This film, and its presentation, was another indicator that HDR is a differentiator that audiences are clamoring for.

Bob Hoffman works at Technicolor in Hollywood.


NAB: Autodesk buys Solid Angle, updates products

At the NAB show, Autodesk announced that it has acquired Solid Angle, developer of Arnold, an advanced, ray-tracing image renderer for high-quality 3D animation and visual effects creation used in film, television and advertising worldwide. Arnold has been used on Academy Award-winning films such as Ex Machina and The Martian, as well as Emmy Award-winning series Game of Thrones, among other popular features, TV shows and commercials.

As part of Autodesk, Solid Angle’s development team will continue to evolve Arnold, working in close collaboration with its user community. Arnold will remain available as a standalone renderer for both Autodesk products and third-party applications including Houdini, Katana, and Cinema 4D on Linux, Mac OS X and Windows. Both Autodesk 3ds Max and Autodesk Maya will also continue to support other third-party renderers.

“We’re constantly looking out for promising technologies that help artists boost creativity and productivity,” shared Chris Bradshaw, senior VP, Autodesk Media & Entertainment. “Efficient rendering is increasingly critical for 3D content creation and acquiring Solid Angle will allow us to help customers better tackle this computationally intensive part of the creative process. Together, we can improve rendering workflows within our products as well as accelerate the development of new rendering solutions that tap into the full potential of the cloud, helping all studios scale production.”

“Autodesk shares our passion for numerical methods and computational performance and our desire to simplify the rendering pipeline, so artists can create top quality visuals more easily,” said Solid Angle Founder Marcos Fajardo. “With Autodesk, we’ll be able to accelerate development as well as scale our marketing, sales and support operations for Arnold to better meet the needs of our growing user base. Working side-by-side, we can solve production challenges in rendering and beyond.”

Arnold pricing and packaging is unchanged and Autodesk will continue to offer perpetual licenses of Arnold. Customers should continue to purchase Arnold through their usual Solid Angle channels.

Product Updates
In other news, Autodesk updated three of its products.

Autodesk Flame 2017:
– Camera FX scene-based tools enable the creation of sophisticated 3D composites in Action. Powered by algorithms from the Stingray game engine, artists can use these highly interactive VFX tools for ambient occlusion, realistic reflections and depth of field without slowing interactivity.
– Connected color workflow introduces a new level of integration between VFX and proven color grading. This new workflow brings color grading information from Autodesk Lustre directly into Flame’s node-based compositing environment and maintains a live connection so that composites can be rendered and seen in context in Lustre (our main image). This collaborative workflow allows artists to rapidly finish high-end projects by moving seamlessly between compositing, VFX and look development tools.
– Color management enhancements to Flame, Autodesk Flare and Autodesk Flame Assist allow users to quickly standardize the way a source’s colorspace is identified and processed.
– User-requested enhancements include improvements to desktop reels, conform and timeline workflow, batch, media panel and the UI.

Autodesk Maya 2016 Extension 2:
Extension 2 adds new capabilities for creating 3D motion graphics, a new rendering workflow and tools that allow artists to create and animate characters faster and more easily than ever.
– New motion graphics tools bring a procedural, node-based 3D design workflow directly into Maya. Combining powerful motion graphics tools with Maya’s deep creative toolset allows artists to quickly create sophisticated and unique 3D motion graphics such as futuristic UIs, expressive text, and organic animation and effects.
– Updated render management makes segmenting your scenes into render layers easier and faster, giving artists more control.
– Character creation workflows include a new quick rig tool and shape-authoring enhancements that allow artists to create, rig and animate characters faster. Additional updates include: improvements to symmetry and poly modeling, UV editing, animation performance, rigging, the Bifrost workflow and XGen; a content browser; deep adaptive fluid simulation and high-accuracy viscosity in Bifrost; and XGen hair cards.

Autodesk 3ds Max 2017:
Freeing up more time for creativity, 3ds Max 2017 offers artists a fresh new look as well as modeling, animation and rendering enhancements, including:
– A new UI with support for high DPI displays expands the array of monitors and laptops users may run the software on while correctly applying Windows display scaling.
– Autodesk Raytracer Renderer (ART), a fast, physically based renderer, enables the creation of photoreal imagery and videos.
– 3ds Max asset library, available via the Autodesk Exchange App Store, offers quick access to model libraries; simply search assets and drag and drop them into a scene.
– Additional updates include fast form hard surfaces; UV mapping, object tool and animation productivity enhancements; a scene converter for moving from one renderer to another or to realtime engines; and tighter pipeline integration via an improved Python/.NET toolset.


NAB: Critique upped to version 4, using AWS for cloud

From the minds at LA-based post house DigitalFilm Tree comes a new version of Critique, its cloud-collaboration software. Critique, which is now in v.4, is already used on shows such as Modern Family, The Simpsons and NCIS: Los Angeles. In addition to many new features and security controls in Critique 4, this is the first time the app has been deployed on AWS.

Critique’s new relationship with AWS is key to version 4, says Guillaume Aubuchon, CIO of Critique. “AWS is not only the largest cloud provider, but they are the cloud provider of choice in the M&E space. Our infrastructure shift to AWS afforded us the ability to architect the software to leverage the range of services in the AWS cloud platform. It allowed us to build Critique 4 from scratch in a matter of mere months.”

Critique 4 is a secure digital media asset management (MAM) platform with extensive features to support creative processes and production workflow for both the media and entertainment space as well as enterprise. Built to be extremely easy to use, Critique facilitates collaboration through realtime chat, live annotations, and secure sharing over the Internet to deliver productions on time and on budget. Realtime chat and drawing annotations are viewable across the Web and iOS — they also work with the new Apple Pencil for iPad Pro.

Designed to improve workflow, the software facilitates every step from protected dailies screening to VFX workflows to post to distribution while capitalizing on enterprise-level security to protect valuable assets.

Critique 4 was born of the minds of its executive team: Aubuchon, a veteran of the production space who has worked on such projects as Her, NCIS: LA and Angie Tribeca, and Chris Chen, an expert in production streaming and the former CTO of DAX. With the ability to use DigitalFilm Tree itself as a beta test site, Critique is built to ensure it works in real-world media environments.

One of the exciting new features of Critique 4 is its ability to index Amazon Simple Storage Service (Amazon S3), allowing companies to manage their own content inside Critique’s award-winning interface. It also offers high-performance cloud MAM for simultaneous video and document management: users can collaborate with Critique’s review, approval and annotation workflows not only for video, but also for production documents including scripts, graphics and still images.
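As a loose sketch of what indexing S3 content can look like at the API level, here is a generic boto3 example; it is not Critique’s code, and the bucket name and prefix are hypothetical:

```python
import boto3

# Generic sketch of indexing objects in an S3 bucket with boto3.
# "my-media-bucket" and "dailies/" are hypothetical placeholders.
s3 = boto3.client("s3")
index = []

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-media-bucket", Prefix="dailies/"):
    for obj in page.get("Contents", []):
        index.append({
            "key": obj["Key"],
            "size": obj["Size"],
            "last_modified": obj["LastModified"].isoformat(),
        })

# A MAM would then attach its own metadata (project, episode, permissions)
# to each entry and expose it through search, review and approval tools.
print(f"Indexed {len(index)} objects")
```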

“Digital Rights Management (DRM) protection is rarely used, if at all, for unreleased content, which is arguably where it is needed the most,” notes Chen. “Critique was designed to leverage DRM invisibly throughout its video distribution system on desktop, web, and mobile environments. This allows Critique to break through the legacy walled-garden approach, allowing a new level of flexibility in collaboration while maintaining security. But we do it in such a way that the users don’t even know it’s there.”

The ability to share assets in this way expands its mobility, and Critique is available via web, phones, tablets and Apple TV. The video service is backed by a true CDN running multi-bit-rate video to prevent glitches on any platform. “Users can take advantage of Critique anywhere — in their office, living room, the subway or even on a plane,” explains Chen. “And it will be true to the original media.”

Other highlights of Critique 4 include: storage, archiving and management of Raw material; automatic transcoding of Raw material into a proxy format for viewing; granular permissions on files, folders, and projects; easy-to-manage sharing functions for users outside the system with the ability to time-limit and revoke/extend individual permissions; customizable watermarking on videos.

While Critique was born in the creative and operations side of the media and entertainment market, it is extending to enterprise, small to medium-size businesses, publishing, education and government/military sectors.

This latest version of Critique is available now for a free 30-day trial (AWS usage fees apply). Pricing is extremely competitive with 10, 20, 50 and 100 user levels starting as low as $39 per user. Enterprise level contracts are available for larger projects and companies with multiple projects. The fee includes unlimited streaming of current content and 24/7 white-glove tech support. AppleTV, Apple iPad and iPhone apps are also included. For a nominal fee, users can add DRM, high-resolution cloud transcode and storage for camera raw and mezzanine files.

NAB: AMD intros FirePro workstation graphics card, FireRender plug-in for 3ds Max

At the 2016 NAB Show, AMD introduced the AMD FirePro W9100 32GB, a workstation graphics card with 32GB of memory to support large-asset workflows in creative applications. The company also introduced the AMD FireRender plug-in for Autodesk 3ds Max (shown), which enables VR storytellers to use enhanced 4K workflows and photorealistic rendering functionality.

Throughout the show, StudioXperience’s AMD FirePro GPU Zone is featuring leading applications in demos of efficient content creation workloads with high visual quality, application responsiveness and compute performance. The zone showcases solutions from Adobe, Apple, Autodesk, Avid, Blackmagic Design, Dell, HP and Rhino, offering attendees a range of hands-on workflow experiences powered by AMD FirePro professional graphics. Demos include a VR production workflow, computer-aided engineering and visualization, and 4K workflows, among others.