Tag Archives: NAB 2016

What does Fraunhofer Digital Media Alliance do? A lot!

By Jonathan Abrams

While the vast majority of the companies with exhibit space at NAB are for-profit, there is one non-profit that stands out. With a history of providing ubiquitous technology to the masses since 1949, Fraunhofer focuses on applied research and developments that end up — at some point in the near future — as practical products or ready-for-market technology.

One-third of Fraunhofer's funding supports basic research; the remaining two-thirds comes from industry projects commissioned directly by private companies. Their business model is built on contract research and the licensing of technologies. They have sold first prototypes and work with distributors, though Fraunhofer always keeps the rights to continue development.

What projects were they showcasing at NAB 2016 that have real-world applications in the near future? You may have heard about the Lytro camera. Fraunhofer Digital Media Alliance member Fraunhofer IIS has been taking a camera-agnostic approach to its work with light-field technology. Their goal is to make this technology available for many different camera set-ups, and they were proving it with a demo of their multi-cam light-field plug-in for The Foundry’s Nuke. After capturing a light-field, users can perform framing correction and relighting, including changes to angles and depth, and the creation of point clouds.

The Nuke plug-in (see our main image) allows the user to create virtual lighting (relighting) and interactive lighting. Light-field data also allows for depth estimation (called depth maps), which is useful for mattes and secondary color correction. Similar to Lytro, focus pulling can be performed with this light-field plug-in. Why Nuke? That is what their users requested. Even though Nuke is an OFX host, the Fraunhofer IIS light-field plug-in only works within Nuke. As for running the plug-in on other platforms, I was told that “porting to Mac should be an easy task.” Hopefully that is an accurate statement, though we will have to wait to find out.

DCP
Fraunhofer IIS has its hand in other parts of production and post as well. The last two steps of most projects are the creation of deliverables and their delivery. If you need to create and deliver a DCP (Digital Cinema Package), then easyDCP may be for you.

This project began in 2008, when creating a DCP was far less familiar to most users than it is today, and correctly implementing the specifications required deep expertise. Small- to medium-sized post companies in particular benefit from the easy-to-use easyDCP suite. The engineers of Fraunhofer IIS also worked on the DCI specifications for digital cinema, so they are experienced in integrating all the important DCP features into this software.

The demo I saw indicated that the JPEG2000 encode runs as fast as 108fps! In 2013, Fraunhofer partnered with both Blackmagic and Quantel to make this software available to the users of those respective finishing suites. The demo used a Final Cut Pro X project file and the Creator+ version, which supports encryption. Avid Media Composer users will have to export their sequence and import it into Resolve to use easyDCP Creator. Amazingly, this software works as far back as Mac OS X Leopard. IMF creation and playback can also be done with the easyDCP software suite.

VR/360
VR and 360-degree video were prominent at NAB, and the institutes of the Fraunhofer Digital Media Alliance are involved in this as well, having worked on live streaming and surround sound as part of a project with the Berlin Symphony Orchestra.

Fraunhofer had a VR demo pod at the ATSC 3.0 Consumer Experience (in South Hall Upper) — I tried it and the sound did track with my head movement. Speaking of ATSC 3.0, it calls for an immersive audio codec. Each country or geographic region that adopts ATSC 3.0 can choose to implement either Dolby AC-4 or MPEG-H, the latter of which is the result of research and development by Fraunhofer, Technicolor and Qualcomm. South Korea announced earlier this year that they will begin ATSC 3.0 (UHDTV) broadcasting in February 2017 using the MPEG-H audio codec.

From what you see to what you hear, from post to delivery, the Fraunhofer Digital Media Alliance has been involved in the process.

Jonathan S. Abrams is the Chief Technical Engineer at Nutmeg, a creative marketing, production and post resource.

UHD Alliance’s Victor Matsuda: updates from NAB 2016

Victor Matsuda from the UHD Alliance was at NAB 2016. The Alliance was formed about 15 months ago as 4K UHD products began exploding into the market. The goal of the Alliance was to establish certifications for these new products and for content. All of this is to ensure a quality experience for consumers, who will ultimately drive 4K/UHD adoption throughout the market.

Watch our video with Matsuda to find out more.

NAB 2016 from an EP’s perspective

By Tara Holmes

Almost two weeks ago, I found myself at NAB for the first time. I am the executive producer of color and finishing at Nice Shoes, a post production studio in New York City. I am not an engineer and I am not an artist, so why would an EP go to NAB? I went because one of my main goals for 2016 is to make sure the studio remains at the forefront of technology. While I feel that our engineering team and artists represent us well in that respect, I wanted to make sure that our producers and I were fully educated on these emerging technologies.

One of our first priorities for NAB was to meet with top monitor manufacturers in hopes of landing on UHD HDR monitors that would meet our standards for professional client viewing. We came to the conclusion that the industry is not there yet, and we have more research to do before we upgrade our studio viewing environments.

Everyone with me was in agreement: the monitors aren’t where they need to be. Most output only around 400-800 nits and exhibit luminance and contrast issues. None of this should stop the process of coloring for HDR. As the colorist’s master monitor, the Sony BVM-X300 OLED, which we are currently using, seems to be the ideal choice, as you can still work in traditional Rec 709 as well as Rec 2020 for HDR.

After checking out some monitors, we headed to the FilmLight booth to go over the 5.0 upgrades to Baselight. Our colorist Ron Sudul, along with Nice Shoes Creative Studio VFX supervisor Adrian Winter, sat with me and the FilmLight reps to discuss the upgrades, which included incredible new isolation tracking capabilities. These upgrades will reinvent what can be achieved in the color suite: from realtime comps to retouching done in color. The possibilities are exciting.

I also spent time learning about the upgrades to FilmLight’s Flip, their on-set color hardware. The Flip allows you to develop your color look on set, apply it during your edit process (with the Baselight plug-in for Avid) and refine it in final color, all without affecting your RAW files. In addition to the Flip, they have developed software that supports on-set look development and grading, called Prelight. I asked if these new technologies could enable us to do high-end things like sky replacements on set, and was told that the hardware within the Flip very well could.

We also visited our friends at DFT, the manufacturers of the Scanity film scanner, to catch up and discuss the business of archiving. With Scanity, Nice Shoes can scan at 4K where other scanners only reach 2K resolution. This is a vital tool not only in preserving past materials, but also in future-proofing for emerging formats when archiving film scans.

VR
On Sunday evening before the exhibits opened, we attended a panel on VR hosted by The Foundry. At this event we got to experience a few of the most talked-about VR projects, including Defrost, one of the first narrative VR films, from Grease director Randal Kleiser. Kleiser was on the panel along with moderator Morris May (CEO/founder, Specular Theory), Bryn Mooser (co-founder, RYOT), Tim Dillon (executive producer, MPC) and Jake Black (head of VR, Create Advertising).

The Foundry’s VR panel.

The panel inspired me to delve deeper into the VR world, and on Wednesday I spent most of my last day exploring the Virtual & Augmented Reality Pavilion. In addition to seeing the newest VR camera rigs, experiencing a live VR feed and demoing the Samsung Gear, I explored viewing options for the color workflow. Some people I spoke to said that multiple Oculus set-ups attached to a single feed was the way to go for color work, but another option we explored in a very preliminary way was a “dome,” which offers a focused 180-degree view so that everyone involved can comment on the same section of a VR scene. This would ensure that everyone is experiencing and viewing the same thing at the same time.

HDR Workflow
Another panel we attended was about HDR workflows. Nice Shoes has already had the opportunity to work on HDR material and has begun to develop workflows for this emerging medium. Most HDR deliverables are for episodic and long-form content for companies like Netflix and Hulu. It may be some time before commercial clients request an HDR deliverable, but the workflows will be much the same, so the development being done now is extremely valuable.

My biggest takeaway was that there are still no set standards. There’s Dolby Vision vs. HDR10 vs. PQ vs. others. But it appears that everyone agrees standards are not needed right now: we need to get tools into the hands of the artists and figure out what works best, and standards will come out of that. The good news is that we appear to be future-proofed against the standard changing. For the most part, every camera we are shooting on is capturing for HDR, so should standards change — say, from 1,000 nits to 10,000 nits — the footage and process are still there to go back in and color for the new request.

Summing Up
I truly believe my time spent at NAB has prepared me for the myriad of questions that will be put forth throughout the year and will help us develop our workflows to evolve the creative process of post. I’ll be sure to be there again next year in order to prepare myself for the questions of 2017 and beyond.

Our Main Image: The view walking into the South Hall Lower at the LVCC.

Talking storage with LaCie at NAB

By Isaac Spedding

As I power-walked my way through the NAB show floor, carefully avoiding eye contact with hopeful booth minders, my mind was trying to come up with fancy questions to ask the team at LaCie that would cement my knowledge of storage solutions and justify my press badge. After drawing a blank, I decided to just ask what I had always wanted to know about storage companies in general: How reliable are your drives and how do you prove it? Why is there a blue bubble on your enclosures? Why are drives still so damn heavy?

Fortunately, I met with two members of the LaCie team, who kindly answered my tough questions with valuable information and great stories. I should note that just prior to this NAB trip I had submitted an RMA for 10 ADATA USB 3.0 drives, as all the connectors on them had become loose and fallen out of, or into, the single-piece enclosure. So, as you can imagine, at that moment in time I was not exactly the biggest fan of hard drive companies in general.

“We are never going to tell you (a drive) will never fail,” said Clement Barberis, marketing manager for LaCie. “We tell people to keep multiple copies. It doesn’t matter how, just copies. It’s not about losing your drive; it’s about losing your data.”

LaCie offers a three- to five-year warranty on all its products and has several services available, including fast replacement and data recovery. Connectors and drives are the two main points of failure for any portable drive product.

LaCie’s Clement Barberis and Kristin MacRostie.

Owned by Seagate, LaCie has a very close connection with that team and can select drives based on what each product needs. Design, development and target user all have an impact on drive and connection selection. Importantly, LaCie decides on connection options not by what is newest, but by what works best with the internal drive speed.

Their brand-new 12-bay enclosure, the LaCie 12big Thunderbolt 3 (our main image), takes advantage of Thunderbolt 3 speeds, and with a 96TB capacity (around 100 hours of uncompressed 4K), the system can transfer around 2600MB/s (yes, megabytes, not megabits). It is targeted at small production houses shooting high-resolution material.

Why So Heavy?
After Barberis showed me the new LaCie 12big, I asked why the form factor and weight had not been redesigned after all these years. I mean, 96TB is great and all, but it’s not light — at 17.6kg (38.9 pounds) it’s not easy to take on a plane. Currently, the largest single drive available is 8TB, with six platters inside the traditional form factor. Each additional platter increases a drive’s weight along with its capacity, and that added capacity means fewer drives, and so a smaller form factor, for a given amount of storage. That’s why drive arrays have been staying the same size while gaining weight and storage capacity. So your sleek drive will be getting heavier.

LaCie produces several ranges of hard drives with different designs. This is most visually noticeable in LaCie’s Rugged drive series, which features bright orange bumpers. Other products sport a “Porsche-like” design and the blue LaCie bubble. If you are like me, you might be curious how this look came about.

According to Kristin MacRostie, PR manager for LaCie, “The company founder, Philippe Spruch, wasn’t happy with the design of the products LaCie was putting out 25 years ago — in his words, they were ‘geeky and industrial.’ So, Spruch took a hard drive and a sticky note and he wrote, ‘Our hard drives look like shit, please help,’ and messengered it over to (designer) Philippe Starck’s office in Paris. Starck called Spruch right away.”

The sleek design started with Philippe Starck; Neil Poulton, once an apprentice to Starck, was later brought on to design the drives we see today. The drive designs target their intended consumers, with the “Porsche design” aligning itself with Apple users.

Hearing the story behind LaCie’s design choices, the recommendation to keep multiple copies rather than rely on a single drive, and the reasoning behind each product’s design convinced me that LaCie is producing drive solutions built for reliability and usability. Although not the cheapest option on the market today, LaCie justifies this with solid design and sound logic behind its choices of components, connectors and cost. Besides, at the end of the day, your data is the most important thing, and you shouldn’t be keeping it on the cheapest possible drive you found at Best Buy.

Isaac Spedding is a New Zealand-based creative technical director, camera operator and editor. You can follow him on Twitter @Isaacspedding.

Fusion launches at NAB 2016 with 4K OLED display

Industry veterans Carl J. Dempsey and Steve Farmer introduced their new company, Fusion, at NAB 2016. They also unveiled their first product — a 55-inch OLED 4K reference display system called the ORD-55.

The ORD-55 features independent-processing quad-mode operation (IPQ), where four individual processors provide independent control of all channels, and uses a single-link 12G input. In quad mode, it provides four independent 27.5-inch FHD displays. The display can also be configured to show one large 4K picture with three smaller preview panes in FHD. The system features a deep black level, a super-wide viewing angle of 178 degrees, a 10-microsecond response time, a 100,000:1 contrast ratio, an ultra-wide color gamut with 1.07 billion colors and 12-bit color processing.

Dempsey and Farmer bring a ton of experience to Fusion. Dempsey, a 25-year industry vet, was most recently president and CEO of Wohler Technologies. Farmer brings 22 years in the broadcast industry to the new company, including a stint as director of engineering at Wohler. Starting as a design engineer, he then took on senior management roles in both engineering and product management.

In their new venture, Dempsey will serve as Fusion’s CEO, while Farmer will hold the CTO post.

Quick Review: Cartoni Focus 8 tripod

By Isaac Spedding

As I had travelled to Las Vegas from New Zealand for one of the world’s largest film and video conferences, the NAB Show, I hoped that it would be relatively easy to borrow a tripod for a shoot I was doing on the Vegas Strip.

The shoot was two one-hour static videos in the same location — one at night and one in the day. The footage would be played on a loop inside two large doors in a glade as part of the Christchurch Botanic D’Lights exhibit in New Zealand. I was one of several artists who would be exhibiting works in the botanic gardens based on the theme “Lost Vegas.”

Fortunately, postPerspective’s Randi Altman knew of a team who could help, and I was given the Cartoni Focus 8 for 24 hours. The Focus 8 keeps Cartoni’s core values of usability and reliability. I was instantly impressed by two features: the easy clip for the leg extension, which audibly snaps open and closed, and the robust base plate that comes with the system. There is no way you are going to lose that screw in there. What’s great is that the Focus 8 head also accommodates some Manfrotto and Sachtler base plates!

I had to walk quite a bit of the Vegas Strip to capture what I needed, and the tripod was easy to carry and set up. The Sony A6000 I was shooting with was too light to really test the Focus 8 under weight, but the core functionality was retained even with a super light camera, allowing me to easily balance and lock the camera in place.

I had nothing pinch me, nothing “kind of work” and nothing that annoyed me about this tripod. Once our current line of tripods reaches the end of its life, I think the Cartoni range will be on my wish list. Although plastic, and at the lower end of Cartoni’s range, the Focus 8 cuts no corners and is a great solution for anyone looking for a sub-$1,000 ($845.75, to be exact) set-up.

A special thanks to The Studio-B&H and Manios Digital for making my shoot happen.

Isaac Spedding is a New Zealand-based creative technical director, camera operator and editor. You can follow him on Twitter @Isaacspedding.

Digging Deeper: Dolby Vision at NAB 2016

By Jonathan Abrams

Dolby, founded over 50 years ago as an audio company, is elevating the experience of watching movies and TV content through new technologies in audio and video, the latter of which is a relatively new area for their offerings. This is being done with Dolby AC-4 and Dolby Atmos for audio, and Dolby Vision for video. You can read about Dolby AC-4 and Dolby Atmos here. In this post, the focus will be on Dolby Vision.

First, let’s consider quantization. All digital video signals are encoded as bits. When digitizing analog video, the analog-to-digital conversion process uses a quantizer, which determines which bits are active, or on (value = 1), and which bits are inactive, or off (value = 0). As the bit depth used to represent a finite range increases, so does the precision of each possible value, which directly reduces quantization error. The number of possible values is 2^X, where X is the number of bits available. A 10-bit signal has four times as many possible encoded values as an 8-bit signal. This difference in bit depth does not equate to dynamic range: it is the same range of values, quantized with an accuracy that increases with the number of bits used.
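
To make the arithmetic concrete, here is a quick Python sketch of how level count and quantization step size scale with bit depth over a fixed 0-1 signal range (the range itself never changes; only how finely it is sliced):

```python
# Quantization levels and step size for a fixed signal range (0.0 to 1.0).
# More bits never widen the range; they only slice it more finely.
for bits in (8, 10, 12):
    levels = 2 ** bits            # number of possible encoded values (2^X)
    step = 1.0 / (levels - 1)     # smallest representable difference
    print(f"{bits:2d}-bit: {levels:5d} levels, step = {step:.6f}")

# Output:
#  8-bit:   256 levels, step = 0.003922
# 10-bit:  1024 levels, step = 0.000978   (4x the values of 8-bit)
# 12-bit:  4096 levels, step = 0.000244
```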

Now, why is quantization relevant to Dolby Vision? In 2008, Dolby began work on a system specifically for this application, since standardized as SMPTE ST-2084, which defines an electro-optical transfer function (EOTF) based on a perceptual quantizer (PQ). It builds on work from the early 1990s by Peter G. J. Barten for medical imaging applications. The resulting PQ process allows video to be encoded and displayed across a 10,000-nit range of brightness using 12 bits instead of 14. This is possible because Dolby Vision exploits a characteristic of human vision: our eyes are less sensitive to changes in highlights than to changes in shadows.
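
The PQ curve itself is public. Here is a minimal Python sketch of the ST-2084 EOTF, using the constants published in the standard; the sample code values at the end are my own illustration of how much of the curve serves everyday luminance versus highlights:

```python
import numpy as np

# SMPTE ST-2084 (PQ) constants, as published in the standard
M1 = 2610 / 16384        # ~0.1593
M2 = 2523 / 4096 * 128   # ~78.84
C1 = 3424 / 4096         # ~0.8359
C2 = 2413 / 4096 * 32    # ~18.85
C3 = 2392 / 4096 * 32    # ~18.69

def pq_eotf(code):
    """Map a normalized PQ code value (0-1) to absolute luminance in nits."""
    p = np.power(code, 1 / M2)
    return 10000 * np.power(np.maximum(p - C1, 0) / (C2 - C3 * p), 1 / M1)

for code in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"PQ code {code:4.2f} -> {pq_eotf(code):8.1f} nits")
# The lower half of the code range covers roughly 0 to 92 nits;
# the upper half is devoted entirely to highlights, up to 10,000 nits.
```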

Previous display systems, referred to as SDR or Standard Dynamic Range, are usually 8 bits. Even at 10 bits, SD and HD video is specified to be displayed at a maximum output of 100 nits using a gamma curve. Dolby Vision has a nit range that is 100 times greater than what we have been typically seeing from a video display.

This brings us to the issue of backwards compatibility. What will be seen by those with SDR displays when they receive a Dolby Vision signal? Dolby is working on a system that will allow broadcasters to derive an SDR signal in their plant prior to transmission. At my NAB demo, there was a Grass Valley camera whose output image was shown on three displays. One display was PQ (Dolby Vision), the second display was SDR, and the third display was software-derived SDR from PQ. There was a perceptible improvement for the software-derived SDR image when compared to the SDR image. As for the HDR, I could definitely see details in the darker regions on their HDR display that were just dark areas on the SDR display. This software for deriving an SDR signal from PQ will eventually also make its way into some set-top boxes (STBs).

This backwards-compatible system works on the concept of layers. The base layer is SDR (based on Rec. 709), and the enhancement layer is HDR (Dolby Vision). This layered approach uses only incrementally more bandwidth than a signal containing SDR video alone. For on-demand services, the dual-layer concept reduces the amount of storage required on cloud servers. Dolby Vision also offers a non-backwards-compatible profile using a single-layer approach. In-band signaling over the HDMI connection between a display and the video source identifies whether the TV you are using is capable of SDR, HDR10 or Dolby Vision.

Broadcasting live events in Dolby Vision is currently a challenge, and not only because existing HDTV infrastructure cannot carry the different signal; the Dolby Vision process itself still has to be adapted for live broadcasting. Dolby is working on these issues, but it is not proposing an entirely new system for live events. Some signal paths will be replaced, though the infrastructure, or physical layer, will remain the same.

At my NAB demo, I saw a Dolby Vision clip of Mad Max: Fury Road on a Vizio R65 series display. The red and orange colors were unlike anything I have seen on an SDR display.

Nearly a decade of Dolby R&D has gone into Dolby Vision. While Dolby Vision has competition in the HDR war from Technicolor and Philips (Prime) and from the BBC and NHK (Hybrid Log-Gamma, or HLG), it has an advantage: several TV models from both LG and Vizio are already Dolby Vision compatible. If Dolby’s continued R&D investment in solving the issues around live broadcast yields a solution broadcasters can successfully implement, Dolby Vision may become the de facto standard for HDR video production.

Jonathan S. Abrams is the Chief Technical Engineer at Nutmeg, a creative marketing, production and post resource.

NAB 2016: VR/AR/MR and light field technology impressed

By Greg Ciaccio

The NAB 2016 schedule included its usual share of evolutionary developments, which are truly exciting (HDR, cloud hosting/rendering, etc.). One, however, was a game changer with reach far beyond media and entertainment.

This year’s NAB floor plan featured a Virtual Reality Pavilion in the North Hall. In addition, the ETC (USC’s Entertainment Technology Center) held a Virtual Reality Summit that featured many great panel discussions and opened quite a few minds. At least that’s what I gathered from the standing-room-only crowds that filled the suite. The ETC’s Ken Williams and Erik Weaver, among others, should be credited for delivering quite a program. While VR itself is not a new development, the availability of relatively inexpensive viewers (with Google Cardboard the most accessible) will put VR in the hands of practically everyone.

Programs included discussions on where VR/AR (Augmented Reality) and now MR (Mixed Reality) are heading, business cases and, not to be forgotten, audio. Keep in mind that with headset VR experiences, multi-channel directional sound must be perceivable with just our two ears.

The panels included experts in the field from Dolby, DTS, Nokia, NextVR, Fox and CNN. In fact, Juan Santillian from Vantage.tv mentioned that Coachella is streaming live in VR. Concerts and other live events have a fixed audience size, and many fans can’t attend because of cost or sold-out shows. VR can allow a much more intimate and immersive experience than being almost anywhere but onstage.

One example, from Fox Sports’ Michael Davies, involved two friends in different cities virtually attending a football game in a third city. They sat next to each other and chatted during the game, with their audio correctly mapped to their seats. There are no limits to applications for VR/AR/MR, and, by all accounts, once you experience it, there is no doubt that this tech is here to stay.

I’ve heard many times this year that mobile will be the monetary driver for wide adoption of VR. Halsey Minor of Voxelus estimates that 85 percent of VR usage will be via a mobile device. Given that far more photos and videos are shot on our phones than on dedicated cameras, this is not surprising. Some of the latest mobile phones are not only fast, with high-dynamic-range and wide-color-gamut displays, but also feature high-end audio processing from Dolby and others. Plus, our reliance on our phones ensures we’ll never forget to bring them with us.

Light Field Imaging
On both Sunday and Tuesday of NAB 2016, programs were devoted to light field imaging. I was already familiar with this truly revolutionary tech, having learned about Lytro, Inc. a few years ago from Internet ads for an early consumer camera. I was intrigued by the idea of controlling focus after shooting. I visited www.lytro.com and was impressed, but the resolution was low, so for me it was mainly a proof of concept. Fast-forward three years, and Lytro now has a cinema camera!

Jon Karafin (pictured right), Lytro’s head of Light Field Imaging, not only unveiled the camera onstage, but debuted their short Life, produced in association with The Virtual Reality Company (VRC). Life takes us through a man’s life and is told with no dialog, letting us take in the moving images without distraction. Jon then took us through all the picture aspects using Nuke plug-ins, and minds started blowing. The short is directed by Academy Award-winner Robert Stromberg, and shot by veteran cinematographer David Stump, who is chief imaging scientist at VRC.

Many of us are familiar with camera raw capture and know that ISO, color temperature and other picture aspects can be changed post-shooting. This has proven to be very valuable. However, things like focus, f-stop, shutter angle and many other parameters can now be changed too, thanks to light field technology — think of it as an X-ray compared to an MRI. In the interest of keeping a complicated technology relatively simple: sensors in the camera capture light not only in X and Y space, but in two more “angular” directions, forming what Lytro calls 4D space. The result is accurate depth mapping, which opens up many options for filmmakers.
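
Lytro’s actual pipeline is proprietary, but the classic “shift-and-add” method conveys the core idea of computing focus from a 4D capture: shift each angular view in proportion to its offset from the center and average, so objects at the chosen depth line up and stay sharp while everything else blurs. A toy NumPy sketch follows (the array shapes and the alpha parameter are illustrative only):

```python
import numpy as np

def refocus(lightfield, alpha):
    """Synthetic refocus of a 4D light field via shift-and-add.
    lightfield: shape (U, V, H, W) -- a U x V grid of sub-aperture views.
    alpha: relative focal depth; 0 keeps the captured focus, +/- moves
           the focal plane nearer or farther.
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view by its angular offset from the central view...
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    # ...then average: views agree at the chosen depth and blur elsewhere.
    return out / (U * V)

views = np.random.rand(5, 5, 48, 64)    # stand-in for real captured views
print(refocus(views, alpha=1.5).shape)  # (48, 64)
```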

Lytro Cinema Camera

For those who may think that this opens up too many options in post, all parameters can be locked so only those who are granted access can make edits. Some of the parameters that can be changed in post include: Focus, F-Stop, Depth of Field, Shutter Speed, Camera Position, Shutter Angle, Shutter Blade Count, Aperture Aspect Ratio and Fine Control of Depth (for mattes/comps).

Yes, this camera generates a lot of data. The good news is that you can make changes anywhere with an Internet connection, thanks to proxy mode in Nuke and rendering processed in the cloud. Jon demoed this, and images were quickly processed using Google’s cloud.

The camera itself is very large, and Lytro knows it will need to shrink (from around seven feet long) to a more maneuverable form factor. Still, this is a huge step in proving that a light field cinema camera and a powerful, manageable workflow are not only possible, but will no doubt prove valuable to filmmakers wanting the power and control offered by light field cinematography.

Greg Ciaccio is a technologist focused primarily on finding new technology and workflow solutions for motion picture and television clients. Ciaccio has served in technical management roles in the Creative Services divisions of both Deluxe and Technicolor.

Scale Logic at NAB with new Genesis HyperMDC

At the 2016 NAB Show, Scale Logic Inc. (SLI) demoed its Genesis HyperMDC, a high-performance metadata cluster supporting the HyperFS file system. The HyperMDC addresses enterprise storage needs by simplifying the implementation and administration of HyperFS.

The new Genesis HyperMDC features a purpose-built scale-out NAS and SAN metadata controller with additional storage options and/or open-architecture platform support. The HyperMDC is designed for media and entertainment workflows in post production, sports broadcast, visual effects and other specialized areas.

HyperMDC offers two configurations: the 100S and the 200D. As a metadata controller, the 100S provides high performance while letting users make their own storage choices. The 200D takes the performance of the 100S and adds high availability, with fully redundant, low-latency metadata handling that leverages dual-controller RAID and bonded network and fabric connections. Either choice provides a top-tier metadata controller for HyperFS.

eMAM has worked with SLI’s application engineering team to qualify a very specific customer requirement that included SLI’s HyperMDC, MXF Server and SGL Archive Management, as well as the Empress Media Asset Manager. Empress has engaged with the SLI team for years, and its ability to use SLI’s interoperability lab enables it to qualify storage and networking solutions, as well as the interactions of the entire deployment with many creative applications. Hitachi Data Systems also has partnered with SLI, and the companies have jointly deployed shared storage solutions worldwide.

NAB: Autodesk buys Solid Angle, updates products

At the NAB show, Autodesk announced that it has acquired Solid Angle, developer of Arnold, an advanced, ray-tracing image renderer for high-quality 3D animation and visual effects creation used in film, television and advertising worldwide. Arnold has been used on Academy Award-winning films such as Ex Machina and The Martian, as well as Emmy Award-winning series Game of Thrones, among other popular features, TV shows and commercials.

As part of Autodesk, Solid Angle’s development team will continue to evolve Arnold, working in close collaboration with its user community. Arnold will remain available as a standalone renderer for both Autodesk products and third-party applications including Houdini, Katana, and Cinema 4D on Linux, Mac OS X and Windows. Both Autodesk 3ds Max and Autodesk Maya will also continue to support other third-party renderers.

“We’re constantly looking out for promising technologies that help artists boost creativity and productivity,” shared Chris Bradshaw, senior VP, Autodesk Media & Entertainment. “Efficient rendering is increasingly critical for 3D content creation and acquiring Solid Angle will allow us to help customers better tackle this computationally intensive part of the creative process. Together, we can improve rendering workflows within our products as well as accelerate the development of new rendering solutions that tap into the full potential of the cloud, helping all studios scale production.”

“Autodesk shares our passion for numerical methods and computational performance and our desire to simplify the rendering pipeline, so artists can create top quality visuals more easily,” said Solid Angle Founder Marcos Fajardo. “With Autodesk, we’ll be able to accelerate development as well as scale our marketing, sales and support operations for Arnold to better meet the needs of our growing user base. Working side-by-side, we can solve production challenges in rendering and beyond.”

Arnold pricing and packaging is unchanged and Autodesk will continue to offer perpetual licenses of Arnold. Customers should continue to purchase Arnold through their usual Solid Angle channels.

Product Updates
In other news, Autodesk updated three of its products.

Autodesk Flame 2017:
– Camera FX scene-based tools enable the creation of sophisticated 3D composites in Action. Powered by algorithms from the Stingray game engine, these highly interactive VFX tools give artists ambient occlusion, realistic reflections and depth of field without slowing interactivity.
– Connected color workflow introduces a new level of integration between VFX and proven color grading. This new workflow brings color grading information from Autodesk Lustre directly into Flame’s node-based compositing environment and maintains a live connection so that composites can be rendered and seen in context in Lustre (our main image). This collaborative workflow allows artists to rapidly finish high-end projects by moving seamlessly between compositing, VFX and look development tools.
– Color management enhancements to Flame, Autodesk Flare and Autodesk Flame Assist allow users to quickly standardize the way a source’s colorspace is identified and processed.
– User-requested enhancements include improvements to desktop reels, conform and timeline workflow, batch, media panel and the UI.

Autodesk Maya 2016 Extension 2:
Extension 2 adds new capabilities for creating 3D motion graphics, a new rendering workflow and tools that allow artists to create and animate characters faster and more easily than ever.
– New motion graphics tools bring a procedural, node-based 3D design workflow directly into Maya. Combining powerful motion graphics tools with Maya’s deep creative toolset allows artists to quickly create sophisticated and unique 3D motion graphics such as futuristic UIs, expressive text, and organic animation and effects.
– Updated render management makes segmenting your scenes into render layers easier and faster, giving artists more control.
– Character creation workflows with a new quick rig tool and shape authoring enhancements that allow artists to create, rig and animate characters faster. Additional updates include: improvements to symmetry and poly modeling, UV editing, animation performance, rigging, the Bifrost workflow and XGen; a content browser; deep adaptive fluid simulation and high-accuracy viscosity in Bifrost; and XGen hair cards.

Autodesk 3ds Max 2017:
Freeing up more time for creativity, 3ds Max 2017 offers artists a fresh new look as well as modeling, animation and rendering enhancements, including:
– A new UI with support for high DPI displays expands the array of monitors and laptops users may run the software on while correctly applying Windows display scaling.
– Autodesk Raytracer Renderer (ART), a fast, physically based renderer, enables the creation of photoreal imagery and videos.
– 3ds Max asset library, available via the Autodesk Exchange App Store, offers quick access to model libraries; simply search assets and drag and drop them into a scene.
– Additional updates include fast form hard surfaces; UV mapping, object tool and animation productivity enhancements; a scene converter for moving from one renderer to another or to realtime engines; and tighter pipeline integration via an improved Python/.NET toolset.

NAB 2016: My pick for this year’s gamechanger is Lytro

By Isaac Spedding

There has been a lot of buzz around what the gamechanger was at this year’s NAB show. What was released that will really change the way we all work? I was present for the conference session where an eloquent Jon Karafin, head of Light Field Video, explained that Lytro has created a camera system that essentially captures every aspect of your shot and allows you to recreate it in any way, at any position you want, using light field technology.

Typically, game-changing technology brings uncertainty for the established industry, and that was made clear during the rushed Q&A session, where several people (after congratulating the Lytro team) nervously asked whether the team had thought about the fate of industry positions that the technology would make redundant. Jon’s reply was that core positions won’t change; the way they operate will. The mob of eager filmmakers, producers and young scientists who queued to meet him (I was one of them) was another sign that the technology is incredibly interesting and exciting to many.

“It’s a birth of a new technology that very well could replace the way that Hollywood makes films.” These are the words of Robert Stromberg (DGA), CCO and founder of The Virtual Reality Company, in the preview video for Lytro’s debut film Life, which will be screened on Tuesday to an audience of 500 lucky attendees. Karafin and Jason Rosenthal, CEO at Lytro, will provide a Lytro Cinema demonstration and a breakdown of the short film.

Lytro Cinema is my pick for NAB 2016’s game-changing technology. It looks like it will not only advance capture, but also change post production methodology and open up new roles, possibilities and challenges for everyone in the industry.

Isaac Spedding is a New Zealand-based creative technical director, camera operator and editor. You can follow him on Twitter @Isaacspedding.

NAB: Critique upped to version 4, using AWS for cloud

From the minds at LA-based post house DigitalFilm Tree comes a new version of Critique, its cloud-collaboration software. Critique, which is now in v.4, is already used on shows such as Modern Family, The Simpsons and NCIS: Los Angeles. In addition to many new features and security controls in Critique 4, this is the first time the app has been deployed on AWS.

Critique’s new relationship with AWS is key to version 4, says Guillaume Aubuchon, CIO of Critique. “AWS is not only the largest cloud provider, but they are the cloud provider of choice in the M&E space. Our infrastructure shift to AWS afforded us the ability to architect the software to leverage the range of services in the AWS cloud platform. It allowed us to build Critique 4 from scratch in a matter of mere months.”

Critique 4 is a secure digital media asset management (MAM) platform with extensive features to support creative processes and production workflow for both the media and entertainment space as well as enterprise. Built to be extremely easy to use, Critique facilitates collaboration through realtime chat, live annotations, and secure sharing over the Internet to deliver productions on time and on budget. Realtime chat and drawing annotations are viewable across the Web and iOS — they also work with the new Apple Pencil for iPad Pro.

Designed to improve workflow, the software facilitates every step from protected dailies screening to VFX workflows to post to distribution while capitalizing on enterprise-level security to protect valuable assets.

Critique 4 was born of the minds of its executive team: Aubuchon, a veteran of the production space who has worked on such projects as Her, NCIS: LA and Angie Tribeca, and Chris Chen, an expert in production streaming and the former CTO of DAX. With the ability to use DigitalFilm Tree as a beta test site, Critique is built to ensure it works in real-world media environments.

One of the exciting new features of Critique 4 is its ability to index Amazon Simple Storage Service (Amazon S3), allowing companies to manage their own content inside Critique’s award-winning interface. It also offers high-performance cloud MAM for simultaneous video and document management: users can collaborate with Critique’s review, approval and annotation workflows not only on video but also on production documents, including scripts, graphics and still images.
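
Critique’s internals aren’t public, but indexing a bucket with the AWS SDK gives a feel for what such a feature does under the hood. A minimal sketch in Python with boto3 (the bucket name and prefix are hypothetical):

```python
import boto3  # AWS SDK for Python

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

# Walk every object under a prefix and record the metadata a MAM
# would need to present the asset without downloading it.
index = []
for page in paginator.paginate(Bucket="my-media-bucket", Prefix="dailies/"):
    for obj in page.get("Contents", []):
        index.append({
            "key": obj["Key"],
            "size": obj["Size"],
            "modified": obj["LastModified"],
        })

print(f"Indexed {len(index)} assets")
```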

“Digital Rights Management (DRM) protection is rarely used, if at all, for unreleased content, which is arguably where it is needed the most,” notes Chen. “Critique was designed to leverage DRM invisibly throughout its video distribution system on desktop, web, and mobile environments. This allows Critique to break through the legacy walled-garden approach, allowing a new level of flexibility in collaboration while maintaining security. But we do it in such a way that the users don’t even know it’s there.”

The ability to share assets in this way expands its mobility, and Critique is available via web, phones, tablets and Apple TV. The video service is backed by a true CDN running multi-bitrate video to prevent glitches on any platform. “Users can take advantage of Critique anywhere — in their office, living room, the subway or even on a plane,” explains Chen. “And it will be true to the original media.”

Other highlights of Critique 4 include: storage, archiving and management of Raw material; automatic transcoding of Raw material into a proxy format for viewing; granular permissions on files, folders, and projects; easy-to-manage sharing functions for users outside the system with the ability to time-limit and revoke/extend individual permissions; customizable watermarking on videos.

While Critique was born in the creative and operations side of the media and entertainment market, it is extending to enterprise, small to medium-size businesses, publishing, education and government/military sectors.

This latest version of Critique is available now for a free 30-day trial (AWS usage fees apply). Pricing is extremely competitive, with 10-, 20-, 50- and 100-user levels starting as low as $39 per user. Enterprise-level contracts are available for larger projects and companies with multiple projects. The fee includes unlimited streaming of current content and 24/7 white-glove tech support. Apple TV, iPad and iPhone apps are also included. For a nominal fee, users can add DRM, high-resolution cloud transcode and storage for camera raw and mezzanine files.

NAB: AMD intros FirePro workstation graphics card, FireRender plug-in for 3ds Max

At the 2016 NAB Show, AMD introduced the AMD FirePro W9100 32GB, a workstation graphics card with 32GB of memory to support large-asset workflows in creative applications. The company also introduced the AMD FireRender plug-in for Autodesk 3ds Max (shown), which enables VR storytellers to use enhanced 4K workflows and photorealistic rendering functionality.

Throughout the show, StudioXperience’s AMD FirePro GPU Zone will feature leading applications in demos of efficient content creation workloads with high visual quality, application responsiveness and compute performance. The zone showcases solutions from Adobe, Apple, Autodesk, Avid, Blackmagic Design, Dell, HP and Rhino, offering attendees a range of hands-on workflow experiences powered by AMD FirePro professional graphics. Demos include a VR production workflow, computer-aided engineering and visualization, and 4K workflows, among others.

FilmLight at NAB with Baselight 5.0

FilmLight, which makes management and grading technologies, is at NAB 2016 with Baselight 5.0, a new version of the company’s flagship color finishing system. Baselight 5.0 includes a new set of tools to optimize both high dynamic range (HDR) grading and extended color gamut, and offers more than 50 new features designed to help colorists and other creative artists.

The new Base Grade tool improves on traditional color grading techniques by giving colorists access to subtler grading. Moving away from the traditional lift/gamma/gain approach, it offers controls that mimic the way the eye perceives color — via exposure, temperature and balance — to yield a more natural feel and smooth, consistent changes. Because Base Grade works in a perceptually linear space, it is ideal for grading RAW formats, OpenEXRs and other scene-referred data for both HDR and standard-range displays.
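
FilmLight hasn’t published Base Grade’s math, but a rough sketch shows why grading scene-referred linear data feels natural: in linear light, exposure is a plain gain (one stop doubles every value, just as it does at the sensor), and warm/cool balance is a per-channel gain. The function names and values below are illustrative, not FilmLight’s:

```python
import numpy as np

def adjust_exposure(rgb_linear, stops):
    # In scene-linear space, +1 stop is exactly a 2x gain.
    return rgb_linear * (2.0 ** stops)

def adjust_balance(rgb_linear, red_gain, blue_gain):
    # Crude warm/cool control: scale red and blue in linear light.
    return rgb_linear * np.array([red_gain, 1.0, blue_gain])

pixel = np.array([0.18, 0.18, 0.18])      # scene-linear mid-gray
print(adjust_exposure(pixel, 1.0))         # [0.36 0.36 0.36]
print(adjust_balance(pixel, 1.1, 0.9))     # a warmer mid-gray
```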

Baselight 5.0 also adds HDR capabilities through color space “families” that simplify the deliverables process for distinct viewing environments such as television, 4K projection and handheld devices. Gamut optimization provides natural gamut mapping for deliverables and prevents clipping when captured colors can’t be displayed on a cinema or television screen.

The gamut optimization feature provides simple-to-implement gamut mapping for wide-dynamic-range images, which form part of the new generation of HDR displays. As new high-end cameras capture colors that could never be displayed on current television screens, this feature offers an easy fix, providing natural gamut mapping for deliverables. Where an HDR image contains colors outside a standard color gamut, the gamut compression feature sensitively brings them back in, compressing the outer volume of the gamut without affecting the inner volume. Bright, saturated colors won’t clip or destroy the image.

Additional Baselight 5.0 tools tailored to improve colorists’ creative control and efficiency include a perspective operator that makes screen replacement and re-projection easy; perspective tracking of images, shapes, paint strokes and grid warps using either four 1-point trackers or a new perspective-capable area tracker; a grid warper; a dedicated keyer for production-quality blue- and greenscreen keying; a paint tool for retouching, such as logo removal; a relight tool to add virtual lights to a scene; and a matchbox shader tool, including support for Flame Matchbox shaders.

Building on the concept of metadata-driven grading, in which the raw footage remains untouched and realtime viewing uses color metadata to render the grade, Baselight 5.0 allows facilities and freelancers in remote sites to browse any scene independently or lock into the master suite and follow a grading session live. The remote colorist can take over and suggest changes, instantly reflected on the other systems.

Baselight 5.0 will be available for all BLG-enabled products from FilmLight, including the Daylight dailies and media management platform, as well as Baselight for Avid and Baselight for Nuke in the Baselight Editions range.

NAB: Codex Production Suite 4.5 for ingest to post, VR camera rig

At NAB 2016 in Las Vegas, Codex introduced Codex Production Suite 4.5, an all-in-one software package for color grading, review, metadata management, transcoding, QC and archiving of media generated by the most widely used digital cinema cameras. Codex Production Suite 4.5 provides one workflow, from ingest to post, for multiple types of cameras — from the Arri Alexa 65 to GoPro.

Codex Production Suite is available on a variety of platforms, including Mac Pro and MacBook Pro as well as Codex’s own hardware: the S-Series and XL-Series Vault. Codex worked closely with its customers on this product, DITs in particular, providing them with the tools they need to deliver color-accurate on-set or near-set dailies and to securely archive camera-original material in one workflow.

The new features of Codex Production Suite 4.5 include non-destructive, CDL-based color grading, enabling the creation, modification and safe communication of looks from set to editorial and on to the final DI color session, plus the import and processing of externally created CDLs/LUTs, so looks can be applied overall or shot by shot. Looks can be baked into editorial dailies or appended to the metadata of deliverables, and dailies can be viewed as the DP intended. There is seamless integration with Codex Live, for a consistent color pipeline from camera through to deliverables and beyond, and with Tangent panels for grading. There is also a full, end-to-end ACES-compliant color pipeline, along with an audio sync toolset enabling the import of WAV files, playback of shots in a proxy window and synchronization of audio files to shots based on timecode.
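
A CDL grade itself is just ten numbers with standardized math, which is what makes it safe to pass from set to editorial to DI. As a rough illustration (not Codex’s code), here is the ASC CDL transfer in Python: per-channel slope, offset and power, followed by a saturation step using Rec. 709 luma weights. The sample grade values are hypothetical:

```python
import numpy as np

def apply_cdl(rgb, slope, offset, power, saturation):
    """ASC CDL: out = clamp(in * slope + offset) ** power, then saturate."""
    rgb = np.clip(np.asarray(rgb) * slope + offset, 0.0, None) ** power
    # Saturation step uses Rec. 709 luma weights, per the ASC CDL spec.
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])
    return luma[..., None] + saturation * (rgb - luma[..., None])

# Hypothetical look: warm the image slightly, lift blacks, desaturate a touch.
pixel = np.array([[0.40, 0.40, 0.40]])
print(apply_cdl(pixel,
                slope=np.array([1.10, 1.00, 0.95]),
                offset=0.02, power=1.0, saturation=0.90))
```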

Codex has also introduced a new pricing model: customers can purchase the software on its own, or buy the Codex Dock (Thunderbolt) with the software included free, and gain access to Codex’s workflow and technical support, with free upgrades, through Codex Connect.

Virtual Reality Camera Rig
Also at the Codex booth at NAB was a pretty cool VR camera rig built by LA-based Radiant Images using 17 Codex Action Cams. The Codex Action Cam is a tiny camera head shooting up to 60fps. It uses a 2/3-inch single-chip sensor with a global shutter, capturing 12-bit RAW 1920×1080 HD images with a dynamic range of 11 stops. The camera head connects to the Codex Camera Control Recorder and is capable of recording two HD streams via a coax cable of up to 50m.

“We quickly realized that Codex Action Cam could help us get to the absolute sweet spot in the equation of making a new, cinematic VR system,” says Radiant Images co-founder Michael Mansouri. “As it captures 12-bit uncompressed RAW, it has the necessary resolution, dynamic range and pixels-per-degree for future-proof VR, and the images are very clean. It has global shutter control too, and the cameras can be genlocked together. Out of all of the lenses we tested, we liked the Kowa 5mm PL Mount. This lens combination with the Codex Action Cam sensor is equivalent to a 14mm in Super 35mm. Although you cannot immediately fit filters, we quickly machined fittings to take ND and other filters. There were few compromises or limitations.”

The final design of the Headcase Cinema Quality VR 360 Rig was made by Radiant’s director of engineering, Sinclair Fleming. It was an iterative process, taking 27 revisions. The result uses 17 Codex Action Cams, in a spherical array, for 360-degree recording with nine recorders. The camera head measures 13 inches wide and 15 inches high, weighing 16 pounds.

NAB: Las Vegas SuperMeet adds VR/360 to event coverage

The Rio Hotel will be hopping on April 19 when it hosts this year’s Las Vegas SuperMeet. The annual Creative Pro User Group (CPUG) Network event draws Final Cut Pro, Adobe, Avid and DaVinci Resolve editors, gurus, digital filmmakers and content creators during NAB.

The second half of this year’s event is focusing on VR and 360 video, the hot topics at this year’s show. We wanted to know what attendees can expect, so we threw some questions at Daniel Bérubé and Michael Horton, the architects of this event, to find out more.

Some compare VR and 360 video to stereo 3D. Why do you feel this is different?
VR/360 video is more accessible to the indie filmmaker than 3D ever was. The camera rigs can be inexpensive and still be professional, or you can rent the expensive ones. The feeling we are getting from everyone is one of revolution, and we have not seen that since the year 2000. This is a new way to tell stories. There are no rules yet, and we are making a lot of this stuff up as we go along, but that’s what is fun. We are actually seeing people giggle again. We never saw this level of excitement with 3D. All we really saw was skepticism.

In what ways are you going to be highlighting VR/360 video?
The second half of the SuperMeet will be devoted to VR and 360 video. We are titling it, “Can I Tell a Compelling Story in VR and 360 Video?” Futurist Ted Schilowitz is going to act as a sort of ringmaster and introduce us to what we need to know. He will then bring on Csillia Kozma Andersen from Nokia to show off the new Ozo camera and how to use it. Next will be John Hendicott of Aurelia Soundworks, who will explain how spatial audio works. And, finally, we will introduce Alex Gollner, who will show how we edit all this stuff.

So the idea here is to try and give you a bit of what you need to know, and then hope it will help you get started on your way to creating your own compelling VR masterpiece.

What can attendees expect?
Expect to have a crazy fun time. Even if you have zero interest in 360 video, SuperMeets are a place to hang out with each other and network. Honestly, you just might meet someone who will change your life. You also can hang out at one of the 25 sponsor tables, where folks will be showing off the latest and greatest software and hardware solutions. VR camera rigs will be running around this area as well. And there will be free food, cash bars and close to $100,000 worth of raffle prizes to give away. It’s going to be a great show and, more importantly, a great time.

To enjoy $5 off your ticket price for the Las Vegas SuperMeet, courtesy of postPerspective, click here.

Daniel Bérubé, of the Boston Creative Pro User Group (BOSCPUG), is co-producer of these SuperMeets with Michael Horton, the founder of the Los Angeles Creative Pro User Group (LACPUG).

Dell embraces VR via Precision Towers

It’s going to be hard to walk the floor at NAB this year without being invited to demo some sort of virtual reality experience. More and more companies are diving in and offering technology that optimizes the creation and viewing of VR content. Dell is one of the latest to jump in.

Dell has been working closely on this topic with its hardware and software partners, and is formalizing its commitment to the future of VR by offering solutions optimized for VR consumption and creation alongside the mainstream professional ISV apps used by industry pros.

Dell has introduced new recommended minimum system hardware configurations to support an optimal VR experience for pro users with HTC Vive or Oculus Rift VR solutions. Whether users are consuming or creating VR content, the VR-ready solutions must meet three criteria: minimum CPU, memory and graphics requirements to support VR viewing experiences; graphics drivers qualified to work with these solutions; and passing performance tests conducted by the company using test criteria based on HMD (head-mounted display) suppliers, ISVs or third-party benchmarks.

Dell has also made upgrades to its Dell Precision Tower line, including increased performance, graphics and memory for VR content creation. The refreshed Dell Precision Tower 5810, 7810 and 7910 workstations and rack 7910 have been upgraded with new Intel Broadwell EP processors, which offer more cores and higher performance for the multi-threaded applications that support professional modeling, analysis and calculations.

Additional upgrades include the latest pro graphics technology from AMD and Nvidia, Dell Precision Ultra-Speed PCIe drives with up to 4x faster performance than traditional SATA SSD storage, and up to 1TB of DDR4 memory running at 2400MHz.