
AJA intros Ki Pro Go, Corvid 44 12G and more at NAB

AJA was at NAB this year showing the new Ki Pro Go H.264 multichannel HD/SD recorder/player, as well as 14 openGear converter cards featuring DashBoard software support, two new IP video transmitters that bridge HDMI and 3G-SDI signals to SMPTE ST 2110, and the Corvid 44 12G I/O card for AJA developers. AJA also introduced updates for its FS-HDR HDR/WCG converter, desktop and mobile I/O products, AJA Control Room software, HDR Image Analyzer and the Helo recorder/streamer.

Ki Pro Go is a genlock-free, multichannel H.264 HD and SD recorder/player with a flexible architecture. This portable device allows users to record up to four channels of pristine HD and SD content from SDI and HDMI sources to off-the-shelf USB media via 4x USB 3.0 ports, with a fifth port for redundant recording. The Ki Pro Go will be available in June for $3,995.

An FS-HDR v3.0 firmware update features enhanced coloring tools and support for multichannel Dynamic LUTs, plus other improvements. The release includes a new integrated Colorfront Engine Film Mode offering a rich grading and look-creation toolset with optional ACES colorspace, ASC color decision list controls and built-in look selection. It will be available in June as a free update.

Developed with Colorfront, the HDR Image Analyzer v1.1 firmware update features several new enhancements, including a new web UI that simplifies remote configuration and control from multiple machines, with updates over Ethernet offering the ability to download logs and screenshots. New remote desktop support provides facility-friendly control from desktops, laptops and tablets on any operating system. The update also adds new HDR monitoring and analysis tools. It’s available soon as a free update.

The Desktop Software v15.2 update offers new features and performance enhancements for AJA Kona and Io products. It adds support for Apple ProRes capture and playback across Windows, Linux and macOS in AJA Control Room at up to 8K resolutions, while also adding new IP SMPTE ST 2110 workflows using AJA Io IP and updates for Kona IP, including ST 2110-40 ANC support. The free Desktop Software update will be available in May.

The Helo v4.0 firmware update introduces new features that allow users to customize their streaming service and improve monitoring and control. AV Mute makes it easy to personalize the viewing experience with custom service branding when muting audio and video streams, while Event Logging enables encoder activity monitoring for simpler troubleshooting. It’s available in May as a free update.

The new openGear converter cards combine the capabilities of AJA’s mini converters with openGear’s high-density architecture and support for DashBoard, enabling industry-standard configuration, monitoring and control in broadcast and live event environments from a PC over a local network on Windows, macOS or Linux. New models include re-clocking SDI distribution amplifiers, single-mode 3G-SDI fiber converters plus multi-mode variants, and an SDI audio embedder/disembedder. The openGear cards are available now, with pricing dependent on the model.

AJA’s new IPT-10G2-HDMI and IPT-10G2-SDI mini converters are single-channel IP video transmitters for bridging traditional HDMI and 3G-SDI signals to SMPTE ST 2110 for IP-based workflows. Both models feature dual 10 GigE SFP+ ports for facilities using SMPTE ST 2022-7 for redundancy in critical distribution and monitoring. They will be available soon for $1,295.

The Corvid 44 12G is an 8-lane PCIe 3.0 video and audio I/O card featuring support for 12G-SDI I/O in a low-profile design for workstations and servers and 8K/UltraHD2/4K/UltraHD high frame rate, deep color and HDR workflows. Corvid 44 12G also facilitates multichannel 12G-SDI I/O, enabling either 8K or multiple 4K streams of input or output. It is compatible across macOS, Windows and Linux and used in high-performance applications for imaging, post, broadcast and virtual production. Corvid 44 12G cards will be available soon.

Sony’s NAB updates — a cinematographer’s perspective

By Daniel Rodriguez

With its NAB offerings, Sony once again showed that it has a firm presence in nearly every stage of production, be it motion picture, broadcast media or short form. The company continues to keep up with current demands while simultaneously preparing for the inevitable wave of change that seems to come faster each year. While the list of new hardware was short this year, many improvements to existing hardware and software were released to ensure that Sony products, both new and existing, keep a firm presence in the future.

The ability to easily access, manipulate, share and stream media has always been a priority for Sony. This year at NAB, Sony continued to demonstrate its IP Live, SR Live, XDCAM Air and Media Backbone Hive platforms, which give users the opportunity to manage media all over the globe. IP Live lets users run remote productions, keeping the core processing hardware in one place while crews access it from anywhere. This extends to 4K and HDR/SDR streaming as well, which is where SR Live comes into play. SR Live allows a native 4K HDR signal to be processed into full HD and regular SDR signals, and a core improvement is the ability to adjust the conversion curves during a live broadcast to address any issues that arise in converting HDR signals to SDR.
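
Sony hasn’t published the math inside SR Live, but the idea of an adjustable HDR-to-SDR conversion curve is easy to illustrate. Below is a minimal, generic soft-knee tone map written under my own assumptions (the function name and numbers are illustrative, not Sony’s): highlights above an adjustable knee point are compressed rather than clipped, and an operator could nudge the knee parameters live if the derived SDR feed looks wrong.

```python
import numpy as np

def hdr_to_sdr_knee(hdr, knee_start=0.75, knee_slope=0.25):
    """Map normalized HDR luminance (0-1) to SDR with a soft knee.

    Values below knee_start pass through linearly; highlights above
    it are compressed by knee_slope so they roll off instead of clip.
    """
    hdr = np.asarray(hdr, dtype=np.float64)
    compressed = knee_start + (hdr - knee_start) * knee_slope
    return np.clip(np.where(hdr <= knee_start, hdr, compressed), 0.0, 1.0)

# An operator could adjust knee_start/knee_slope during the broadcast
# if skin tones or highlights look wrong in the derived SDR feed.
print(hdr_to_sdr_knee([0.2, 0.8, 1.0]))  # [0.2, 0.7625, 0.8125]
```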

For other media, including XDCAM-based cameras, XDCAM Air allows for the wireless transfer and streaming of most media through QoS services, and turns almost any easily accessible camera with wireless capabilities into a streaming tool.

Media Backbone Hive allows users to access their media anywhere they want. Rather than just being an elaborate cloud service, Media Backbone Hive allows internal Adobe Cloud-based editing, accepts nearly every file type, allows a user to embed metadata and makes searching simple with keywords and phrases that are spoken in the media itself.

For the broadcast market, Sony introduced the HDC-5500, a 4K HDR three-CMOS-sensor camcorder that it is calling its “flagship” camera in this market. Alongside 4K HDR and high frame rates, the camera offers a global shutter, which is essential for dealing with strobing from lights and lets it capture fast action without the infamous rolling-shutter skew. The camera allows for 4K output over 12G-SDI, enabling 4K and HDR monitoring, and as these outputs become the norm, the HDC-5500 will surely be a hit with users, especially with the addition of global shutter.

Sony is very much a company that likes to focus on the longevity of its previous releases… cameras especially. Sony’s FS7 has excelled in its field since its introduction in 2014, and to this day it is an extremely popular choice for short-form, narrative and broadcast work. Like other Sony camera bodies, the FS7 allows for modular builds and add-ons, and this is where the new CBK-FS7BK ENG Build-Up Kit comes in. Sporting a shoulder mount and an ENG viewfinder, the kit includes a rear extension that adds two wireless audio inputs, RAW output, streaming and file transfer via wireless LAN or a 4G/LTE connection, QoS streaming (only through XDCAM Air) and timecode input. The kit turns the FS7 into an even more well-rounded workhorse.

The Venice is Sony’s flagship cinema camera, replacing the F65, which remains a brilliant and popular camera, having popped up as recently as last year’s Annihilation. The Venice takes a leap further in entering the full-frame, VistaVision market. Boasting top-of-the-line specs and a smaller, more modular build than the F65, the camera isn’t exactly a new release (it came out in November 2017), but Sony has secured longevity for its flagship at a time when other camera manufacturers are just releasing their own VistaVision-sensor cameras and smaller alternatives.

Sony recently released a firmware update for the Venice that adds X-OCN XT (Sony’s highest form of compressed 16-bit RAW), two new imager modes that let the camera sample 5.7K 16:9 in full frame and 6K 2.39:1 at full width, 4K output over 6G/12G-SDI and wireless remote control with the CBK-WA02. Since the Venice is small enough to go on harder-to-reach mounts, wireless control is quickly becoming a feature that many camera assistants need. New anamorphic desqueeze modes for 1.25x, 1.3x, 1.5x and 1.8x have also been added, which is huge, since older lens designs are constantly being revisited and new ones created, such as the Technovision 1.5x, made famous by Vittorio Storaro on Apocalypse Now (1979), and the Cooke Full Frame Anamorphic 1.8x. With VistaVision full frame now an easily accessible way of filming, new approaches to lensing are becoming common, so anamorphic systems are no longer limited to 1.3x and 2x squeezes. It’s reassuring to see Sony look out for storytellers who may want to employ less common anamorphic desqueeze ratios.
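
The desqueeze factor simply multiplies the horizontal dimension of the captured image, so the delivery aspect ratio is the sensor region’s native aspect times the squeeze. A quick sketch of that arithmetic (the 4:3 region is an illustrative example, not a specific Venice scan mode):

```python
def desqueezed_aspect(sensor_w, sensor_h, squeeze):
    """Delivery aspect ratio after horizontally desqueezing an
    anamorphic image captured on a sensor_w x sensor_h region."""
    return (sensor_w * squeeze) / sensor_h

# Note how 2x on 4:3 gives the classic 2.67:1 (cropped to 2.39:1),
# while 1.8x on 4:3 lands almost exactly on 2.40:1 "scope."
for squeeze in (1.25, 1.3, 1.5, 1.8, 2.0):
    print(f"{squeeze}x on 4:3 -> {desqueezed_aspect(4, 3, squeeze):.2f}:1")
```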

As larger resolutions and higher frame rates become the norm, Sony has introduced the new SxS Pro X cards. A follow-up to the hugely successful SxS Pro+ cards, the new cards boast an incredible transfer speed of 10Gbps (1.25GB/s) in 120GB and 240GB capacities. This is a huge step up from the previous SxS Pro+ cards, which offered a read speed of 3.5Gbps and a write speed of 2.8Gbps. Probably the most exciting part of the launch is the corresponding SBAC-T40 card reader, which can offload a full 240GB card in about 3.5 minutes.
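
Those figures hang together: 10Gbps is 1.25GB/s, so draining a full card takes a little over three minutes at line rate, which squares with the quoted 3.5 minutes once transfer overhead is allowed for. A back-of-the-envelope check (assuming decimal gigabytes):

```python
card_gb = 240                # card capacity, decimal gigabytes
link_gbps = 10               # quoted transfer speed, gigabits per second

gb_per_second = link_gbps / 8          # 1.25 GB/s on the wire
seconds = card_gb / gb_per_second      # 192 s
print(f"{seconds / 60:.1f} minutes")   # ~3.2 min at line rate;
                                       # overhead brings it near 3.5
```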

Sony’s newest addition to the Venice is the Rialto extension system. Taking advantage of the Venice’s modular build, the Rialto is a hardware extension that allows you to remove the main body’s sensor block and install it in a smaller unit, which is then tethered back to the main body by a nine- or 18-foot cable. Very reminiscent of ARRI’s Alexa M, the Rialto goes further by being an extension of its main system rather than a standalone system, which may bring its own issues. The Rialto lets users reach spots that would otherwise prove difficult with the actual Venice body, and its lightweight design allows it to be mounted nearly anywhere. Where other camera bodies designed to be smaller end up heavy once outfitted with accessories such as batteries and wireless transmitters, the Rialto can easily be rigged for aerials, handheld and Steadicam. Though some may question why you wouldn’t just get a smaller body from another camera company, the big thing to consider is that the Rialto isn’t a solution to the size of the Venice body, which is already very small, especially compared to the F65, but simply another tool to get the most out of the Venice system, especially considering you’re not sacrificing anything as far as features or frame rates. The Rialto is currently being used on James Cameron’s Avatar sequels, where its smaller body allows him to employ two simultaneously for true 3D recording while giving all the options of the Venice system.

With innovations in broadcast and motion picture production, there is a constant drive to push boundaries and make capture/distribution instant. Creating a huge network for distribution, streaming, capture, and storage has secured Sony not only as the powerhouse that it already is, but also ensures its presence in the ever-changing future.


Daniel Rodriguez is a New York-based director and cinematographer. Having spent years working for companies such as Light Iron, Panavision and ARRI Rental, he currently works as a freelance cinematographer, filming narrative and commercial work throughout the five boroughs.


NAB 2019: Maxon acquires Redshift Rendering Technologies

Maxon, maker of Cinema 4D, has purchased Redshift Rendering Technologies, developer of the Redshift rendering engine. Redshift is a flexible GPU-accelerated renderer targeting high-end production, with an extensive suite of features that makes rendering complicated 3D projects faster. It is available as a plugin for Cinema 4D and other industry-standard 3D applications.

“Rendering can be the most time-consuming and demanding aspect of 3D content creation,” said David McGavran, CEO of Maxon. “Redshift’s speed and efficiency combined with Cinema 4D’s responsive workflow make it a perfect match for our portfolio.”

“We’ve always admired Maxon and the Cinema 4D community, and are thrilled to be a part of it,” said Nicolas Burtnyk, co-founder/CEO, Redshift. “We are looking forward to working closely with Maxon, collaborating on seamless integration of Redshift into Cinema 4D and continuing to push the boundaries of what’s possible with production-ready GPU rendering.”

Redshift is used by studios including Technicolor, Digital Domain, Encore Hollywood and Blizzard, and has been used for VFX and motion graphics on projects such as Black Panther, Aquaman, Captain Marvel, Rampage, American Gods, Gotham, The Expanse and more.

Facilis launches Hub Shared Storage line

Facilis Technology rolled out its new Hub Shared Storage line for media production workflows during the NAB show. Facilis Hub pairs new hardware with an integrated disk-caching system for cloud and LTO backup and archive, and is designed to provide block-level virtualization and multi-connectivity performance.

“Hub Shared Storage is an all-new product based on our Hub Server that launched in 2017. It’s the answer to our customers’ requests for a more compact server chassis, lower-cost hybrid (SSD and HDD) options and integrated cloud and LTO archive features,” says Jim McKenna, VP of sales and marketing at Facilis. “We deliver all of this with new, more powerful hardware, new drive capacity options and a new look to both the system and software interface.”

The Facilis shared storage network allows both block-mode Fibre Channel and Ethernet connectivity simultaneously with the ability to connect through either method with the same permissions, user accounts and desktop appearance. This expands user access, connection resiliency and network permissions. The system can be configured as a direct-attached drive or segmented into various-sized volumes that carry individual permissions for read and write access.

Facilis Object Cloud
Object Cloud is an integrated disk-caching system for cloud and LTO backup and archive that includes up to 100TB of cloud storage for an annual fee. The Facilis Virtual Volume can display cloud, tape and spinning disk data in the same directory structure on the client desktop.

“A big problem for our customers is managing multiple interfaces for the various locations of their data. With Object Cloud, files in multiple locations reside in the same directory structure and are tracked by our FastTracker asset tracking in the same database as any active media asset,” says McKenna. “Object Cloud uses Object Storage technology to virtualize a Facilis volume with cloud and LTO locations. This gives access to files that exist entirely on disk, in the Cloud or on LTO, or even partially on disk and partially in the cloud.”
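
Facilis hasn’t published how Object Cloud is implemented; purely as a conceptual sketch of the idea McKenna describes, a single catalog can map each path to one or more tier locations so that clients browse one directory tree regardless of where the bytes live (all names and structures below are hypothetical):

```python
# Conceptual sketch only -- not Facilis code. One catalog maps each
# path to wherever its bytes live, so clients browse a single tree.
from dataclasses import dataclass

@dataclass
class Placement:
    tier: str      # "disk", "cloud" or "lto"
    locator: str   # e.g. a block range, object key or tape barcode

catalog = {
    "/ProjectX/reel1.mov": [Placement("disk", "vol0:0001")],
    "/ProjectX/reel2.mov": [Placement("cloud", "s3://bucket/reel2"),
                            Placement("lto", "TAPE0042")],
}

def listing(prefix):
    """A unified directory listing, whatever tier each file is on."""
    return sorted(path for path in catalog if path.startswith(prefix))

print(listing("/ProjectX/"))   # both reels appear; tiers stay hidden
```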

Every Facilis Hub Shared Storage server comes with unlimited seats in the Facilis FastTracker asset tracking application. The Object Cloud Software and Storage package is available for most Facilis servers running version 7.2 or higher.

Behind the Title: Weta Digital’s Paolo Emilio Selva

NAME: Paolo Emilio Selva 

COMPANY: Weta Digital

CAN YOU DESCRIBE YOUR COMPANY?
In the middle of Middle-earth, Weta Digital is a VFX company with more than a thousand artists and developers. While delivering amazing movies, Weta Digital also invests in research and development for VFX.

WHAT’S YOUR JOB TITLE?
Head of Software Engineering 

WHAT DOES THAT ENTAIL?
In the software engineering department, we write tools for artists and make sure their creative intent is maintained across the pipeline. We also make sure production isn’t disrupted across the facility.  

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Writing code, maybe? Yeah, I’m still writing code when I can, mostly fixing bugs and off-loading other developers from nasty issues, keeping them focused on the research and development and providing support.  

HOW DID YOU START YOUR CAREER?
I started my career as a researcher in human-computer interfaces at a university in Rome. I liked to solve problems, and the VFX industry has lots of problems to be solved 😉

HOW LONG HAVE YOU BEEN WORKING IN VFX?
Ten years  

DID A PARTICULAR FILM INSPIRE YOU ALONG THIS PATH IN ENTERTAINMENT?
I grew up with Pixar movies and lots of animated short films. I also played video games. I was always fascinated by what was behind those things and wanted to replicate them, which I did by re-writing games or effects seen in movies.

I started by using existing tools. Then, during high school, thanks to my older cousin, I found Basic and started writing my own tools. I found that I was able to control external devices with Basic and my Commodore 64. I also started enjoying electronics and micro-controllers. All of this culminated in my thesis at university, when I created a data-glove from scratch, from the hardware to the software, and started looking at example applications for it. This was between 1999 and 2001, when I also started working at the Human-Computer Interaction Lab.

WHAT’S YOUR FAVORITE PART OF THE JOB?
It’s challenging, in a good way, every day. As a problem solver, I like that part of my job.

WHAT’S YOUR LEAST FAVORITE?
Sometimes too many meetings, but it’s important to communicate with every department and understand their needs. 

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Probably teaching and researching at university in Human-Computer Interaction. 

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Just to name some of them: War for the Planet of the Apes, Valerian, The BFG and Guardians of the Galaxy Vol. 2.          

WHAT IS THE PROJECT/S THAT YOU ARE MOST PROUD OF?
I was lucky enough to be at Weta Digital when we worked on Avatar and The Jungle Book, which both won Oscars for Best Visual Effects, and also The Adventures of Tintin, where I was directly involved in the hair-rendering process and all the TopoClouds tools for the Pantaray pipeline.

WHAT TOOLS DO YOU USE DAY TO DAY?
Nowadays, it’s my email client, my phone and, far less often, a text editor and C++ compilers.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Mostly enjoy time with my wife, my cats, video games and the gym when I can.

Adobe Max 2018: Creative Cloud updates and more

By Mike McCarthy

I attended my first Adobe Max last week in Los Angeles. This huge conference takes over the LA convention center and overflows into the surrounding venues. It began on Monday morning with a two-and-a-half-hour keynote outlining the developments and features being released in the newest updates to Adobe’s Creative Cloud. This was followed by all sorts of smaller sessions and training labs for attendees to dig deeper into the new capabilities of the various tools and applications.

The South Hall was filled with booths from various hardware and software partners, with more available than any one person could possibly take in. Tuesday started off with some early morning hands-on labs, followed by a second keynote presentation about creative and career development. I got a front row seat to hear five different people, who are successful in their creative fields — including director Ron Howard — discuss their approach to work and life. The rest of the day was so packed with various briefings, meetings and interviews that I didn’t get to actually attend any of the classroom sessions.

By Wednesday, the event was beginning to wind down, but there was still a plethora of sessions and other options for attendees to split their time between. I presented the workflow for my most recent project, Grounds of Freedom, at Nvidia’s booth in the community pavilion, and spent the rest of the time connecting with other hardware and software partners who had a presence there.

Adobe released updates for most of its creative applications concurrent with the event. Many of the most relevant updates to the video tools were previously announced at IBC in Amsterdam last month, so I won’t repeat those, but there are still a few new video features, as well as many that are broader in scope regarding media as a whole.

Adobe Premiere Rush
The biggest video-centric announcement is Adobe Premiere Rush, which offers simplified video editing workflows for mobile devices and PCs.  Currently releasing on iOS and Windows, with Android to follow in the future, it is a cloud-enabled application, with the option to offload much of the processing from the user device. Rush projects can be moved into Premiere Pro for finishing once you are back on the desktop.  It will also integrate with Team Projects for greater collaboration in larger organizations. It is free to start using, but most functionality will be limited to subscription users.

Let’s keep in mind that I am a finishing editor for feature films, so my first question (as a Razr-M user) was, “Who wants to edit video on their phone?” But what if the user shot the video on their phone? I don’t do that, but many people do, so I know this will be a valuable tool. This has me thinking about my own mentality toward video. If I were a sculptor, I would be sculpting stone, while many people are sculpting with clay or Silly Putty. Because of that, I would have trouble sculpting in clay and see little value in tools that can only sculpt clay. But there is probably benefit to being well versed in both.

I would have no trouble showing my son’s first-year video compilation to a prospective employer because it is just that good: I don’t make anything less than that. But there was no second-year video, even though I have the footage, because that level of work takes way too much time. So I need to break free from that mentality and get better at producing content that is “sufficient to tell a story” without being “technically and artistically flawless.” Learning to use Adobe Rush might be a good way for me to take a step in that direction. As a result, we may eventually see more videos in my articles as well. The current ones took me way too long to produce, but Rush should allow me to create content in a much shorter timeframe, if I am willing to compromise a bit on the precision and control offered by Premiere Pro and After Effects.

Rush allows up to four layers of video, with various effects and 32-bit Lumetri color controls, as well as AI-based audio filtering for noise reduction and de-reverb and lots of preset motion graphics templates for titling and such.  It should allow simple videos to be edited relatively easily, with good looking results, then shared directly to YouTube, Facebook and other platforms. While it doesn’t fit into my current workflow, I may need to create an entirely new “flow” for my personal videos. This seems like an interesting place to start, once they release an Android version and I get a new phone.

Photoshop Updates
There is a new version of Photoshop released nearly every year, and most of the time I can’t tell the difference between the new and the old. This year’s differences will probably be a lot more apparent to most users after a few minutes of use. The Undo command now works like it does in other apps instead of being limited to toggling the last action. Transform operates very differently: proportional transform is now the default behavior instead of requiring users to hold Shift every time they scale, the anchor point can be hidden to prevent people from moving the anchor instead of the image, and the “commit changes” step at the end has been removed. All positive improvements, in my opinion, that might take a bit of getting used to for seasoned pros.

There is also a new Framing Tool, which allows you to scale or crop any layer to a defined resolution. Maybe I am the only one, but I frequently find myself creating new documents in Photoshop just so I can drag a new layer, preset to the resolution I need, back into my current document. For example, I need a 200x300px box in the middle of my HD frame; how else do you do that currently? The Framing Tool should fill that hole in the feature set, giving more precise control over layer and object sizes and positions, as well as providing easily adjustable, non-destructive masking.

They also showed off a very impressive AI-based auto selection of the subject or background. It creates a standard selection that can be manually modified anywhere the initial attempt didn’t give you what you were looking for. Being someone who gives software demos, I don’t trust prepared demonstrations, so I wanted to try it for myself with a real-world asset. I opened one of the source photos from my animation project and clicked the “Select Subject” button with no further input. The result needed some cleanup at the bottom and refinement in the newly revamped “Select & Mask” tool, but it was a huge improvement over what I had to do on hundreds of layers earlier this year. They also demonstrated a similar feature they are working on for video footage in Tuesday night’s Sneak previews. Named “Project Fast Mask,” it automatically propagates masks of moving objects through video frames and, while not released yet, it looks promising. Combined with the content-aware background fill for video that Jason Levine demonstrated in After Effects during the opening keynote, basic VFX work is going to get a lot easier.

There are also some smaller changes to the UI, allowing math expressions in the numerical value fields and making it easier to differentiate similarly named layers by showing the beginning and end of the name if it gets abbreviated.  They also added a function to distribute layers spatially based on the space between them, which accounts for their varying sizes, compared to the current solution which just evenly distributes based on their reference anchor point.

In other news, Photoshop is coming to iPad, and while that doesn’t affect me personally, I can see how this could be a big deal for some people. They have offered various trimmed down Photoshop editing applications for iOS in the past, but this new release is supposed to be based on the same underlying code as the desktop version and will eventually replicate all functionality, once they finish adapting the UI for touchscreens.

New Apps
Adobe also showed off Project Gemini, a sketch and painting tool for iPad that sits somewhere between Photoshop and Illustrator (hence the name, I assume). This doesn’t have much direct application to video workflows besides being able to record time-lapses of a sketch, which should make it easier to create those whiteboard-illustration videos that are becoming more popular.

Project Aero is a tool for creating AR experiences, and I can envision Premiere and After Effects being critical pieces in the puzzle for creating the visual assets that Aero will be placing into the augmented reality space.  This one is the hardest for me to fully conceptualize. I know Adobe is creating a lot of supporting infrastructure behind the scenes to enable the delivery of AR content in the future, but I haven’t yet been able to wrap my mind around a vision of what that future will be like.  VR I get, but AR is more complicated because of its interface with the real world and due to the variety of forms in which it can be experienced by users.  Similar to how web design is complicated by the need to support people on various browsers and cell phones, AR needs to support a variety of use cases and delivery platforms.  But Adobe is working on the tools to make that a reality, and Project Aero is the first public step in that larger process.

Community Pavilion
Adobe’s partner companies in the Community Pavilion were showing off a number of new products. Dell has a new 49-inch IPS monitor, the U4919DW, which offers the resolution and desktop space of two 27-inch QHD displays without the seam (5120×1440, to be exact). HP was displaying its recently released ZBook Studio x360 convertible laptop workstation (which I will be posting a review of soon), as well as its ZBook x2 tablet and the rest of its Z workstations. Nvidia was exhibiting its new Turing-based cards with 8K Red decoding acceleration, ray tracing in Adobe Dimension and other GPU-accelerated tasks. AMD was demoing 4K Red playback on a MacBook Pro with an eGPU solution, and CPU-based ray tracing on its Ryzen systems. The other booths spanned the gamut from GoPro cameras and server storage devices to paper stock products for designers. I even won a Thunderbolt 3 docking station at Intel’s booth. (Although in the next drawing they gave away a brand-new Dell Precision 5530 2-in-1 convertible laptop workstation.) Microsoft also garnered quite a bit of attention when it gave away 30 MS Surface tablets near the end of the show. There was lots to see and learn everywhere I looked.

The Significance of MAX
Adobe MAX is quite a significant event, especially now that I have been in the industry long enough to start to see the evolution of certain trends; things are not as static as we may expect. I have attended NAB for the last 12 years, and the focus of that show has shifted significantly away from my primary professional focus (no Red, Nvidia or Apple booths, among many other changes). This was the first year that I had the thought “I should have gone to Sundance,” and a number of other people I know had the same impression. Adobe Max is similar, although I have been a little slower to catch on to that change. It has been happening for over ten years but has grown dramatically in size and significance recently. If I still lived in LA, I probably would have started attending sooner, but it was hardly on my radar until three weeks ago. Now that I have seen it in person, I probably won’t miss it in the future.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

NAB NY: A DP’s perspective

By Barbie Leung

At this year’s NAB New York show, my third, I was able to wander the aisles in search of tools that fit into my world of cinematography. Here are just a few things that caught my eye…

Blackmagic, which had a large booth at the entrance to the hall, was giving demos of Resolve 15, among other tools. Panasonic also had a strong presence mid-floor, with an emphasis on its EVA-1 cameras. As usual, B&H attracted a lot of attention, as did ARRI, which brought a couple of Trinity rigs to demo.

During the HDR Video Essentials session, colorist Juan Salvo of TheColourSpace talked about the emerging HDR10+ standard proposed by Samsung and Amazon Video. Also mentioned was the trend of consumer displays getting brighter every year and that trend’s impact on content creation and grading. Salvo pointed out the affordability of LG’s C7 OLEDs (about 700 nits) for use as client monitors, while Flanders Scientific (which had a booth at the show) remains the expensive standard for grading. It was interesting to note that LG, while being the show’s official display partner, was conspicuously absent from the floor.

Many of the panels and presentations unsurprisingly focused on content monetization: how to monetize faster and cheaper. Amazon Web Services’ stage sessions emphasized various AWS Elemental technologies, including automating the creation of highlight clips for content like sports video, using facial recognition algorithms to generate closed captioning, and improving the streaming experience onboard airplanes. The latter could ultimately make content delivery streamlined enough for airlines to open this currently untapped space to advertisers.

Editor Janis Vogel, a board member of the Blue Collar Post Collective, spoke at the #galsngear “Making Waves” panel, and noted the progression toward remote work in her field. She highlighted the fact that DaVinci Resolve, which had already made it possible for color work to be done remotely, is now also making it possible for editors to collaborate remotely. The ability to work remotely gives professionals the choice to work outside of the expensive-to-live-in major markets, which is highly desirable given that producers are trying to make more and more content while keeping budgets low.

Speaking on the same panel, director of photography/camera operator Selene Richholt spoke to the fact that crews themselves are being monetized, with content producers either asking production and post pros to provide standard service at substandard rates or asking for more services without paying more.

On a more exciting note, she cited recent 9×16 projects that she has shot with the camera mounted vertically (as opposed to shooting 16×9 and cropping in) in order to take full advantage of lens properties. She looks forward to the trend of more projects that mix aspect ratios and push aesthetics.

Well, that’s it for this year. I’m already looking forward to next year.

Barbie Leung is a New York-based cinematographer and camera operator working in film, music video and branded content. Her work has played Sundance, the Tribeca Film Festival, Outfest and Newfest. She is also the DCP mastering technician at the Tribeca Film Festival.

Report: Sound for Film & TV conference focuses on collaboration

By Mel Lambert

The 5th annual Sound for Film & TV conference was once again held at Sony Pictures Studios in Culver City, in cooperation with Motion Picture Sound Editors, Cinema Audio Society and Mix magazine. The one-day event, which attracted some 650 attendees, featured a keynote address from veteran sound designer Scott Gershin, together with a broad cross-section of panel discussions on virtually all aspects of contemporary sound and post production. Co-sponsors included Audionamix, Sound Particles, Tonsturm, Avid, Yamaha-Steinberg, iZotope, Meyer Sound, Dolby Labs, RSPE, Formosa Group and Westlake Audio.

With film credits that include Pacific Rim and The Book of Life, keynote speaker Gershin focused on advances in immersive sound and virtual reality experiences. Having recently joined the Sound Lab at Keywords Studios, the sound designer and supervisor emphasized that “a single sound can set a scene,” ranging from a subtle footstep to an echo-laden yell of terror. “I like to use audio to create a foreign landscape, and produce immersive experiences,” he said, stressing that “dialog forms the center of attention, with music that shapes a scene emotionally and sound effects that glue the viewer into the scene.” He concluded, “It is our role to develop a credible world with sound.”

The Sound of Streaming Content — The Cloverfield Paradox
Avid-sponsored panels within the Cary Grant Theater included an overview of OTT techniques titled “The Sound of Streaming Content,” which was moderated by Ozzie Sutherland, a production sound technology specialist with Netflix. Focusing on the sound design and re-recording of the recent Netflix/Paramount Pictures sci-fi mystery The Cloverfield Paradox from director Julius Onah, the panel included supervising sound editor/re-recording mixer Will Files, co-supervising sound editor/sound designer Robert Stambler and supervising dialog editor/re-recording mixer Lindsey Alvarez. Files and Stambler have collaborated on several projects with director J. J. Abrams through Abrams’ Bad Robot production company, including Star Trek: Into Darkness (2013), Star Wars: The Force Awakens (2015) and 10 Cloverfield Lane (2016), as well as Venom (2018).

The Sound of Streaming Content panel: (L-R) Ozzie Sutherland, Will Files, Robert Stambler and Lindsey Alvarez

“Our biggest challenge,” Files readily acknowledged, “was the small crew we had on the project; initially, it was just Robby [Stambler] and me for six months. Then Star Wars: The Force Awakens came along, and we got busy!” “Yes,” confirmed Stambler, “we spent between 16 and 18 months on post production for The Cloverfield Paradox, which gave us plenty of time to think about sound; it was an enlightening experience, since everything happens off-screen.” The film, starring Gugu Mbatha-Raw, David Oyelowo and Daniel Brühl, follows a team of scientists orbiting a planet on the brink of war as they try to solve an energy crisis, culminating in a dark alternate reality.

Having screened a pivotal scene from the film, in which the spaceship’s crew discovers the effects of interdimensional travel while hearing strange sounds in a corridor, Alvarez explained how the complex dialog elements came into play: “That ‘Woman in the Wall’ scene involved a lot of Mandarin-language lines, 50% of which were re-written to modify the story lines and then added in ADR.” “We also used deep, layered sounds,” Stambler said, “to emphasize the screams,” produced by an astronaut from another dimension that had become fused with the ship’s hull. Continued Stambler, “We wanted to emphasize the mystery as the crew removes a cover panel: What is behind the wall? Is there really a woman behind the wall?” “We also designed happy parts of the ship and angry parts,” Files added. “Dependent on where we were on the ship, we emphasized that dominant flavor.”

Files explained that the theatrical mix for The Cloverfield Paradox in Dolby Atmos immersive surround took place at producer Abrams’ Bad Robot screening theater, with a temporary Avid S6 M40 console. Files also mixed the first Atmos film, Brave, back in 2012. “J. J. [Abrams] was busy at the time,” Files said, “but wanted to be around and involved,” as the soundtrack took shape. “We also had a sound-editorial suite close by,” Stambler noted. “We used several Futz elements from the Mission Control scenes as Atmos Objects,” added Alvarez.

“But then we received a request from Netflix for a near-field Atmos mix” that could be used for over-the-top streaming, Files recalled. “So we lowered the overall speaker levels and monitored on smaller speakers to ensure that we could hear the dialog elements clearly. Our Atmos balance also translated seamlessly to 5.1- and 7.1-channel delivery formats.”

“I like mixing in Native Atmos because you can make final decisions with creative talent in the room,” Files concluded. “You then know that everything will work in 5.1 and 7.1. If you upmix to Atmos from 7.1, for example, the creatives have often left by the time you get to the Atmos mix.”

The Sound and Music of Director Damien Chazelle’s First Man
The series of “Composers Lounge” presentations held in the Anthony Quinn Theater, sponsored by SoundWorks Collection and moderated by Glenn Kiser from The Dolby Institute, included “The Sound and Music of First Man” with sound designer/supervising sound editor/SFX re-recording mixer Ai-Ling Lee, supervising sound editor Mildred Iatrou Morgan, SFX re-recording mixer Frank Montaño, dialog/music re-recording mixer Jon Taylor, composer Justin Hurwitz and picture editor Tom Cross. First Man takes a close look at the life of astronaut Neil Armstrong and the space mission that led him to become the first man to walk on the moon in July 1969. It stars Ryan Gosling, Claire Foy and Jason Clarke.

Having worked with the film’s director, Damien Chazelle, on two previous outings — La La Land (2016) and Whiplash (2014) — Cross advised that he likes to have sound available on his Avid workstation as soon as possible. “I had some rough music for the big action scenes,” he said, “together with effects recordings from Ai-Ling [Lee].” The latter included some of the SpaceX rockets, plus recordings of space suits and other NASA artifacts. “This gave me a sound bed for my first cut,” the picture editor continued. “I sent that temp track to Ai-Ling for her sound design and SFX, and to Milly [Iatrou Morgan] for dialog editorial.”

A key theme for the film was its documentary style, Taylor recalled: “That guided the shape of the soundtrack and the dialog pre-dubs. They had a cutting room next to the Hitchcock Theater [at Universal Studios, used for pre-dub mixes and finals] so that we could monitor progress.” There were no temp mixes on this project.

“We had a lot of close-up scenes to support Damien’s emotional feel, and used sound to build out the film,” Cross noted. “Damien watched a lot of NASA footage shot on 16 mm film, and wanted to make our film [immersive] and personal, using Neil Armstrong as a popular icon. In essence, we were telling the story as if we had taken a 16 mm camera into a capsule and shot the astronauts into space. And with an Atmos soundtrack!”

“We pre-scored the soundtrack against animatics in March 2017,” commented Hurwitz. “Damien [Chazelle] wanted to storyboard to music and use that as a basis for the first cut. I developed some themes on a piano and then full orchestral mock-ups for picture editorial. We then re-scored the film after we had a locked picture.” “We developed a grounded, gritty feel to support the documentary style that was not too polished,” Lee continued. “For the scenes on Earth we went for real-sounding backgrounds, Foley and effects. We also narrowed the mix field to complement the narrow image but, in contrast, opened it up for the set pieces to surround the audience.”

“The dialog had to sound how the film looked,” Morgan stressed. “To create that real-world environment I often used the mix channel for dialog in busy scenes like mission control, instead of the [individual] lavalier mics with their cleaner output. We also miked everybody in Mission Control – maybe 24 tracks in all.” “And we secured as many authentic sound recordings as we could,” Lee added. “In order to emphasize the emotional feel of being inside Neil Armstrong’s head space, we added surreal and surprising sounds like an elephant roar, lion growl or animal stampede to these cockpit sequences. We also used distortion and over-modulation to add ‘grit’ and realism.”

“It was a Native Atmos mix,” advised Montaño. “We used Atmos to reflect what the picture showed us, but not in a gimmicky way.” “During the rocket launch scenes,” Lee offered, “we also used the Atmos full-range surround channels to place many of the full-bodied, bombastic rocket roars and explosions around the audience.” “But we wanted to honor the documentary style,” Taylor added, “by keeping the music within the front LCR loudspeakers, and not coming too far out into the surrounds.”

“A Star Is Born” panel: (L-R) Steve Morrow, Dean Zupancic and Nick Baxter

The Sound of Director Bradley Cooper’s A Star Is Born
A subsequent panel discussion in the “Composers Lounge” series, again moderated by Kiser, focused on “The Sound of A Star Is Born,” with production sound mixer Steve Morrow, music production mixer Nick Baxter and re-recording mixer Dean Zupancic. The film is a retelling of the classic tale of a musician – Jackson Maine, played by Cooper – who helps a struggling singer find fame, even as age and alcoholism send his own career into a downward spiral. Morrow recounted that the director’s costar, Lady Gaga, insisted that all vocals be recorded live.

“We arranged to record scenes during concerts at the Stagecoach 2017 Festival,” the production mixer explained. “But because these were new songs that would not be heard in the film until 18 months later, [to prevent unauthorized bootlegs] we had to keep the sound out of the PA system, and feed a pre-recorded band mix to on-stage wedges or in-ear monitors.” “We had just a handful of minutes before Willie Nelson was scheduled to take the stage,” Baxter added, “and so we had to work quickly” in front of an audience of 45,000 fans. “We rolled on the equipment, hooked up the microphones, connected the monitors and went for it!”

To recreate the sound of real-world concerts, Baxter made impulse-response recordings of each venue – in stereo as well as 5.1- and 7.1-channel formats. “To make the soundtrack sound totally live,” Morrow continued, “at the Coachella Festival we also captured the IR sound echoing off nearby mountains.” Other scenes were shot during Lady Gaga’s “Joanne” tour in August 2017 during a stop in Los Angeles, and others in the Palm Springs Convention Center, where Cooper’s character is seen performing at a pharmaceutical convention.
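
Impulse-response capture is what makes this trick work in the mix: convolving a dry studio recording with a venue’s measured IR imprints that venue’s reflections and decay onto the signal. Here is a minimal sketch of the idea using synthetic stand-in data (the production’s actual tools aren’t documented here); a 5.1 or 7.1 IR would simply be convolved one channel at a time.

```python
import numpy as np
from scipy.signal import fftconvolve

def place_in_venue(dry, impulse_response):
    """Convolve a dry recording with a measured venue IR so the
    result carries that venue's reflections and reverb tail."""
    wet = fftconvolve(dry, impulse_response)
    return wet / np.max(np.abs(wet))   # normalize to avoid clipping

sr = 48000
dry = np.random.randn(sr * 2)                            # stand-in for a real take
ir = np.exp(-np.linspace(0.0, 8.0, sr)) * np.random.randn(sr)  # decaying "room"
wet = place_in_venue(dry, ir)                            # dry take, now "in" the venue
```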

“For scenes filmed at the Glastonbury Festival in the UK in front of 110,000 people,” Morrow recalled, “we had been allocated just 10 minutes to record parts for two original songs — ‘Maybe It’s Time’ and ‘Black Eyes’ — ahead of Kris Kristofferson’s set. But then we were told that, because the concert was running late, we only had three minutes. So we focused on securing 30 seconds of guitar and vocals for each song.”

During a scene shot in a parking lot outside a food market, where Lady Gaga’s character sings a cappella, Morrow advised that he had four microphones on the actors: “Two booms, top and bottom, for Bradley Cooper’s voice, plus lavalier mics; we used the boom track when Lady Gaga (as Ally) belted out. I always had my hand on the gain knob! That was a key scene because it established for the audience that Ally can sing.”

Zupancic noted that first-time director Cooper was intimately involved in all aspects of post production, just as he was in production. “Bradley Cooper is a student of film,” he said. “He worked closely with supervising sound editor Alan Robert Murray on the music and SFX collaboration.” The high-energy Atmos soundtrack was realized at Warner Bros Studio Facilities’ post production facility in Burbank; additional re-recording mixers included Michael Minkler, Matthew Iadarola and Jason King, who also handled SFX editing.

An Avid session called “Monitoring and Control Solutions for Post Production with Immersive Audio” featured the company’s senior product specialist, Jeff Komar, explaining how Pro Tools with an S6 Controller and an MTRX interface can manage complex immersive audio projects, while a MIX Panel entitled “Mixing Dialog: The Audio Pipeline,” moderated by Karol Urban from Cinema Audio Society, brought together re-recording mixers Gary Bourgeois and Mathew Waters with production mixer Phil Palmer and sound supervisor Andrew DeCristofaro. “The Business of Immersive,” moderated by Gadget Hopkins, EVP with Westlake Pro, addressed immersive audio technologies, including Dolby Atmos, DTS and Auro 3D; other key topics included outfitting a post facility, new distribution paradigms and ROI while future-proofing a stage.

A companion “Parade of Carts & Bags,” presented by Cinema Audio Society in the Barbra Streisand Scoring Stage, enabled production sound mixers to show off their highly customized methods of managing the tools of their trade, from large soundstage productions to reality TV and documentaries.

Finally, within the Atmos-equipped William Holden Theater, the regular “Sound Reel Showcase,” sponsored by Formosa Group, presented eight-minute reels from films likely to be in consideration for a Best Sound Oscar, MPSE Golden Reel and CAS Awards, including A Quiet Place (Paramount) introduced by Erik Aadahl, Black Panther introduced by Steve Boeddecker, Deadpool 2 introduced by Martyn Zub, Mile 22 introduced by Dror Mohar, Venom introduced by Will Files, Goosebumps 2 introduced by Sean McCormack, Operation Finale introduced by Scott Hecker, and Jane introduced by Josh Johnson.

Main image: The Sound of First Man panel — Ai-Ling Lee (left), Mildred Iatrou Morgan and Tom Cross.

All photos copyright of Mel Lambert


Mel Lambert has been involved with production industries on both sides of the Atlantic for more years than he cares to remember. He can be reached at mel.lambert@content-creators.com. He is also a long-time member of the UK’s National Union of Journalists.


GoPro introduces new Hero7 camera lineup

GoPro’s new Hero7 lineup includes the company’s flagship Hero7 Black, which comes with a timelapse video mode, live streaming and improved video stabilization. The new video stabilization, HyperSmooth, allows users to capture professional-looking, gimbal-like stabilized video without a motorized gimbal. HyperSmooth also works underwater and in high-shock and wind situations where gimbals fail.
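
GoPro hasn’t published how HyperSmooth works internally, but it belongs to the general family of electronic (software) stabilization. As a rough sketch of that technique, under my own assumptions rather than GoPro’s design: estimate frame-to-frame motion, low-pass the accumulated camera path, then warp each frame by the difference between the smoothed and actual paths.

```python
import cv2
import numpy as np

def stabilize(frames, radius=15):
    """Generic electronic stabilization sketch: estimate per-frame
    shift, smooth the camera path, warp frames by the difference."""
    gray = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY).astype(np.float32)
            for f in frames]
    path = [np.zeros(2)]
    for a, b in zip(gray, gray[1:]):
        (dx, dy), _ = cv2.phaseCorrelate(a, b)   # frame-to-frame shift
        path.append(path[-1] + (dx, dy))
    path = np.array(path)

    # Moving-average low-pass of the camera trajectory.
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    smoothed = np.column_stack(
        [np.convolve(path[:, i], kernel, mode="same") for i in (0, 1)])

    h, w = frames[0].shape[:2]
    out = []
    for frame, (cx, cy) in zip(frames, smoothed - path):
        M = np.float32([[1, 0, cx], [0, 1, cy]])   # corrective shift
        out.append(cv2.warpAffine(frame, M, (w, h)))
    return out
```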

With Hero7 Black, GoPro is also introducing a new form of video called TimeWarp. TimeWarp Video applies a high-speed, “magic-carpet-ride” effect, transforming longer experiences into short, flowing videos. Hero7 Black is the first GoPro to live stream, enabling users to automatically share in realtime to Facebook, Twitch, YouTube, Vimeo and other platforms internationally.

Other Hero7 Black features:

  • SuperPhoto – Intelligent scene analysis for professional-looking photos via automatically applied HDR, Local Tone Mapping and Multi-Frame Noise Reduction
  • Portrait Mode – Native vertical-capture for easy sharing to Instagram Stories, Snapchat and others
  • Enhanced Audio – Re-engineered audio captures increased dynamic range, new microphone membrane reduces unwanted vibrations during mounted situations
  • Intuitive Touch Interface – 2-inch touch display with simplified user interface enables native vertical (portrait) use of camera
  • Face, Smile + Scene Detection – Hero7 Black recognizes faces, expressions and scene-types to enhance automatic QuikStory edits on the GoPro app
  • Short Clips – Restricts video recording to 15- or 30-second clips for faster transfer to phone, editing and sharing.
  • High Image Quality – 4K/60 video and 12MP photos
  • Ultra Slo-Mo – 8x slow motion in 1080p240
  • Waterproof – Waterproof without a housing to 33ft (10m)
  • Voice Control – Verbal commands are hands-free in 14 languages
  • Auto Transfer to Phone – Photos and videos move automatically from camera to phone when connected to the GoPro app for on-the-go sharing
  • GPS Performance Stickers – Users can track speed, distance and elevation, then highlight them by adding stickers to videos in the GoPro app

The Hero7 Black is available now on pre-order for $399.

Panavision, Sim, Saban Capital agree to merge

Saban Capital Acquisition Corp., a publicly traded special purpose acquisition company, Panavision and Sim Video International have agreed to combine their businesses to create a premier global provider of end-to-end production and post production services for the entertainment industry. Under the terms of the business combination agreement, Panavision and Sim will become wholly owned subsidiaries of Saban Capital Acquisition Corp. Upon completion, Saban Capital Acquisition Corp. will change its name to Panavision Holdings Inc. and is expected to continue to trade on the Nasdaq stock exchange. Kim Snyder, president and chief executive officer of Panavision, will serve as chairman and chief executive officer of the combined company. Bill Roberts, chief financial officer of Panavision, will serve in that role for the combined company.

Panavision designs, manufactures and provides high-precision optics and camera technology for the entertainment industry and is a leading global provider of production equipment and services. Sim is a leading provider of production and post production solutions with facilities in Los Angeles, Vancouver, Atlanta, New York and Toronto.

“This acquisition will leverage the best of Panavision’s and Sim’s resources by providing comprehensive products and services to best address the ever-adapting needs of content creators globally,” says Snyder.

“We’re combining the talent and integrated services of Sim with two of the biggest names in the business, Panavision and Saban,” adds James Haggarty, president and CEO of Sim. “The resulting scale of the new combined enterprise will better serve our clients and help shape the content-creation landscape.”

The respective boards of directors of Saban Capital Acquisition Corp., Panavision and Sim have unanimously approved the merger with completion subject to Saban Capital Acquisition Corp. stockholder approval, certain regulatory approvals and other customary closing conditions. The parties expect that the process will be completed in the first quarter of 2019.

Quantum upgrades Xcellis scale-out storage with StorNext 6.2, NVMe tech

Quantum has made enhancements to its Xcellis scale-out storage appliance portfolio with an upgrade to StorNext 6.2 and the introduction of NVMe storage. StorNext 6.2 bolsters performance for 4K and 8K video while enhancing integration with cloud-based workflows and global collaborative environments. NVMe storage significantly accelerates ingest and other aspects of media workflows.

Quantum’s Xcellis scale-out appliances provide high performance for increasingly demanding applications and higher-resolution content. Adding NVMe storage to the Xcellis appliances offers ultra-fast performance: 22GB/s of single-client, uncached streaming bandwidth. Excelero’s NVMesh technology, in combination with StorNext, ensures all data is accessible by multiple clients in a global namespace, making Flash-based resources easy to access and cost-effective to share.
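
To put 22GB/s in context, a back-of-the-envelope count of uncompressed streams (frame sizes here are rough approximations, not Quantum’s figures):

```python
def stream_gb_per_s(width, height, bytes_per_pixel, fps):
    """Uncompressed video bandwidth in decimal GB/s."""
    return width * height * bytes_per_pixel * fps / 1e9

# Rough assumption: ~4 bytes/pixel for 10-bit RGB-ish frames at 24fps.
uhd = stream_gb_per_s(3840, 2160, 4, 24)   # ~0.8 GB/s per stream
k8 = stream_gb_per_s(7680, 4320, 4, 24)    # ~3.2 GB/s per stream

print(f"~{22 / uhd:.0f} UHD streams, ~{22 / k8:.0f} 8K streams")
```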

Xcellis provides cross-protocol locking for shared access across SAN, NFS and SMB, helping users share content across both Fibre Channel and Ethernet.

With StorNext 6.2, Quantum now offers an S3 interface to Xcellis appliances, allowing them to serve as targets for applications designed to write to RESTful interfaces. This allows pros to use Xcellis as either a gateway to the cloud or as an S3 target for web-based applications.
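
In practice, an S3 target like this should accept any client that speaks the S3 REST API. A hypothetical sketch using boto3 with a custom endpoint (the hostname, credentials and bucket are invented for illustration; Quantum’s actual configuration isn’t documented here):

```python
import boto3

# Hypothetical endpoint, bucket and credentials for illustration only.
s3 = boto3.client(
    "s3",
    endpoint_url="https://xcellis.example.local:9000",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)
s3.upload_file("render_v03.mov", "dailies", "projectx/render_v03.mov")
```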

Xcellis environments can now be managed with a new cloud monitoring tool that enables Quantum’s support team to monitor critical customer environmental factors, speed time to resolution and ultimately increase uptime. When combined with Xcellis Web Services — a suite of services that lets users set policies and adjust system configuration — overall system management is streamlined.

Available with StorNext 6.2, enhanced FlexSync replication capabilities enable users to create local or remote replicas of multitier file system content and metadata. With the ability to protect data for both high-performance systems and massive archives, users now have more flexibility to protect a single directory or an entire file system.

StorNext 6.2 lets administrators provide defined and enforceable quotas and implement quality of service levels for specific users, and it simplifies reporting of used storage capacity. These new features make it easier for administrators to manage large-scale media archives efficiently.

The new S3 interface and NVMe storage option are available today. The other StorNext features and capabilities will be available by December 2018.


Colorfront supports HDR, UHD, partners again with AJA

By Molly Hill

Colorfront released new products and updated support for current products at NAB 2018, expanding its partnership with AJA. Both companies had demos of the new HDR Image Analyzer for UHD, HDR and WCG analysis. It can handle 4K, HDR and 60fps in realtime, and shows information in various view modes, including parade, pixel picker, color gamut and audio.

Other software updates include support for new cameras in On-Set Dailies and Express Dailies, as well as the inclusion of HDR analysis tools. QC Player and Transkoder 2018 were also released, with the latter now optimized for HDR and UHD.

Colorfront also demonstrated its tone-mapping capabilities (SDR-to-HDR and HDR-to-SDR) directly in the Transkoder software, without the FS-HDR hardware (which is meant more for broadcast). Static (one-light) or dynamic (per-shot) mapping is available in either direction. Customization is available for different color gamuts, as well as peak brightness on a sliding scale, so it’s not limited to a preset LUT. Even the static SDR-to-HDR mapping looked great, with mostly faithful color reproduction.

The only issues were some slight hue shifts from blue to green, and clipping in some of the highlights in the HDR version, despite detail being available in the original SDR. Overall, it’s an impressive system that can save time and money for low-budget films when there isn’t the budget to hire a colorist to do a second pass.
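
Colorfront’s engine is proprietary, but the static SDR-to-HDR case can be sketched generically: keep diffuse white near its SDR level and expand only the brightest values toward the chosen peak, which is what a peak-brightness slider implies. The math below is my own illustrative stand-in, not Colorfront’s:

```python
import numpy as np

def expand_sdr(sdr, peak_nits=1000.0, ref_white=203.0):
    """Static SDR->HDR expansion: hold diffuse white near its SDR
    level and stretch only the brightest values toward peak_nits.
    Illustrative math only, not Colorfront's engine."""
    sdr = np.asarray(sdr, dtype=np.float64)   # normalized 0-1 code
    nits = ref_white * sdr ** 2.4             # approximate display light
    boost = 1 + (peak_nits / ref_white - 1) * sdr ** 4
    return np.clip(nits * boost, 0.0, peak_nits)

# A "dynamic" (per-shot) mode would re-derive peak_nits and the curve
# shape from each shot's measured levels instead of one global setting.
print(expand_sdr([0.5, 1.0]))   # midtones barely move; peaks reach 1000
```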

Samsung’s 360 Round for 3D video

Samsung showed an enhanced Samsung 360 Round camera solution at NAB, with updates to its live streaming and post production software. The new solution gives professional video creators the tools they need — from capture to post — to tell immersive 360-degree and 3D stories for film and broadcast.

“At Samsung, we’ve been innovating in the VR technology space for many years, including introducing the 360 Round camera with its ruggedized design, superior low light and live streaming capabilities late last year,” says Eric McCarty of Samsung Electronics America.

The Samsung 360 Round delivers realtime 3D video to PCs using its bundled software, and video creators can now view live video on their mobile devices using the 360 Round live preview app. The app also allows creators to remotely control the camera settings over a Wi-Fi router. The updated 360 Round PC software now provides dual-monitor support, which allows the editor to make adjustments and show the results on a separate monitor dedicated to the director.

Limited luminance levels (16-135), noise reduction and sharpness adjustments, as well as a hardware IR filter, make it possible to get a clear shot in almost no light. The 360 Round also offers advanced stabilization software and the ability to color-correct on the fly with an intuitive, easy-to-use histogram. In addition, users can set up profiles for each shot and save the camera settings, cutting down on the time required to prep each shot.

The 360 Round comes with Samsung’s advanced Stitching software, which weaves together video from each of the 360 Round’s 17 lenses. Creators can stitch, preview and broadcast in one step on a PC without the need for additional software. The 360 Round also enables fine-tuning of seamlines during a live production, such as moving them away from objects in realtime and calibrating individual stitchlines to fix misalignments. In addition, a new local warping feature allows for individual seamline calibrations in post, without requiring a global adjustment to all seamlines, giving creators quick and easy, fine-grain control of the final visuals.

The 360 Round delivers realtime 4K x 4K (3D) streaming with minimal latency. SDI capture card support enables live streaming through multiple cameras and broadcasting equipment with no additional encoding/decoding required. The newest update further streamlines the switching workflow for live productions with audio over SDI, allowing a single producer to manage both audio and video switching from one source as the production transitions from camera to camera.

Additional new features:

  • Ability to record, stream and save RAW files simultaneously, making the process of creating dailies and managing live productions easier. Creators can now save the RAW files to make further improvements to live production recordings and create a higher quality post version to distribute as VOD.
  • Live streaming support for HLS over HTTP, which adds another transport streaming protocol in addition to the RTMP and RTSP protocols. HLS over HTTP eliminates the need to modify some restrictive enterprise firewall policies and is a more resilient protocol in unreliable networks.
  • Ability to upload direct (via 360 Round software) to Samsung VR creator account, as well as Facebook and YouTube, once the files are exported.

Blackmagic releases Resolve 15, with integrated VFX and motion graphics

Blackmagic has released Resolve 15, a massive update that fully integrates visual effects and motion graphics, making it the first solution to combine professional offline and online editing, color correction, audio post production, multi-user collaboration and visual effects together in one software tool. Resolve 15 adds an entirely new Fusion page with over 250 tools for compositing, paint, particles, animated titles and more. In addition, the solution includes a major update to Fairlight audio, along with over 100 new features and improvements that professional editors and colorists have asked for.

DaVinci Resolve 15 combines four high-end applications into different pages in one single piece of software. The edit page has all the tools professional editors need for both offline and online editing, the color page features advanced color correction tools, the Fairlight audio page is designed specifically for audio post production and the new Fusion page gives visual effects and motion graphics artists everything they need to create feature film-quality effects and animations. A single click moves the user instantly between editing, color, effects and audio, giving individual users creative flexibility to learn and explore different toolsets. The workflow also enables collaboration, which speeds up post by eliminating the need to import, export or translate projects between different software applications or to conform when changes are made. Everything is in the same software application.

The free version of Resolve 15 can be used for professional work and has more features than most paid applications. Resolve 15 Studio, which adds multi-user collaboration, 3D, VR, additional filters and effects, unlimited network rendering and other advanced features such as temporal and spatial noise reduction, is available to own for $299. There are no annual subscription fees or ongoing licensing costs. Resolve 15 Studio costs less than other cloud-based software subscriptions and does not require an internet connection once the software has been activated. That means users won’t lose work in the middle of a job if there is no internet connection.

“DaVinci Resolve 15 is a huge and exciting leap forward for post production because it’s the world’s first solution to combine editing, color, audio and now visual effects into a single software application,” says Grant Petty, CEO of Blackmagic Design. “We’ve listened to the incredible feedback we get from customers and have worked really hard to innovate as quickly as possible. DaVinci Resolve 15 gives customers unlimited creative power to do things they’ve never been able to do before. It’s finally possible to bring teams of editors, colorists, sound engineers and VFX artists together so they can collaborate on the same project at the same time, all in the same software application!”

Resolve 15 Overview

Resolve 15 features an entirely new Fusion page for feature-film-quality visual effects and motion graphics animation. Fusion was previously only available as a standalone application, but it is now built into Resolve 15. The new Fusion page gives customers a true 3D workspace with over 250 tools for compositing, vector paint, particles, keying, rotoscoping, text animation, tracking, stabilization and more. The addition of Fusion to Resolve will be completed over the next 12-18 months, but users can get started using Fusion now to complete nearly all of their visual effects and motion graphics work. The standalone version of Fusion is still available for those who need it.

In addition to bringing Fusion into Resolve 15, Blackmagic has also added support for Apple Metal, multiple GPUs and CUDA acceleration, making Fusion in Resolve faster than ever. To add visual effects or motion graphics, users simply select a clip in the timeline on the Edit page and then click on the Fusion page where they can use Fusion’s dedicated node-based interface, which is optimized for visual effects and motion graphics. Compositions created in the standalone version of Fusion can also be copied and pasted into Resolve 15 projects.

Resolve 15 also features a huge update to the Fairlight audio page. The Fairlight page now has a complete ADR toolset, static and variable audio retiming with pitch correction, audio normalization, 3D panners, audio and video scrollers, a fixed playhead with scrolling timeline, shared sound libraries, support for legacy Fairlight projects and built-in cross platform plugins such as reverb, hum removal, vocal channel and de-esser. With Resolve 15, FairlightFX plugins run natively on Mac, Windows and Linux, so users no longer have to worry about audio plugins when moving between the platforms.

Professional editors will find new features in Resolve 15 specifically designed to make cutting, trimming, organizing and working with large projects even better. Load times have been improved so that large projects with hundreds of timelines and thousands of clips now open instantly. New stacked timelines and timeline tabs let editors see multiple timelines at once, so they can quickly cut, paste, copy and compare scenes between timelines. There are also new markers with on-screen annotations, subtitle and closed captioning tools, auto save with versioning, improved keyboard customization tools, new 2D and 3D Fusion title templates, image stabilization on the Edit page, a floating timecode window, improved organization and metadata tools, Netflix render presets with IMF support and much more.

Colorists get an entirely new LUT browser for quickly previewing and applying LUTs, along with new shared nodes that are linked so when one is changed they all change. Multiple playheads allow users to quickly reference different shots in a program. Expanded HDR support includes GPU accelerated Dolby Vision metadata analysis and native HDR 10+ grading controls. The new ResolveFX lets users quickly patch blemishes or remove unwanted elements in a shot using smart fill technology, and allows for dust and scratch removal, lens and aperture diffraction effects and more.

For the ultimate high-speed workflow, users can add a Resolve Micro Panel, Resolve Mini Panel or a Resolve Advanced Panel. All controls are placed near natural hand positions. Smooth, high-resolution weighted trackballs and precision engineered knobs and dials provide the right amount of resistance to accurately adjust settings. The Resolve control panels give colorists and editors fluid, hands-on control over multiple parameters at the same time, allowing them to create looks that are simply impossible with a standard mouse.

In addition, Blackmagic also introduced new Fairlight audio consoles for audio post production that will be available later this year. The new Fairlight consoles will be available in two-, three- and five-bay configurations.

Availability and Price

The public beta of Resolve 15 is available today as a free download from the Blackmagic website for all current Resolve and Resolve Studio customers. Resolve Studio is available for $299 from Blackmagic resellers.

The Fairlight consoles will be available later this year, with prices starting at $21,995 for the Fairlight 2 Bay console. They will be available from Blackmagic resellers.

NAB: AJA intros HDR Image Analyzer, Kona 1, Kona HDMI

AJA Video Systems is exhibiting a tech preview of its new waveform, histogram, vectorscope and Nit-level HDR monitoring solution at NAB. The HDR Image Analyzer simplifies monitoring and analysis of 4K/UltraHD/2K/HD, HDR and WCG content in production, post, quality control and mastering. AJA has also announced two new Kona cards, as well as Desktop Software v14.2. Kona HDMI is a PCIe card for multi-channel HD and single-channel 4K HDMI capture for live production, streaming, gaming, VR and post production. Kona 1 is a PCIe card for single-channel HD/SD 3G-SDI capture/playback. Desktop Software v14.2 adds support for Kona 1 and Kona HDMI, plus new improvements for AJA Kona, Io and T-TAP products.

HDR Image Analyzer
A waveform, histogram, vectorscope and Nit-level HDR monitoring solution, the HDR Image Analyzer combines AJA’s video and audio I/O with HDR analysis tools from Colorfront in a compact 1RU chassis. The HDR Image Analyzer is a flexible solution for monitoring and analyzing HDR formats, including Perceptual Quantizer (PQ), Hybrid Log Gamma (HLG) and Rec.2020, for 4K/UltraHD workflows.
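To illustrate the kind of math behind out-of-gamut detection in a tool like this (this is not AJA’s or Colorfront’s implementation), the sketch below converts linear Rec.2020 RGB to Rec.709 with the standard BT.2087 matrix and flags pixels that land outside the target gamut:

    import numpy as np

    # Standard BT.2087 Rec.2020-to-Rec.709 conversion matrix.
    M_2020_TO_709 = np.array([
        [ 1.6605, -0.5876, -0.0728],
        [-0.1246,  1.1329, -0.0083],
        [-0.0182, -0.1006,  1.1187],
    ])

    def out_of_709_gamut(rgb2020):
        # rgb2020: (..., 3) array of linear Rec.2020 values in [0, 1].
        rgb709 = rgb2020 @ M_2020_TO_709.T
        # Anything outside [0, 1] after conversion can't be shown in Rec.709.
        return np.any((rgb709 < 0.0) | (rgb709 > 1.0), axis=-1)

    pixels = np.array([[0.2, 0.5, 0.3],    # inside both gamuts
                       [0.9, 0.05, 0.05]]) # saturated red, outside Rec.709
    print(out_of_709_gamut(pixels))        # [False  True]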

The HDR Image Analyzer is the second technology collaboration between AJA and Colorfront, following the integration of Colorfront Engine into AJA’s FS-HDR. Colorfront has exclusively licensed its Colorfront HDR Image Analyzer software to AJA for the HDR Image Analyzer.

Key features include:

— Precise, high-quality UltraHD UI for native-resolution picture display
— Advanced out-of-gamut and out-of-brightness detection with error tolerance
— Support for SDR (Rec.709), ST2084/PQ and HLG analysis
— CIE graph, Vectorscope, Waveform, Histogram
— Out-of-gamut false color mode to easily spot out-of-gamut/out-of-brightness pixels
— Data analyzer with pixel picker
— Up to 4K/UltraHD 60p over 4x 3G-SDI inputs
— SDI auto-signal detection
— File-based error logging with timecode
— Display and color processing look up table (LUT) support
— Line mode to focus a region of interest onto a single horizontal or vertical line
— Loop-through output to broadcast monitors
— Still store
— Nit levels and phase metering
— Built-in support for color spaces from ARRI, Canon, Panasonic, RED and Sony

“As 4K/UltraHD, HDR/WCG productions become more common, quality control is key to ensuring a pristine picture for audiences, and our new HDR Image Analyzer gives professionals an affordable and versatile set of tools to monitor and analyze HDR productions from start to finish, allowing them to deliver more engaging visuals for viewers,” says Nick Rashby, president of AJA.

Adds Aron Jazberenyi, managing director of Colorfront, “Colorfront’s comprehensive UHD HDR software toolset optimizes the superlative performance of AJA video and audio I/O hardware, to deliver a powerful new solution for the critical task of HDR quality control.”

HDR Image Analyzer is being demonstrated as a technology preview only at NAB 2018.

Kona HDMI
An HDMI video capture solution, Kona HDMI supports a range of workflows, including live streaming, events, production, broadcast, editorial, VFX, vlogging, video game capture/streaming and more. Kona HDMI is highly flexible, designed for four simultaneous channels of HD capture with popular streaming and switching applications including Telestream Wirecast and vMix.

Additionally, Kona HDMI offers capture of one channel of UltraHD up to 60p over HDMI 2.0, using AJA Control Room software, for file compatibility with most NLE and effects packages. It is also compatible with other popular third-party solutions for live streaming, projection mapping and VR workflows. Developers can use the platform to build multi-channel HDMI ingest systems and leverage V4L2 compatibility on Linux. Features include: four full-size HDMI ports; the ability to easily switch between one channel of UltraHD or four channels of 2K/HD; and embedded HDMI audio in, up to eight embedded channels per input.

Kona 1
Designed for broadcast, post production and ProAV, as well as OEM developers, Kona 1 is a cost-efficient single-channel 3G-SDI 2K/HD 60p I/O PCIe card. Kona 1 offers serial control and reference/LTC, and features standard application plug-ins, as well as AJA SDK support. Kona 1 supports 3G-SDI capture, monitoring and/or playback with software applications from AJA, Adobe, Avid, Apple, Telestream and more. Kona 1 enables simultaneous monitoring during capture (pass-through) and includes: full-size SDI ports supporting 3G-SDI formats, embedded 16-channel SDI audio in/out, Genlock with reference/LTC input and RS-422.

Desktop Software v14.2
Desktop Software v14.2 introduces support for Kona HDMI and Kona 1, as well as a new SMPTE ST 2110 IP video mode for Kona IP, with support for AJA Control Room, Adobe Premiere Pro CC, part of the Adobe Creative Cloud, and Avid Media Composer. The free software update also brings 10GigE support for 2K/HD video and audio over IP (uncompressed SMPTE 2022-6/7) to the new Thunderbolt 3-equipped Io IP and Avid DNxIP, as well as additional enhancements to other Kona, Io and T-TAP products, including HDR capture with Io 4K Plus. Io 4K Plus and DNxIV users also benefit from a new feature allowing all eight analog audio channels to be configured for either output, input or a 4-In/4-Out mode for full 7.1 ingest/monitoring, or I/O for stereo plus VO and discrete tracks.

“Speed, compatibility and reliability are key to delivering high-quality video I/O for our customers. Kona HDMI and Kona 1 give video professionals and enthusiasts new options to work more efficiently using their favorite tools, and with the reliability and support AJA products offer,” says Rashby.

Kona HDMI will be available this June for $895, and Kona 1 will be available in May for $595. Both are available for pre-order now. Desktop Software v14.2 will also be available in May, as a free download from AJA’s support page.

Maxon debuts Cinema 4D Release 19 at SIGGRAPH

Maxon was at this year’s SIGGRAPH in Los Angeles showing Cinema 4D Release 19 (R19). This next generation of Maxon’s pro 3D app offers a new viewport and a new Sound Effector, and additional features for Voronoi Fracturing have been added to the MoGraph toolset. It also boasts a new Spherical Camera, the integration of AMD’s ProRender technology and more. Designed to serve individual artists as well as large studio environments, Release 19 offers a streamlined workflow for general design, motion graphics, VFX, VR/AR and all types of visualization.

With Cinema 4D Release 19, Maxon also introduced a few re-engineered foundational technologies, which the company will continue to develop in future versions. These include core software modernization efforts, a new modeling core, integrated GPU rendering for Windows and Mac, and OpenGL capabilities in BodyPaint 3D, Maxon’s pro paint and texturing toolset.

More details on the offerings in R19:
Viewport Improvements provide artists with added support for screen-space reflections and OpenGL depth-of-field, in addition to the screen-space ambient occlusion and tessellation features (added in R18). Results are so close to final render that client previews can be output using the new native MP4 video support.

MoGraph enhancements expand on Cinema 4D’s toolset for motion graphics with faster results and added workflow capabilities in Voronoi Fracturing, such as the ability to break objects progressively, add displaced noise details for improved realism or glue multiple fracture pieces together more quickly for complex shape creation. An all-new Sound Effector in R19 allows artists to create audio-reactive animations based on multiple frequencies from a single sound file.
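To make the Sound Effector idea concrete, here is a rough sketch (not Maxon’s code) of how audio-reactive animation is typically driven: take one frame’s worth of samples, FFT them, bucket the spectrum into bands and use each band’s energy to drive a parameter such as clone scale.

    import numpy as np

    def band_energies(samples, rate, frame, n_bands=8, fps=30):
        # Grab the audio window for one animation frame and FFT it.
        win = rate // fps
        chunk = samples[frame * win:(frame + 1) * win]
        mag = np.abs(np.fft.rfft(chunk * np.hanning(len(chunk))))
        # Sum magnitudes into n_bands equal-width frequency bands; each
        # band's energy can then drive a separate animation channel.
        return np.array([b.sum() for b in np.array_split(mag, n_bands)])

    rate = 48000
    t = np.arange(rate) / rate
    samples = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 5000 * t)
    print(band_energies(samples, rate, frame=0).round(1))  # low band dominates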

The new Spherical Camera allows artists to render stereoscopic 360° virtual reality videos and dome projections. Artists can specify a latitude and longitude range, and render in equirectangular, cubic string, cubic cross or 3×2 cubic format. The new spherical camera also includes stereo rendering with pole smoothing to minimize distortion.
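The equirectangular output it renders is simple to reason about: every pixel encodes a (longitude, latitude) pair on a 2:1 image. A minimal sketch of that mapping, assuming unit direction vectors (again, not Maxon’s renderer):

    import numpy as np

    def direction_to_equirect(d, width, height):
        # d: unit direction vector (x, y, z); returns pixel (u, v).
        lon = np.arctan2(d[0], d[2])           # -pi..pi around the vertical axis
        lat = np.arcsin(np.clip(d[1], -1, 1))  # -pi/2..pi/2 up/down
        u = (lon / (2 * np.pi) + 0.5) * width
        v = (0.5 - lat / np.pi) * height
        return u, v

    print(direction_to_equirect(np.array([0.0, 0.0, 1.0]), 4096, 2048))  # image center
    print(direction_to_equirect(np.array([0.0, 1.0, 0.0]), 4096, 2048))  # zenith, top row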

New Polygon Reduction works as a generator, so it’s easy to reduce entire hierarchies. The reduction is pre-calculated, so adjusting the reduction strength or desired vertex count is extremely fast. The new Polygon Reduction preserves vertex maps, selection tags and UV coordinates, ensuring textures continue to map properly and providing control over areas where polygon detail is preserved.

Level of Detail (LOD) Object features a new interface element that lets customers define and manage settings to maximize viewport and render speed, create new types of animations or prepare optimized assets for game workflows. Level of Detail data exports via the FBX 3D file exchange format for use in popular game engines.

AMD’s Radeon ProRender technology is now seamlessly integrated into R19, providing artists a cross-platform GPU rendering solution. Though just the first phase of integration, it provides a useful glimpse into the power ProRender will eventually provide as more features and deeper Cinema 4D integration are added in future releases.

Modernization efforts in R19 reflect Maxon’s development legacy and offer the first glimpse into the company’s planned ‘under-the-hood’ future efforts to modernize the software, as follows:

  • Revamped Media Core gives Cinema 4D R19 users a completely rewritten software core to increase speed and memory efficiency for image, video and audio formats. Native support for MP4 video without QuickTime delivers advantages to preview renders, incorporate video as textures or motion track footage for a more robust workflow. Export for production formats, such as OpenEXR and DDS, has also been improved.
  • Robust Modeling offers a new modeling core whose improved support for edges and N-gons can be seen in the Align and Reverse Normals commands. More modeling tools and generators will directly use this new core in future versions.
  • BodyPaint 3D now uses an OpenGL painting engine, giving R19 artists who paint color and surface details for film, game design and other workflows a realtime display of reflections, alpha, bump or normal maps, and even displacement, for improved visual feedback while texture painting. Redevelopment efforts to improve the UV editing toolset in Cinema 4D continue, with the first fruits of this work available in R19: faster, more efficient options to convert point and polygon selections, grow and shrink UV point selections, and more.

Dell intros new Precision workstations, Dell Canvas and more

To celebrate the 20th anniversary of Dell Precision workstations, Dell announced additions to its Dell Precision fixed workstation portfolio, a special anniversary edition of its Dell Precision 5520 mobile workstation and the official availability of Dell Canvas, the new workspace device for digital creation.

Dell is showcasing its next-generation, fixed workstations at SIGGRAPH, including the Dell Precision 5820 Tower, Precision 7820 Tower, Precision 7920 Tower and Precision 7920 Rack, completely redesigned inside and out.

The three new Dell Precision towers combine a brand-new flexible chassis with the latest Intel Xeon processors, next-generation Radeon Pro graphics and the highest-performing Nvidia Quadro professional graphics cards. Certified for professional software applications, the new towers are configured to complete the most complex projects, including virtual reality. Dell’s Reliable Memory Technology (RMT) Pro ensures memory challenges don’t kill your workflow, and Dell Precision Optimizer (DPO) tailors performance for your unique hardware and software combination.

The fully-customizable configuration options deliver the flexibility to tackle virtually any workload, including:

  • AI: The latest Intel Xeon processors are an excellent choice for artificial intelligence (AI), with agile performance across a variety of workloads, including machine learning (ML) and deep learning (DL) inference and training. If you’re just starting AI workloads, the new Dell Precision tower workstations allow you to use software optimized to your existing Intel infrastructure.
  • VR: The Nvidia Quadro GP100 powers the development and deployment of cognitive technologies like DL and ML applications. Additional Nvidia Pascal GPU options, like HBM2 memory and NVLink technologies, allow professional users to create complex designs in computer-aided engineering (CAE) and experience lifelike VR environments.
  • Editing and playback: Radeon Pro SSG Graphics with HBM2 memory and 2TB of SSD onboard allows real-time 8K video editing and playback, high-performance computing of massive datasets, and rendering of large projects.

The Dell Precision 7920 Rack is ideal for secure, remote workers and delivers the same power and scalability as the highest-performing tower workstation in a 2U form factor.  The Dell Precision 5820, 7820, 7920 towers and 7920 Rack will be available for order beginning October 3.

“Looking back at 20 years of Dell Precision workstations, you get a sense of how the capabilities of our workstations, combined with certified and optimized software and the creativity of our awesome customers, have achieved incredible things,” said Rahul Tikoo, vice president and general manager for Dell Precision workstations. “As great as those achievements are, this new lineup of Dell Precision workstations enables our customers to be ready for the next big technology revolution that is challenging business models and disrupting industries.”

Dell Canvas

Dell has also announced its highly anticipated Dell Canvas, available now. Dell Canvas is a new workspace designed to make digital creation more natural. It features a 27” QHD touch screen that sits horizontally on your desk and can be powered by your current PC ecosystem and the latest Windows 10 Creators Update. Additionally, a digital pen provides precise tactile accuracy and the totem offers diverse menu and shortcut interaction.

For the 20th anniversary of Dell Precision, Dell is introducing a limited-edition anniversary model of its award-winning mobile workstation, the Dell Precision 5520. Available for a limited time, the Dell Precision 5520 Anniversary Edition is Dell’s thinnest, lightest and smallest mobile workstation, in hard-anodized aluminum with a brushed metallic finish in a brand-new Abyss color with anti-fingerprint coating. The device is available now with two high-end configuration options.

Quick Look: Jaunt One’s 360 camera

By Claudio Santos

To those who have been following the virtual reality market from the beginning, one very interesting phenomenon is how the hardware development seems to have outpaced both the content creation and the software development. The industry has been in a constant state of excitement over the release of new and improved hardware that pushes the capabilities of the medium, and content creators are still scrambling to experiment and learn how to use the new technologies.

One of the products of this tech boom is the Jaunt One camera. It is a 360 camera developed with an explicit focus on addressing the many production complexities that plague real-life field shooting. What do I mean by that? Well, the camera quickly disassembles and allows you to replace a broken camera module. After all, when you’re across the world and the elephant that is standing in your shot decides to play with the camera, it is quite useful to be able to quickly swap parts instead of having to replace the whole camera or send it in for repair from the middle of the jungle.

Another of the main selling points of the Jaunt One camera is the streamlined cloud finishing service it provides. It takes the content creator all the way from shooting on set through stitching, editing, onlining and preparing deliverables for all the available publishing platforms. The pipeline is also flexible enough to allow you to bring your footage in and out of the service at any point, so you can pick and choose which services you want to use. You could, for example, do your own stitching in Nuke, AVP or any other software and use the Jaunt cloud service to edit and online these stitched videos.

The Jaunt One camera takes a few important details into consideration, such as the synchronization of all of the shutters in the lenses. This prevents stitching abnormalities in fast moving objects that are captured in different moments in time by adjacent lenses.

The camera doesn’t have an internal ambisonics microphone, but the cloud service supports ambisonic recordings made in a dual system or Dolby Atmos. It was interesting to notice that one of the toolset apps they released was the Jaunt Slate, a tool that allows for easy slating on all the cameras (without having to run around the camera like a child, clapping repeatedly) and is meant to automate the synchronization of the separate audio recordings in post.

The Jaunt One camera shows that the market is maturing past its initial DIY stage and the demand for reliable, robust solutions for higher budget productions is now significant enough to attract developers such as Jaunt. Let’s hope tools such as these encourage more and more filmmakers to produce new content in VR.

JVC GY-LS300CH camera offering 4K 4:2:2 recording, 60p output

JVC has announced version 4.0 of the firmware for its GY-LS300CH 4KCAM Super 35 handheld camcorder. The new firmware increases color resolution to 4:2:2 (8-bit) for 4K recording at 24/25/30p onboard to SDXC media cards. In addition, the IP remote function now allows remote control and image viewing in 4K. When using 4K 4:2:2 recording mode, the video output from the HDMI/SDI terminals is HD.

The GY-LS300CH also now has the ability to output Ultra HD (3840 x 2160) video at 60/50p via its HDMI 2.0b port. Through JVC’s partnership with Atomos, the GY-LS300CH integrates with the new Ninja Inferno and Shogun Inferno monitor recorders, triggering recording from the camera’s start/stop operation. Plus, when the camera is set to J-Log1 gamma recording mode, the Atomos units will record the HDR footage and display it on their integrated, 7-inch monitors.

“The upgrades included in our Version 4.0 firmware provide performance enhancements for high raster recording and IP remote capability in 4K, adding even more content creation flexibility to the GY-LS300CH,” says Craig Yanagi, product marketing manager at JVC. “Seamless integration with the new Ninja Inferno will help deliver 60p to our customers and allow them to produce outstanding footage for a variety of 4K and UHD productions.”

Designed for cinematographers, documentarians and broadcast production departments, the GY-LS300CH features JVC’s 4K Super 35 CMOS sensor and a Micro Four Thirds (MFT) lens mount. With its “Variable Scan Mapping” technology, the GY-LS300CH adjusts the sensor to provide native support for MFT, PL, EF and other lenses, which connect to the camera via third-party adapters. Other features include Prime Zoom, which allows shooters using fixed-focal (prime) lenses to zoom in and out without loss of resolution or depth, and a built-in HD streaming engine with Wi-Fi and 4G LTE connectivity for live HD transmission directly to hardware decoders as well as JVCVideocloud, Facebook Live and other CDNs.

The Version 4.0 firmware upgrade is free of charge for all current GY-LS300CH owners and will be available in late May.

Bluefish444 releases IngeSTore 1.1, adds edit-while-record capability

Bluefish444 was at NAB with Version 1.1 of its IngeSTore multichannel capture software, which is now available free from the Bluefish444 website. Compatible with all Bluefish444 video cards, IngeSTore captures multiple simultaneous channels of 3G/HD/SD-SDI to popular media files for archive, edit, encoding or analysis. IngeSTore improves efficiency in the digitization workflow by enabling multiple simultaneous recordings from VTRs, cameras and any other SDI source.

The new version of IngeSTore software also adds “Edit-While-Record” functionality and additional support for shared storage including Avid. Bluefish444 has partnered with Drastic Technologies to bring additional CODEC options to IngeSTore v1.1 including XDCAM, DNxHD, JPEG 2000, AVCi and more. Uncompressed, DV, DVCPro and DVCPro HD codecs will be made available free to Bluefish444 customers in the IngeSTore update.

The Edit-While-Record functionality allows editors to access captured files while they are still being recorded to disk. Content creation tools such as Avid Media Composer, Adobe Premiere Pro CC and Assimilate Scratch can output SDI and HDMI with Bluefish444 video cards while IngeSTore is recording and the files are growing in size and length.
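Conceptually, the consuming side of edit-while-record boils down to reading a file that keeps growing. A toy sketch of that pattern follows; the path is hypothetical, and real NLEs rely on growing-file container support rather than raw byte tailing.

    import os, time

    def tail_growing_file(path, poll=1.0):
        # Poll a file that is still being written and hand off only the
        # newly appended bytes.
        offset = 0
        with open(path, "rb") as f:
            while True:
                size = os.path.getsize(path)
                if size > offset:
                    f.seek(offset)
                    chunk = f.read(size - offset)
                    offset = size
                    yield chunk          # feed to an indexer/transcoder here
                else:
                    time.sleep(poll)     # no new data yet; keep waiting

    # for chunk in tail_growing_file("/mnt/ingest/cam_a_take03.mxf"):
    #     process(chunk)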

Latest Autodesk Flame family updates and more

Autodesk was at NAB talking up new versions of its tools for media and entertainment, including the Autodesk Flame Family 2018 Update 1 for VFX, the Arnold 5.0 renderer, Maya 2017 Update 3 for 3D animation, performance updates for Shotgun production tracking and review software and 3DS Max 2018 software for 3D modeling.

The Autodesk Flame 2018 Update 1 includes new action and batch paint improvements such as 16-bit floating point (FP) depth support, scene detect and conform enhancements.

The Autodesk Maya 2017 Update 3 includes enhancements to character creation tools such as interactive grooming with XGen, an all-new UV workflow, and updates to the motion graphics toolset that includes a live link with Adobe After Effects and more.

Arnold 5.0 is offering several updates including better sampling, new standard surface, standard hair and standard volume shaders, Open Shading Language (OSL) support, light path expressions, refactored shading API and a VR camera.

Shotgun updates accelerate multi-region performance and make media uploads and downloads faster regardless of location.

Autodesk 3ds Max 2018 offers Arnold 5.0 rendering via a new MAXtoA 1.0 plug-in, customizable workspaces, smart asset creation tools, Bézier motion path animation, and a cloud-based large model viewer (LMV) that integrates with Autodesk Forge.

The Flame Family 2018 Update 1, Maya 2017 Update 3 and 3DS Max 2018 are all available now via Autodesk e-stores and Autodesk resellers. Arnold 5.0 and Shotgun are both available via their respective websites.

Boris FX merges with GenArts

Boris FX, maker of Boris Continuum Complete, has inked a deal to acquire visual effects plug-in developer GenArts, whose high-end plug-in line includes Sapphire. Sapphire has been used in at least one VFX Oscar-nominated film every year since 1996. This acquisition follows the 2015 addition of Imagineer Systems, developer of the Academy Award-winning planar tracking tool Mocha. Sapphire will continue to be developed and sold in its current form alongside Boris Continuum Complete (BCC) and Mocha Pro.

“We are excited to announce this strategic merger and welcome the Sapphire team to the Boris FX/Imagineer group,” says owner Boris Yamnitsky. “This acquisition makes Boris FX uniquely positioned to serve editors and effects artists with the industry’s leading tools for motion graphics, broadcast design, visual effects, image restoration, motion tracking and finishing — all under one roof. Sapphire’s suite of creative plug-ins has been used to design many of the last decades’ most memorable film images. Sapphire perfectly complements BCC and mocha as essential tools for professional VFX and we look forward to serving Sapphire’s extremely accomplished users.”

“Equally impressive is the team behind the technology,” continues Yamnitsky. “Key GenArts staff from engineering, sales, marketing and support will join our Boston office to ensure the smoothest transition for customers. Our shared goal is to serve our combined customer base with useful new tools and the highest quality training and technical support.”


NAB: The making of Jon Favreau’s ‘The Jungle Book’

By Bob Hoffman

While crowds lined up above the south hall at NAB to experience the unveiling of the new Lytro camera, across the hall a packed theatre conference room geeked out as the curtain was pulled back slightly during a panel on the making of director Jon Favreau’s cinematic wonder, The Jungle Book. Moderated by ICG Magazine editor David Geffner, Oscar-winning VFX supervisor Rob Legato, ASC, along with Jungle Book producer Brigham Taylor and Technicolor master colorist Mike Sowa, enchanted the packed room with stories of the making and finishing of the hit film.

Legato first started developing his concepts for “virtual production” techniques on Martin Scorsese’s The Aviator, and shortly thereafter, with James Cameron and his hit Avatar. During the panel, Legato took the audience through a set of short demo clips of various scenes in the film while providing background on the production processes used by cinematographer Bill Pope, ASC, and Favreau to capture the live-action component of the film. Legato pointedly explained that his process is informed by a very traditional analog approach. The development of his thinking is based on a commitment to giving the filmmaking team tools and methodologies that allow them to work within their own particular comfort zones.

They may be working in a virtual environment, but Favreau’s wonderful touch is brilliantly demonstrated by the performance of 12-year-old Neel Sethi in his theatrical debut feature. Geffner noted more than once that “the emotional stakes are so well done you get involved emotionally” — without any notion of the technical complexity underlying the narrative. One other area noted by Legato and Sowa was the myriad of theatrical HDR deliverables for The Jungle Book, including up to 14 foot-lamberts for the 3D presentation. This film, and this presentation, were yet another clear indicator that HDR is a differentiator audiences are clamoring for.

Bob Hoffman works at Technicolor in Hollywood.

Pixspan at NAB with 4K storage workflow solutions powered by Nvidia

During the NAB Show, Pixspan was demonstrating new storage workflows for full-quality 4K images powered by the Nvidia Quadro M6000. Addressing the challenges that higher resolutions and increasing amounts of data present for storage and network infrastructures, Pixspan is offering a solution that reduces storage requirements by 50-80 percent, in turn supporting 4K workflows on equipment designed for 2K while enabling data access times that are two to four times faster.

Pixspan software and the Nvidia Quadro M6000 GPU together deliver bit-accurate video decoding at up to 1.3GB per second — enough to handle 4K digital intermediates or 4K/6K camera RAW files in realtime. Pixspan’s solution is based on its bit-exact compression technology, where each image is compressed into a smaller data file while retaining all the information from the original image, demonstrating how the processing power of the Quadro M6000 can be put to new uses in imaging storage and networking to save time and help users meet tight deadlines.
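Bit-exact is a testable property: decompressing must reproduce every bit of the source. Pixspan’s codec is proprietary, so as a stand-in, here is a sketch of the same round-trip guarantee using a generic lossless codec (zlib):

    import zlib
    import numpy as np

    # Illustrates the bit-exact round-trip property only; Pixspan's codec
    # is proprietary and far more specialized for image data.
    frame = np.random.randint(0, 65535, size=(2160, 4096), dtype=np.uint16)
    packed = zlib.compress(frame.tobytes(), level=1)
    restored = np.frombuffer(zlib.decompress(packed),
                             dtype=np.uint16).reshape(frame.shape)
    assert (restored == frame).all()   # every bit of the original survives
    print(len(packed) / frame.nbytes)  # ratio (random data won't shrink;
                                       # real images do)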

Colorist Society International launches for color pros


At the opening of NAB, motion picture and television colorists Jim Wicks and Kevin Shaw announced Colorist Society International (CSI), the first professional association devoted exclusively to furthering and honoring the professional achievements of the colorist community. A non-profit organization, CSI represents professional colorists and promotes the creative art and science of color grading, restoration and finishing by advancing the craft through education and public awareness.

The Colorist Society International is a paid membership organization that seeks to increase the entertainment value of film and digital projects by attaining artistic pre-eminence and scientific achievement in the creative art of color, and to bring into close alliance those color artists who desire to advance the prestige and dignity of the color profession. It is positioned as an educational and cultural resource rather than a labor union or guild.

“The colorist community has been growing for quite some time,” says Shaw. “We believe that a society by, for, and about colorists is long overdue. Current representation for colorists is fragmented, and we feel that the industry would be better served with the coherent voice of the Colorist Society International.”


Wicks added, “The notion of a colorist society is not farfetched. In much the same way, directors, cinematographers, and editors — the artists that we work closely with — have their own professional associations, each with similar mission statements and objectives.”

Membership is open to professional colorists, editor/colorists, DITs, telecine operators, color timers, finishers and color scientists. Corporate sponsors and members from alliance organizations, such as cinematographers, directors and producers, are also welcome.

NAB 2016: My pick for this year’s gamechanger is Lytro

By Isaac Spedding

There has been a lot of buzz around what the gamechanger was at this year’s NAB show. What was released that will really change the way we all work? I was present for the conference session where an eloquent Jon Karafin, head of Light Field Video, explained that Lytro has created a camera system that essentially captures every aspect of your shot and allows you to recreate it in any way, at any position you want, using light field technology.

Typically, with game-changing technology comes uncertainty from the established industry, and that was made clear during the rushed Q+A session, where several people (after congratulating the Lytro team) nervously asked if they had thought about the fate of positions in the industry that the technology would make redundant. Jon’s reply was that core positions won’t change; however, the way in which they operate will. The mob of eager filmmakers, producers and young scientists that queued to meet him (I was one of them) was another sign that the technology is incredibly interesting and exciting for many.

“It’s a birth of a new technology that very well could replace the way that Hollywood makes films.” These are words from Robert Stromberg (DGA), CCO and founder of The Virtual Reality Company, in the preview video for Lytro’s debut film Life, which will be screened on Tuesday to an audience of 500 lucky attendees. Karafin and Jason Rosenthal, CEO at Lytro, will provide a Lytro Cinema demonstration and breakdown of the short film.

Lytro Cinema is my pick for the game-changing technology of NAB 2016, and it looks like it will not only advance capture, but also change post production methodology and open up new roles, possibilities and challenges for everyone in the industry.

Isaac Spedding is a New Zealand-based creative technical director, camera operator and editor. You can follow him on Twitter @Isaacspedding.

Sony’s new PXW-FS5 camera, the FS7’s little brother

By Robert Loughlin

IBC is an incredibly exciting time of year for gearheads like me, but simultaneously frustrating if you can’t make it over to Amsterdam to see the tech in person. So when I was asked if I wanted to see what Sony was going to display at IBC before the trade show, I jumped at the chance.

I was treated to a great breakfast in the Sony Clubhouse, at the top of their building on Madison Avenue, surrounded by startling views of Manhattan and Long Island to the East. After a few minutes of chitchatting with the other writers, we were invited into a conference room to see what Sony had to show. They started by outlining what they believed their strengths were, and where they see themselves moving in the near future.

They stressed that they have tools for all corners of the market, from the F65 to the A7, and that these tools have been used in all ranges of environmental conditions — from extreme cold to scorching heat. Sony was very proud of the fact that they had a tool for almost any application you could think of. Sony’s director of digital imaging, Francois Gauthier, explained that if you started with the question, “What is my deliverable?” — meaning cinema, TV or web — Sony would have a solution for you. Yet, despite that broad range of product coverage, Sony felt that there was a missing piece in there, particularly between the FS7 and their cheaper A7 series of DSLRs. That’s where the PXW-FS5 comes in.

The FS5
The FS5 is a brand-new camera that struck me as the FS7’s little brother. It sports a native 4K Super 35mm sensor, and we were told it’s the same 12 million-pixel Exmor sensor as the FS7. It records XAVC-L as well as AVCHD codecs, in S-Log3, to dual SD card slots. The FS5 can also record high frame rates for both realtime recording and overcranking. The sensor itself is rated at EI 3200 with a dynamic range of about 14 stops. Internal recording is 8-bit 4:2:0 at 4K (HD is 10-bit 4:2:2), but you can go out to an external recorder to get 10-bit 4K over the HDMI 2.0 port in the back. The camera also has one SDI port, but that only supports HD. You can record proxies simultaneously to the second SD card slot (though only when recording XAVC-L), and either have both slots sync up, or have individual record triggers for each. There is a 2K sensor crop mode, as well, that will let you either extend your lens or use lenses designed for smaller image formats (like 16mm).

Controls on the side of the FS5

Product manager Juan Martinez stressed the power of the electronics inside, clocking boot time at less than five seconds, and mentioned that the camera is incredibly power-efficient (about two hours on the BP-U30, the smallest-capacity battery). Additionally, he added that the camera doesn’t need to reboot if you’re changing recording formats. You just set it and you’re done.

The camera also has a new “Advanced Auto Focus” technology that can use facial recognition to track a subject. In addition to focus tools, the FS5 also has something called “Clear Image Zoom.” Clear Image Zoom is a way to blow up your picture — virtually extending the length of your lens — by first maximizing the optical zoom of the glass, then cleanly enlarging the image digitally. You can do this up to 2x, but it can be paired with the 2K sensor crop to get even more length out of your lens. The FS5 also has a built-in variable ND tool. There’s a dial on the side of the camera that lets you adjust iris to 1/100th of a stop, allowing the operator to do smooth iris pulls. Additionally, the camera has a silver knob on the front that allows you to assign up to three custom ND/iris values that you can quickly switch between.

In terms of design, it looks almost identical to the FS7, just shrunken down a bit. It has similar lines, but has the footprint and depth of the Canon C1/3/500, just a bit shorter. It’s a tiny camera. In like fashion, it’s also incredibly light. It weighs about two pounds — the magnesium body has something to do with that. It’s something I can easily hold in my hand all day. Its size and weight certainly make using this camera on gimbals and medium-sized drones very attractive. The remote operation applications become even more attractive with the FS5’s built-in wireless streaming capability. You can stream the image to a computer, wireless streaming hardware (like Teradek), or your smartphone with Sony’s app. However, you can get higher bit-rates out of the stream by going over the Ethernet port on the back. Both Ethernet and wireless streaming are 720p. With the wireless capability, you can also connect to an FTP server, enabling you to push media directly to a server from the field (provided you have the uplink available).

It’s also designed to work really well in your hand. The camera comes with a side grip that’s very repositionable, with an easily reachable release lever. Just release the lever, and the grip is free to rotate. The grip fit perfectly in my palm, with controls either just under where my fingers naturally fell or within easy reach. The buttons include the standard remote buttons, like zoom and start/stop, but also a user-definable button and a corresponding joystick for quick access to menus.

Top: the handgrip in hand; bottom: button map

The grip is mounted very close to the camera body, in order to optimize the center of gravity while holding it. The camera is small and light enough that while holding it this way without the top handle and LCD viewfinder it’s reminiscent of holding a Handicam. However, if you have a long lens, or a similar setup where the center of gravity alters significantly, and need to move the grip up, you can remove it and mount an ARRI rosette plate (sold separately).

The FS5, without top handle or LCD viewfinder

The camera also comes with a top handle that has GPS built-in, mounting points for the LCD viewfinder, an XLR input, and a Multi Interface hot-shoe mount. The handle also has its own stereo microphone built into the front, but the camera itself can only record two channels of audio.

Sony has positioned this camera to fall between DSLRs and the FS7. The MSRP is $6,699 for the body only, or $7,299 with a kit lens (18-105mm). The actual street prices will be lower than that, so the FS5 should fit comfortably between the two. Sony envisions this as their “grab and go” camera, ideal for remote documentary and unscripted TV or even web series. The camera is small, light and maneuverable enough to certainly be that. They wanted a camera that would be unintimidating to a non-professional, and I think they achieved that. However, lacking features like genlock and timecode, and with its E-mount lens mount, this camera is less ideal for cinema applications. There are other cameras around the same price point that are better suited for cinema (Blackmagic, RED Scarlet), so that’s totally fine. This camera definitely has its DNA deeply rooted in the camcorder days of yore, and will feel right at home with someone shooting and producing content for documentaries and TV. They showed a brief clip of footage, and it looked sharp with rich colors. I still tend to favor the color coming out of the Canon C series over the FS5’s, but it’s still solid footage. Projected availability is November 2015. For a full breakdown of specs, visit www.sony.com/fs5.

Sony PSZ-RA6T

However, that wasn’t all Sony showed. The FS5 is pretty neat, but I was much more excited for the other thing Sony brought out. Tucked away in a corner of the room, where they had put an FS5 in a “studio” setup, was a little download station. Centered around a MacBook Pro, the simple station had a Thunderbolt card reader and offload drive. The PSZ-RA drive is a brand-new product from Sony, and I’m almost more excited about this little piece of hardware than I am about the new camera. It’s a small, two-disk RAID that comes in 4TB and 6TB options. It’s similar to G-Tech’s popular G-RAIDs, with one notable exception: this thing is ruggedized. Imagine a LaCie Rugged the size and shape of a G-RAID (but without that awful orange — this is Sony-gray). The disks inside are buffered; it’s rated to be dropped from about a foot and can safely be tilted four inches in any direction. It supports RAID-0, RAID-1 and JBOD. To me, set at RAID-1, it’s the perfect on-set shuttle drive. It even has a handle on top!

Overall, I saw a couple of really exciting things from Sony, and while I think a lot of people are really going to like the FS5, I’m dying to get the PSZ-RA drives on set.

Robert Loughlin is a post production professional specializing in dailies workflows as an Outpost Technician at Light Iron New York, and an all-around tech-head.

IBC: Adobe upgrades Creative Cloud and Primetime

Adobe is adding new features to Adobe Creative Cloud, including support for Ultra HD (UHD), color-technology improvements and new touch workflows. In addition, Adobe Primetime, one of eight solutions inside Adobe Marketing Cloud, will extend its delivery and monetization capabilities for HTML5 video and offer new tools for pay-TV providers that make TV Everywhere authentication easier and more streamlined.

New video technology coming soon to Creative Cloud brings tools that will streamline workflows for broadcasters and media companies. They are:

  • Comprehensive native format support for editing 4K-to-8K footage in Premiere Pro CC.
  • Continued color advancements with support for High Dynamic Range (HDR) workflows in Premiere Pro CC.
  • Improved color fidelity and color adjustments in After Effects CC, as well as deeper support for ARRI RAW, Rec. 2020 and other Ultra HD and HDR formats.
  • A touch environment with Premiere Pro CC, After Effects CC and Character Animator optimized for Microsoft Surface Pro, Windows 8 tablets or Apple trackpad devices.
  • Remix, a new feature in Audition CC that adjusts the duration of a song to match video content. Remix automatically rearranges music to any duration while maintaining musicality and structure, creating custom tracks to fit storytelling needs.
  • Updated support for Creative Cloud Libraries across CC desktop video tools, powered by Adobe CreativeSync. Now, assets will instantly appear in After Effects and Premiere Pro.
  • Destination Publishing, a single-action solution in Adobe Media Encoder for rendering and delivering content to popular social platforms, will now support Facebook.
  • Adobe Anywhere, a workflow collaboration platform, can be deployed as either a multilocation streaming solution or a single-location collaboration-only version.

Primetime, Adobe’s multiscreen TV platform, is also getting an upgrade to support OTT and direct-to-consumer offerings. The upgrade includes:

  • Ability to deliver HTML5 content across mobile browsers and additional connected devices, extending its reach and monetization capabilities.
  • An instant-on capability that pre-fetches video content inside an app to start playback in less than a second, speeding the startup time for video-on-demand and live streams by 300 and 500 percent, respectively.
  • Support for Dolby AC-3 to enable high-impact, cinema-quality sound on virtually all desktops and connected devices.
  • Support for the OAuth 2.0 protocol to make it easier for consumers to access their favorite pay-TV content. Pay-TV providers can enable frictionless TV Everywhere with home-based authentication and offer longer authentication sessions that require users to log in only once per device (a generic sketch of the token exchange follows this list).
  • New support for OTT and TV Everywhere measurement — including a broad variety of user-engagement metrics — in Adobe Analytics, a tool that is integrated with the Primetime TVSDK.
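Since OAuth 2.0 is an open protocol, the shape of that token exchange is well known. Below is a generic sketch, not Adobe Primetime’s actual endpoints; the URL, credentials and client ID are hypothetical placeholders.

    import requests

    # Generic OAuth 2.0 password-grant token request (placeholders only).
    resp = requests.post(
        "https://auth.example-mvpd.com/oauth/token",
        data={
            "grant_type": "password",
            "username": "subscriber",
            "password": "secret",
            "client_id": "primetime-app",
        },
    )
    token = resp.json()["access_token"]

    # Later entitlement checks present the bearer token, which is how a
    # provider lets users authenticate once per device with long sessions.
    headers = {"Authorization": "Bearer " + token}
    print(requests.get("https://api.example-mvpd.com/entitlements",
                       headers=headers).status_code)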

IBC: iZotope announces RX Post Production Suite and RX 5 Audio Editor

Audio technology company iZotope, Inc. has unveiled its new RX Post Production Suite, a set of tools that enable professionals to edit, mix, and deliver their audio, as well as RX 5 Audio Editor, an update to the company’s RX platform.

The new RX Post Production Suite contains products aimed at each stage of the audio post production workflow, including audio repair and editing, mix enhancement and final delivery. The RX Post Production Suite includes the RX 5 Advanced Audio Editor, RX Final Mix, RX Loudness Control and Groove3, as well as the customer’s choice of 50 free sound effects from Pro Sound Effects.

The new RX 5 Audio Editor and RX 5 Advanced Audio Editor are designed to repair and enhance common problematic production audio while speeding up workflows that currently require either multiple manual editing passes, or a non-intuitive collection of tools from different vendors. RX 5’s new Instant Process tool lets editors “paint out” unwanted sonic elements directly on the spectral display with a single mouse gesture. The new Module Chain allows users to define a custom chain of processing (e.g. De-click, De-noise, De-reverb, EQ Match, Leveler, Normalize) and then save that chain as a preset so that multiple processes can be recalled and applied in a single click for repetitive tasks.
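Under the hood, a Module Chain amounts to an ordered, reusable list of processing steps. A sketch of that pattern (not iZotope’s API; the function names are illustrative stand-ins):

    # Placeholder DSP steps standing in for RX modules.
    def de_click(audio):  return audio
    def de_noise(audio):  return audio
    def leveler(audio):   return audio

    dialogue_cleanup = [de_click, de_noise, leveler]   # a saved "preset"

    def apply_chain(audio, chain):
        # Run every module in order: one click, many processes.
        for module in chain:
            audio = module(audio)
        return audio

    clip = [0.0, 0.1, -0.2]   # stand-in for real samples
    cleaned = apply_chain(clip, dialogue_cleanup)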

For Pro Tools/RX 5 workflows, RX Connect has been enhanced to support individual Pro Tools clips and crossfades with any associated handles so that processed audio returns “in place” to the Pro Tools timeline.

RX 5 Advanced also includes a new De-plosive module that minimizes plosives from letters such as p, t, k and b, in which strong blasts of air create a massive pressure change at the microphone element, impairing the sound. In addition, the Leveler module has been enhanced with breath and “ess” (sibilance) detection for increased accuracy when performing faster-than-realtime leveling.

ftrack 3.2 includes new API and customizable Workflows

Swedish company ftrack has launched ftrack 3.2, the newest version of its project management solution for creative industries. Along with the Actions functionality announced last April, ftrack 3.2 includes several customer-driven features that expand the uses of the software. These include Workflows functionality, which enables users to tailor ftrack to their industry, and a rebuilt API that allows for more flexibility and deeper tool customization. Later this year, ftrack will launch an ftrack mobile app that will allow users to track production on the go.

ftrack 3.2’s new Workflows functionality is designed to improve project structures by removing the rigidity of sequences, shots and tasks. Instead of the task groups of the previous version, ftrack 3.2 allows users to customize layouts to match the needs of the project. Users can rename each group to match the terminology of their domain, making ftrack relevant to a wider range of creative disciplines and markets, such as video games, motion graphics and architecture.

In addition, ftrack 3.2 includes a faster, more comprehensive open-source API targeted to developers. The 3.2 API has greater scope and covers more of the functionality contained in ftrack, while offering more avenues for developers to adjust performance. Fully documented and built around normal Python data structures, the new ftrack API is an out-of-the-box solution designed to simplify tool customization.
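As a taste of what that looks like, here is a small sketch based on the published ftrack_api Python package; the server URL and credentials are placeholders.

    import ftrack_api

    # Placeholders: point these at your own ftrack server and credentials.
    session = ftrack_api.Session(
        server_url="https://yourstudio.ftrackapp.com",
        api_user="jane.doe",
        api_key="YOUR-API-KEY",
    )

    # Entities behave like plain Python mappings, queried with a SQL-ish
    # expression language.
    for task in session.query('Task where status.name is "In Progress"').all()[:5]:
        print(task["name"])

    # Updates are plain assignments followed by a commit:
    # task["name"] = "Renamed task"
    # session.commit()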

Coming in September, the ftrack mobile app will allow users to monitor ongoing projects when away from their workstations. Using their mobile devices, users will be able to check in on the status of tasks, log time, stop or start timers, receive notifications and contact others involved in the project.

Chaos Group shows V-Ray for Nuke at SIGGRAPH 2015

Chaos Group’s V-Ray for Nuke is a new tool for lighting and compositing that integrates production-quality ray-traced rendering into The Foundry’s Nuke, NukeX and Nuke Studio products. V-Ray for Nuke enables compositors to take advantage of V-Ray’s lighting, shading and rendering tools inside Nuke’s node-based workflow.

V-Ray for Nuke brings the same technology used on Game of Thrones, Avengers: Age of Ultron and other film, commercial and television projects to professional compositors.

Built on the same adaptive rendering core as V-Ray’s plugins for Autodesk 3ds Max and Maya, V-Ray for Nuke is designed for production pipelines. It gives compositors the ability to adjust lighting, materials and render elements up to final shot delivery. Full control of 3D scenes in Nuke lets compositors match 2D footage and 3D renders simultaneously, saving time on environment and set-extension work. V-Ray for Nuke includes a range of rendering and geometry features, with 36 beauty, matte and utility render elements, as well as effects for lights, cameras, materials and textures.

New Autodesk extensions, updated Shotgun at SIGGRAPH 2015

At SIGGRAPH 2015, Autodesk announced its 2016 M&E extensions, designed to accelerate the design, sharing, review and iteration of 3D content across every stage of the creative pipeline. The Maya 2016 extension adds a new text tool for creating branding, flying logos, title sequences and other projects that require 3D text. The 3ds Max 2016 extension includes geodesic voxel and heat-map solvers to help artists create better skin weighting faster, while new Max Creation Graph (MCG) animation controls provide procedural animation capabilities.

Creative Market, an online content marketplace acquired by Autodesk last year, is expanding its offerings with the debut of 3D content. The marketplace is currently home to nearly 9,000 shops selling more than 250,000 design assets to a community of more than one million members. Artists can search, purchase and license high-quality 3D content created by designers around the world or upload and sell original 3D models on the site.

Shotgun Software, also at SIGGRAPH 2015, has announced a new set of features and updates designed to make it easier for teams to review, share and provide feedback on creative projects.

The upcoming Shotgun 6.3 release will include new review and approval features and an updated Client Review Site to streamline collaboration and communication within teams, across sites and with clients. Shotgun’s Pipeline Toolkit is also being updated with the Shotgun Panel, which lets artists communicate directly with other artists and see only the information relevant to their tasks inside creative tools like Autodesk Maya and The Foundry’s Nuke, along with a refreshed Workfiles tool for finding and navigating to relevant files more quickly.

Shotgun 6.3 includes a new global view that allows users to easily access and manage media across all of a studio’s projects from a central location in Shotgun. Other improvements include new browsing options, playlists and a preference to launch media in RV, the desktop image/movie player.
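The 6.3 features described here are UI-level, but the playlist workflow they streamline is also scriptable through Shotgun’s existing Python API (shotgun_api3). A minimal sketch; the site URL, script credentials, project ID and status code below are placeholders:

import shotgun_api3

# Placeholder site and script credentials.
sg = shotgun_api3.Shotgun(
    'https://yourstudio.shotgunstudio.com',
    script_name='review_tools',
    api_key='YOUR-SCRIPT-KEY'
)

# Gather versions pending review for a project (id 123 is a placeholder).
versions = sg.find(
    'Version',
    filters=[['project', 'is', {'type': 'Project', 'id': 123}],
             ['sg_status_list', 'is', 'rev']],
    fields=['code']
)

# Create a playlist linking those versions for a review session.
sg.create('Playlist', {
    'project': {'type': 'Project', 'id': 123},
    'code': 'Client Review - Week 32',
    'versions': versions
})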


The Foundry gets new owner

The Foundry has received a majority investment from private equity firm HgCapital, a top investor in European software. The Foundry will sit within HgCapital’s technology, media, and telecommunications sector. Under the terms of the deal, HgCapital will assume majority ownership from The Carlyle Group for an enterprise value of £200 million ($312 million USD).

With this deal, The Foundry remains one of the few independent companies solely focused on creative industries. The deal lets The Foundry pursue its strategy of prioritizing research and innovation and teaming with other companies to create collective solutions.

NAB: We are a lucky bunch of nerds

By William Rogers

I bolted awake at 5am this morning, Las Vegas time.

I’m still ticking on New York City’s clock, and I don’t think that I’ll be changing that any time this week. I’ve also completely refrained from gambling, drinking (besides a sip of wine at a Monday night dinner with OWC) and any other activities that would cause me to think about keeping what happened here, here.

Between running laps around the South Lower hall of the convention center, I had to stop, pull my brain away from my calendar app and reflect on a thought that kept popping up: I really, really love the people here at NAB.

I’m not necessarily talking about the cornerstone vendors and the keynote speakers, but more about the passionate people standing behind something they truly pour their heart and soul into. Once the vendor representatives and I got past the product demos and the required reading, we’d get into a more human conversation while still keeping it relevant to our work.

I like that. I can’t stand fluff and disingenuousness. I can’t stand purposeless self-promotion. What I love is when I ask the right question and see people stand a few inches taller because they’re not slumping into their required spiel.

We filmmakers work in an incredible field. It doesn’t matter what role we’re in, whether it be the grip throwing up the Kinos for an interview, or the online editor who meticulously scrutinizes the footage for the conform.

We’re a lucky bunch of nerds.

My Tuesday

LaCie showed off a bunch of new stuff. They’re pushing out two new Rugged drives: a spinning-disk model capable of RAID 0/1, and an SSD model with Thunderbolt tailored for speedy field transfers. I also got an extensive look at the 8big Rack Thunderbolt 2, a multi-Terabyte storage solution equipped with Thunderbolt 2, enterprise-class drives and 1330MB/s speeds for 4K editing.

I stopped by Small Tree, who provides Ethernet-based server solutions for in-house editing as well as mobile server storage. Small Tree provided their Titanium Z-5 shared storage system for Digiboyz Inc., who used Small Tree’s capabilities on Netflix’s Trailer Park Boys.

Telestream had a multitude of post production software solutions on display, but I was directed to check out Switch, a media player with an elegant UI that’s meant for QC inspection, transcoding and file modifications. For post houses that need to view and modify a vast array of file types, including transport streams, Switch is DPP/AMWA-certified software that provides a reliable alternative to open-source tools.

Facilis was debuting its own venture into the SSD world with the Terrablock 24D/HA. The hybrid array has eight onboard SSD drives for ultra-high-performance partitions alongside traditional SATA drives, a combination that offers the space scalability of spinning disks while taking advantage of SSD speed.

I made my way over to iZotope, which specializes in audio finishing plug-ins built on advanced audio analysis. Its RX 4 software, which plugs into DAWs as well as NLEs, was demonstrated rescuing seemingly lost audio in several nifty ways; my favorite was a preset that detected and eliminated GSM cell phone interference right on the visual spectrum display.

For those not in the know, on-site media storage will eventually be a thing of the past, even for large HD(+) media workflows. Aframe was going to give me a demo of its online UI, but we got sidetracked discussing its future integration with Adobe Anywhere. Keep an eye out: within the next few years, customers will be able to upload all of their video assets to the cloud and edit live with no media stored on local discs.

CTRL+Console showed off its iPad app, which is used to control NLEs and other post software, like Adobe Lightroom. Meant as a keyboard replacement, the app turns your tablet (currently limited to iPad) into a touchscreen console so you don’t have to learn keyboard hotkeys.

Cinegy was kind enough to escort me to a breakout room for snacks and chilled water over a conversation about the post industry. Cinegy provides software technology for digital video processing, asset management, compression and playback in broadcast environments. This year, it was rolling out Version 10 of its software, featuring the 4K IP-based broadcast solutions Cinegy Multiviewer and Cinegy Route, as well as Cinegy Air PRO, Cinegy Type and a variety of other solutions.

I met up with T2 Computing, who designs and implements IT solutions for post-production facilities and media companies. T2 recently teamed up with Tekserve to overhaul their invoicing and PO management system.

I’d say it was a successful Tuesday. I tried to get into my hotel pool later that evening, but my efforts to aquatically relax were thwarted by a Las Vegas sandstorm. Instead, I kicked my feet up to read a few more chapters from my Kindle, which was exactly what I needed.

Will is an editor, artist and all around creative professional working as a Post Production Coordinator for DB Productions in NYC.

NAB: Exploring collaborative workflows on the exhibit floor

By Adrian Winter

It was back to the showroom floor for me today as I checked in on a number of exhibitors with an eye toward collaborative workflows.

My first stop was the Adobe booth to take in a demonstration of Adobe Anywhere — Adobe’s collaborative platform for Premiere, Prelude and After Effects.

The workflow is built around a number of users, working either in-house or remotely, who can access and work with the same footage, all stored in one place called a Collaboration Hub.

Sound developments at the NAB Show

Spotlighting Pro Sound Effects library, Genelec 7.1.4 Array, Avid Master Joystick Module and Sennheiser AVX wireless mic

By Mel Lambert

With a core theme of “Crave More,” which is intended to reflect the passion of our media and entertainment communities, and with products from 1,700 exhibitors this year – including over 200 first-time companies – there were plenty of new developments to see and hear at the NAB Show, which continues in Las Vegas until Thursday afternoon.

In addition to unveiling Master Library 2.0, which adds more than 30,000 new sound effects, online access, annual updates and new subscription pricing, Pro Sound Effects demonstrated a…

ftrack 3.2 intros Nuke Studio integration, expands Actions framework

This summer, ftrack will release version 3.2 of its cloud-based project management platform for the creative, visual effects and animation industries.

ftrack 3.2 will include integration with Nuke Studio, a move that brings The Foundry’s file-based workflow into an asset-based workflow for ftrack users, eliminating the need to work through the file system. Instead, artists will get off-the-shelf access to creative project management.

Also, with the crew tab and chat feature, coordinators, producers and other users will be able to communicate with their fellow team members via text and video chat. The chat feature is intended to break down communication barriers and reduce bottlenecks so that information can go directly to relevant crew members.

In addition to the Nuke Studio integration, ftrack 3.2 will see upgrades to the platform’s Actions system, which allows users to integrate processes, automate repetitive tasks or create file system structures. Designed to increase efficiency, the improved Actions functionality will give developers the freedom to implement their own tools, customize parts of the UI and request additional information from the user before the Actions run (e.g., email addresses for report recipients).
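Under the hood, Actions hang off ftrack’s event system: a handler announces itself when the interface asks which actions are available, then runs when a user launches one. A rough sketch using the ftrack Python API; the label and identifier are hypothetical:

import ftrack_api

# Credentials are read from the FTRACK_SERVER, FTRACK_API_USER and
# FTRACK_API_KEY environment variables.
session = ftrack_api.Session(auto_connect_event_hub=True)

def on_discover(event):
    # Tell ftrack this action exists so it shows up in the UI.
    return {'items': [{
        'label': 'Create Folder Structure',   # hypothetical action
        'actionIdentifier': 'create.folders'
    }]}

def on_launch(event):
    if event['data'].get('actionIdentifier') != 'create.folders':
        return
    for item in event['data'].get('selection', []):
        # item is e.g. {'entityId': '...', 'entityType': 'task'};
        # build the file system structure for each selected entity here.
        pass
    return {'success': True, 'message': 'Folders created.'}

session.event_hub.subscribe('topic=ftrack.action.discover', on_discover)
session.event_hub.subscribe('topic=ftrack.action.launch', on_launch)
session.event_hub.wait()  # block and service events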

AMD FirePro supports Avid Media Composer 8.4 for HD, 4K workflows

AMD has certified Avid Media Composer 8.4 for HD and 4K broadcast and digital content creation workflows powered by AMD FirePro professional graphics on Microsoft Windows and Mac Pro workstations. The new support for Avid Media Composer enables AMD FirePro professional graphics customers to take advantage of 4K display, media management and editing capabilities throughout the video production process.

Avid Media Composer nonlinear video editing software is used extensively by professional editors in moviemaking, television, broadcast and streaming media. AMD FirePro professional graphics enable Media Composer to handle high volumes of disparate file-based media for accelerated high-res and HD workflows, real-time collaboration and streamlined media management.


NAB Day 1: Me, myself and Monday

By William Rogers

Let’s dive right into the craziness.

RED sat me down with the other members of the press in a comfortably dark theater and blasted my face with demo footage from its new Weapon cameras. There was a bit of awkwardness in the air between the RED representatives and the press; RED admitted it hadn’t done this sort of sleek, private reveal at NAB before.


NAB: Reporting from the Tech Summit on Cinema

By Tim Spitzer

What did I hear about digital cinema at the Technology Summit of Cinema during NAB? That the future of digital cinema is bright (pun totally intended)!

Projection at the Summit was provided by Barco on a single-head 4K laser projection system paired with new RealD Silver Screen technology. Indeed, it was bright. Barco’s largest laser projectors are approaching 60,000 lumens, and new screen-coating technologies are making screens more uniform and higher gain.

Walking the floor of NAB: The day my brain melted

By Adrian Winter

The NAB show floor opened today, and I, along with other members of the Nice Shoes team, was there bright and early to see what the exhibitors had to offer.

The first stop on the show floor was the FilmLight booth, where I got a demo of the Baselight for Nuke plug-in. One of the strengths of having integrated color and post in-house at Nice Shoes is the ability to go back into the color suite for a final grading pass once a spot is conformed and comped.

Autodesk upgrades Flame, Maya and Max to 2016 versions

Autodesk has released updated versions of its Maya and 3ds Max 3D animation software tools. Maya 2016 includes improved animation performance with a parallel evaluation system that takes advantage of the CPU and GPU to increase the speed of both playback and character rig manipulation. The look and feel of the tool has been updated, and Maya 2016 includes new capabilities in the Bifrost procedural effects platform that enable realistic liquid simulations.

Apple upgrades Final Cut Pro X, Motion and Compressor

Apple has just announced Motion version 5.2, Final Cut Pro X version 10.2 and Compressor version 4.2. 

Both Motion and Final Cut Pro X now include 3D title capabilities. Users can create animated, customizable 3D text via cinematic templates with built-in backgrounds and animations. There is a large collection of text styles, and it’s possible to customize titles with hundreds of combinations of materials, lighting and edges for advanced 3D looks. Both Motion 5.2 and Final Cut Pro X 10.2 have additional controls to adjust environment, shadows and more, and both instantly convert 2D titles to 3D.

Among Motion 5.2’s many other new 3D title capabilities are flexible surface shading with options for texture maps, diffuse and specular reflection and bump mapping; real-world attributes such as paints, finishes and distress; and more than 90 built-in materials including metals, woods and plastics. Users can combine layers of materials to create unique looks, customize any material and save it as a new preset, and save any 3D title and access it instantly in Final Cut Pro X. (Similarly, Final Cut Pro X version 10.2 users can open any 3D title in Motion to add multiple lights, cameras and tracking.)

Motion version 5.2 supports Panasonic AVC-Ultra, Sony XAVC S and JVC H.264 Long GOP camera formats, and has 12 new generators, including Manga Lines, Sunburst and Spiral Graphics. Apple has made many other enhancements to improve performance and increase control, and to address issues with the prior version. For example, choosing a smooth option on an already smooth point no longer changes the curve, and double-clicking to add a new keyframe in the curve editor no longer changes interpolation of subsequent keyframes.

Besides its new 3D title capabilities, Final Cut Pro X version 10.2 adds many new advanced effects, such as the ability to display up to four video scopes simultaneously, apply a super-ellipse shape mask to any clip and apply a draw mask to any clip, with options for linear, Bézier or B-spline smoothing. There are new shape and color mask controls for every effect, and version 10.2 instantly displays the alpha channel for any effect mask. Apple has merged the color board into a new “color correction” effect, and it is possible to rearrange the processing order of color corrections. Users can save custom effects as presets for quick access.

Final Cut Pro X version 10.2 supports Panasonic AVC-Ultra, Sony XAVC S, and JVC H.264 Long GOP camera formats, with the ability to import Sony XAVC and XDCAM formats without a separate plug-in. Version 10.2 also has GPU-accelerated RED RAW processing with support for dual GPUs and for RED RAW anamorphic formats.

Additional features include “smart collections” at the event and library level, an import window that consolidates all options into a single sidebar, and GPU rendering when using “send to Compressor” (with support for dual GPUs). Final Cut Pro X version 10.2 also draws audio waveforms faster than the prior version, which improves performance, especially when editing over a network. Among a long list of other improvements: transform controls work correctly with photos in a secondary storyline, freeze frames copy media across multiple libraries, slow-motion video clips from iPhone appear in the browser with a badge, and MXF-wrapped AVC-Intra and uncompressed files export faster.

Finally, Compressor version 4.2 introduces new features that let users create an iTunes Store package for iTunes Store submission; add a movie, trailer, closed captions and subtitles to an iTunes Store package; preview closed captions and subtitles right in the viewer; zoom in within the viewer to watch content with true pixel accuracy; and display and assign channels to QuickTime audio tracks prior to processing. Like the new version of Final Cut Pro X, Compressor version 4.2 offers GPU rendering when using “send to Compressor,” with support for dual GPUs. It also offers hardware-accelerated, multipass H.264 encoding on compatible systems, automatic bit-rate calculation for MPEG-4 and H.264 QuickTime movies, optional matrix stereo downmix when processing surround sound for QuickTime output, and CABAC entropy mode for multipass encoding. To address prior issues, Apple improved stability when using the Apple AES3 audio format with ProRes 422 HQ, and jobs submitted via a Droplet now appear in the “active” and “completed” tabs.

Looking at the big picture

By Adrian Winter

NAB is a great place to try something new, as it draws in vendors and experts from every aspect of production and post production. My approach in attending the conference this year has been to take in a wide variety of sessions — extending beyond the standard VFX and animation techniques that speak to my own day-to-day experience — in order to gain a better understanding of the challenges faced by other sides of this business: directors, DPs, editors, colorists and other key artists in the creative process.

NAB: A trip of firsts

By William Rogers

I touched down in Denver on Sunday at about 9pm Mountain time while finishing up a chapter of Slaughterhouse-Five on my Kindle. It was an important distraction, as my flight encountered probably the worst turbulence I’ve ever experienced. I’m incredibly thankful that I’m easily distracted by a good book.

This will be a trip of firsts for me: my first time in Las Vegas, my first time going further west than Denver and my first time staying in a hotel room by myself. I’m not as well traveled in the…

D-Cinema Summit: standardization of immersive sound formats

By Mel Lambert

“Our goal is to develop an interoperative audio-creation workflow and a single DCP that can be used to render to whatever playback format – Dolby Atmos, Barco/Auro 3D, DTS:X/MDA – has been installed in the exhibition space,” stated Brian Vessa, chairman of SMPTE Technology Committee 25CSS, which is considering a common standardized method for delivering immersive audio to cinemas. Vessa, who also serves as executive director of Digital Audio Mastering at Sony Pictures Entertainment, was speaking at this past weekend’s joint SMPTE/NAB Technology Summit on Cinema during a session focused on immersive sound formats.

Final Cut Pro X resurrected: Focus’ advanced workflow

By Daniel Restuccio

To many, Apple’s Final Cut Pro editing application died in June 2011 when the company announced Final Cut Pro X. Derided as an odd version of iMovie, it lacked many of the features of Final Cut Pro 7 and fell out of favor with editors looking for an alternative to Avid Media Composer.

Nearly four years later, Final Cut Pro X 10.1.4 is fully resurrected and, for the makers of the Will Smith caper Focus, was a godsend that provided a flexible, efficient and cost-effective workflow for posting their feature, shot on the Arri Alexa.

Less than two years after releasing the new Mac Pro “cylinder,” Apple claims it has upgraded Final Cut Pro X to the point where it can be taken seriously again as a post production…