Avid Media Composer now supports ProRes RAW and DNx codecs

Avid has added native support in Media Composer for Apple’s ProRes RAW camera codec and support for ProRes playback and encoding on Windows. In addition, Apple will provide 64-bit decoders for DNxHR and DNxHD codecs within the Pro Video Formats package that is available from Apple as a free download for all users. These integrations will allow content creators and post companies to natively create high-quality ProRes content regardless of their operating system and save time during the creative storytelling process.

ProRes is a high-performance editing codec that provides multistream, high-quality images and low complexity for premium realtime editing. The codec, which will be available to Media Composer users on Windows, supports frame sizes ranging from SD and HD to 2K, 4K and beyond at full resolution with image-quality preservation and reduced storage rates.

In addition, Media Composer for macOS and Windows, which was completely redesigned for 2019, will also add native support for ProRes RAW. ProRes RAW applies ProRes compression to the RAW data from a camera sensor, providing the flexibility of RAW video with the performance of ProRes when editing today’s highest-resolution outputs.

Finally, Avid says the continued availability of Avid’s DNxHD and DNxHR decoders for macOS is a benefit to content creators using Apple and Avid products and will ensure the longevity of content creators’ DNx material encoded in MXF and QuickTime files.

 

Sony intros 4K camera with 6K full-frame sensor, auto-focus

At IBC 2019, Sony announced the PXW-FX9, its first XDCAM camera featuring an advanced 6K full-frame sensor and Fast Hybrid Auto Focus (AF) system. The new camera offers content creators greater creative freedom and flexibility to capture stunning images that truly resonate with audiences.

Building on the success of the PXW-FS7 and PXW-FS7M2, the FX9 combines high mobility with an advanced AF system, enhanced bokeh and slow-motion capabilities, thanks to its newly developed sensor. The FX9 also inherits its color science and a dual base ISO from the Venice digital motion picture camera, creating the ultimate tool for documentaries, music videos, drama productions and event shooting.

The FX9 was designed in close collaboration with the creative community. It offers the versatility, portability and performance expected of an FS7 series “run and gun” style camera, while also offering high dynamic range and full-frame shooting features.

“With the new FX9, we are striking a balance between agility and creative performance. We’ve combined the cinematic appeal of full-frame with advanced professional filmmaking capabilities in a package that’s extremely portable and backed by the versatility of Sony E-mount,” says Sony’s Neal Manowitz.

The new Exmor R sensor offers wide dynamic range with high sensitivity, low noise and over 15 stops of latitude that can be recorded internally in 4K 4:2:2 10-bit. Oversampling of the full-frame 6K sensor’s readout allows pros to create high-quality 4K footage with bokeh effects through shallow depth of field, while wide-angle shooting opens new possibilities for content creators to express their creativity.

A dual base ISO of 800 and 4000 enables the image sensor’s characteristics to best capture scenes from broad daylight to the middle of the night. With S-Cinetone color science, the new sensor can create soft facial tones. The camera can also capture content up to five times slow-motion with full HD 120fps shooting played back at 24p.

The shallow depth of field available with a full-frame image sensor requires precise focus control, and the enhanced Fast Hybrid AF system, with customizable transition speeds and sensitivity settings, combines phase detection AF for fast, accurate subject tracking with contrast AF for exceptional focus accuracy. The dedicated 561-point phase-detection AF sensor covers approximately 94% in width and 96% in height of the imaging area, allowing consistently accurate, responsive tracking — even with fast-moving subjects while maintaining shallow depth of field.

Inspired by the high mobility run-and-gun style approach from the FS7 series of cameras, the FX9 offers content creators shooting flexibility thanks to a continuously variable Electronic Variable ND filter. This enables instant exposure level changes depending on the filming environment, such as moving from an inside space to outdoors or while filming in changing natural light conditions.

Additionally, the FX9’s image stabilization metadata can be imported to Sony’s Catalyst Browse/Prepare software to create stable visuals even in handheld mode. Sony is also working to encourage third-party nonlinear editing tools to adopt this functionality.

The FX9 will be available toward the end of 2019.

Adobe adds Sensei-powered Auto Reframe to Premiere

At IBC 2019, Adobe introduced a new reframing/reformatting feature for Premiere Pro called Auto Reframe. Powered by Adobe Sensei, the company’s AI/machine learning framework, Auto Reframe intelligently reframes and reformats video content for different aspect ratios, from square to vertical to cinematic 16:9 versions. Like the recently introduced Content-Aware Fill for After Effects, Auto Reframe uses AI and machine learning to accelerate manual production tasks without sacrificing creative control.

For anyone who needs to optimize content for different platforms, Auto Reframe will save valuable hours by automating the tedious task of manually reframing content every time a different video platform comes into play. It can be applied as an effect to individual clips or to whole sequences.
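Adobe hasn’t published the internals of the Sensei analysis, but the geometry of a reframe is simple enough to sketch. The following is a minimal illustration, not Adobe’s implementation: given a clip’s frame size, a target aspect ratio and a subject position (standing in for whatever the motion analysis returns), it computes the largest crop window of the new aspect that keeps the subject centered and in frame.

```c
#include <stdio.h>

typedef struct { double x, y, w, h; } Rect;

/* Fit the largest window of the target aspect ratio inside the source frame,
 * centered on the subject and clamped to the frame edges. */
static Rect reframe(double src_w, double src_h, double target_aspect,
                    double subject_x, double subject_y) {
    Rect r;
    if (target_aspect < src_w / src_h) {
        r.h = src_h;                    /* target is narrower: keep full height */
        r.w = src_h * target_aspect;
    } else {
        r.w = src_w;                    /* target is wider: keep full width */
        r.h = src_w / target_aspect;
    }
    r.x = subject_x - r.w / 2.0;
    r.y = subject_y - r.h / 2.0;
    if (r.x < 0) r.x = 0;
    if (r.y < 0) r.y = 0;
    if (r.x + r.w > src_w) r.x = src_w - r.w;
    if (r.y + r.h > src_h) r.y = src_h - r.h;
    return r;
}

int main(void) {
    /* Reframe a 1920x1080 (16:9) frame to 9:16 vertical, subject right of center. */
    Rect r = reframe(1920, 1080, 9.0 / 16.0, 1400, 540);
    printf("crop: x=%.0f y=%.0f  %.0fx%.0f\n", r.x, r.y, r.w, r.h);
    return 0;
}
```

In Premiere Pro the generated reframe remains editable after the analysis runs, which is where the “without sacrificing creative control” point above comes in.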

Auto Reframe will launch in Premiere Pro later this year. You can watch Adobe’s Victoria Nece talk about Auto Reframe and more from the IBC 2019 show floor.

An editor’s recap of EditFestLA

By Barry Goch

In late August, I attended my first American Cinema Editors’ EditFest on the Disney lot, and I didn’t know what to expect. However, I was very happy indeed to have spent the day learning from top-notch editors discussing our craft.

Joshua Miller from C&I Studios

The day started with a presentation by Joshua Miller from C&I Studios on DaVinci Resolve. Over the past few releases, Blackmagic has added many new editor-specific and editor-requested features.

The first panel, “From the Cutting Room to the Red Carpet: ACE Award Nominees Discuss Their Esteemed Work,” was moderated by Margot Nack, senior manager at Adobe. The panel included Heather Capps (Portlandia); Nena Erb, ACE (Insecure); Robert Fisher, ACE (Spider-Man: Into the Spider-Verse); Eric Kissack (The Good Place) and Cindy Mollo, ACE (Ozark). As in film school, we would watch a scene, and then the editor of that scene would break it down and discuss their choices. For example, we watched a very dramatic scene from Ozark, then Mollo described how she amplified a real baby’s crying with sound design to layer on more tension. She also had the music in the scene start at a precise moment to guide the viewer’s emotional state.

The second panel, “Reality vs. Scripted Editing: Demystifying the Difference,” was moderated by Avid’s Matt Feury and featured panelists Maura Corey, ACE (Good Girls, America’s Got Talent); Tom Costantino, ACE (The Orville, Intervention); Jamie Nelsen, ACE (Black-ish, Project Runway) and Molly Shock, ACE (Naked and Afraid, RuPaul’s Drag Race All Stars). The consensus of the panel was that an editor can create stories from reality or from script. The panel also noted that an editor can be quickly pigeonholed by their credits — it’s often hard to look past the credits and discover the person. However, it’s way more important to be able to “gel” with an editor as a person, since the creative is going to spend many hours with the editor. As with the previous panel, we were also treated to short clips and behind-the-scenes discussions. For example, Shock told of how she crafted a dramatic scene of an improvised shelter getting washed away during a flood in the middle of a jungle at night — all while the participants were completely naked.

Joe Walker, ACE, and Bobbie O’Steen

The next panel was “Inside the Cutting Room with Bobbie O’Steen: A Conversation with Joe Walker, ACE.” O’Steen, who authored “The Invisible Cut” and “Cut to the Chase,” led Walker, whose credits include Widows, Blade Runner 2049, Arrival, Sicario and 12 Years a Slave, in a wide-ranging conversation about his career, enlivened with clips from his films. In what could be called “evolution of a scene,” Walker broke down the casino lounge scene in Blade Runner 2049, from previs to dailies, and then talked about how the VFX evolved during the edit and how he shaped the scene to final.

The final panel, “The Lean Forward Moment: A Tribute to Norman Hollyn, ACE,” was moderated by Alan Heim, ACE, president of the Motion Picture Editors Guild, and featured Ashley Alizor, assistant editor; Reine-Claire Dousarkissian, associate professor of the practice of cinematic arts at USC; Saira Haider (Creed II), editor; and Thomas G. Miller, ACE, professor of the practice of cinema arts at USC.

I had the pleasure of interviewing Norm for postPerspective, and he was the kind of man you meet once and never forget — a kind and giving spirit whom we lost too soon. The panelists each had a story about how wonderful Norm was, and they honored him by sharing a favorite scene with the audience and explaining how it impacted them through Norm’s teaching. Norm’s colleague at USC, Dousarkissian, chose a scene from the 1952 film noir Sudden Fear, with Jack Palance and Joan Crawford. It’s amazing how much tension can be created by a simple wind-up toy.

I thoroughly enjoyed my experience at EditFest. So often we see VFX breakdowns, which are amazing things, but to see and hear how scenes and story beats are crafted by the best in the business was a treat. I’m already looking forward to next year.


Barry Goch is a finishing artist at LA’s The Foundation, as well as a UCLA Extension instructor in post production. You can follow him on Twitter at @Gochya.

Killer Tracks rebrands as Universal Production Music

Production music company Killer Tracks has rebranded as Universal Production Music. The new name strengthens alignment with parent company Universal Music Group.

As part of its rebrand, Universal Production Music has launched a new US website. Using the new theme “Find Your Anthem,” the site provides intuitive tools for searching, sharing and collaborating, all of which are designed to help users discover unique tracks to tell their stories and make their projects stand out. New features include a “My Account” section that allows users to control access, download tracks, manage licenses and pay invoices.

“Customers will gain faster access to tracks, simplified licensing and more great music,” notes VP of repertoire Carl Peel. “At the same time, they can still speak directly with our music search specialists for help in finding that perfect track and building playlists. Our licensing experts will continue to provide guidance with questions related to rights and usage.”

Drawing on a roster of talent that includes top composers, producers and artists, Universal Production Music releases more than 30 albums of original music each month. It also offers more than 150 curated playlists organized by theme.

“We look forward to working closely with our colleagues in the US to share insights into emerging musical trends, develop innovative services and pursue co-production ventures,” says Jane Carter, managing director of Universal Production Music, UK. “Most importantly, our customers will enjoy an even wider selection of premium music to bring their projects to life.”

Whiskytree experiences growth, upgrades tools

Visual effects and content creation company Whiskytree has gone through a growth spurt that included a substantial increase in staff, a new physical space and new infrastructure.

Providing content for films, television, the Web, apps, games, VR and AR, Whiskytree’s team of artists, designers and technicians use applications such as Autodesk Maya, Side Effects Houdini, Autodesk Arnold, Gaffer and Foundry Nuke on Linux — along with custom tools — to create computer graphics and visual effects.

To help manage its growth and the increase in data that came with it, Whiskytree recently installed Panasas ActiveStor. The platform is used to store and manage Whiskytree’s computer graphics and visual effects workflows, including data-intensive rendering and realtime collaboration using extremely large data sets for movies, commercials and advertising; work for realtime render engines and games; and augmented reality and virtual reality applications.

“We recently tripled our employee count in a single month while simultaneously finalizing the build-out of our new facility and network infrastructure, all while working on a 700-shot feature film project [The Captain],” says Jonathan Harb, chief executive officer and owner of Whiskytree. “Panasas not only delivered the scalable performance that we required during this critical period, but also delivered a high level of support and expertise. This allowed us to add artists at the rapid pace we needed with an easy-to-work-with solution that didn’t require fine-tuning to maintain and improve our workflow and capacity in an uninterrupted fashion. We literally moved from our old location on a Friday, then began work in our new facility the following Monday morning, with no production downtime. The company’s ‘set it and forget it’ appliance resulted in overall smooth operations, even under the trying circumstances.”

In the past, Whiskytree operated a multi-vendor storage solution that was complex and time consuming to administer, modify and troubleshoot. With the office relocation and rapid team expansion, Whiskytree didn’t have time to build a new custom solution or spend a lot of time tuning. It also needed storage that would grow as project and facility needs change.

Projects from the studio include Thor: Ragnarok, Monster Hunt 2, Bolden, Mother, Star Wars: The Last Jedi, Downsizing, Warcraft and Rogue One: A Star Wars Story.

Colorist Chat: Technicolor’s Doug Delaney

Industry veteran Doug Delaney started his career in VFX before the days of digital, learning his craft from the top film timers and color scientists as well as effects supervisors.

Today he is a leading colorist and finisher at Technicolor, working on major movies including the recent Captain Marvel. We spoke to him to find out more about how he works.

NAME: Doug Delaney

TITLE: Senior Colorist

IN ADDITION TO CAPTAIN MARVEL, CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We have just wrapped on Showtime’s The Loudest Voice, which chronicled Fox News’ Roger Ailes and starred Russell Crowe, Naomi Watts and Sienna Miller.

I also just had the immense pleasure of working with DP Cameron Duncan on Nat Geo’s thriller The Hot Zone. For that show we actually worked together early on to establish two looks — one for laboratory scenes taking place in Washington, DC, and another for scenes in central Africa. These looks were then exported as LUTs for dailies so that the creative intent was established from the beginning of shooting and carried through to finishing.

And earlier this year I worked on Love, Death & Robots, which just received two Emmy nominations, so big congrats to that team!

ARE YOU SOMETIMES ASKED TO DO MORE THAN JUST COLOR ON PROJECTS?
Yes, these days I tend to think of “colorists” as finishing artists — meaning that our suites are typically the last stop for a project and where everything comes together.

The technology we have access to in our suites continues to develop, and therefore our capabilities have expanded — there is more we can do in our suites that previously would have needed to be handled by others. A perfect example is visual effects. Sometimes we get certain shots in from VFX vendors that are well-executed but need to be a bit more nuanced — say it’s a driving scene against a greenscreen, and the lighting outside the car feels off for the time of day it’s supposed to be in the scene. Whereas we used to have to kick it back to VFX to fix, I can now go in and use the alpha channels and mattes to color-correct that imbalance.

And what’s important about this new ability is that in today’s demanding schedules and deadlines, it allows us to work collaboratively in real time with the creative rather than in an iterative workflow that takes time we often don’t have.
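As a rough illustration of the matte-based fix Delaney describes (a generic sketch, not Technicolor’s pipeline or any particular grading system’s API): with the VFX vendor’s matte in hand, a correction can be blended in only where the matte is solid, leaving the rest of the comp untouched.

```c
#include <stdio.h>
#include <stddef.h>

/* Blend a corrected image into the original, weighted by a matte.
 * Where the matte is 1.0 the correction applies fully; where it is 0.0 the
 * original pixel is left alone. Buffers are interleaved RGB, normalized 0-1. */
static void apply_matted_correction(float *rgb, const float *corrected,
                                    const float *matte, size_t pixel_count) {
    for (size_t i = 0; i < pixel_count; ++i) {
        float a = matte[i];
        for (int c = 0; c < 3; ++c) {
            size_t k = i * 3 + c;
            rgb[k] = a * corrected[k] + (1.0f - a) * rgb[k];
        }
    }
}

int main(void) {
    /* Two pixels: the first fully inside the matte, the second outside it. */
    float image[6]     = { 0.8f, 0.7f, 0.6f,   0.3f, 0.3f, 0.3f };
    float corrected[6] = { 0.6f, 0.6f, 0.7f,   0.1f, 0.1f, 0.1f };
    float matte[2]     = { 1.0f, 0.0f };
    apply_matted_correction(image, corrected, matte, 2);
    printf("pixel 0: %.2f %.2f %.2f\n", image[0], image[1], image[2]);
    printf("pixel 1: %.2f %.2f %.2f\n", image[3], image[4], image[5]);
    return 0;
}
```

In the driving-plate example, the “corrected” buffer would be the shot with the exterior exposure and color temperature rebalanced, and the matte would isolate everything outside the car windows.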

WHAT’S YOUR FAVORITE PART OF THE JOB?
The look development. That aspect can take on various conversations depending on the project. Sometimes it’s talking with filmmakers in preproduction, sometimes just when it gets to post, but ultimately, being part of the creative journey and how to deliver the best-looking show is what I love.

That and when the final playback happens in our room, when the filmmakers see for the first time all of the pieces of the puzzle come together with sound … it’s awesome.

ANY SUGGESTIONS FOR GETTING THE MOST OUT OF A PROJECT FROM A COLOR PERSPECTIVE?
Understanding that each project has a different relationship with the filmmaker, there needs to be transparency and agreement to the process amongst the director, DP, execs, etc. Whether a clear vision is established early on or they are open to further developing the look, a willingness to engage in an open dialogue is key.

Personally I love when I’m able to help develop the color pipeline in preproduction, as I find it often makes the post experience more seamless. For example, what aired on Strange Angel Season 2 was not far removed from dailies because we had established a LUT in advance and had worked with wardrobe, make-up and others to carry the look through. It doesn’t need to be complicated, but open communication and planning really can go a long way in creating a stunning visual identity and a seamless experience.

HOW DO YOU PREFER THE DP OR DIRECTOR TO DESCRIBE THE LOOK THEY WANT? PHYSICAL EXAMPLES, FILMS TO EMULATE, ETC.?
Physical examples — photo books, style sheets with examples of tones they like and things like that. But ultimately my role is to correctly interpret what it is that they like in what they are showing me and to discern if what they are looking for is a literal representation, or more of an inspiration to start from and massage. Again, the open communication and ability to develop strong working relationships — in which I’m able to discern when there is a direct ask versus a need versus an opportunity to do more and push the boundaries — is key to a successful project.

WHAT SYSTEM DO YOU WORK ON?
Baselight. I love the flexibility of the system and the support that the FilmLight team provides us, as we are constantly pushing the capabilities of the platform, and they continue to deliver.

WHERE CAN PEOPLE FIND YOU ON SOCIAL MEDIA?
@colorist_douglasdelaney

Red adds Helium and Gemini sensor options to Ranger cameras

Red has added its Helium 8K S35 and Gemini 5K S35 sensors to the Red Ranger camera ecosystem. These two new options offer an alternative for creators who prefer an integrated, all-in-one system to the more modular Red DSMC2 camera.

The Ranger Helium 8K S35 and Ranger Gemini 5K S35 are available now via Red’s global network of resellers, through participating rental houses and directly through Red. They join the Red Ranger Monstro 8K VV, which remains a rental house-only product.

All three sensor variants of the Red Ranger camera system share the same compact, standardized camera body, weighing around 7.5 pounds (depending on battery). The system can also handle heavy-duty power sources to satisfy power-hungry configurations and boasts a large fan for quiet, more efficient temperature management.

The Red Ranger camera system includes three SDI outputs (two mirrored and one independent), allowing two different looks to be output simultaneously; wide-input voltage (11.5V to 32V); 24V and 12V power outs (two of each); one 12V P-Tap; integrated 5-pin XLR stereo audio input (line/mic/+48V selectable); as well as genlock, timecode, USB and control connections. Both V-Lock and Gold Mount battery options are supported.

As with all current Red cameras, the Ranger can simultaneously record Redcode RAW plus Apple ProRes or Avid DNxHD or DNxHR at up to 300 MB/s write speeds. It also features Red’s end-to-end color management and post workflow with the enhanced image processing pipeline (IPP2).

Ranger Helium and Ranger Gemini ship complete with:

  • New production top handle
  • Shimmed PL mount
  • New LCD/EVF Adaptor D with improved cable routing when used on the left side of the camera
  • New 24V AC power adaptor with 3-pin 24V XLR power cable, which can also be used with 24V block batteries
  • Lens mount shim pack
  • Compatible Hex and Torx tools

Additionally, Red plans to introduce Canon EF Mount versions of both Ranger Helium and Ranger Gemini later this year.

Pricing for the two new variants is $29,950 for Ranger Helium and $24,950 for Ranger Gemini.

Nugen’s new navigable alert solution for VisLM loudness metering tool

Nugen Audio will be at IBC with the latest updates to its VisLM loudness metering software. VisLM now offers a ‘Flag’ feature that builds upon the Alert functionality found in previous versions of the plug-in, allowing users to navigate through True Peak and short-term/momentary loudness alerts, as well as manual flags for other points of interest. Included with the update is a new maximum loudness range (LRA 18) for the Netflix preset, which will benefit productions supplying content to the SVOD platform. These navigable, visual alerts further simplify operation.

VisLM offers a user interface focused on the world’s standard loudness parameters, including the newly implemented LRA 18 maximum for Netflix productions. Using this solution, editors have access to detailed historical information that enables them to hit the target every time. Additional loudness logging and timecode functions allow for analysis and proof of compliance.

Fred Raskin talks editing and Once Upon a Time… in Hollywood

By Amy Leland

Once Upon a Time… in Hollywood is marketed in a style similar to its predecessors — “the ninth film from Quentin Tarantino.” It is also the third film with Fred Raskin, ACE, as Tarantino’s editor. Having previously edited Django Unchained and The Hateful Eight, as well as working as assistant editor on the Kill Bill films, Raskin has had the opportunity to collaborate with a filmmaker who has always made it clear how much he values collaboration.

On top of this remarkable director/editor relationship, Raskin has also lent his editing hand to a slew of other incredibly popular films, including three entries in the Fast & Furious saga and both Guardians of the Galaxy films. I had the chance to talk with him about his start, his transition to editor and his work on Once Upon a Time… in Hollywood. A tribute to Hollywood’s golden age, the film stars Brad Pitt as the stunt double for a faded actor, played by Leonardo DiCaprio, as they try to find work in a changing industry.

Fred Raskin

How did you get your start as an editor?
I went to film school at NYU to become a director, but I had this realization about midway through that I might not get a directing gig immediately upon graduation, so perhaps I should focus on a craft. Editing was always my favorite part of the process, and I think that of all the crafts, it’s the closest to directing. You’re crafting performances, you’re figuring out how you’re going to tell the story visually… and you can do all of this from the comfort of an air-conditioned room.

I told all of my friends in school, if you need an editor for your projects, please consider me. While continuing to make my own stuff, I also cut my friends’ projects. Maybe a month after I graduated, a friend of mine got a job as an assistant location manager on a low-budget movie shooting in New York. He said, “Hey, they need an apprentice editor on this movie. There’s no pay, but it’s probably good experience. Are you interested?” I said, “Sure.” The editor and I got along really well. He asked me if I was going to move out to LA, because that’s really where the work is. He then said, “When you get out to LA, one of my closest friends in the world is Rob Reiner’s editor, Bob Leighton. I’ll introduce the two of you.”

So that’s what I did, and this kind of ties into Once Upon a Time… in Hollywood, because when I made the move to LA, I called Bob Leighton, who invited me to lunch with his two assistants, Alan Bell and Danny Miller. We met at Musso & Frank. So the first meeting that I had was at this classic, old Hollywood restaurant. Cut to 23 years later, and I’m on the set of a movie that’s shooting at Musso & Frank. It’s a scene between Al Pacino and Leonardo DiCaprio, arguably the two greatest actors of their generations, and I’m editing it. I thought back to that meeting, and actually got kind of emotional.

So Bob’s assistants introduced me to people. That led to an internship, which led to a paying apprentice gig, which led to me getting into the union. I then spent nine years as an assistant editor before working my way up to editor.

When you were starting out, were there any particular filmmakers or editors who influenced the types of stories you wanted to tell?
Growing up, I was a big genre guy. I read Fangoria magazine and gravitated to horror, action and sci-fi. Those were the kinds of movies I made when I was in film school. So when I got out to LA, Bob Leighton got a pretty good sense as to what my tastes were, and he gave me the numbers of a couple of friends of his, Mark Goldblatt and Mark Helfrich, who are huge action/sci-fi editors. I spoke with them, and that was just a real thrill because I was so familiar with their work. Now we are all colleagues, and I pinch myself regularly.

 You have edited many action and VFX films. Has that presented particular challenges to your way of working as an editor?
The challenges, honestly, are more ones of time management because when you’re on a big visual effects movie, at a certain point in the schedule you’re spending two to four hours a day watching visual effects. Then you have to make adjustments to the edit to accommodate for how things look when the finished visual effects come in. It’s extremely time-consuming, and when you’re not only dealing with visual effects, but also making changes to the movie, you have to figure out a way to find time for all of this.

Every project has its own specific set of challenges. Yes, the big Marvel movies have a ton of visual effects, and you want to make sure that they look good. The upside is that Marvel has a lot of money, so when you want to experiment with a new visual effect or something, they’re usually able to support your ideas. You can come up with a concept while you’re sitting behind the Avid and actually get to see it become a reality. It’s very exciting.

Let’s talk about the world of Tarantino. A big part of his legacy was his longtime collaboration with editor Sally Menke, who tragically passed away. How were you then brought in? I’m assuming it has something to do with your assistant editor credit on Kill Bill?
Yes. I assisted Sally for seven years. There were a couple of movies that we worked on together, and then she brought me in for the Kill Bill movies. And that’s when I met Quentin. She taught me how an editing room is supposed to work. When she finished a scene, she would bring me and the other assistants into the room and get our thoughts. It was a welcoming, family-like environment, which I think Quentin really leaned into as well.

While he’s shooting, Quentin doesn’t come into the editing room. He comes in during post, but during production, he’s really focused on shooting the movie. On Kill Bill, I didn’t meet him until a few weeks after the shoot ended. He started coming in, and whenever he and Sally worked on a scene together, they would bring us in and get our thoughts. I learned pretty quickly that the more feedback you’re able to give, the more appreciated it will be. Quentin has said that at least part of the reason why he went with me on Django Unchained was because I was so open with my comments. Also, as the whole world knows, Quentin is a huge movie lover. We frequently would find ourselves talking about movies. He’d be walking through the hall, and we’d just strike up a conversation, and so I think he saw in me a kindred spirit. He really kept me in the family after Kill Bill.

I got my first big editing break right after Kill Bill ended. I cut a movie called Annapolis, which Justin Lin directed. I was no longer on Quentin’s crew, but we still crossed paths a lot. Over the years we’d just bump into each other at the New Beverly Cinema, the revival house that he now owns. We’d talk about whatever we’d seen lately. So he always kept me in mind. When he and Sally finished the rough cuts on Death Proof and Inglourious Basterds, he invited me to come to their small friends-and-family screenings, which was a tremendous honor.

On Django, you were working with a director who had the same collaborator in Sally Menke for such a long time. What was it like in those early days working on Django?
It was without question the most daunting experience that I have gone through in Hollywood. We’re talking about an incredibly talented editor, Sally, whose shoes I had to attempt to fill, and a filmmaker for whom I had the utmost respect.

Some of the western town stuff was shot at movie ranches just outside of LA, and we would do dailies screenings in a trailer there. I made sure that I sat near him with a list of screening notes. I really just took note of where he laughed. That was the most important thing. Whatever he laughed at, it meant that this was something that he liked. There was a PA on set when they went to New Orleans. I stayed in LA, but I asked her to write down where he laughs.

I’m a fan of his. When I went to see Reservoir Dogs, I remember walking out of the theater and thinking, “Well, that’s like the most exciting filmmaker that I’ve seen in quite some time.” Now I’m getting the chance to work with him. And I’ll say because of my fandom, I have a pretty good sense as to his style and his sense of humor. I think that that all helped me when I was in the process of putting the scenes together on Django. I was very confident in my work when I started showing him stuff on that movie.

Now, seven years later, you are on your third film with him. Have you found a different kind of rhythm working with him than you had on that first film?
I would say that a couple of little things have changed. I personally have gained some confidence in how I approach stuff with him. If there was something that I wasn’t sure was working, or that maybe I felt was extraneous, in Django, I might have had some hesitation about expressing it because I wouldn’t want to offend him. But now both of us are coming from the perspective of just wanting to make the best movie that we possibly can. I’m definitely more open than I might have been back then.

Once Upon a Time… in Hollywood has an interesting blend of styles and genres. The thing that stands out is that it is a period piece. Beyond that, you have the movies and TV shows within the movie that give you additional styles. And there is a “horror movie” scene.
Right, the Spahn Ranch sequence.

That was so creepy! I really had that feeling the whole time of, “They can’t possibly kill off Brad Pitt’s character this early, can they?”
That’s the idea. That’s what you’re supposed to be feeling.

When you are working with all of those overlapping styles, do you have to approach the work a different way?
The style of the films within the film was influenced by the movies of the era to some degree. There wasn’t anything stylistically that had us trying to make the movie itself feel like a movie from 1969. For example, Leonardo DiCaprio’s character, Rick Dalton, is playing the heavy on a western TV show called Lancer in the movie. Quentin referred to the Lancer stuff as, “Lancer is my third western, after Django and The Hateful Eight.” He didn’t direct that show as though it was a TV western from the late ’60s. He directed it like it was a Quentin Tarantino western from 2019. Quentin’s style is really all his own.

There are no rules when you’re working on a Quentin Tarantino movie because he knows everything that’s come before, and he is all about pushing the boundaries of what you can do — which is both tremendously exciting and a little scary, like is this going to work for everyone? The idea that we have a narrator who appears once in the first 10 minutes of the movie and then doesn’t appear again until the last 40 minutes, is that something that’s going to throw people off? His feeling is like, yeah, there are going to be some people out there who are going to feel that it’s weird, but they’re also going to understand it. That’s the most important thing. He’s a firm believer in doing whatever we need to do to tell the story as clearly and as concisely as possible. That voiceover narration serves that purpose. Weird or not.

You said before that he doesn’t come into the edit during production. What is your work process during production? Are you beginning the rough cut? And if so, are you sending him things, or are you really not collaborating with him on that process at all until post begins?
This movie was shot in LA, so for the first half of the shoot, we would do regular dailies screenings. I’d sit next to him and write down whatever he laughed at. That process that began on Django has continued. Then I’ll take those notes. Then I assemble the material as we’re shooting, but I don’t show him any of it. I’m not sending him cuts. He doesn’t want to see cuts. I don’t think he wants the distractions of needing to focus on editing.

On this movie, there were only two occasions when he did come into the editing room during production. The movie takes place over the course of three days, and at the end of the second day, the characters are watching Rick on the TV show The F.B.I., which was a real show and that episode was called “All the Streets Are Silent.” The character of Michael Murtaugh was played in the original episode by a young Burt Reynolds. They found a location that matched pretty perfectly and reshot only the shots that had Burt Reynolds in them. They reshot with Leonardo DiCaprio, as Rick Dalton, playing that character. He had to come into the editing room to see how it played and how it matched, and it matched remarkably well. I think that people watching the movie probably assume that Quentin shot the whole thing, or that we used some CG technology to get Leo into the shots. But no, they just figured out exactly the shots that they needed to shoot, and that was all the new material. The rest was from the original episode.

 The other time he came into the edit during production was the sequence in which Bruce Lee and Cliff have their fight. The whole dialogue scene that opens that sequence, it all plays out in one long take. So he was very excited to see how that shot played out. But one of the things that we had spoken about over the course of working together is when you do a long take, the most important thing is what that cut is going to be at the end of the long take. How can we make that cut the most impactful? In this case, the cut is to Cliff throwing Bruce Lee into the car. He wanted to watch the whole scene play out, and then see how that cut worked. When I showed it to him, I had my finger on the stop button so that after that cut, I would stop it so he wouldn’t see anything more and wouldn’t get tempted to get sucked into maybe giving notes. I reached to stop, but he was like, “No, no, no let it play out.” He watched the fight scene, and he was like, “That’s fantastic.” He was very happy.

Once you were in post, what were some of the particular challenges of this film?
One of the really important things is how integral sound was to the process of making this movie. First there were the movies and shows within the movie. When we’re watching the scenes from Bounty Law, the ‘50s Western that Rick starred in, it wasn’t just about the 4×3, black and white photography, but also how we treated the sound. Our sound editorial team and our sound mixing team did an amazing job of getting that stuff to sound like a 16-millimeter print. Like, they put just the right amount of warble into the dialogue, and it makes it feel very authentic. Also, all the Bounty Law stuff is mono, not this wide stereo thing that would not be appropriate for the material from that era.

And I mentioned the Spahn Ranch sequence, when for 20 minutes the movie turns into an all-out horror movie. One of Quentin’s rules for me when I’m putting my assembly together is that he generally does not want me cutting with music. He frequently has specific ideas in his head about what the music is going to be, and he doesn’t want to see something that’s not the way he imagined it. That’s going to take him out of it, and he won’t be able to enjoy the sequence.

When I was putting the Spahn Ranch sequence together, I knew that I had to make it suspenseful without having music to help me. So, I turned to our sound editors, Wylie Stateman and Leo Marcil, and said, “I want this to sound like The Texas Chain Saw Massacre, like I want to have low tones and creaking wood and metal wronks. Let’s just feel the sense of dread through this sequence.” They really came through.

And what ended up happening is, I don’t know if Quentin’s intention originally was to play it without music, but ultimately all the music in the scene comes from what Dakota Fanning’s character, Squeaky, is watching on the TV. Everything else is just sound effects, which were then mixed into the movie so beautifully by Mike and Chris Minkler. There’s just a terrific sense of dread to that sequence, and I credit the sound effects as much as I do the photography.

This film was cut on Avid. Have you always cut on Avid? Do you ever cut on anything else?
When I was in film school, I cut on film. In fact, I took the very first Avid class that NYU offered. That was my junior year, which was long before there were such things as film options or anything. It was really just kind of the basics, a basic Avid Media Composer.

I’ve worked on Final Cut Pro a few times. That’s really the only other nonlinear digital editing system that I’ve used. I’ve never actually used Premiere.

At this point my whole sound effects and music library is Avid-based, and I’m just used to using the Avid. I have a keyboard where all of my keys are mapped, and I find, at this point, that it’s very intuitive for me. I like working with it.

This movie was shot on film, and we printed dailies from the negative. But the negative was also scanned in at 4K, and then those 4K scans were down-converted to DNx115, which is an HD resolution on the Avid. So we were editing in HD, and we could do screenings from that material when we needed to. But we would also do screenings on film.

Wow, so even with your rough cuts, you were turning them around to film cuts again?
Yeah. Once production ended, and Quentin came into the editing room, when we refined a scene to his liking, I would immediately turn that over to my Avid assistant, Chris Tonick. He would generate lists from that cut and would turn it over to our film assistants, Bill Fletcher and Andrew Blustain. They would conform the film print to match the edit that we had in the Avid so that we were capable of screening the movie on film whenever we wanted to. There was always going to be a one- or two-day lag time, depending on when we finished cutting on the Avid. But we were able to get it up there pretty quickly.

Sometimes if you have something like opticals or titles, you wouldn’t be able to generate those for film quickly enough. So if we wanted to screen something immediately, we would have to do it digitally. But as long as we had a couple of days, we would be able to put it up on film, and we did end up doing one of our test screenings on 35 millimeter, which was really great. It added one more layer of authenticity to the movie, getting to see it projected on film.

For a project of this scope, how many assistants do you work with, and how do you like to work with those assistants?
Our team consists of post production supervisor Tina Anderson, who really oversees everything. She runs the editing room. She figures out what we’re going to need. She’s got this long list of items that she goes down every day, and makes sure that we are prepared for whatever is going to come our way. She’s really remarkable.

My first assistant Chris Tonick is the Avid assistant. He cut a handful of scenes during production, and I would occasionally ask him to do some sound work. But primarily during production, he was getting the dailies prepped — getting them into the Avid for me and laying out my bins the way I like them.

In post, we added an Avid second named Brit DeLillo, who would help Chris when we needed to do turnovers for sound or visual effects, music, all of those people.

Then we had our film crew, Bill Fletcher and Andrew Blustain. They were syncing dailies during production, and then they were conforming the film print during post.

Last, but certainly not least, we had Alana Feldman, our post PA, who made sure we had everything we needed.

And honestly, for everybody on the crew, their most important role beyond the work that they were hired to do, was to be an audience member for us whenever we finished a scene. That tradition I experienced as an assistant working under Sally is the tradition that we’ve continued. Whenever we finish a sequence, we bring the whole crew up and show them the scene. We want people to react. We want to hear how they’re responding. We want to know what’s working and what isn’t working. Being good audience members is actually a key part of the job.

L-R: Quentin Tarantino, post supervisor Tina Anderson, first assistant editor (Film) Bill Fletcher, Fred Raskin, 2nd assistant editor (Film) Andrew Blustain, 2nd assistant editor (Avid) Brit DeLillo, post assistant Alana Feldman, producer Shannon McIntosh, 1st assistant editor (Avid) Chris Tonick, assistant to producer Ryan Jaeger and producer David Heyman

When you’re looking for somebody to join your team as an assistant, what are you looking for?
There are a few things. One obvious thing, right off the bat, is someone who is personable. Is this someone I’m going to want to have lunch with every day for months on end? Generally, especially working on a Quentin Tarantino movie, somebody with a good knowledge of film history who has a love of movies is going to be appreciated in that environment.

The other thing that I would say honestly  — and this might sound funny — is having the ability to see the future. And I don’t mean that I need psychic film assistants. I mean they need to be able to figure out what we’re going to need later on down the line and be prepared for it.

If I turn over a sequence, they should be looking at it and realizing, oh, there are some visual effects in here that we’re going to have to address, so we have to alert the visual effects companies about this stuff, or at least ask me if it’s something that I want.

If there were somebody who thought to themselves, “I want a career like Fred Raskin’s. I want to edit these kinds of cool films,” what advice would you give them as they’re starting out?
I have three standard pieces of advice that I give to everyone. My experience, I think, is fairly unique. I’ve been incredibly fortunate to get to work with some of my favorite filmmakers. The way my story unfolded … not everybody is going to have the opportunities I’ve had.

But my standard pieces of advice are, number one — and I mentioned this earlier — be personable. You’re working with people you’re going to share space with for many months on end. You want to be the kind of person with whom they’re going to want to spend time. You want to be able to get along with everyone around you. And you know, sometimes you’ve got some big personalities to deal with, so you have to be the type who can navigate that.

Then I would say, watch everything you possibly can. Quentin is obviously an extreme example, but most filmmakers got into this business because they love movies. And so the more you know about movies, and the more you’re able to talk about movies, the more those filmmakers are going to respect you and want to work with you. This kind of goes hand in hand with being personable.

The other piece of advice — and I know this sounds like a no-brainer — if you’re going for an interview with a filmmaker, make sure you’ve familiarized yourself with that person’s work. Be able to talk with them about their movies. They’re going to appreciate that you took the time to explore their work. Everybody wants to talk about the work they’ve done, so if you’re able to engage them on that level, I think it’s going to reflect well on you.

Absolutely. That’s great advice.


Amy Leland is a film director and editor. Her short film, “Echoes”, is now available on Amazon Video. She also has a feature documentary in post, a feature screenplay in development, and a new doc in pre-production. She is an editor for CBS Sports Network and recently edited the feature “Sundown.” You can follow Amy on social media on Twitter at @amy-leland and Instagram at @la_directora.

Autodesk intros Bifrost for Maya at SIGGRAPH

At SIGGRAPH, Autodesk announced a new visual programming environment in Maya called Bifrost, which makes it possible for 3D artists and technical directors to create serious effects quickly and easily.

“Bifrost for Maya represents a major development milestone for Autodesk, giving artists powerful tools for building feature-quality VFX quickly,” says Chris Vienneau, senior director, Maya and Media & Entertainment Collection. “With visual programming at its core, Bifrost makes it possible for TDs to build custom effects that are reusable across shows. We’re also rolling out an array of ready-to-use graphs to make it easy for artists to get 90% of the way to a finished effect fast. Ultimately, we hope Bifrost empowers Maya artists to streamline the creation of anything from smoke, fire and fuzz to high-performance particle systems.”

Bifrost highlights include:

  • Ready-to-Use Graphs: Artists can quickly create state-of-the-art effects that meet today’s quality demands.
  • One Graph: In a single visual programming graph, users can combine nodes ranging from math operations to simulations.
  • Realistic Previews: Artists can see exactly how effects will look after lighting and rendering right in the Arnold Viewport in Maya.
  • Detailed Smoke, Fire and Explosions: New physically-based solvers for aerodynamics and combustion make it easy to create natural-looking fire effects.
  • The Material Point Method: The new MPM solver helps artists tackle realistic granular, cloth and fiber simulations.
  • High-Performance Particle System: A new particle system crafted entirely using visual programming adds power and scalability to particle workflows in Maya.
  • Artistic Effects with Volumes: Bifrost comes loaded with nodes that help artists convert between meshes, points and volumes to create artistic effects.
  • Flexible Instancing: High-performance, rendering-friendly instancing empowers users to create enormous complexity in their scenes.
  • Detailed Hair, Fur and Fuzz: Artists can now model things consisting of multiple fibers (or strands) procedurally.

Bifrost is available for download now and works with any version of Maya 2018 or later. It will also be included in the installer for Maya 2019.2 and later versions. Updates to Bifrost between Maya releases will be available for download from Autodesk AREA.

In addition to the release of Bifrost, Autodesk highlighted the latest versions of Shotgun, Arnold, Flame and 3ds Max. The company gave a tech preview of a new secure enterprise Shotgun that supports network segregation and customer-managed media isolation on AWS, making it possible for the largest studios to collaborate in a closed-network pipeline in the cloud. Shotgun Create, now out of beta, delivers a cloud-connected desktop experience, making it easier for artists and reviewers to see which tasks demand attention while providing a collaborative environment to review media and exchange feedback accurately and efficiently. Arnold 5.4 adds important updates to the GPU renderer, including OSL and OpenVDB support, while Flame 2020.1 introduces more uses of AI with new Sky Extraction tools and specialized image segmentation features. Also on display, the 3ds Max 2020.1 update features modernized procedural tools for 3D modeling.

Nvidia at SIGGRAPH with new RTX Studio laptops, more

By Mike McCarthy

Nvidia made a number of new announcements at the SIGGRAPH conference in LA this week. While the company didn’t have any new GPU releases, Nvidia was showing off new implementations of its technology — combining AI image analysis with raytracing acceleration for an Apollo 11-themed interactive AR experience. Nvidia has a number of new 3D software partners supporting RTX raytracing through its OptiX raytracing engine. It allows programs like Blender Cycles, KeyShot, Substance and Flame to further implement GPU acceleration, using RT cores for raytracing and Tensor cores for AI de-noising.

Nvidia was also showing off a number of new RTX Studio laptop models from manufacturers like HP, Dell, Lenovo and Boxx. These laptops all support Nvidia’s new unified Studio Driver, which, now on its third release, offers full 10-bit color support for all cards, blurring the feature-set lines between the GeForce and Quadro products. Quadro variants still offer more frame buffer memory, but support for the Studio Driver makes the GeForce cards even more appealing to professionals on a tight budget.

Broader support for 10-bit color makes sense as we move toward more HDR content that requires the higher bit depth, even at the consumer level. And these new Studio Drivers also support both desktop and mobile GPUs, which will simplify eGPU solutions that utilize both on a single system. So if you are a professional with a modern Nvidia RTX GPU, you should definitely check out the new Studio Driver options.
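The bit-depth argument is easy to quantify. A quick back-of-the-envelope check (plain arithmetic, not tied to any specific driver feature) shows why 8 bits per channel starts to band on the smooth gradients HDR content tends to contain:

```c
#include <stdio.h>

int main(void) {
    /* Code values per channel and the size of one quantization step across a
     * normalized 0.0-1.0 signal, at 8-bit and 10-bit precision. */
    int depths[] = { 8, 10 };
    for (int i = 0; i < 2; ++i) {
        int levels = 1 << depths[i];          /* 256 vs. 1024 */
        double step = 1.0 / (levels - 1);     /* smallest representable increment */
        printf("%2d-bit: %4d levels per channel, step = %.6f\n",
               depths[i], levels, step);
    }
    return 0;
}
```

Four times as many code values per channel means each step is roughly a quarter the size, which is what keeps skies and skin-tone gradients from posterizing at HDR contrast ranges.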

Nvidia is also promoting GauGAN, its cloud-based AI image-generating program, which is free to try online. It is a fun toy, and there are a few potential uses in the professional world, especially for previz backgrounds and concept art.


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Khronos releases OpenXR 1.0 for cross-platform AR/VR

The Khronos Group has ratified and released the OpenXR 1.0 specification, along with publicly available implementations. OpenXR is a unifying, royalty-free open standard that provides high-performance, cross-platform access to virtual reality (VR) and augmented reality (AR) — collectively known as XR — platforms and devices. The new specification can be found on the Khronos website and via GitHub.

“The feedback from the community on the provisional specification released in March has been invaluable to getting us to this significant milestone,” says Brent Insko, OpenXR working group chair and lead XR architect at Intel. “Our work continues as we now finalize a comprehensive test suite, integrate key game engine support, and plan the next set of features to evolve a truly vibrant, cross-platform standard for XR platforms and devices. Now is the time for software developers to start putting OpenXR to work.”

After gathering feedback from the XR community during the public review of the provisional specification, improvements were made to the OpenXR input subsystem, game engine editor support and loader. With this 1.0 release, the working group will evolve the standard while maintaining full backward compatibility from this point onward, giving software developers and hardware vendors a solid foundation upon which to deliver portable user experiences.

OpenXR implementations are shipping this week, including the Monado OpenXR open source implementation from Collabora, the OpenXR runtime for Windows Mixed Reality headsets from Microsoft, and an Oculus OpenXR implementation for Rift and Oculus Quest. Epic Games also plans to release OpenXR 1.0 support in Unreal Engine.
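For developers taking Insko’s advice to “start putting OpenXR to work,” the entry point looks the same on any conforming runtime. Below is a minimal sketch against the OpenXR 1.0 C headers: create an instance, then ask the runtime for a head-mounted display. Error handling is trimmed and the application name is just a placeholder.

```c
#include <stdio.h>
#include <string.h>
#include <openxr/openxr.h>

int main(void) {
    /* Describe the application and request an OpenXR 1.0 instance. */
    XrInstanceCreateInfo create_info = { XR_TYPE_INSTANCE_CREATE_INFO };
    strncpy(create_info.applicationInfo.applicationName, "OpenXRHello",
            XR_MAX_APPLICATION_NAME_SIZE);
    create_info.applicationInfo.applicationVersion = 1;
    create_info.applicationInfo.apiVersion = XR_CURRENT_API_VERSION;

    XrInstance instance = XR_NULL_HANDLE;
    if (XR_FAILED(xrCreateInstance(&create_info, &instance))) {
        fprintf(stderr, "No OpenXR runtime available\n");
        return 1;
    }

    /* Ask the runtime for a head-mounted-display system. */
    XrSystemGetInfo system_info = { XR_TYPE_SYSTEM_GET_INFO };
    system_info.formFactor = XR_FORM_FACTOR_HEAD_MOUNTED_DISPLAY;
    XrSystemId system_id = XR_NULL_SYSTEM_ID;
    if (XR_SUCCEEDED(xrGetSystem(instance, &system_info, &system_id))) {
        XrSystemProperties props = { XR_TYPE_SYSTEM_PROPERTIES };
        xrGetSystemProperties(instance, system_id, &props);
        printf("XR system: %s\n", props.systemName);
    }

    xrDestroyInstance(instance);
    return 0;
}
```

From there, the same session, swapchain and frame-loop calls run unchanged whether the loader resolves to Monado, the Windows Mixed Reality runtime or Oculus’ implementation.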

OptiTrack reveals new skeletal solver

OptiTrack has a new skeletal solver that brings artifact-free, realtime character animation to its optical motion capture systems.

Key features of OptiTrack skeletal solver include:

– Accurate human movement tracking in realtime
– Major advances in solve quality and artifact-free streaming of character data
– Compatible with any OptiTrack system, including those used for live-action camera tracking, virtual camera tracking and virtual reality
– Supports industry-standard tools, including Epic Games’ Unreal Engine, Unity Technologies’ Unity realtime platform and Autodesk MotionBuilder
– Extremely low latency (less than 10 milliseconds)

As a complement to its new skeletal solver, OptiTrack has introduced an equally high-performing finger-tracking solution created in partnership with Manus VR. Embedded with OptiTrack’s signature Active pulse technology, inertial measurement units (IMUs) and bend sensors, the gloves deliver accurate, continuous finger-tracking data in real time that is fully compatible with existing character animation and VR pipelines when used with OptiTrack systems.

AJA intros Ki Pro Go, Corvid 44 12G and more at NAB

AJA was at NAB this year showing the new Ki Pro Go H.264 multichannel HD/SD recorder/player, as well as 14 openGear converter cards featuring DashBoard software support, two new IP video transmitters that bridge HDMI and 3G-SDI signals to SMPTE ST 2110 and the Corvid 44 12G I/O card for AJA Developers. AJA also introduced updates featuring improvements for its FS-HDR HDR/WCG converter, desktop and mobile I/O products, AJA Control Room software, HDR Image Analyzer and the Helo recorder/streamer.

Ki Pro Go is a genlock-free, multichannel H.264 HD and SD recorder/player with a flexible architecture. This portable device allows users to record up to four channels of pristine HD and SD content from SDI and HDMI sources to off-the-shelf USB media via 4x USB 3.0 ports, with a fifth port for redundant recording. The Ki Pro Go will be available in June for $3,995.

An FS-HDR v3.0 firmware update features enhanced coloring tools and support for multichannel Dynamic LUTs, plus other improvements. The release includes a new integrated Colorfront Engine Film Mode offering a rich grading and look creation toolset with optional ACES colorspace, ASC color decision list controls and built-in look selection. It’s available in June as a free update.

Developed with Colorfront, the HDR Image Analyzer v1.1 firmware update features several new enhancements, including a new web UI that simplifies remote configuration and control from multiple machines, with updates over Ethernet offering the ability to download logs and screenshots. New remote desktop support provides facility-friendly control from desktops, laptops and tablets on any operating system. The update also adds new HDR monitoring and analysis tools. It’s available soon as a free update.

The Desktop Software v15.2 update offers new features and performance enhancements for AJA Kona and Io products. It adds support for Apple ProRes capture and playback across Windows, Linux and macOS in AJA Control Room, at up to 8K resolutions, while also adding new IP SMPTE ST 2110 workflows using AJA Io IP and updates for Kona IP, including ST 2110-40 ANC support. The free Desktop Software update will be available in May.

The Helo v4.0 firmware update introduces new features that allow users to customize their streaming service and improve monitoring and control. AV Mute makes it easy to personalize the viewing experience with custom service branding when muting audio and video streams, while Event Logging enables encoder activity monitoring for simpler troubleshooting. It’s available in May as a free update.

The new openGear converter cards combine the capabilities of AJA's mini converters with openGear's high-density architecture and support for DashBoard, enabling industry-standard configuration, monitoring and control in broadcast and live event environments over a PC or local network on Windows, macOS or Linux. New models include re-clocking SDI distribution amplifiers, single-mode 3G-SDI fiber converters plus multi-mode variants and an SDI audio embedder/disembedder. The openGear cards are available now, with pricing dependent upon the model.

AJA’s new IPT-10G2-HDMI and IPT-10G2-SDI mini converters are single-channel IP video transmitters for bridging traditional HDMI and 3G-SDI signals to SMPTE ST 2110 for IP-based workflows. Both models feature dual 10 GigE SFP+ ports for facilities using SMPTE ST 2022-7 for redundancy in critical distribution and monitoring. They will be available soon for $1,295.

The Corvid 44 12G is an 8-lane PCIe 3.0 video and audio I/O card featuring support for 12G-SDI I/O in a low-profile design for workstations and servers and 8K/UltraHD2/4K/UltraHD high frame rate, deep color and HDR workflows. Corvid 44 12G also facilitates multichannel 12G-SDI I/O, enabling either 8K or multiple 4K streams of input or output. It is compatible across macOS, Windows and Linux and is used in high-performance applications for imaging, post, broadcast and virtual production. Corvid 44 12G cards will be available soon.

Sony’s NAB updates — a cinematographer’s perspective

By Daniel Rodriguez

With its NAB offerings, Sony once again showed that it has a firm presence in nearly every stage of production, be it motion picture, broadcast media or short form. The company continues to keep up with current demands while simultaneously preparing for the inevitable wave of change that seems to come faster each year. While the list of new hardware was short this year, many improvements to existing hardware and software were released to ensure that Sony products — both new and existing — remain relevant well into the future.

The ability to easily access, manipulate, share and stream media has always been a priority for Sony. This year at NAB, Sony continued to demonstrate its IP Live, SR Live, XDCAM Air and Media Backbone Hive platforms, which give users the opportunity to manage media all over the globe. IP Live enables remote production, keeping the core processing hardware in a central location while letting users access it from anywhere. This extends to 4K and HDR/SDR streaming as well, which is where SR Live comes into play. SR Live allows a native 4K HDR signal to be processed into full HD and standard SDR signals, and a core improvement is the ability to adjust the conversion curves during a live broadcast to address any issues that arise when converting HDR signals to SDR.

For other media, including XDCAM-based cameras, XDCAM Air allows for the wireless transfer and streaming of most media through QoS services, and turns almost any easily accessible camera with wireless capabilities into a streaming tool.

Media Backbone Hive allows users to access their media anywhere they want. Rather than just being an elaborate cloud service, Media Backbone Hive supports internal Adobe Cloud-based editing, accepts nearly every file type, lets users embed metadata and makes searching simple using keywords and phrases spoken in the media itself.

For the broadcast market, Sony introduced the HDC-5500 4K HDR three-CMOS-sensor camcorder, which it is calling its “flagship” camera in this market. Offering 4K HDR and high frame rates, the camera also features a global shutter — which is essential for dealing with strobing from lights — so it can capture fast action without the infamous rolling-shutter blur. The camera provides 4K output over 12G-SDI, allowing for 4K and HDR monitoring, and as these outputs become the norm, the HDC-5500 will surely be a hit with users, especially with the addition of the global shutter.

Sony is very much a company that likes to focus on the longevity of their previous releases… cameras especially. Sony’s FS7 is a camera that has excelled in its field since its introduction in 2014, and to this day is an extremely popular choice for short form, narrative and broadcast media. Like other Sony camera bodies, the FS7 allows for modular builds and add-ons, and this is where the new CBK-FS7BK ENG Build-Up Kit comes in. Sporting a shoulder mount and ENG viewfinder, the kit includes an extension in the back that allows for two wireless audio inputs, RAW output, streaming and file transfer via Wireless LAN or 4G/LTE connection, as well as QoS streaming (only through XDCAM Air) and timecode input. This CBK-FS7BK ENG Build-Up Kit turns the FS7 into an even more well-rounded workhorse.

The Sony Venice is Sony's flagship cinema camera, replacing the F65, which is still a brilliant and popular camera that popped up as recently as last year's Annihilation. The Venice takes a leap further by entering the full-frame, VistaVision market. Boasting top-of-the-line specs and a smaller, more modular build than the F65, the camera isn't exactly a new release — it came out in November 2017 — but Sony has secured longevity for its flagship camera at a time when other camera manufacturers are just releasing their own VistaVision-sensored cameras and smaller alternatives.

Sony recently released a firmware update for the Venice that adds X-OCN XT — its highest form of compressed 16-bit RAW — as well as two new imager modes that let the camera sample 5.7K 16:9 in full frame and 6K 2.39:1 in full width, plus 4K output over 6G/12G-SDI and wireless remote control with the CBK-WA02. Since the Venice is small and often placed on harder-to-reach mounts, wireless control is quickly becoming a feature that many camera assistants need. New anamorphic desqueeze modes for 1.25x, 1.3x, 1.5x and 1.8x have also been added, which is huge, since older anamorphic lenses are constantly being revisited and new ones created, such as the Technovision 1.5x — made famous by Vittorio Storaro on Apocalypse Now (1979) — and the Cooke Full Frame Anamorphic 1.8x. With VistaVision full frame now an easily accessible way of filming, new approaches to lensing are becoming common, so anamorphic is no longer limited to 1.3x and 2x squeezes. It's reassuring to see Sony look out for storytellers who may want to employ less common anamorphic desqueeze ratios.
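For those curious about the math, a desqueeze factor simply multiplies the capture aspect ratio to give the delivered aspect ratio. Below is a minimal sketch of that arithmetic; the 4096x3024 capture area is purely illustrative and not an actual Venice imager spec.

```python
# Minimal sketch: how an anamorphic squeeze factor changes the delivered
# aspect ratio. The capture dimensions below are illustrative, not Venice specs.

def desqueezed_aspect(width_px: int, height_px: int, squeeze: float) -> float:
    """Delivered aspect ratio after horizontally desqueezing anamorphic footage."""
    return (width_px * squeeze) / height_px

if __name__ == "__main__":
    # Hypothetical 4:3-ish capture area, with the desqueeze factors Sony added.
    for squeeze in (1.25, 1.3, 1.5, 1.8, 2.0):
        ratio = desqueezed_aspect(4096, 3024, squeeze)
        print(f"{squeeze}x squeeze -> {ratio:.2f}:1 delivered aspect")
```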

As larger resolutions and higher frame rates become the norm, Sony has introduced the new SxS Pro X cards. A follow-up to the hugely successful SxS Pro+ cards, these new cards boast a transfer speed of 10Gbps (1250MB/s) in 120GB and 240GB capacities. This is a huge step up from the previous SxS Pro+ cards, which offered a read speed of 3.5Gbps and a write speed of 2.8Gbps. Probably the most exciting part of the announcement is the corresponding SBAC-T40 card reader, which Sony says can offload a full 240GB card in about 3.5 minutes.
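A quick back-of-the-envelope check shows that the quoted offload time lines up with the interface speed. The sketch below assumes decimal gigabytes and a sustained transfer near the full 10Gbps link rate; both are assumptions rather than published Sony figures.

```python
# Back-of-envelope check of the quoted SxS Pro X numbers. Assumes decimal
# gigabytes (1 GB = 1e9 bytes) and a sustained rate close to the link speed;
# both are assumptions, not Sony specs.

CARD_BYTES = 240e9          # 240 GB card
LINK_BYTES_PER_S = 1.25e9   # 10 Gbps = 1250 MB/s

ideal_seconds = CARD_BYTES / LINK_BYTES_PER_S
print(f"Ideal offload time at full link rate: {ideal_seconds / 60:.1f} minutes")
# ~3.2 minutes, in line with the ~3.5-minute figure quoted for the SBAC-T40.
```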

Sony’s newest addition to the Venice camera is the Rialto extension system. Using the Venice’s modular build, the Rialto is a hardware extension that allows you to remove the main body’s sensor and install it into a smaller body unit which is then tethered either nine or 18 feet by cable back to the main body. Very reminiscent of the design of ARRI’s Alexa M unit, the Rialto goes further by being an extension of its main system rather than a singular system, which may bring its own issues. The Rialto allows users to reach spots where it may otherwise prove difficult using the actual Venice body. Its lightweight design allows users to mount it nearly anywhere. Where other camera bodies that are designed to be smaller end up heavy when outfitted with accessories such as batteries and wireless transmitters, the Rialto can easily be rigged to aerials, handhelds, and Steadicams. Though some may question why you wouldn’t just get a smaller body from another camera company, the big thing to consider is that the Rialto isn’t a solution to the size of the Venice body — which is already very small, especially compared to the previous F65 — but simply another tool to get the most out of the Venice system, especially considering you’re not sacrificing anything as far as features or frame rates. The Rialto is currently being used on James Cameron’s Avatar sequels, as its smaller body allows him to employ two simultaneously for true 3D recording whilst giving all the options of the Venice system.

With innovations in broadcast and motion picture production, there is a constant drive to push boundaries and make capture and distribution instantaneous. Building a huge network for distribution, streaming, capture and storage not only secures Sony's position as the powerhouse it already is, but also ensures its presence in the ever-changing future.


Daniel Rodriguez is a New York-based director and cinematographer. Having spent years working for companies such as Light Iron, Panavision and ARRI Rental, he currently works as a freelance cinematographer, filming narrative and commercial work throughout the five boroughs.

 

NAB 2019: Maxon acquires Redshift Rendering Technologies

Maxon, maker of Cinema 4D, has purchased Redshift Rendering Technologies, developers of the Redshift rendering engine. Redshift is a flexible GPU-accelerated renderer targeting high-end production, offering an extensive suite of features that makes rendering complicated 3D projects faster. It is available as a plugin for Maxon's Cinema 4D and other industry-standard 3D applications.

“Rendering can be the most time-consuming and demanding aspect of 3D content creation,” said David McGavran, CEO of Maxon. “Redshift’s speed and efficiency combined with Cinema 4D’s responsive workflow make it a perfect match for our portfolio.”

“We’ve always admired Maxon and the Cinema 4D community, and are thrilled to be a part of it,” said Nicolas Burtnyk, co-founder/CEO, Redshift. “We are looking forward to working closely with Maxon, collaborating on seamless integration of Redshift into Cinema 4D and continuing to push the boundaries of what’s possible with production-ready GPU rendering.”

Redshift is used by post companies including Technicolor, Digital Domain, Encore Hollywood and Blizzard. It has been used for VFX and motion graphics on projects such as Black Panther, Aquaman, Captain Marvel, Rampage, American Gods, Gotham, The Expanse and more.

Facilis launches Hub shared storage line

Facilis Technology rolled out its new Hub Shared Storage line for media production workflows during the NAB show. Facilis Hub includes new hardware and an integrated disk-caching system for cloud and LTO backup and archive designed to provide block-level virtualization and multi-connectivity performance.

“Hub Shared Storage is an all-new product based on our Hub Server that launched in 2017. It’s the answer to our customers’ requests for a more compact server chassis, lower-cost hybrid (SSD and HDD) options and integrated cloud and LTO archive features,” says Jim McKenna, VP of sales and marketing at Facilis. “We deliver all of this with new, more powerful hardware, new drive capacity options and a new look to both the system and software interface.”

The Facilis shared storage network allows both block-mode Fibre Channel and Ethernet connectivity simultaneously with the ability to connect through either method with the same permissions, user accounts and desktop appearance. This expands user access, connection resiliency and network permissions. The system can be configured as a direct-attached drive or segmented into various-sized volumes that carry individual permissions for read and write access.

Facilis Object Cloud
Object Cloud is an integrated disk-caching system for cloud and LTO backup and archive that includes up to 100TB of cloud storage for an annual fee. The Facilis Virtual Volume can display cloud, tape and spinning disk data in the same directory structure on the client desktop.

“A big problem for our customers is managing multiple interfaces for the various locations of their data. With Object Cloud, files in multiple locations reside in the same directory structure and are tracked by our FastTracker asset tracking in the same database as any active media asset,” says McKenna. “Object Cloud uses Object Storage technology to virtualize a Facilis volume with cloud and LTO locations. This gives access to files that exist entirely on disk, in the Cloud or on LTO, or even partially on disk and partially in the cloud.”

Every Facilis Hub Shared Storage server comes with unlimited seats in the Facilis FastTracker asset tracking application. The Object Cloud Software and Storage package is available for most Facilis servers running version 7.2 or higher.

Behind the Title: Weta Digital’s Paolo Emilio Selva

NAME: Paolo Emilio Selva 

COMPANY: Weta Digital

CAN YOU DESCRIBE YOUR COMPANY?
In the middle of Middle-earth, Weta Digital is a VFX company with more than a thousand artists and developers. While focusing on delivering amazing movies, Weta Digital also invests in research and development for VFX.

WHAT’S YOUR JOB TITLE?
Head of Software Engineering 

WHAT DOES THAT ENTAIL?
In the software engineering department, we write tools for artists and make sure their creative intent is maintained across the pipeline. We also make sure production isn’t disrupted across the facility.  

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Writing code, maybe? Yeah, I’m still writing code when I can, mostly fixing bugs and off-loading other developers from nasty issues, keeping them focused on the research and development and providing support.  

HOW DID YOU START YOUR CAREER?
I started my career as a researcher in human-computer interfaces at a university in Rome. I liked to solve problems, and the VFX industry has lots of problems to be solved 😉

HOW LONG HAVE YOU BEEN WORKING IN VFX?
Ten years  

DID A PARTICULAR FILM INSPIRE YOU ALONG THIS PATH IN ENTERTAINMENT?
I grew up with Pixar movies and lots of animated short movies. I also played video games. I was always fascinated by what was behind those things. I wanted to replicate them, which I did by re-writing games or effects seen in movies.

I started by using existing tools. Then, during high school — thanks to my older cousin — I found Basic and started writing my own tools. I found that I was able to control external devices with Basic and my Commodore 64. I also started enjoying electronics and micro-controllers. All of this reached its peak with my thesis at university, when I created a data-glove from scratch — from the hardware to the software — and started looking at example applications for it. This was between 1999 and 2001, when I also started working at the Human-Computer Interaction Lab.

WHAT’S YOUR FAVORITE PART OF THE JOB?
It’s challenging, in a good way, every day. And as problem solver, I like this part of my job. 

WHAT’S YOUR LEAST FAVORITE?
Sometimes too many meetings, but it’s important to communicate with every department and understand their needs. 

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Probably teaching and researching at university in Human-Computer Interaction. 

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Just to name some of them: War for the Planet of the Apes, Valerian, The BFG and Guardians of the Galaxy Vol. 2.          

WHAT IS THE PROJECT/S THAT YOU ARE MOST PROUD OF?
I was lucky enough to be at Weta Digital when we worked on Avatar and The Jungle Book, which both won Oscars for Best Visual Effects, and also The Adventures of Tintin, where I was directly involved in the hair-rendering process and all the TopoClouds tools for the Pantaray pipeline.

WHAT TOOLS DO YOU USE DAY TO DAY?
Nowadays, it’s my email client, my phone and very little text-editor and C++ compilers.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Mostly enjoy time with my wife, my cats, video games and the gym when I can.

Adobe Max 2018: Creative Cloud updates and more

By Mike McCarthy

I attended my first Adobe Max last week in Los Angeles. This huge conference takes over the LA convention center and overflows into the surrounding venues. It began on Monday morning with a two-and-a-half-hour keynote outlining the developments and features being released in the newest updates to Adobe's Creative Cloud. This was followed by all sorts of smaller sessions and training labs for attendees to dig deeper into the new capabilities of the various tools and applications.

The South Hall was filled with booths from various hardware and software partners, with more available than any one person could possibly take in. Tuesday started off with some early morning hands-on labs, followed by a second keynote presentation about creative and career development. I got a front row seat to hear five different people, who are successful in their creative fields — including director Ron Howard — discuss their approach to work and life. The rest of the day was so packed with various briefings, meetings and interviews that I didn’t get to actually attend any of the classroom sessions.

By Wednesday, the event was beginning to wind down, but there was still a plethora of sessions and other options for attendees to split their time between. I presented the workflow for my most recent project, Grounds of Freedom, at Nvidia's booth in the community pavilion and spent the rest of the time connecting with other hardware and software partners who had a presence there.

Adobe released updates for most of its creative applications concurrent with the event. Many of the most relevant updates to the video tools were previously announced at IBC in Amsterdam last month, so I won't repeat those, but there are still a few new video features, as well as many that are broader in scope with regard to media as a whole.

Adobe Premiere Rush
The biggest video-centric announcement is Adobe Premiere Rush, which offers simplified video editing workflows for mobile devices and PCs.  Currently releasing on iOS and Windows, with Android to follow in the future, it is a cloud-enabled application, with the option to offload much of the processing from the user device. Rush projects can be moved into Premiere Pro for finishing once you are back on the desktop.  It will also integrate with Team Projects for greater collaboration in larger organizations. It is free to start using, but most functionality will be limited to subscription users.

Let’s keep in mind that I am a finishing editor for feature films, so my first question (as a Razr-M user) was, “Who wants to edit video on their phone?” But what if the user shot the video on their phone? I don’t do that, but many people do, so I know this will be a valuable tool. This has me thinking about my own mentality toward video. I think if I was a sculptor I would be sculpting stone, while many people are sculpting with clay or silly putty. Because of that I would have trouble sculpting in clay and see little value in tools that are only able to sculpt clay. But there is probably benefit to being well versed in both.

I would have no trouble showing my son’s first-year video compilation to a prospective employer because it is just that good — I don’t make anything less than that. But there was no second-year video, even though I have the footage because that level of work takes way too much time. So I need to break free from that mentality, and get better at producing content that is “sufficient to tell a story” without being “technically and artistically flawless.” Learning to use Adobe Rush might be a good way for me to take a step in that direction. As a result, we may eventually see more videos in my articles as well. The current ones took me way too long to produce, but Adobe Rush should allow me to create content in a much shorter timeframe, if I am willing to compromise a bit on the precision and control offered by Premiere Pro and After Effects.

Rush allows up to four layers of video, with various effects and 32-bit Lumetri color controls, as well as AI-based audio filtering for noise reduction and de-reverb and lots of preset motion graphics templates for titling and such.  It should allow simple videos to be edited relatively easily, with good looking results, then shared directly to YouTube, Facebook and other platforms. While it doesn’t fit into my current workflow, I may need to create an entirely new “flow” for my personal videos. This seems like an interesting place to start, once they release an Android version and I get a new phone.

Photoshop Updates
There is a new version of Photoshop released nearly every year, and most of the time I can't tell the difference between the new and the old. This year's differences will probably be a lot more apparent to most users after a few minutes of use. The Undo command now works like it does in other apps instead of being limited to toggling the last action. Transform operates very differently, in that proportional transform is now the default behavior instead of requiring users to hold Shift every time they scale. The anchor point can be hidden to prevent people from moving the anchor instead of the image, and the “commit changes” step at the end has been removed. All positive improvements, in my opinion, though they might take a bit of getting used to for seasoned pros. There is also a new Framing Tool, which allows you to scale or crop any layer to a defined resolution. Maybe I am the only one, but I frequently find myself creating new documents in PS just so I can drag a new layer, preset to the resolution I need, back into my current document. For example, I need a 200x300px box in the middle of my HD frame — how else do you do that currently? This Framing Tool should fill that hole in the feature set, offering more precise control over layer and object sizes and positions (as well as providing easily adjustable, non-destructive masking).
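For what it's worth, the centered-box math I end up doing by hand is trivial to script. Here is a minimal sketch using the 200x300px-in-an-HD-frame example from above; it only illustrates the arithmetic, not any Photoshop API.

```python
# The centered-box arithmetic described above: place a box of a given size
# in the middle of a canvas. Dimensions are the ones from the example in
# the text (a 200x300 px box in a 1920x1080 frame).

def centered_rect(canvas_w: int, canvas_h: int, box_w: int, box_h: int):
    """Return (left, top, right, bottom) of a box centered on the canvas."""
    left = (canvas_w - box_w) // 2
    top = (canvas_h - box_h) // 2
    return left, top, left + box_w, top + box_h

print(centered_rect(1920, 1080, 200, 300))  # (860, 390, 1060, 690)
```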

They also showed off a very impressive AI-based auto selection of the subject or background.  It creates a standard selection that can be manually modified anywhere that the initial attempt didn’t give you what you were looking for.  Being someone who gives software demos, I don’t trust prepared demonstrations, so I wanted to try it for myself with a real-world asset. I opened up one of my source photos for my animation project and clicked the “Select Subject” button with no further input and got this result.  It needs some cleanup at the bottom, and refinement in the newly revamped “Select & Mask” tool, but this is a huge improvement over what I had to do on hundreds of layers earlier this year.  They also demonstrated a similar feature they are working on for video footage in Tuesday night’s Sneak previews.  Named “Project Fast Mask,” it automatically propagates masks of moving objects through video frames and, while not released yet, it looks promising.  Combined with the content-aware background fill for video that Jason Levine demonstrated in AE during the opening keynote, basic VFX work is going to get a lot easier.

There are also some smaller changes to the UI, allowing math expressions in the numerical value fields and making it easier to differentiate similarly named layers by showing the beginning and end of the name if it gets abbreviated.  They also added a function to distribute layers spatially based on the space between them, which accounts for their varying sizes, compared to the current solution which just evenly distributes based on their reference anchor point.
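The difference between distributing by anchor points and distributing by the space between layers is easier to see in code. The sketch below only illustrates the two behaviors with made-up layer sizes; it is not Photoshop's actual implementation.

```python
# Sketch of "distribute by equal gaps" (the new behavior described above)
# versus "distribute by anchor points". Layer sizes are arbitrary examples.

def distribute_equal_gaps(sizes, span):
    """Place layers of varying size so the empty space between them is equal."""
    gap = (span - sum(sizes)) / (len(sizes) - 1)
    positions, cursor = [], 0.0
    for size in sizes:
        positions.append(cursor)
        cursor += size + gap
    return positions

def distribute_equal_anchors(sizes, span):
    """Place layer centers (anchors) at evenly spaced points, ignoring size."""
    step = span / (len(sizes) - 1)
    return [i * step - size / 2 for i, size in enumerate(sizes)]

sizes = [100, 40, 250, 80]
print(distribute_equal_gaps(sizes, 1000))     # equal whitespace between layers
print(distribute_equal_anchors(sizes, 1000))  # equal anchor spacing, uneven gaps
```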

In other news, Photoshop is coming to iPad, and while that doesn’t affect me personally, I can see how this could be a big deal for some people. They have offered various trimmed down Photoshop editing applications for iOS in the past, but this new release is supposed to be based on the same underlying code as the desktop version and will eventually replicate all functionality, once they finish adapting the UI for touchscreens.

New Apps
Adobe also showed off Project Gemini, a sketch and painting tool for iPad that sits somewhere between Photoshop and Illustrator (hence the name, I assume). This doesn't have much direct application to video workflows besides being able to record time-lapses of a sketch, which should make it easier to create those “whiteboard illustration” videos that are becoming more popular.

Project Aero is a tool for creating AR experiences, and I can envision Premiere and After Effects being critical pieces in the puzzle for creating the visual assets that Aero will be placing into the augmented reality space.  This one is the hardest for me to fully conceptualize. I know Adobe is creating a lot of supporting infrastructure behind the scenes to enable the delivery of AR content in the future, but I haven’t yet been able to wrap my mind around a vision of what that future will be like.  VR I get, but AR is more complicated because of its interface with the real world and due to the variety of forms in which it can be experienced by users.  Similar to how web design is complicated by the need to support people on various browsers and cell phones, AR needs to support a variety of use cases and delivery platforms.  But Adobe is working on the tools to make that a reality, and Project Aero is the first public step in that larger process.

Community Pavilion
Adobe’s partner companies in the Community Pavilion were showing off a number of new products.  Dell has a new 49″ IPS monitor, the U4919DW, which is basically the resolution and desktop space of two 27-inch QHD displays without the seam (5120×1440 to be exact). HP was displaying their recently released ZBook Studio x360 convertible laptop workstation, (which I will be posting a review of soon), as well as their Zbook X2 tablet and the rest of their Z workstations.  NVidia was exhibiting their new Turing-based cards with 8K Red decoding acceleration, ray tracing in Adobe Dimension and other GPU accelerated tasks.  AMD was demoing 4K Red playback on a MacBookPro with an eGPU solution, and CPU based ray-tracing on their Ryzen systems.  The other booths spanned the gamut from GoPro cameras and server storage devices to paper stock products for designers.  I even won a Thunderbolt 3 docking station at Intel’s booth. (Although in the next drawing they gave away a brand new Dell Precision 5530 2-in-1 convertible laptop workstation.)   Microsoft also garnered quite a bit of attention when they gave away 30 MS Surface tablets near the end of the show.  There was lots to see and learn everywhere I looked.

The Significance of MAX
Adobe MAX is quite a significant event, especially now that I have been in the industry long enough to start to see the evolution of certain trends — things are not as static as we may expect. I have attended NAB for the last 12 years, and the focus of that show has shifted significantly away from my primary professional focus (no Red, Nvidia or Apple booths, among many other changes). This was the first year that I had the thought “I should have gone to Sundance,” and a number of other people I know had the same impression. Adobe Max is similar, although I have been a little slower to catch on to that change. It has been happening for over ten years, but the show has grown dramatically in size and significance recently. If I still lived in LA, I probably would have started attending sooner, but it was hardly on my radar until three weeks ago. Now that I have seen it in person, I probably won't miss it in the future.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

NAB NY: A DP’s perspective

By Barbie Leung

At this year’s NAB New York show, my third, I was able to wander the aisles in search of tools that fit into my world of cinematography. Here are just a few things that caught my eye…

Blackmagic, which had a large booth at the entrance to the hall, was giving demos of its Resolve 15, among other tools. Panasonic also had a strong presence mid-floor, with an emphasis on the EVA-1 cameras. As usual, B&H attracted a lot of attention, as did Arri, which brought a couple of Trinity rigs to demo.

During the HDR Video Essentials session, colorist Juan Salvo of TheColourSpace talked about the emerging HDR10+ standard proposed by Samsung and Amazon Video. Also mentioned was the trend of consumer displays getting brighter every year and that trend's impact on content creation and grading. Salvo pointed out the affordability of LG's C7 OLEDs (about 700 nits) for use as client monitors, while Flanders Scientific (which had a booth at the show) remains the expensive standard for grading. It was interesting to note that LG, while being the show's official display partner, was conspicuously absent from the floor.

Many of the panels and presentations unsurprisingly focused on content monetization — how to monetize faster and cheaper. Amazon Web Services' stage sessions emphasized various AWS Elemental technologies, including automating the creation of video highlight clips for content like sports, using facial recognition algorithms to generate closed captioning, and improving the streaming experience onboard airplanes. The latter will ultimately make content delivery a streamlined enough process for airlines that it could enable advertisers to enter this currently untapped space.

Editor Janis Vogel, a board member of the Blue Collar Post Collective, spoke at the #galsngear “Making Waves” panel, and noted the progression toward remote work in her field. She highlighted the fact that DaVinci Resolve, which had already made it possible for color work to be done remotely, is now also making it possible for editors to collaborate remotely. The ability to work remotely gives professionals the choice to work outside of the expensive-to-live-in major markets, which is highly desirable given that producers are trying to make more and more content while keeping budgets low.

Speaking at the same panel, director of photography/camera operator Selene Richholt spoke to the fact that crews themselves are being monetized, with content producers either asking production and post pros to provide standard services at substandard rates or asking for more services without paying more.

On a more exciting note, she cited recent 9×16 projects that she has shot with the camera mounted vertically (as opposed to shooting 16×9 and cropping in) in order to take full advantage of the lens properties. She looks forward to more projects that mix aspect ratios and push aesthetics.

Well, that’s it for this year. I’m already looking forward to next year.

 


Barbie Leung is a New York-based cinematographer and camera operator working in film, music video and branded content. Her work has played Sundance, the Tribeca Film Festival, Outfest and Newfest. She is also the DCP mastering technician at the Tribeca Film Festival.

Report: Sound for Film & TV conference focuses on collaboration

By Mel Lambert

The 5th annual Sound for Film & TV conference was once again held at Sony Pictures Studios in Culver City, in cooperation with the Motion Picture Sound Editors, the Cinema Audio Society and Mix Magazine. The one-day event featured a keynote address from veteran sound designer Scott Gershin, together with a broad cross section of panel discussions on virtually all aspects of contemporary sound and post production. Co-sponsors included Audionamix, Sound Particles, Tonsturm, Avid, Yamaha-Steinberg, iZotope, Meyer Sound, Dolby Labs, RSPE, Formosa Group and Westlake Audio, and the event attracted some 650 attendees.

With film credits that include Pacific Rim and The Book of Life, keynote speaker Gershin focused on advances in immersive sound and virtual reality experiences. Having recently joined Sound Lab at Keywords Studios, the sound designer and supervisor emphasized that “a single sound can set a scene,” ranging from a subtle footstep to an echo-laden yell of terror. “I like to use audio to create a foreign landscape, and produce immersive experiences,” he says, stressing that “dialog forms the center of attention, with music that shapes a scene emotionally and sound effects that glue the viewer into the scene.” In summary he concluded, “It is our role to develop a credible world with sound.”

The Sound of Streaming Content — The Cloverfield Paradox
Avid-sponsored panels within the Cary Grant Theater included an overview of OTT techniques titled “The Sound of Streaming Content,” which was moderated by Ozzie Sutherland, a production sound technology specialist with Netflix. Focusing on the sound design and re-recording of the recent Netflix/Paramount Pictures sci-fi mystery The Cloverfield Paradox from director Julius Onah, the panel included supervising sound editor/re-recording mixer Will Files, co-supervising sound editor/sound designer Robert Stambler and supervising dialog editor/re-recording mixer Lindsey Alvarez. Files and Stambler have collaborated on several projects with director J.J. Abrams through Abrams' Bad Robot production company, including Star Trek Into Darkness (2013), Star Wars: The Force Awakens (2015) and 10 Cloverfield Lane (2016), as well as Venom (2018).

The Sound of Streaming Content panel: (L-R) Ozzie Sutherland, Will Files, Robert Stambler and Lindsey Alvarez

“Our biggest challenge,” Files readily acknowledges, “was the small crew we had on the project; initially, it was just Robby [Stambler] and me for six months. Then Star Wars: The Force Awakens came along, and we got busy!” “Yes,” confirmed Stambler, “we spent between 16 and 18 months on post production for The Cloverfield Paradox, which gave us plenty of time to think about sound; it was an enlightening experience, since everything happens off-screen.” While orbiting a planet on the brink of war, the film, starring Gugu Mbatha-Raw, David Oyelowo and Daniel Brühl, follows a team of scientists trying to solve an energy crisis that culminates in a dark alternate reality.

Having screened a pivotal scene from the film in which the spaceship’s crew discovers the effects of interdimensional travel while hearing strange sounds in a corridor, Alvarez explained how the complex dialog elements came into play, “That ‘Woman in The Wall’ scene involved a lot of Mandarin-language lines, 50% of which were re-written to modify the story lines and then added in ADR.” “We also used deep, layered sounds,” Stambler said, “to emphasize the screams,” produced by an astronaut from another dimension that had become fused with the ship’s hull. Continued Stambler, “We wanted to emphasize the mystery as the crew removes a cover panel: What is behind the wall? Is there really a woman behind the wall?” “We also designed happy parts of the ship and angry parts,” Files added. “Dependent on where we were on the ship, we emphasized that dominant flavor.”

Files explained that the theatrical mix for The Cloverfield Paradox in Dolby Atmos immersive surround took place at producer Abrams’ Bad Robot screening theater, with a temporary Avid S6 M40 console. Files also mixed the first Atmos film, Brave, back in 2013. “J. J. [Abrams] was busy at the time,” Files said, “but wanted to be around and involved,” as the soundtrack took shape. “We also had a sound-editorial suite close by,” Stambler noted. “We used several Futz elements from the Mission Control scenes as Atmos Objects,” added Alvarez.

“But then we received a request from Netflix for a near-field Atmos mix,” that could be used for over-the-top streaming, recalled Files. “So we lowered the overall speaker levels, and monitored on smaller speakers to ensure that we could hear the dialog elements clearly. Our Atmos balance also translated seamlessly to 5.1- and 7.1-channel delivery formats.”

“I like mixing in Native Atmos because you can make final decisions with creative talent in the room,” Files concluded. “You then know that everything will work in 5.1 and 7.1. If you upmix to Atmos from 7.1, for example, the creatives have often left by the time you get to the Atmos mix.”

The Sound and Music of Director Damien Chazelle’s First Man
The series of “Composers Lounge” presentations held in the Anthony Quinn Theater, sponsored by SoundWorks Collection and moderated by Glenn Kiser from The Dolby Institute, included “The Sound and Music of First Man” with sound designer/supervising sound editor/SFX re-recording mixer Ai-Ling Lee, supervising sound editor Mildred Iatrou Morgan, SFX re-recording mixer Frank Montaño, dialog/music re-recording mixer Jon Taylor, composer Justin Hurwitz and picture editor Tom Cross. First Man takes a close look at the life of astronaut Neil Armstrong and the space mission that led him to become the first man to walk on the Moon in July 1969. It stars Ryan Gosling, Claire Foy and Jason Clarke.

Having worked with the film's director, Damien Chazelle, on two previous outings — La La Land (2016) and Whiplash (2014) — Cross advised that he likes to have sound available on his Avid workstation as soon as possible. “I had some rough music for the big action scenes,” he said, “together with effects recordings from Ai-Ling [Lee].” The latter included some of the SpaceX rockets, plus recordings of space suits and other NASA artifacts. “This gave me a sound bed for my first cut,” the picture editor continued. “I sent that temp track to Ai-Ling for her sound design and SFX, and to Milly [Iatrou Morgan] for dialog editorial.”

A key theme for the film was its documentary style, Taylor recalled: “That guided the shape of the soundtrack and the dialog pre-dubs. They had a cutting room next to the Hitchcock Theater [at Universal Studios, used for pre-dub mixes and finals] so that we could monitor progress.” There were no temp mixes on this project.

“We had a lot of close-up scenes to support Damien’s emotional feel, and used sound to build out the film,” Cross noted. “Damien watched a lot of NASA footage shot on 16 mm film, and wanted to make our film [immersive] and personal, using Neil Armstrong as a popular icon. In essence, we were telling the story as if we had taken a 16 mm camera into a capsule and shot the astronauts into space. And with an Atmos soundtrack!”

“We pre-scored the soundtrack against animatics in March 2017,” commented Hurwitz. “Damien [Chazelle] wanted to storyboard to music and use that as a basis for the first cut. I developed some themes on a piano and then full orchestral mock-ups for picture editorial. We then re-scored the film after we had a locked picture.” “We developed a grounded, gritty feel to support the documentary style that was not too polished,” Lee continued. “For the scenes on Earth we went for real-sounding backgrounds, Foley and effects. We also narrowed the mix field to complement the narrow image but, in contrast, opened it up for the set pieces to surround the audience.”

“The dialog had to sound how the film looked,” Morgan stressed. “To create that real-world environment I often used the mix channel for dialog in busy scenes like mission control, instead of the [individual] lavalier mics with their cleaner output. We also miked everybody in Mission Control – maybe 24 tracks in all.” “And we secured as many authentic sound recordings as we could,” Lee added. “In order to emphasize the emotional feel of being inside Neil Armstrong’s head space, we added surreal and surprising sounds like an elephant roar, lion growl or animal stampede to these cockpit sequences. We also used distortion and over-modulation to add ‘grit’ and realism.”

“It was a Native Atmos mix,” advised Montaño. “We used Atmos to reflect what the picture showed us, but not in a gimmicky way.” “During the rocket launch scenes,” Lee offered, “we also used the Atmos full-range surround channels to place many of the full-bodied, bombastic rocket roars and explosions around the audience.” “But we wanted to honor the documentary style,” Taylor added, “by keeping the music within the front LCR loudspeakers, and not coming too far out into the surrounds.”

“A Star Is Born” panel: (L-R) Steve Morrow, Dean Zupancic and Nick Baxter

The Sound of Director Bradley Cooper’s A Star Is Born
A subsequent panel discussion in the “Composers Lounge” series, again moderated by Kiser, focused on “The Sound of A Star Is Born,” with production sound mixer Steve Morrow, music production mixer Nick Baxter and re-recording mixer Dean Zupancic. The film is a retelling of the classic tale of a musician – Jackson Maine, played by Cooper – who helps a struggling singer find fame, even as age and alcoholism send his own career into a downward spiral. Morrow recounted that the director's costar, Lady Gaga, insisted that all vocals be recorded live.

“We arranged to record scenes during concerts at the Stagecoach 2017 Festival,” the production mixer explained. “But because these were new songs that would not be heard in the film until 18 months later, [to prevent unauthorized bootlegs] we had to keep the sound out of the PA system, and feed a pre-recorded band mix to on-stage wedges or in-ear monitors.” “We had just a handful of minutes before Willie Nelson was scheduled to take the stage,” Baxter added, “and so we had to work quickly” in front of an audience of 45,000 fans. “We rolled on the equipment, hooked up the microphones, connected the monitors and went for it!”

To recreate the sound of real-world concerts, Baxter made impulse-response recordings of each venue – in stereo as well as 5.1- and 7.1- channel formats. “To make the soundtrack sound totally live,” Morrow continued, “at Coachella Festival we also captured the IR sound echoing off nearby mountains.” Other scenes were shot during Lady Gaga’s “Joanne” Tour in August 2017 while on a stop in Los Angeles, and others in the Palm Springs Convention Center, where Cooper’s character is seen performing at a pharmaceutical convention.
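Using a venue's impulse response this way is classic convolution reverb: convolve a dry recording with the captured IR to place the performance in that space. Here is a minimal sketch of the idea using NumPy/SciPy; the file names are hypothetical and the normalization is deliberately simplistic, so this is an illustration of the technique rather than the film's actual pipeline.

```python
# Convolution-reverb sketch: convolve a dry take with a venue impulse response.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

def mono(x):
    # Downmix to mono so the 1-D convolution below is valid.
    return x if x.ndim == 1 else x.mean(axis=1)

rate_dry, dry = wavfile.read("dry_vocal.wav")   # hypothetical dry studio take
rate_ir, ir = wavfile.read("venue_ir.wav")      # hypothetical venue impulse response
assert rate_dry == rate_ir, "resample first so both files share a sample rate"

wet = fftconvolve(mono(dry).astype(np.float64), mono(ir).astype(np.float64))
wet /= np.max(np.abs(wet))                      # normalize to avoid clipping

wavfile.write("vocal_in_venue.wav", rate_dry, (wet * 32767).astype(np.int16))
```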

“For scenes filmed at the Glastonbury Festival in the UK in front of 110,000 people,” Morrow recalled, “we had been allocated just 10 minutes to record parts for two original songs — ‘Maybe It’s Time’ and ‘Black Eyes’ — ahead of Kris Kristofferson’s set. But then we were told that, because the concert was running late, we only had three minutes. So we focused on securing 30 seconds of guitar and vocals for each song.”

During a scene shot in a parking lot outside a food market where Lady Gaga’s character sings acapella, Morrow advised that he had four microphones on the actors: “Two booms, top and bottom, for Bradley Cooper’s voice, and lavalier mikes; we used the boom track when Lady Gaga (as Ally) belted out. I always had my hand on the gain knob! That was a key scene because it established for the audience that Ally can sing.”

Zupancic noted that first-time director Cooper was intimately involved in all aspects of post production, just as he was in production. “Bradley Cooper is a student of film,” he said. “He worked closely with supervising sound editor Alan Robert Murray on the music and SFX collaboration.” The high-energy Atmos soundtrack was realized at Warner Bros Studio Facilities’ post production facility in Burbank; additional re-recording mixers included Michael Minkler, Matthew Iadarola and Jason King, who also handled SFX editing.

An Avid session called “Monitoring and Control Solutions for Post Production with Immersive Audio” featured the company’s senior product specialist, Jeff Komar, explaining how Pro Tools with an S6 Controller and an MTRX interface can manage complex immersive audio projects, while a MIX Panel entitled “Mixing Dialog: The Audio Pipeline,” moderated by Karol Urban from Cinema Audio Society, brought together re-recording mixers Gary Bourgeois and Mathew Waters with production mixer Phil Palmer and sound supervisor Andrew DeCristofaro. “The Business of Immersive,” moderated by Gadget Hopkins, EVP with Westlake Pro, addressed immersive audio technologies, including Dolby Atmos, DTS and Auro 3D; other key topics included outfitting a post facility, new distribution paradigms and ROI while future-proofing a stage.

A companion “Parade of Carts & Bags,” presented by Cinema Audio Society in the Barbra Streisand Scoring Stage, enabled production sound mixers to show off their highly customized methods of managing the tools of their trade, from large soundstage productions to reality TV and documentaries.

Finally, within the Atmos-equipped William Holden Theater, the regular “Sound Reel Showcase,” sponsored by Formosa Group, presented eight-minute reels from films likely to be in consideration for a Best Sound Oscar, MPSE Golden Reel and CAS Awards, including A Quiet Place (Paramount) introduced by Erik Aadahl, Black Panther introduced by Steve Boeddecker, Deadpool 2 introduced by Martyn Zub, Mile 22 introduced by Dror Mohar, Venom introduced by Will Files, Goosebumps 2 introduced by Sean McCormack, Operation Finale introduced by Scott Hecker, and Jane introduced by Josh Johnson.

Main image: The Sound of First Man panel — Ai-Ling Lee (left), Mildred Iatrou Morgan & Tom Cross.

All photos copyright of Mel Lambert


Mel Lambert has been involved with production industries on both sides of the Atlantic for more years than he cares to remember. He can be reached at mel.lambert@content-creators.com. He is also a long-time member of the UK’s National Union of Journalists.

 

GoPro introduces new Hero7 camera lineup

GoPro’s new Hero7 lineup includes the company’s flagship Hero7 Black, which comes with a timelapse video mode, live streaming and improved video stabilization. The new video stabilization, HyperSmooth, allows users to capture professional-looking, gimbal-like stabilized video without  a motorized gimbal. HyperSmooth also works underwater and in high-shock and wind situations where gimbals fail.

With Hero7 Black, GoPro is also introducing a new form of video called TimeWarp. TimeWarp Video applies a high-speed, “magic-carpet-ride” effect, transforming longer experiences into short, flowing videos. Hero7 Black is the first GoPro to live stream, enabling users to automatically share in realtime to Facebook, Twitch, YouTube, Vimeo and other platforms internationally.

Other Hero7 Black features:

  • SuperPhoto – Intelligent scene analyzation for professional-looking photos via automatically applied HDR, Local Tone Mapping and Multi-Frame Noise Reduction
  • Portrait Mode – Native vertical-capture for easy sharing to Instagram Stories, Snapchat and others
  • Enhanced Audio – Re-engineered audio captures increased dynamic range, new microphone membrane reduces unwanted vibrations during mounted situations
  • Intuitive Touch Interface – 2-inch touch display with simplified user interface enables native vertical (portrait) use of camera
  • Face, Smile + Scene Detection – Hero7 Black recognizes faces, expressions and scene-types to enhance automatic QuikStory edits on the GoPro app
  • Short Clips – Restricts video recording to 15- or 30-second clips for faster transfer to phone, editing and sharing.
  • High Image Quality – 4K/60 video and 12MP photos
  • Ultra Slo-Mo – 8x slow motion in 1080p240
  • Waterproof – Waterproof without a housing to 33ft (10m)
  • Voice Control – Verbal commands are hands-free in 14 languages
  • Auto Transfer to Phone – Photos and videos move automatically from camera to phone when connected to the GoPro app for on-the-go sharing
  • GPS Performance Stickers – Users can track speed, distance and elevation, then highlight them by adding stickers to videos in the GoPro app

The Hero7 Black is available now on pre-order for $399.

Panavision, Sim, Saban Capital agree to merge

Saban Capital Acquisition Corp., a publicly traded special purpose acquisition company, Panavision and Sim Video International have agreed to combine their businesses to create a premier global provider of end-to-end production and post production services to the entertainment industry. Under the terms of the business combination agreement, Panavision and Sim will become wholly owned subsidiaries of Saban Capital Acquisition Corp. Upon completion, Saban Capital Acquisition Corp. will change its name to Panavision Holdings Inc. and is expected to continue to trade on the Nasdaq stock exchange. Kim Snyder, president and chief executive officer of Panavision, will serve as chairman and chief executive officer. Bill Roberts, chief financial officer of Panavision, will serve in that role for the combined company.

Panavision designs, manufactures and provides high-precision optics and camera technology for the entertainment industry and is a leading global provider of production equipment and services. Sim is a leading provider of production and post production solutions with facilities in Los Angeles, Vancouver, Atlanta, New York and Toronto.

“This acquisition will leverage the best of Panavision’s and Sim’s resources by providing comprehensive products and services to best address the ever-adapting needs of content creators globally,” says Snyder.

“We’re combining the talent and integrated services of Sim with two of the biggest names in the business, Panavision and Saban,” adds James Haggarty, president and CEO of Sim. “The resulting scale of the new combined enterprise will better serve our clients and help shape the content-creation landscape.”

The respective boards of directors of Saban Capital Acquisition Corp., Panavision and Sim have unanimously approved the merger with completion subject to Saban Capital Acquisition Corp. stockholder approval, certain regulatory approvals and other customary closing conditions. The parties expect that the process will be completed in the first quarter of 2019.

Quantum upgrades Xcellis scale-out storage with StorNext 6.2, NVMe tech

Quantum has made enhancements to its Xcellis scale-out storage appliance portfolio with an upgrade to StorNext 6.2 and the introduction of NVMe storage. StorNext 6.2 bolsters performance for 4K and 8K video while enhancing integration with cloud-based workflows and global collaborative environments. NVMe storage significantly accelerates ingest and other aspects of media workflows.

Quantum’s Xcellis scale-out appliances provide high performance for increasingly demanding applications and higher resolution content. Adding NVMe storage to the Xcellis appliances offers ultra-fast performance: 22 GB/s single-client, uncached streaming bandwidth. Excelero’s NVMesh technology in combination with StorNext ensures all data is accessible by multiple clients in a global namespace, making it easy to access and cost-effective to share Flash-based resources.

Xcellis provides cross-protocol locking for shared access across SAN, NFS and SMB, helping users share content across both Fibre Channel and Ethernet.

With StorNext 6.2, Quantum now offers an S3 interface to Xcellis appliances, allowing them to serve as targets for applications designed to write to RESTful interfaces. This allows pros to use Xcellis as either a gateway to the cloud or as an S3 target for web-based applications.
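From the client side, an S3-compatible target like this is addressed by pointing a standard S3 client at the appliance's endpoint. The sketch below shows the general pattern with boto3; the endpoint URL, credentials and bucket name are hypothetical, and Quantum's documentation should be consulted for the actual configuration.

```python
# Sketch of what "serving as an S3 target for RESTful applications" looks like
# from the client side: point a standard S3 client at the appliance endpoint.
# Endpoint URL, credentials and bucket name are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://xcellis.example.local:9000",  # hypothetical appliance endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.upload_file("finished_master.mov", "dailies-archive", "show01/ep02/finished_master.mov")
resp = s3.list_objects_v2(Bucket="dailies-archive")
print(resp.get("KeyCount", 0), "objects in bucket")
```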

Xcellis environments can now be managed with a new cloud monitoring tool that enables Quantum’s support team to monitor critical customer environmental factors, speed time to resolution and ultimately increase uptime. When combined with Xcellis Web Services — a suite of services that lets users set policies and adjust system configuration — overall system management is streamlined.

Available with StorNext 6.2, enhanced FlexSync replication capabilities enable users to create local or remote replicas of multitier file system content and metadata. With the ability to protect data for both high-performance systems and massive archives, users now have more flexibility to protect a single directory or an entire file system.

StorNext 6.2 lets administrators provide defined and enforceable quotas and implement quality of service levels for specific users, and it simplifies reporting of used storage capacity. These new features make it easier for administrators to manage large-scale media archives efficiently.

The new S3 interface and NVMe storage option are available today. The other StorNext features and capabilities will be available by December 2018.

 

Colorfront supports HDR, UHD, partners again with AJA

By Molly Hill

Colorfront released new products and updated current product support as part of NAB 2018, expanding their partnership with AJA. Both companies had demos of the new HDR Image Analyzer for UHD, HDR and WCG analysis. It can handle 4K, HDR and 60fps in realtime and shows information in various view modes including parade, pixel picker, color gamut and audio.

Other software updates include support for new cameras in On-Set Dailies and Express Dailies, as well as the inclusion of HDR analysis tools. QC Player and Transkoder 2018 were also released, with the latter now optimized for HDR and UHD.

Colorfront also demonstrated its tone-mapping capabilities (SDR/HDR) right in the Transkoder software, without the FS-HDR hardware (which is meant more for broadcast). Static (one light) or dynamic (per shot) mapping is available in either direction. Customization is available for different color gamuts, as well as peak brightness on a sliding scale, so it’s not limited to a pre-set LUT. Even just the static mapping for SDR-to-HDR looked great, with mostly faithful color reproduction.

The only issues were some slight hue shifts from blue to green, and clipping in some of the highlights in the HDR version, despite detail being available in the original SDR. Overall, it’s an impressive system that can save time and money for low-budget films when there isn’t the budget to hire a colorist to do a second pass.
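To make the idea of a sliding peak-brightness control concrete, here is a deliberately simple inverse tone-mapping sketch: linearize the SDR signal (a 2.4 gamma is assumed), keep the mid-tones near their SDR levels and stretch only the highlights toward the chosen HDR peak. This illustrates the concept only; it is not Colorfront's algorithm.

```python
# Simple inverse tone-mapping sketch with a user-chosen peak brightness.
# Assumes gamma 2.4 SDR and 100-nit diffuse white; not Colorfront's algorithm.
import numpy as np

def expand_sdr(code_values, peak_nits=1000.0, sdr_white_nits=100.0, knee=0.8):
    """Map normalized SDR code values (0-1) to linear light in nits."""
    linear = np.clip(code_values, 0.0, 1.0) ** 2.4   # gamma 2.4 linearization
    nits = linear * sdr_white_nits                    # SDR reference range
    # Stretch only the highlights (above the knee) toward the chosen HDR peak,
    # leaving mid-tones close to their SDR values.
    hi = linear > knee
    t = (linear[hi] - knee) / (1.0 - knee)
    nits[hi] = knee * sdr_white_nits + t * (peak_nits - knee * sdr_white_nits)
    return nits

sdr = np.linspace(0.0, 1.0, 5)
print(expand_sdr(sdr, peak_nits=1000.0))  # 0 ... 1000 nits at peak white
```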

Samsung’s 360 Round for 3D video

Samsung showed an enhanced Samsung 360 Round camera solution at NAB, with updates to its live streaming and post production software. The new solution gives professional video creators the tools they need — from capture to post — to tell immersive 360-degree and 3D stories for film and broadcast.

“At Samsung, we’ve been innovating in the VR technology space for many years, including introducing the 360 Round camera with its ruggedized design, superior low light and live streaming capabilities late last year,” says Eric McCarty of Samsung Electronics America.

The Samsung 360 Round delivers realtime 3D video to PCs using its bundled software, and video creators can now view live video on their mobile devices using the 360 Round live preview app. In addition, the live preview app allows creators to remotely control the camera settings, via a Wi-Fi router, from afar. The updated 360 Round PC software now provides dual-monitor support, which allows the editor to make adjustments and show the results on a separate monitor dedicated to the director.

Limiting luminance levels to 16-135, noise reduction and sharpness adjustments, as well as a hardware IR filter make it possible to get a clear shot in almost no light. The 360 Round also offers advanced stabilization software and the ability to color-correct on the fly, with an intuitive, easy-to-use histogram. In addition, users can set up profiles for each shot and save the camera settings, cutting down on the time required to prep each shot.

The 360 Round comes with Samsung’s advanced Stitching software, which weaves together video from each of the 360 Round’s 17 lenses. Creators can stitch, preview and broadcast in one step on a PC without the need for additional software. The 360 Round also enables fine-tuning of seamlines during a live production, such as moving them away from objects in realtime and calibrating individual stitchlines to fix misalignments. In addition, a new local warping feature allows for individual seamline calibrations in post, without requiring a global adjustment to all seamlines, giving creators quick and easy, fine-grain control of the final visuals.

The 360 Round delivers realtime 4K x 4K (3D) streaming with minimal latency. SDI capture card support enables live streaming through multiple cameras and broadcasting equipment with no additional encoding/decoding required. The newest update further streamlines the switching workflow for live productions with audio over SDI, giving producers less complex events (one producer managing audio and video switching) and a single switching source as the production transitions from camera to camera.

Additional new features:

  • Ability to record, stream and save RAW files simultaneously, making the process of creating dailies and managing live productions easier. Creators can now save the RAW files to make further improvements to live production recordings and create a higher quality post version to distribute as VOD.
  • Live streaming support for HLS over HTTP, which adds another transport streaming protocol in addition to the RTMP and RTSP protocols. HLS over HTTP eliminates the need to modify some restrictive enterprise firewall policies and is a more resilient protocol in unreliable networks.
  • Ability to upload direct (via 360 Round software) to Samsung VR creator account, as well as Facebook and YouTube, once the files are exported.

Blackmagic releases Resolve 15, with integrated VFX and motion graphics

Blackmagic has released Resolve 15, a massive update that fully integrates visual effects and motion graphics, making it the first solution to combine professional offline and online editing, color correction, audio post production, multi-user collaboration and visual effects together in one software tool. Resolve 15 adds an entirely new Fusion page with over 250 tools for compositing, paint, particles, animated titles and more. In addition, the solution includes a major update to Fairlight audio, along with over 100 new features and improvements that professional editors and colorists have asked for.

DaVinci Resolve 15 combines four high-end applications into different pages in one single piece of software. The edit page has all the tools professional editors need for both offline and online editing, the color page features advanced color correction tools, the Fairlight audio page is designed specifically for audio post production and the new Fusion page gives visual effects and motion graphics artists everything they need to create feature film-quality effects and animations. A single click moves the user instantly between editing, color, effects and audio, giving individual users creative flexibility to learn and explore different toolsets. The workflow also enables collaboration, which speeds up post by eliminating the need to import, export or translate projects between different software applications or to conform when changes are made. Everything is in the same software application.

The free version of Resolve 15 can be used for professional work and has more features than most paid applications. Resolve 15 Studio, which adds multi-user collaboration, 3D, VR, additional filters and effects, unlimited network rendering and other advanced features such as temporal and spatial noise reduction, is available to own for $299. There are no annual subscription fees or ongoing licensing costs. Resolve 15 Studio costs less than other cloud-based software subscriptions and does not require an internet connection once the software has been activated. That means users won’t lose work in the middle of a job if there is no internet connection.

“DaVinci Resolve 15 is a huge and exciting leap forward for post production because it’s the world’s first solution to combine editing, color, audio and now visual effects into a single software application,” says Grant Petty, CEO of Blackmagic Design. “We’ve listened to the incredible feedback we get from customers and have worked really hard to innovate as quickly as possible. DaVinci Resolve 15 gives customers unlimited creative power to do things they’ve never been able to do before. It’s finally possible to bring teams of editors, colorists, sound engineers and VFX artists together so they can collaborate on the same project at the same time, all in the same software application!”

Resolve 15 Overview

Resolve 15 features an entirely new Fusion page for feature-film-quality visual effects and motion graphics animation. Fusion was previously only available as a standalone application, but it is now built into Resolve 15. The new Fusion page gives customers a true 3D workspace with over 250 tools for compositing, vector paint, particles, keying, rotoscoping, text animation, tracking, stabilization and more. The addition of Fusion to Resolve will be completed over the next 12-18 months, but users can get started using Fusion now to complete nearly all of their visual effects and motion graphics work. The standalone version of Fusion is still available for those who need it.

In addition to bringing Fusion into Resolve 15, Blackmagic has also added support for Apple Metal, multiple GPUs and CUDA acceleration, making Fusion in Resolve faster than ever. To add visual effects or motion graphics, users simply select a clip in the timeline on the Edit page and then click on the Fusion page where they can use Fusion’s dedicated node-based interface, which is optimized for visual effects and motion graphics. Compositions created in the standalone version of Fusion can also be copied and pasted into Resolve 15 projects.

Resolve 15 also features a huge update to the Fairlight audio page. The Fairlight page now has a complete ADR toolset, static and variable audio retiming with pitch correction, audio normalization, 3D panners, audio and video scrollers, a fixed playhead with scrolling timeline, shared sound libraries, support for legacy Fairlight projects and built-in cross platform plugins such as reverb, hum removal, vocal channel and de-esser. With Resolve 15, FairlightFX plugins run natively on Mac, Windows and Linux, so users no longer have to worry about audio plugins when moving between the platforms.

Professional editors will find new features in Resolve 15 specifically designed to make cutting, trimming, organizing and working with large projects even better. Load times have been improved so that large projects with hundreds of timelines and thousands of clips now open instantly. New stacked timelines and timeline tabs let editors see multiple timelines at once, so they can quickly cut, paste, copy and compare scenes between timelines. There are also new markers with on-screen annotations, subtitle and closed captioning tools, auto save with versioning, improved keyboard customization tools, new 2D and 3D Fusion title templates, image stabilization on the Edit page, a floating timecode window, improved organization and metadata tools, Netflix render presets with IMF support and much more.

Colorists get an entirely new LUT browser for quickly previewing and applying LUTs, along with new shared nodes that are linked so when one is changed they all change. Multiple playheads allow users to quickly reference different shots in a program. Expanded HDR support includes GPU-accelerated Dolby Vision metadata analysis and native HDR 10+ grading controls. New ResolveFX plugins let users quickly patch blemishes or remove unwanted elements in a shot using smart fill technology, and allow for dust and scratch removal, lens and aperture diffraction effects and more.

For the ultimate high-speed workflow, users can add a Resolve Micro Panel, Resolve Mini Panel or a Resolve Advanced Panel. All controls are placed near natural hand positions. Smooth, high-resolution weighted trackballs and precision engineered knobs and dials provide the right amount of resistance to accurately adjust settings. The Resolve control panels give colorists and editors fluid, hands-on control over multiple parameters at the same time, allowing them to create looks that are simply impossible with a standard mouse.

In addition, Blackmagic also introduced new Fairlight audio consoles for audio post production that will be available later this year. The new Fairlight consoles will be available in two-, three- and five-bay configurations.

Availability and Price

The public beta of Resolve 15 is available today as a free download from the Blackmagic website for all current Resolve and Resolve Studio customers. Resolve Studio is available for $299 from Blackmagic resellers.

The Fairlight consoles will be available later this year, with prices starting at $21,995 for the Fairlight 2 Bay console, and will be sold through Blackmagic resellers.

NAB: AJA intros HDR Image Analyzer, Kona 1, Kona HDMI

AJA Video Systems is exhibiting a tech preview of its new waveform, histogram, vectorscope and Nit level HDR monitoring solution at NAB. The HDR Image Analyzer simplifies monitoring and analysis of 4K/UltraHD/2K/HD, HDR and WCG content in production, post, quality control and mastering. AJA has also announced two new Kona cards, as well as Desktop Software v14.2. Kona HDMI is a PCIe card for multi-channel HD and single-channel 4K HDMI capture for live production, streaming, gaming, VR and post production. Kona 1 is a PCIe card for single-channel HD/SD 3G-SDI capture/playback. Desktop Software v14.2 adds support for Kona 1 and Kona HDMI, plus new improvements for AJA Kona, Io and T-TAP products.

HDR Image Analyzer
A waveform, histogram, vectorscope and Nit level HDR monitoring solution, the HDR Image Analyzer combines AJA’s video and audio I/O with HDR analysis tools from Colorfront in a compact 1RU chassis. The HDR Image Analyzer is a flexible solution for monitoring and analyzing HDR formats including Perceptual Quantizer, Hybrid Log Gamma and Rec.2020 for 4K/UltraHD workflows.
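For reference, the Perceptual Quantizer (SMPTE ST 2084) curve mentioned above maps code values onto an absolute 0 to 10,000 nit scale, which is what makes nit-level readouts meaningful. Below is a minimal NumPy sketch of the standard PQ decode; it illustrates the published transfer function, not AJA's or Colorfront's implementation.

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_to_nits(code, bit_depth=10):
    """Decode PQ-encoded code values to absolute luminance in cd/m^2 (nits)."""
    e = np.asarray(code, dtype=np.float64) / (2 ** bit_depth - 1)  # normalize to 0..1
    p = np.power(e, 1.0 / M2)
    return 10000.0 * np.power(np.maximum(p - C1, 0.0) / (C2 - C3 * p), 1.0 / M1)

# A 10-bit code value of 769 decodes to roughly 1,000 nits; 1023 hits the 10,000-nit ceiling.
print(pq_to_nits([769, 1023]))
```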

The HDR Image Analyzer is the second technology collaboration between AJA and Colorfront, following the integration of Colorfront Engine into AJA’s FS-HDR. Colorfront has exclusively licensed its Colorfront HDR Image Analyzer software to AJA for the HDR Image Analyzer.

Key features include:

— Precise, high-quality UltraHD UI for native-resolution picture display
— Advanced out-of-gamut and out-of-brightness detection with error tolerance
— Support for SDR (Rec.709), ST2084/PQ and HLG analysis
— CIE graph, Vectorscope, Waveform, Histogram
— Out-of-gamut false color mode to easily spot out-of-gamut/out-of-brightness pixels (see the sketch after this list)
— Data analyzer with pixel picker
— Up to 4K/UltraHD 60p over 4x 3G-SDI inputs
— SDI auto-signal detection
— File-based error logging with timecode
— Display and color processing lookup table (LUT) support
— Line mode to focus a region of interest onto a single horizontal or vertical line
— Loop-through output to broadcast monitors
— Still store
— Nit levels and phase metering
— Built-in support for color spaces from ARRI, Canon, Panasonic, RED and Sony
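As a rough illustration of what the out-of-gamut false color mode flagged in the list above is checking for, the sketch below converts linear Rec.2020 pixels to Rec.709 primaries and marks anything that lands outside the 0-1 range. The matrix is the commonly published approximate BT.2020-to-BT.709 conversion; the tolerance and flagging logic are illustrative assumptions, not AJA's implementation.

```python
import numpy as np

# Approximate BT.2020 -> BT.709 primary conversion (linear light).
BT2020_TO_BT709 = np.array([
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
])

def out_of_709_gamut(rgb2020_linear, tol=1e-4):
    """Return a per-pixel mask of linear BT.2020 values that fall outside BT.709.

    rgb2020_linear: float array of shape (H, W, 3), nominally 0..1.
    The mask could drive a false-color overlay on a broadcast monitor.
    """
    rgb709 = np.einsum('ij,hwj->hwi', BT2020_TO_BT709, rgb2020_linear)
    return np.any((rgb709 < -tol) | (rgb709 > 1.0 + tol), axis=-1)

# A fully saturated BT.2020 green is not representable in BT.709, so it gets flagged.
frame = np.zeros((1, 1, 3))
frame[0, 0] = [0.0, 1.0, 0.0]
print(out_of_709_gamut(frame))  # [[ True]]
```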

“As 4K/UltraHD, HDR/WCG productions become more common, quality control is key to ensuring a pristine picture for audiences, and our new HDR Image Analyzer gives professionals an affordable and versatile set of tools to monitor and analyze HDR productions from start to finish, allowing them to deliver more engaging visuals for viewers,” says AJA president Nick Rashby.

Adds Aron Jazberenyi, managing director of Colorfront, “Colorfront’s comprehensive UHD HDR software toolset optimizes the superlative performance of AJA video and audio I/O hardware, to deliver a powerful new solution for the critical task of HDR quality control.”

HDR Image Analyzer is being demonstrated as a technology preview only at NAB 2018.

Kona HDMI
An HDMI video capture solution, Kona HDMI supports a range of workflows, including live streaming, events, production, broadcast, editorial, VFX, vlogging, video game capture/streaming and more. Kona HDMI is highly flexible, designed for four simultaneous channels of HD capture with popular streaming and switching applications including Telestream Wirecast and vMix.

Additionally, Kona HDMI offers capture of one channel of UltraHD up to 60p over HDMI 2.0, using AJA Control Room software, for file compatibility with most NLE and effects packages. It is also compatible with other popular third-party solutions for live streaming, projection mapping and VR workflows. Developers use the platform to build multi-channel HDMI ingest systems and leverage V4L2 compatibility on Linux. Features include: four full-size HDMI ports; the ability to easily switch between one channel of UltraHD or four channels of 2K/HD; and embedded HDMI audio in, up to eight embedded channels per input.

Kona 1
Designed for broadcast, post production and ProAV, as well as OEM developers, Kona 1 is a cost-efficient single-channel 3G-SDI 2K/HD 60p I/O PCIe card. Kona 1 offers serial control and reference/LTC, and features standard application plug-ins, as well as AJA SDK support. Kona 1 supports 3G-SDI capture, monitoring and/or playback with software applications from AJA, Adobe, Avid, Apple, Telestream and more. Kona 1 enables simultaneous monitoring during capture (pass-through) and includes: full-size SDI ports supporting 3G-SDI formats, embedded 16-channel SDI audio in/out, Genlock with reference/LTC input and RS-422.

Desktop Software v14.2
Desktop Software v14.2 introduces support for Kona HDMI and Kona 1, as well as a new SMPTE ST 2110 IP video mode for Kona IP, with support for AJA Control Room, Adobe Premiere Pro CC, part of the Adobe Creative Cloud, and Avid Media Composer. The free software update also brings 10GigE support for 2K/HD video and audio over IP (uncompressed SMPTE 2022-6/7) to the new Thunderbolt 3-equipped Io IP and Avid DNxIP, as well as additional enhancements to other Kona, Io and T-TAP products, including HDR capture with Io 4K Plus. Io 4K Plus and DNxIV users also benefit from a new feature allowing all eight analog audio channels to be configured for either output, input or a 4-In/4-Out mode for full 7.1 ingest/monitoring, or I/O for stereo plus VO and discrete tracks.

“Speed, compatibility and reliability are key to delivering high-quality video I/O for our customers. Kona HDMI and Kona 1 give video professionals and enthusiasts new options to work more efficiently using their favorite tools, and with the reliability and support AJA products offer,” says Rashby.

Kona HDMI will be available this June for $895, and Kona 1 will be available in May for $595. Both are available for pre-order now. Desktop Software v14.2 will also be available in May, as a free download from AJA’s support page.

Maxon debuts Cinema 4D Release 19 at SIGGRAPH

Maxon was at this year’s SIGGRAPH in Los Angeles showing Cinema 4D Release 19 (R19). This next generation of Maxon’s pro 3D app offers a new viewport and a new Sound Effector, and additional features for Voronoi Fracturing have been added to the MoGraph toolset. It also boasts a new Spherical Camera, the integration of AMD’s ProRender technology and more. Designed to serve individual artists as well as large studio environments, Release 19 offers a streamlined workflow for general design, motion graphics, VFX, VR/AR and all types of visualization.

With Cinema 4D Release 19, Maxon also introduced a few re-engineered foundational technologies, which the company will continue to develop in future versions. These include core software modernization efforts, a new modeling core, integrated GPU rendering for Windows and Mac, and OpenGL capabilities in BodyPaint 3D, Maxon’s pro paint and texturing toolset.

More details on the offerings in R19:
Viewport Improvements provide artists with added support for screen-space reflections and OpenGL depth-of-field, in addition to the screen-space ambient occlusion and tessellation features (added in R18). Results are so close to final render that client previews can be output using the new native MP4 video support.

MoGraph enhancements expand on Cinema 4D’s toolset for motion graphics with faster results and added workflow capabilities in Voronoi Fracturing, such as the ability to break objects progressively, add displaced noise details for improved realism or glue multiple fracture pieces together more quickly for complex shape creation. An all-new Sound Effector in R19 allows artists to create audio-reactive animations based on multiple frequencies from a single sound file.

The new Spherical Camera allows artists to render stereoscopic 360° virtual reality videos and dome projections. Artists can specify a latitude and longitude range, and render in equirectangular, cubic string, cubic cross or 3×2 cubic format. The new spherical camera also includes stereo rendering with pole smoothing to minimize distortion.
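For readers unfamiliar with the equirectangular format listed above, it simply spreads longitude across the image width and latitude down the height, so every pixel covers an equal angle. The small sketch below shows that mapping; it is an illustration of the projection convention, not Cinema 4D code.

```python
def equirectangular_pixel(longitude_deg, latitude_deg, width, height):
    """Map a view direction to pixel coordinates in an equirectangular frame.

    Longitude spans -180..180 degrees across the width; latitude spans
    90..-90 degrees from the top row to the bottom row.
    """
    x = (longitude_deg + 180.0) / 360.0 * (width - 1)
    y = (90.0 - latitude_deg) / 180.0 * (height - 1)
    return x, y

# In a 4096x2048 render, "straight ahead" lands at the image center
print(equirectangular_pixel(0.0, 0.0, 4096, 2048))   # (2047.5, 1023.5)
# and the zenith maps to the top row.
print(equirectangular_pixel(0.0, 90.0, 4096, 2048))  # (2047.5, 0.0)
```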

New Polygon Reduction works as a generator, so it’s easy to reduce entire hierarchies. The reduction is pre-calculated, so adjusting the reduction strength or desired vertex count is extremely fast. The new Polygon Reduction preserves vertex maps, selection tags and UV coordinates, ensuring textures continue to map properly and providing control over areas where polygon detail is preserved.

Level of Detail (LOD) Object features a new interface element that lets customers define and manage settings to maximize viewport and render speed, create new types of animations or prepare optimized assets for game workflows. Level of Detail data exports via the FBX 3D file exchange format for use in popular game engines.

AMD’s Radeon ProRender technology is now seamlessly integrated into R19, providing artists a cross-platform GPU rendering solution. Though just the first phase of integration, it provides a useful glimpse into the power ProRender will eventually provide as more features and deeper Cinema 4D integration are added in future releases.

Modernization efforts in R19 reflect Maxon’s development legacy and offer the first glimpse into the company’s planned ‘under-the-hood’ future efforts to modernize the software, as follows:

  • Revamped Media Core gives Cinema 4D R19 users a completely rewritten software core to increase speed and memory efficiency for image, video and audio formats. Native support for MP4 video without QuickTime delivers advantages when previewing renders, incorporating video as textures or motion tracking footage. Export for production formats, such as OpenEXR and DDS, has also been improved.
  • Robust Modeling offers a new modeling core with improved support for edges and N-gons, which can be seen in the Align and Reverse Normals commands. More modeling tools and generators will directly use this new core in future versions.
  • BodyPaint 3D now uses an OpenGL painting engine, giving artists who paint color and add surface details in film, game design and other workflows a realtime display of reflections, alpha, bump or normal, and even displacement, for improved visual feedback and texture painting. Redevelopment efforts to improve the UV editing toolset in Cinema 4D continue, with the first fruits of this work available in R19 as faster and more efficient options to convert point and polygon selections, grow and shrink UV point selections, and more.

Dell intros new Precision workstations, Dell Canvas and more

To celebrate the 20th anniversary of Dell Precision workstations, Dell announced additions to its Dell Precision fixed workstation portfolio, a special anniversary edition of its Dell Precision 5520 mobile workstation and the official availability of Dell Canvas, the new workspace device for digital creation.

Dell is showcasing its next-generation, fixed workstations at SIGGRAPH, including the Dell Precision 5820 Tower, Precision 7820 Tower, Precision 7920 Tower and Precision 7920 Rack, completely redesigned inside and out.

The three new Dell Precision towers combine a brand-new flexible chassis with the latest Intel Xeon processors, next-generation Radeon Pro graphics and highest-performing Nvidia Quadro professional graphics cards. Certified for professional software applications, the new towers are configured to complete the most complex projects, including virtual reality. Dell’s Reliable Memory Technology (RMT) Pro ensures memory challenges don’t kill your workflow, and Dell Precision Optimizer (DPO) tailors performance for your unique hardware and software combination.

The fully-customizable configuration options deliver the flexibility to tackle virtually any workload, including:

  • AI: The latest Intel Xeon processors are an excellent choice for artificial intelligence (AI), with agile performance across a variety of workloads, including machine learning (ML) and deep learning (DL) inference and training. If you’re just starting AI workloads, the new Dell Precision tower workstations allow you to use software optimized to your existing Intel infrastructure.
  • VR: The Nvidia Quadro GP100 powers the development and deployment of cognitive technologies like DL and ML applications. Additional Nvidia Pascal GPU options, like HBM2 memory and NVLink technologies, allow professional users to create complex designs in computer-aided engineering (CAE) and experience life-like VR environments.
  • Editing and playback: Radeon Pro SSG Graphics with HBM2 memory and 2TB of SSD onboard allows real-time 8K video editing and playback, high-performance computing of massive datasets, and rendering of large projects.

The Dell Precision 7920 Rack is ideal for secure, remote workers and delivers the same power and scalability as the highest-performing tower workstation in a 2U form factor.  The Dell Precision 5820, 7820, 7920 towers and 7920 Rack will be available for order beginning October 3.

“Looking back at 20 years of Dell Precision workstations, you get a sense of how the capabilities of our workstations, combined with certified and optimized software and the creativity of our awesome customers, have achieved incredible things,” said Rahul Tikoo, vice president and general manager for Dell Precision workstations. “As great as those achievements are, this new lineup of Dell Precision workstations enables our customers to be ready for the next big technology revolution that is challenging business models and disrupting industries.”

Dell Canvas

Dell has also announced its highly anticipated Dell Canvas, available now. Dell Canvas is a new workspace designed to make digital creation more natural. It features a 27-inch QHD touch screen that sits horizontally on your desk and can be powered by your current PC ecosystem and the latest Windows 10 Creator’s Update. Additionally, a digital pen provides precise tactile accuracy and the totem offers diverse menu and shortcut interaction.

For the 20th anniversary of Dell Precision, Dell is introducing a limited-edition anniversary model of its award-winning mobile workstation, the Dell Precision 5520. The Dell Precision 5520 Anniversary Edition is Dell’s thinnest, lightest and smallest mobile workstation, available for a limited time in hard-anodized aluminum with a brushed metallic finish in a brand-new Abyss color with anti-fingerprint coating. The device is available now with two high-end configuration options.

Quick Look: Jaunt One’s 360 camera

By Claudio Santos

To those who have been following the virtual reality market from the beginning, one very interesting phenomenon is how the hardware development seems to have outpaced both the content creation and the software development. The industry has been in a constant state of excitement over the release of new and improved hardware that pushes the capabilities of the medium, and content creators are still scrambling to experiment and learn how to use the new technologies.

One of the products of this tech boom is the Jaunt One camera. It is a 360 camera that was developed with the explicit focus of addressing the many production complexities that plague real life field shooting. What do I mean by that? Well, the camera quickly disassembles and allows you to replace a broken camera module. After all, when you’re across the world and the elephant that is standing in your shot decides to play with the camera, it is quite useful to be able to quickly swap parts instead of having to replace the whole camera or sending it in for repair from the middle of the jungle.

Another of the main selling points of the Jaunt One camera is the streamlined cloud finishing service they provide. It takes the content creator all the way from shooting on set through stitching, editing, onlining and preparing the different deliverables for all the different publishing platforms available. The pipeline is also flexible enough to allow you to bring your footage in and out of the service at any point so you can pick and choose what services you want to use. You could, for example, do your own stitching in Nuke, AVP or any other software and use the Jaunt cloud service to edit and online these stitched videos.

The Jaunt One camera takes a few important details into consideration, such as the synchronization of all of the shutters in the lenses. This prevents stitching abnormalities in fast moving objects that are captured in different moments in time by adjacent lenses.

The camera doesn’t have an internal ambisonics microphone, but the cloud service supports ambisonic recordings made in a dual system or Dolby Atmos. It was interesting to notice that one of the toolset apps they released was the Jaunt Slate, a tool that allows for easy slating on all the cameras (without having to run around the camera like a child, clapping repeatedly) and is meant to automate the synchronization of the separate audio recordings in post.

The Jaunt One camera shows that the market is maturing past its initial DIY stage and the demand for reliable, robust solutions for higher budget productions is now significant enough to attract developers such as Jaunt. Let’s hope tools such as these encourage more and more filmmakers to produce new content in VR.

JVC GY-LS300CH camera offering 4K 4:2:2 recording, 60p output

JVC has announced version 4.0 of the firmware for its GY-LS300CH 4KCAM Super 35 handheld camcorder. The new firmware increases color resolution to 4:2:2 (8-bit) for 4K recording at 24/25/30p onboard to SDXC media cards. In addition, the IP remote function now allows remote control and image viewing in 4K. When using 4K 4:2:2 recording mode, the video output from the HDMI/SDI terminals is HD.

The GY-LS300CH also now has the ability to output Ultra HD (3840 x 2160) video at 60/50p via its HDMI 2.0b port. Through JVC’s partnership with Atomos, the GY-LS300CH integrates with the new Ninja Inferno and Shogun Inferno monitor recorders, triggering recording from the camera’s start/stop operation. Plus, when the camera is set to J-Log1 gamma recording mode, the Atomos units will record the HDR footage and display it on their integrated, 7-inch monitors.

“The upgrades included in our Version 4.0 firmware provide performance enhancements for high raster recording and IP remote capability in 4K, adding even more content creation flexibility to the GY-LS300CH,” says Craig Yanagi, product marketing manager at JVC. “Seamless integration with the new Ninja Inferno will help deliver 60p to our customers and allow them to produce outstanding footage for a variety of 4K and UHD productions.”

Designed for cinematographers, documentarians and broadcast production departments, the GY-LS300CH features JVC’s 4K Super 35 CMOS sensor and a Micro Four Thirds (MFT) lens mount. With its “Variable Scan Mapping” technology, the GY-LS300CH adjusts the sensor to provide native support for MFT, PL, EF and other lenses, which connect to the camera via third-party adapters. Other features include Prime Zoom, which allows shooters using fixed-focal (prime) lenses to zoom in and out without loss of resolution or depth, and a built-in HD streaming engine with Wi-Fi and 4G LTE connectivity for live HD transmission directly to hardware decoders as well as JVCVideocloud, Facebook Live and other CDNs.

The Version 4.0 firmware upgrade is free of charge for all current GY-LS300CH owners and will be available in late May.

Bluefish444 releases IngeSTore 1.1, adds edit-while-record capability

Bluefish444 was at NAB with Version 1.1 of its IngeSTore multichannel capture software, which is now available free from the Bluefish444 website. Compatible with all Bluefish444 video cards, IngeSTore captures multiple simultaneous channels of 3G/HD/SD-SDI to popular media files for archive, edit, encoding or analysis. IngeSTore improves efficiency in the digitization workflow by enabling multiple simultaneous recordings from VTRs, cameras and any other SDI source.

The new version of IngeSTore software also adds “Edit-While-Record” functionality and additional support for shared storage including Avid. Bluefish444 has partnered with Drastic Technologies to bring additional CODEC options to IngeSTore v1.1 including XDCAM, DNxHD, JPEG 2000, AVCi and more. Uncompressed, DV, DVCPro and DVCPro HD codecs will be made available free to Bluefish444 customers in the IngeSTore update.

The Edit-While-Record functionality allows editors to access captured files while they are still being recorded to disk. Content creation tools such as Avid Media Composer, Adobe Premiere Pro CC and Assimilate Scratch can output SDI and HDMI with Bluefish444 video cards while IngeSTore is recording and the files are growing in size and length.

Latest Autodesk Flame family updates and more

Autodesk was at NAB talking up new versions of its tools for media and entertainment, including the Autodesk Flame Family 2018 Update 1 for VFX, the Arnold 5.0 renderer, Maya 2017 Update 3 for 3D animation, performance updates for Shotgun production tracking and review software and 3ds Max 2018 software for 3D modeling.

The Autodesk Flame 2018 Update 1 includes new Action and Batch Paint improvements, such as 16-bit floating point (FP) depth support, as well as scene detect and conform enhancements.

The Autodesk Maya 2017 Update 3 includes enhancements to character creation tools such as interactive grooming with XGen, an all-new UV workflow, and updates to the motion graphics toolset that includes a live link with Adobe After Effects and more.

Arnold 5.0 is offering several updates including better sampling, new standard surface, standard hair and standard volume shaders, Open Shading Language (OSL) support, light path expressions, refactored shading API and a VR camera.

Shotgun updates accelerate multi-region performance and make media uploads and downloads faster regardless of location.

Autodesk 3ds Max 2018 offers Arnold 5.0 rendering via a new MAXtoA 1.0 plug-in, customizable workspaces, smart asset creation tools, Bézier motion path animation, and a cloud-based large model viewer (LMV) that integrates with Autodesk Forge.

The Flame Family 2018 Update 1, Maya 2017 Update 3 and 3ds Max 2018 are all available now via Autodesk e-stores and Autodesk resellers. Arnold 5.0 and Shotgun are both available via their respective websites.

Boris FX merges with GenArts

Boris FX, maker of Boris Continuum Complete, has inked a deal to acquire visual effects plug-in developer GenArts, whose high-end plug-in line includes Sapphire. Sapphire has been used in at least one of each year’s VFX Oscar-nominated films since 1996. This acquisition follows the 2015 addition of Imagineer Systems, developer of Academy Award-winning planar tracking tool Mocha. Sapphire will continue to be developed and sold in its current form alongside Boris Continuum Complete (BCC) and Mocha Pro.

“We are excited to announce this strategic merger and welcome the Sapphire team to the Boris FX/Imagineer group,” says owner Boris Yamnitsky. “This acquisition makes Boris FX uniquely positioned to serve editors and effects artists with the industry’s leading tools for motion graphics, broadcast design, visual effects, image restoration, motion tracking and finishing — all under one roof. Sapphire’s suite of creative plug-ins has been used to design many of the last decades’ most memorable film images. Sapphire perfectly complements BCC and mocha as essential tools for professional VFX and we look forward to serving Sapphire’s extremely accomplished users.”

“Equally impressive is the team behind the technology,” continues Yamnitsky. “Key GenArts staff from engineering, sales, marketing and support will join our Boston office to ensure the smoothest transition for customers. Our shared goal is to serve our combined customer base with useful new tools and the highest quality training and technical support.”

NAB: The making of Jon Favreau’s ‘The Jungle Book’

By Bob Hoffman

While crowds lined up above the south hall at NAB to experience the unveiling of the new Lytro camera, across the hall a packed theatre conference room geeked out as the curtain was pulled back slightly during a panel on the making of director Jon Favreau’s cinematic wonder, The Jungle Book. Moderated by ICG Magazine editor David Geffner, Oscar-winning VFX supervisor Rob Legato, ASC, along with Jungle Book producer Brigham Taylor and Technicolor master colorist Mike Sowa, enchanted the packed room with stories of the making and finishing of the hit film.

Legato first started developing his concepts for “virtual production” techniques on Martin Scorsese’s The Aviator, and shortly thereafter, with James Cameron and his hit Avatar. During the panel, Legato took the audience through a set of short demo clips of various scenes in the film while providing background on the production processes used by cinematographer Bill Pope, ASC, and Favreau to capture the live-action component of the film. Legato pointedly explained that his process is informed by a very traditional analog approach. The development of his thinking is based on a commitment to giving the filmmaking team tools and methodologies that allow them to work within their own particular comfort zones.

They may be working in a virtual environment, but Favreau’s wonderful touch is brilliantly demonstrated by the performance of 12-year-old Neel Sethi in his theatrical debut feature. Geffner noted more than once that “the emotional stakes are so well done you get involved emotionally” — without any notion of the technical complexity underlying the narrative. One other area noted by Legato and Sowa was the myriad of theatrical HDR deliverables for The Jungle Book, including up to 14 foot-lamberts for the 3D presentation. The film, and the presentation, was another indicator that HDR is a differentiator audiences are clamoring for.

Bob Hoffman works at Technicolor in Hollywood.

Pixspan at NAB with 4K storage workflow solutions powered by Nvidia

During the NAB Show, Pixspan was demonstrating new storage workflows for full-quality 4K images powered by the Nvidia Quadro M6000. Addressing the challenges that higher resolutions and increasing amounts of data present for storage and network infrastructures, Pixspan is offering a solution that reduces storage requirements by 50-80 percent, in turn supporting 4K workflows on equipment designed for 2K while enabling data access times that are two to four times faster.

Pixspan software and the Nvidia Quadro M6000 GPU together deliver bit-accurate video decoding at up to 1.3GB per second — enough to handle 4K digital intermediates or 4K/6K camera RAW files in realtime. Pixspan’s solution is based on its bit-exact compression technology, where each image is compressed into a smaller data file while retaining all the information from the original image, demonstrating how the processing power of the Quadro M6000 can be put to new uses in imaging storage and networking to save time and help users meet tight deadlines.
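As a back-of-envelope illustration of why a 50-80 percent reduction matters at these data rates, the sketch below estimates the uncompressed throughput of a 4K 10-bit RGB sequence and what remains after that range of savings. The frame-size assumptions (DCI 4K, 10-bit DPX packed into 32 bits per pixel, 24fps) are ours for illustration, not Pixspan benchmark figures.

```python
# Rough, illustrative numbers only; actual sizes depend on format, resolution and packing.
width, height = 4096, 2160      # DCI 4K frame
bytes_per_pixel = 4             # 10-bit RGB DPX packs 3 x 10 bits into a 32-bit word
fps = 24

frame_mb = width * height * bytes_per_pixel / 1e6
stream_mb_s = frame_mb * fps
print(f"~{frame_mb:.1f} MB/frame, ~{stream_mb_s:.0f} MB/s uncompressed at {fps}fps")

for reduction in (0.5, 0.8):    # the 50-80 percent range quoted above
    print(f"{reduction:.0%} reduction -> ~{stream_mb_s * (1 - reduction):.0f} MB/s to storage")
```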

Colorist Society International launches for color pros

At the opening of NAB, motion picture and television colorists Jim Wicks and Kevin Shaw announced Colorist Society International (CSI), the first professional association devoted exclusively to furthering and honoring the professional achievements of the colorist community. A non-profit organization, CSI represents professional colorists and promotes the creative art and science of color grading, restoration and finishing by advancing the craft, education and public awareness of color grading and color correction.

The Colorist Society International is a paid membership organization that seeks to increase the entertainment value of film and digital projects by pursuing artistic pre-eminence and scientific achievement in the creative art of color, and to bring into close alliance those color artists who want to advance the prestige and dignity of the profession. It is intended as an educational and cultural resource rather than a labor union or guild.

“The colorist community has been growing for quite some time,” says Shaw. “We believe that a society by, for and about colorists is long overdue. Current representation for colorists is fragmented, and we feel that the industry would be better served with the coherent voice of the Colorist Society International.”

Wicks added, “The notion of a colorist society is not farfetched. In much the same way, directors, cinematographers, and editors — the artists that we work closely with — have their own professional associations, each with similar mission statements and objectives.”

Membership is open to professional colorists, editor/colorists, DITs, telecine operators, color timers, finishers and color scientists. Corporate sponsors and members from alliance organizations, such as cinematographers, directors and producers, are also welcome.

NAB 2016: My pick for this year’s gamechanger is Lytro

By Isaac Spedding

There has been a lot of buzz around what the gamechanger was at this year’s NAB show. What was released that will really change the way we all work? I was present for the conference session where an eloquent Jon Karafin, head of Light Field Video, explained that Lytro has created a camera system that essentially captures every aspect of your shot and allows you to recreate it in any way, at any position you want, using light field technology.

Typically, with game-changing technology comes uncertainty from the established industry, and that was made clear during the rushed Q+A session, where several people (after congratulating the Lytro team) nervously asked if they had thought about the fate of positions in the industry which the technology would make redundant. Jon’s reply was that core positions won’t change; however, the way in which they operate will. The mob of eager filmmakers, producers and young scientists that queued to meet him (I was one of them) was another sign that the technology is incredibly interesting and exciting for many.

“It’s a birth of a new technology that very well could replace the way that Hollywood makes films.” These are words from Robert Stromberg (DGA), CCO and founder of The Virtual Reality Company, in the preview video for Lytro’s debut film Life, which will be screened on Tuesday to an audience of 500 lucky attendees. Karafin and Jason Rosenthal, CEO at Lytro, will provide a Lytro Cinema demonstration and breakdown of the short film.

Lytro Cinema is my pick for the NAB 2016 game changing technology and it looks like it will not only advance capture, but also change post production methodology and open up new roles, possibilities and challenges for everyone in the industry.

Isaac Spedding is a New Zealand-based creative technical director, camera operator and editor. You can follow him on Twitter @Isaacspedding.

Sony’s new PXW-FS5 camera, the FS7’s little brother

By Robert Loughlin

IBC is an incredibly exciting time of year for gearheads like me, but simultaneously frustrating if you can’t make it over to Amsterdam to see the tech in person. So when I was asked if I wanted to see what Sony was going to display at IBC before the trade show, I jumped at the chance.

I was treated to a great breakfast in the Sony Clubhouse, at the top of their building on Madison Avenue, surrounded by startling views of Manhattan and Long Island to the East. After a few minutes of chitchatting with the other writers, we were invited into a conference room to see what Sony had to show. They started by outlining what they believed their strengths were, and where they see themselves moving in the near future.

They stressed that they have tools for all corners of the market, from the F65 to the A7, and that these tools have been used in all ranges of environmental conditions — from extreme cold to scorching heat. Sony was very proud of the fact that they had a tool for almost any application you could think of. Sony’s director of digital imaging, Francois Gauthier, explained that if you started with the question, “What is my deliverable?” — meaning cinema, TV or web — Sony would have a solution for you. Yet, despite that broad range of product coverage, Sony felt that there was a missing piece in there, particularly between the FS7 and their cheaper A7 series of DSLRs. That’s where the PXW-FS5 comes in.

The FS5
The FS5 is a brand-new camera that struck me as the FS7’s little brother. It sports a native 4K Super 35mm sensor, and we were told it’s the same 12 million-pixel Exmor sensor as the FS7. It records XAVC-L as well as AVCHD codecs, in S-Log 3, to dual SD card slots. The FS5 can also record high frame rates for both realtime recording and overcranking. The sensor itself is rated at EI 3200 with a dynamic range of about 14 stops. Internal recording is 8-bit 4:2:0 (at 4K — HD is 10-bit 4:2:2), but you can go out to an external recorder to get 10-bit 4K over the HDMI 2.0 port in the back. The camera also has one SDI port, but that only supports HD. You can record proxies simultaneously to the second SD card slot (though only when recording XAVC-L), and either have both slots sync up, or have individual record triggers for each. There is a 2K sensor crop mode, as well, that will let you either extend your lens, or use lenses designed for smaller image formats (like 16mm).

Controls on the side of the FS5

Product manager Juan Martinez stressed the power of the electronics inside, clocking boot time at less than five seconds, and mentioned that it is incredibly efficient (about two hours on the BP-U30, the smallest-capacity battery). Additionally, he added that the camera doesn’t need to reboot if you’re changing recording formats. You just set it and you’re done.

The camera also has a new “Advanced Auto Focus” technology that can use facial recognition to track a subject. In addition to focus tools, the FS5 also has something called “Clear Image Zoom.” Clear Image Zoom is a way to blow up your picture — virtually extending the length of your lens — by first maximizing the optical zoom of the glass, then cleanly enlarging the image digitally. You can do this up to 2x, but it can be paired with the 2K sensor crop to get even more length out of your lens. The FS5 also has a built-in variable ND tool. There’s a dial on the side of the camera that lets you adjust iris to 1/100th of a stop, allowing the operator to do smooth iris pulls. Additionally, the camera has a silver knob on the front that allows you to assign up to three custom ND/iris values that you can quickly switch between.
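To put the combined reach in rough numbers, the back-of-envelope below assumes the 2K sensor crop acts like a 2x crop of the Super 35 frame (my assumption; Sony doesn't quote an exact factor) and stacks it with the 2x Clear Image Zoom on the 18-105mm kit lens.

```python
# Back-of-envelope field-of-view reach; the 2x value for the 2K crop is an assumption.
kit_lens_long_end_mm = 105
clear_image_zoom = 2.0
assumed_2k_crop = 2.0

effective_mm = kit_lens_long_end_mm * clear_image_zoom * assumed_2k_crop
print(f"~{effective_mm:.0f}mm-equivalent reach relative to the full Super 35 frame")  # ~420mm
```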

In terms of design, it looks almost identical to the FS7, just shrunken down a bit. It has similar lines, but has the footprint and depth of the Canon C1/3/500, just a bit shorter. It’s a tiny camera. In like fashion, it’s also incredibly light. It weighs about two pounds — the magnesium body has something to do with that. It’s something I can easily hold in my hand all day. Its size and weight certainly make using this camera on gimbals and medium-sized drones very attractive. The remote operation applications become even more attractive with the FS5’s built in wireless streaming capability. You can stream the image to a computer, wireless streaming hardware (like Teradek), or your smartphone with Sony’s app. However, you can get higher bit-rates out of the stream by going over the Ethernet port on the back. Both Ethernet and wireless streaming are 720p. With the wireless capability, you can also connect to an FTP, enabling you to push media directly to a server from the field (provided you have the uplink available).

It’s also designed to work really well in your hand. The camera comes with a side grip that’s very repositionable with an easily reachable release lever. Just release the lever, and the grip is free to rotate. The grip fit perfectly in my palm, with controls either just under where my fingers naturally fell or within easy reach. The buttons included the standard remote buttons, like zoom and start/stop, but also a user definable button and a corresponding joystick, for quick access to menus.

Top: the handgrip in hand; bottom: button map

The grip is mounted very close to the camera body, in order to optimize the center of gravity while holding it. The camera is small and light enough that while holding it this way without the top handle and LCD viewfinder it’s reminiscent of holding a Handicam. However, if you have a long lens, or a similar setup where the center of gravity alters significantly, and need to move the grip up, you can remove it and mount an ARRI rosette plate (sold separately).

The FS5, without top handle or LCD viewfinder

The camera also comes with a top handle that has GPS built-in, mounting points for the LCD viewfinder, an XLR input, and a Multi Interface hot-shoe mount. The handle also has its own stereo microphone built into the front, but the camera itself can only record two channels of audio.

Sony has positioned this camera to fall between DSLRs and the FS7. The MSRP is $6,699 for the body only, or $7,299 with a kit lens (18-105mm). The actual street prices will be lower than that, so the FS5 should fit comfortably between the two. Sony envisions this as their “grab and go” camera, ideal for remote documentary and unscripted TV or even web series. The camera is small, light and maneuverable enough to certainly be that. They wanted a camera that would be unintimidating to a non-professional, and I think they achieved that. However, without things like genlock and timecode, and with its E-mount lens mount, this camera is less ideal for cinema applications. There are other cameras around the same price point that are better suited for cinema (Blackmagic, RED Scarlet), so that’s totally fine. This camera definitely has its DNA deeply rooted in the camcorder days of yore, and will feel right at home with someone shooting and producing content for documentaries and TV. They showed a brief clip of footage, and it looked sharp with rich colors. I still tend to favor the color coming out of the Canon C series over the FS5, but it’s still solid footage. Projected availability is November 2015. For a full breakdown of specs, visit www.sony.com/fs5.

Sony PSZ-RA6T

However, that wasn’t all Sony showed. The FS5 is pretty neat, but I was much more excited for the other thing Sony brought out. Tucked away in a corner of the room where they had put an FS5 in a “studio” set-up was a little download station. Centered around a MacBook Pro, the simple station had a Thunderbolt card reader and offload drive. The PSZ-RA drive is a brand-new product from Sony, and I’m almost more excited about this little piece of hardware than I am about the new camera. It’s a small, two-disk RAID that comes in 4TB and 6TB options. It’s similar to G-Tech’s popular G-RAIDs, with one notable exception. This thing is ruggedized. Imagine a LaCie Rugged the size and shape of a G-RAID (but without that awful orange — this is Sony-gray). The disks inside are buffered; it’s rated to be dropped from about a foot and can safely be tilted four inches in any direction. It supports RAID-0, -1 and JBOD. To me, set at RAID-1, it’s the perfect on-set shuttle drive. It even has a handle on top!

Overall, I saw a couple of really exciting things from Sony, and while I think a lot of people are really going to like the FS5, I’m dying to get the PSZ-RA drives on set.

Robert Loughlin is a post production professional specializing in dailies workflows as an outpost technician at Light Iron New York, and an all-around tech-head.

IBC: Adobe upgrades Creative Cloud and Primetime

Adobe is adding new features to Adobe Creative Cloud, including support for Ultra HD (UHD), color-technology improvements and new touch workflows. In addition, Adobe Primetime, one of eight solutions inside Adobe Marketing Cloud, will extend its delivery and monetization capabilities for HTML5 video and offer new tools for pay-TV providers that make TV Everywhere authentication easier and more streamlined.

New video technology coming soon to Creative Cloud allows tools that will streamline workflows for broadcasters and media companies. They are:

  • Comprehensive native format support for editing 4K-to-8K footage in Premiere Pro CC.
  • Continued color advancements with support for High Dynamic Range (HDR) workflows in Premiere Pro CC.
  • Improved color fidelity and color adjustments in After Effects CC, as well as deeper support for ARRI RAW, Rec. 2020 and other Ultra HD and HDR formats.
  • A touch environment with Premiere Pro CC, After Effects CC and Character Animator optimized for Microsoft Surface Pro, Windows 8 tablets or Apple trackpad devices.
  • Remix, a new feature in Audition CC that adjusts the duration of a song to match video content. Remix automatically rearranges music to any duration while maintaining musicality and structure, creating custom tracks to fit storytelling needs.
  • Updated support for Creative Cloud Libraries across CC desktop video tools, powered by Adobe CreativeSync. Now, assets will instantly appear in After Effects and Premiere Pro.
  • Destination Publishing, a single-action solution in Adobe Media Encoder for rendering and delivering content to popular social platforms, will now support Facebook.
  • Adobe Anywhere, a workflow collaboration platform, can be deployed as either a multilocation streaming solution or a single-location collaboration-only version.

Primetime, Adobe’s multiscreen TV platform, is also getting an upgrade to support OTT and direct-to-consumer offerings. The upgrade includes:

  • Ability to deliver HTML5 content across mobile browsers and additional connected devices, extending its reach and monetization capabilities.
  • An instant-on capability that pre-fetches video content inside an app to start playback in less than a second, speeding the startup time for video-on-demand and live streams by 300 and 500 percent, respectively.
  • Support for Dolby AC-3 to enable high-impact, cinema-quality sound on virtually all desktops and connected devices.
  • Support for the OAUTH 2.0 protocol to make it easier for consumers to access their favorite pay-TV content. Pay-TV providers can enable frictionless TV Everywhere with home-based authentication and offer longer authentication sessions that require users to log in only once per device.
  • New support for OTT and TV Everywhere measurement — including a broad variety of user-engagement metrics — in Adobe Analytics, a tool that is integrated with the Primetime TVSDK.

IBC: iZotope announces RX Post Production Suite and RX 5 Audio Editor

Audio technology company iZotope, Inc. has unveiled its new RX Post Production Suite, a set of tools that enable professionals to edit, mix, and deliver their audio, as well as RX 5 Audio Editor, an update to the company’s RX platform.

The new RX Post Production Suite contains products aimed at each stage of the audio post production workflow, including audio repair and editing, mix enhancement and final delivery. The RX Post Production Suite includes the RX 5 Advanced Audio Editor, RX Final Mix, RX Loudness Control and Groove3, as well as the customer’s choice of 50 free sound effects from Pro Sound Effects.

The new RX 5 Audio Editor and RX 5 Advanced Audio Editor are designed to repair and enhance common problematic production audio while speeding up workflows that currently require either multiple manual editing passes, or a non-intuitive collection of tools from different vendors. RX 5’s new Instant Process tool lets editors “paint out” unwanted sonic elements directly on the spectral display with a single mouse gesture. The new Module Chain allows users to define a custom chain of processing (e.g. De-click, De-noise, De-reverb, EQ Match, Leveler, Normalize) and then save that chain as a preset so that multiple processes can be recalled and applied in a single click for repetitive tasks.
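Conceptually, the Module Chain works like an ordered list of processes saved under a preset name and replayed in one action. The sketch below is only a schematic of that idea; the function names are hypothetical placeholders, not iZotope's actual API.

```python
# Schematic of a saved processing chain; each placeholder stands in for an RX module.
def de_click(audio):  return audio
def de_noise(audio):  return audio
def de_reverb(audio): return audio
def eq_match(audio):  return audio
def leveler(audio):   return audio
def normalize(audio): return audio

PRESETS = {
    "dialogue_cleanup": [de_click, de_noise, de_reverb, eq_match, leveler, normalize],
}

def apply_chain(audio, preset_name):
    """Recall a saved chain and run every module over the clip in a single pass."""
    for module in PRESETS[preset_name]:
        audio = module(audio)
    return audio

cleaned = apply_chain([0.0, 0.1, -0.2], "dialogue_cleanup")
```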

For Pro Tools/RX 5 workflows, RX Connect has been enhanced to support individual Pro Tools clips and crossfades with any associated handles so that processed audio returns “in place” to the Pro Tools timeline.

RX 5 Advanced also includes a new De-plosive module that minimizes plosives from letters such as p, t, k, and b, in which strong blasts of air create a massive pressure change at the microphone element, impairing the sound. In addition, the Leveler module has been enhanced with breath and “ess” (sibilance) detection for increased accuracy when performing faster than realtime leveling.

ftrack 3.2 includes new API and customizable Workflows

Swedish company ftrack has launched ftrack 3.2, the newest version of its project management solution for creative industries. Along with the Actions functionality announced last April, ftrack 3.2 includes several customer-driven features that expand the uses of the software. These include Workflows functionality, which enables users to tailor ftrack to their industry, and a rebuilt API that allows for more flexibility and deeper tool customization. Later this year, ftrack will launch an ftrack mobile app that will allow users to track production on the go.

ftrack 3.2’s new Workflows functionality is designed to improve project structures by removing the rigidity of sequences, shots and tasks. Instead of the task groups of the previous version, ftrack 3.2 allows users to customize layouts to match the needs of the project. Users can rename each group to match the terminology of their domain, making ftrack relevant to a wider range of creative disciplines and markets, such as video games, motion graphics and architecture.

In addition, ftrack 3.2 includes a faster, more comprehensive open-source API targeted to developers. The 3.2 API has greater scope and covers more of the functionality contained in ftrack, while offering more avenues for developers to adjust performance. Fully documented and built around normal Python data structures, the new ftrack API is an out-of-the-box solution designed to simplify tool customization.
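In practice, working against the rebuilt API looks roughly like the sketch below: open a session, query entities with a simple expression string and treat the results as ordinary Python mappings. The server URL, credentials and query here are placeholders, and exact call signatures may differ between ftrack versions.

```python
import ftrack_api

# Placeholder credentials; a real pipeline would read these from the environment.
session = ftrack_api.Session(
    server_url="https://yourstudio.ftrackapp.com",
    api_user="pipeline_bot",
    api_key="YOUR-API-KEY",
)

# Entities behave like normal Python mappings, in keeping with the
# "normal Python data structures" design mentioned above.
for task in session.query('Task where status.name is "In Progress"'):
    print(task["name"], task["parent"]["name"])

# Local changes are collected and pushed to the server in one round trip.
session.commit()
```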

Coming in September, the ftrack mobile app will allow users to monitor ongoing projects when away from their workstations. Using their mobile devices, users will be able to check in on the status of tasks, log time, stop or start timers, receive notifications and contact others involved in the project.

Chaos Group shows V-Ray for Nuke at SIGGRAPH 2015

Chaos Group’s V-Ray for Nuke is a new tool for lighting and compositing that integrates production-quality ray-traced rendering into the company’s Nuke, NukeX and Nuke Studio products. V-Ray for Nuke enables compositors to take advantage of V-Ray’s lighting, shading and rendering tools inside Nuke’s node-based workflow.

V-Ray for Nuke brings the same technology used on Game of Thrones, Avengers: Age of Ultron, and other film, commercial and television projects to professional compositors.

Built on the same adaptive rendering core as V-Ray’s plugins for Autodesk 3ds Max and Maya, V-Ray for Nuke is designed for production pipelines. V-Ray for Nuke gives compositors the ability to adjust lighting, materials and render elements up to final shot delivery. Full control of 3D scenes in Nuke lets compositors match 2D footage and 3D renders simultaneously, saving time for environments and set extension work. V-Ray for Nuke includes a range of features for rendering and geometry with 36 beauty, matte and utility render elements, as well as effects for lights, cameras, materials and textures.

New Autodesk extensions, updated Shotgun at SIGGRAPH 2015

At SIGGRAPH 2015, Autodesk announced its 2016 M&E extensions, designed to accelerate design, sharing, review and iteration of 3D content across every stage of the creative pipeline. The Maya 2016 extension adds a new text tool for creating branding, flying logos, title sequences and other projects that require 3D text. The 3ds Max 2016 extension includes geodesic voxel and heat map solvers to help artists create better skin weighting faster. New Max Creation Graph (MCG) animation controls provide procedural animation capabilities.

Creative Market, an online content marketplace acquired by Autodesk last year, is expanding its offerings with the debut of 3D content. The marketplace is currently home to nearly 9,000 shops selling more than 250,000 design assets to a community of more than one million members. Artists can search, purchase and license high-quality 3D content created by designers around the world or upload and sell original 3D models on the site.

Shotgun Software, meanwhile, has announced a new set of features and updates designed to make it easier for teams to review, share and provide feedback on creative projects.

Shotgun’s upcoming 6.3 release will include new review and approval features and an updated Client Review Site to streamline collaboration and communication within teams, across sites and with clients. Shotgun’s Pipeline Toolkit is also being updated with the Shotgun Panel, which lets artists communicate directly with other artists and see only the information relevant to their tasks inside creative tools like Autodesk Maya and The Foundry’s Nuke. A refreshed Workfiles tool helps artists find and navigate to relevant files more quickly.

Shotgun 6.3 includes a new global view that allows users to easily access and manage media across all of a studio’s projects from a central location in Shotgun. Other improvements include new browsing options, playlists and a preference to launch media in RV, the desktop image/movie player.
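
For a sense of how review data like this is typically driven from a pipeline, here is a minimal sketch that gathers pending versions and builds a review playlist with the Shotgun Python API (shotgun_api3); the site URL, script credentials, project ID and status value are placeholders.

```python
# A minimal sketch of driving review data with the Shotgun Python API.
# The site URL, script credentials, project id and status value are placeholders.
import shotgun_api3

sg = shotgun_api3.Shotgun(
    'https://example.shotgunstudio.com',  # placeholder site
    script_name='review_tools',           # placeholder script user
    api_key='YOUR_SCRIPT_KEY',            # placeholder key
)

# Find versions on a project that are still pending review.
versions = sg.find(
    'Version',
    filters=[
        ['project', 'is', {'type': 'Project', 'id': 123}],  # placeholder id
        ['sg_status_list', 'is', 'rev'],                     # pending review
    ],
    fields=['code', 'entity', 'user'],
)

# Collect them into a playlist so the team or a client can review together.
playlist = sg.create('Playlist', {
    'project': {'type': 'Project', 'id': 123},
    'code': 'Dailies - Tuesday',
    'versions': [{'type': 'Version', 'id': v['id']} for v in versions],
})
print('Created playlist %d with %d versions' % (playlist['id'], len(versions)))
```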

 

The Foundry gets new owner

The Foundry has received a majority investment from private equity firm HgCapital, a top investor in European software. The Foundry will sit within HgCapital’s technology, media, and telecommunications sector. Under the terms of the deal, HgCapital will assume majority ownership from The Carlyle Group for an enterprise value of £200 million ($312 million USD).

With this deal, The Foundry remains one of the few independent companies solely focused on creative industries. The deal lets The Foundry pursue its strategy of prioritizing research and innovation and teaming with other companies to create collective solutions.

NAB: We are a lucky bunch of nerds

By William Rogers

I bolted awake at 5am this morning, Las Vegas time.

I’m still ticking on New York City’s clock, and I don’t think that I’ll be changing that any time this week. I’ve also completely refrained from gambling, drinking (besides a sip of wine at a Monday night dinner with OWC) and any other activities that would cause me to think about keeping what happened here, here.

Between running laps around the South Lower hall of the convention center, I had to stop and pull my brain away from my calendar app to reflect on a thought that kept popping up: I really, really love the people here at NAB.

I’m not necessarily talking about the cornerstone vendors and the keynote speakers, but more about the passionate people standing behind something they truly pour their heart and soul into. Once the vendor representatives and I got past the product demos and the required reading, we’d get into a more human conversation while still keeping it relevant to our work.

I like that. I can’t stand fluff and disingenuousness. I can’t stand purposeless self-promotion. What I love is when I ask the right question and see people stand a few inches taller because they’re no longer slumping into their required spiel.

We filmmakers work in an incredible field. It doesn’t matter what role we’re in, whether it be the grip throwing up the Kinos for an interview, or the online editor who meticulously scrutinizes the footage for the conform.

We’re a lucky bunch of nerds.

My Tuesday

LaCie showed off a bunch of new stuff. They’re pushing out two new Rugged drives: one spinning disk capable of RAID 0/1, and another with SSD and Thunderbolt tailored for speedy field transfers. I also got an extensive look at the 8big Rack Thunderbolt 2, a multi-terabyte storage solution equipped with Thunderbolt 2, enterprise-class drives and 1330MB/s speeds for 4K editing.

I stopped by Small Tree, which provides Ethernet-based server solutions for in-house editing as well as mobile server storage. Small Tree supplied its Titanium Z-5 shared storage system to Digiboyz Inc., which used it on Netflix’s Trailer Park Boys.

Telestream had a multitude of post-production software solutions on display, but I was directed to check out Switch. Switch is a media player with an elegant UI, but it is meant for QC inspection, transcoding and file modification. For post houses that need to view and modify a vast array of file types, including transport streams, the DPP/AMWA-certified Switch provides a reliable alternative to open-source software.

Facilis was debuting its own venture into the SSD world with the Terrablock 24D/HA. The Hybrid Array has eight onboard SSDs for ultra-high-performance partitions alongside traditional SATA drives. The combination allows for the capacity scalability inherent to spinning disk drives while taking advantage of SSD speed.

I made my way over to Izotope, which specializes in audio-finishing plug-ins based on advanced audio analysis. Its RX4 software, which plugs into DAWs as well as NLEs, was shown rescuing seemingly lost audio in several nifty ways—my favorite was a preset that detected and eliminated GSM cell phone interference right on the visual audio spectrum analysis.

For those not in the know, on-site media storage will eventually be a thing of the past, even for large HD(+) media workflows. Aframe was going to give me a demo of the usability of their online UI, but we got sidetracked discussing their future integration with Adobe Anywhere. Keep an eye out, because within the next few years, customers will be able to upload all of their video assets to the cloud and live edit with no media stored on local disks.

CTRL+Console showed off their iPad app, which is used to control NLEs and other post software, like Adobe Lightroom. Meant as a keyboard replacement, it turns your tablet (currently limited to iPad) into a touchscreen console so you don’t have to learn keyboard hotkeys.

Cinegy was kind enough to escort me to a breakout room for snacks and chilly water over a conversation about the post industry. Cinegy provides software technology for digital video processing, asset management, compression and playback in broadcast environments. This year, they were rolling out Version 10 of their software featuring 4K IP-based broadcast solutions Cinegy Multiviewer and Cinegy Route, as well as Cinegy Air PRO, Cinegy Type and a variety of other solutions.

I met up with T2 Computing, who designs and implements IT solutions for post-production facilities and media companies. T2 recently teamed up with Tekserve to overhaul their invoicing and PO management system.

I’d say it was a successful Tuesday. I tried to get into my hotel pool later that evening, but my efforts to aquatically relax were thwarted by a Las Vegas sandstorm. Instead, I kicked my feet up to read a few more chapters from my Kindle, which was exactly what I needed.

Will is an editor, artist and all around creative professional working as a Post Production Coordinator for DB Productions in NYC.

NAB: Exploring collaborative workflows on the exhibit floor

By Adrian Winter

It was back to the showroom floor for me today as I checked in on a number of exhibitors with an eye toward collaborative workflows.

My first stop was the Adobe booth to take in a demonstration of Adobe Anywhere — Adobe’s collaborative platform for Premiere, Prelude and After Effects.

The workflow is built around a number of users, working either in-house or remotely, who can access and work with the same footage, all stored in one place called a Collaboration Hub.

Sound developments at the NAB Show

Spotlighting Pro Sound Effects library, Genelec 7.1.4 Array, Avid Master Joystick Module and Sennheiser AVX wireless mic

By Mel Lambert

With a core theme of “Crave More,” which is intended to reflect the passion of our media and entertainment communities, and with products from 1,700 exhibitors this year – including over 200 first-time companies – there were plenty of new developments to see and hear at the NAB Show, which continues in Las Vegas until Thursday afternoon.

Pro Sound Effects unveiled Master Library 2.0, which adds more than 30,000 new sound effects, online access, annual updates and new subscription pricing.