
NAB 2019: An engineer’s perspective

By John Ferder

Last week I attended my 22nd NAB, and I’ve got the Ross lapel pin to prove it! This was a unique NAB for me. I attended my first 20 NABs with my former employer, and most of those had me setting up the booth visits for the entire contingent of my co-workers and making sure that the vendors knew we were at each booth and were ready to go. Thursday was my “free day” to go wandering and looking at the equipment, cables, connectors, test gear, etc., that I was looking for.

This year, I’m part of a new project, so I went with a shopping list and a rough schedule with the vendors we needed to see. While I didn’t get everywhere I wanted to go, the three days were very full and very rewarding.

Beck Video IP panel

Sessions and Panels
I also got the opportunity to attend the technical sessions on Saturday and Sunday. I spent my time at the BEITC in the North Hall and the SMPTE Future of Cinema Conference in the South Hall. Beck TV gave an interesting presentation on constructing IP-based facilities of the future. While SMPTE ST 2110 has been completed and issued, there are still implementation issues, as NMOS is still being developed. Today's systems are, and will for the time being remain, hybrid facilities. The decision to be made is whether the facility will be built on an IP routing switcher core with gateways to SDI, or on an SDI routing switcher core with gateways to IP.

Although more expensive, building around an IP core would be more efficient and future-proof. Fiber infrastructure design, test equipment and finding engineers who are proficient in both IP and broadcast (the “Purple Squirrels”) are large challenges as well.

A lot of attention was also paid to cloud production and distribution, both in the BEITC and the FoCC. One such presentation, at the FoCC, was on VFX in the cloud with an eye toward the development of 5G. Nathaniel Bonini of BeBop Technology reported that BeBop has a new virtual studio partnership with Avid, and that the cloud allows tasks to be performed in a “massively parallel” way. He expects that 5G mobile technology will facilitate virtualization of the network.

VFX in the Cloud panel

Ralf Schaefer, of the Fraunhofer Heinrich-Hertz Institute, expressed his belief that all devices will be attached to the cloud via 5G, resulting in no cables and no mobile storage media. 5G for AR/VR distribution will render the scene in the network and transmit it directly to the viewer. Denise Muyco of StratusCore provided a link to a virtual workplace: https://bit.ly/2RW2Vxz. She felt that 5G would assist in the speed of the collaboration process between artist and client, making it nearly “friction-free.” While there are always security concerns, 5G would also help the prosumer creators to provide more content.

Chris Healer of The Molecule stated that 5G should help to compress VFX and production workflows, enable cloud computing to work better and perhaps provide realtime feedback for more perfect scene shots, showing line composites of VR renders to production crews in remote locations.

The Floor
I was very impressed with a number of manufacturers this year. Ross Video demonstrated new capabilities of Inception and OverDrive. Ross also showed its new Furio SkyDolly three-wheel rail camera system. In addition, 12G single-link capability was announced for Acuity, Ultrix and other products.

ARRI AMIRA (Photo by Cotch Diaz)

ARRI showed a cinematic multicam system built using the AMIRA camera with a DTS FCA fiber camera adapter back and a base station controllable by Sony RCP1500 or Skaarhoj RCP. The Sony panel will make broadcast-centric people comfortable, but I was very impressed with the versatility of the Skaarhoj RCP. The system is available using either EF, PL, or B4 mount lenses.

During the show, I learned from one of the manufacturers that one of my favorite OLED evaluation monitors is going to be discontinued. This was bad news for the new project I've embarked on. Then we came across the Plura booth in the North Hall. Plura was showing a new OLED monitor, the PRM-224-3G. It is a 24.5-inch diagonal OLED, featuring two 3G/HD/SD-SDI and three analog inputs, built-in waveform monitors and vectorscopes, LKFS audio measurement, PQ and HLG, 10-bit color depth, 608/708 closed caption monitoring, and more for a very attractive price.

Sony showed the new HDC-3100/3500 3xCMOS HD cameras with global shutter. These have an upgrade program to UHD/HDR with an optional processor board and signal format software, and a 12G-SDI extension kit as well. There is an optional single-mode fiber connector kit to extend the maximum distance between camera and CCU to 10 kilometers. The CCUs work with the established 1000/1500 series of remote control panels and master setup units.

Sony’s HDC-3100/3500 3xCMOS HD camera

Canon showed its new line of 4K UHD lenses. One of my favorite lenses has been the HJ14ex4.3B HD wide-angle portable lens, which I have installed in many of the studios I've worked in. They showed the CJ14ex4.3B at NAB, and I was even more impressed with it. The 96.3-degree horizontal angle of view is stunning, and the minimization of chromatic aberration is carried over and perhaps improved from the HJ version. It features correction data that support the BT.2020 wide color gamut. It works with the existing zoom and focus demand controllers for earlier lenses, so it's easily integrated into existing facilities.

Foot Traffic
The official total of registered attendees was 91,460, down from 92,912 in 2018. The Evertz booth was actually easy to walk through at 10 a.m. on Monday, which I found surprising given the breadth of new and interesting products and technologies Evertz had to show this year. The South Hall had the big crowds, but Wednesday seemed emptier than usual, almost like a Thursday.

The NAB announced that next year’s exhibition will begin on Sunday and end on Wednesday. That change might boost overall attendance, but I wonder how adversely it will affect the attendance at the conference sessions themselves.

I still enjoy attending NAB every year, seeing the new technologies and meeting with colleagues and former co-workers and clients. I hope that next year’s NAB will be even better than this year’s.

Main Image: Barbie Leung.


John Ferder is the principal engineer at John Ferder Engineer, currently Secretary/Treasurer of SMPTE, an SMPTE Fellow, and a member of IEEE. Contact him at john@johnferderengineer.com.

NAB NY: A DP’s perspective

By Barbie Leung

At this year’s NAB New York show, my third, I was able to wander the aisles in search of tools that fit into my world of cinematography. Here are just a few things that caught my eye…

Blackmagic, which had a large booth at the entrance to the hall, was giving demos of its Resolve 15, among other tools. Panasonic also had a strong presence mid-floor, with an emphasis on the EVA-1 cameras. As usual, B&H attracted a lot of attention, as did Arri, which brought a couple of Arri Trinity rigs to demo.

During the HDR Video Essentials session, colorist Juan Salvo of TheColourSpace talked about the emerging HDR10+ standard proposed by Samsung and Amazon Video. Also mentioned was the trend of consumer displays getting brighter every year and the impact that has on content creation and grading. Salvo pointed out the affordability of LG's C7 OLEDs (about 700 nits) for use as client monitors, while Flanders Scientific (which had a booth at the show) remains the expensive standard for grading. It was interesting to note that LG, while being the show's Official Display Partner, was conspicuously absent from the floor.

Many of the panels and presentations unsurprisingly focused on content monetization — how to monetize faster and cheaper. Amazon Web Services' stage sessions emphasized various AWS Elemental technologies, including automating the creation of video highlight clips for content like sports videos using facial recognition algorithms, generating closed captioning, and improving the streaming experience onboard airplanes. The latter will ultimately make content delivery a streamlined enough process for airlines that it would enable advertisers to enter this currently untapped space.

Editor Janis Vogel, a board member of the Blue Collar Post Collective, spoke at the #galsngear “Making Waves” panel, and noted the progression toward remote work in her field. She highlighted the fact that DaVinci Resolve, which had already made it possible for color work to be done remotely, is now also making it possible for editors to collaborate remotely. The ability to work remotely gives professionals the choice to work outside of the expensive-to-live-in major markets, which is highly desirable given that producers are trying to make more and more content while keeping budgets low.

Speaking at the same panel, director of photography/camera operator Selene Richholt spoke to the fact that crews themselves are being monetized, with content producers either asking production and post pros to provide standard services at substandard rates or asking for more services without paying more.

On a more exciting note, she cited recent 9×16 projects that she has shot with the camera mounted vertically (as opposed to shooting 16×9 and cropping in) in order to take full advantage of lens properties. She looks forward to the trend of more projects that can mix aspect ratios and push aesthetics.

Well, that’s it for this year. I’m already looking forward to next year.

 


Barbie Leung is a New York-based cinematographer and camera operator working in film, music video and branded content. Her work has played Sundance, the Tribeca Film Festival, Outfest and Newfest. She is also the DCP mastering technician at the Tribeca Film Festival.


I was an IBC virgin

By Martina Nilgitsalanont

I recently had the opportunity to attend the IBC show in Amsterdam. My husband, Mike Nuget, was asked to demonstrate workflow and features of FilmLight’s Baselight software, and since I was in between projects — I’m an assistant editor on Showtime’s Billions and will start on Season 4 in early October — we turned his business trip into a bit of a vacation as well.

Although I’ve worked in television for quite some time, this was my first trip to an industry convention, and what an eye opener it was! The breadth and scope of the exhibit halls, the vendors, the attendees and all the fun tech equipment that gets used in the film and television industry took my breath away (dancing robotic cameras??!!). My husband attempted to prepare me for it before we left the states, but I think you have to experience it to fully appreciate it.

Since I edit on Media Composer, I stopped by Avid’s booth to see what new features they were showing off, and while I saw some great new additions, I was most tickled when one of the questions I asked stumped the coders. They took a note of what I was asking of the feature, and let me know, “We’ll work on that.” I’ll be keeping an eye out!

Of course, I spent some time over at the FilmLight booth. It was great chatting with the folks there and getting to see some of Baselight’s new features. And since Mike was giving a demonstration of the software, I got to attend some of the other demos as well. It was a real eye opener as to how much time and effort goes into color correction, whether it’s on a 30-second commercial, documentary or feature film.

Another booth I stopped by was Cinedeck, over at the Launchpad. I got a demo of their CineXtools, and I was blown away. How many times do we receive a finished master (file) that we find errors in? With this software, instead of making the fixes and re-exporting (and QCing) a brand-new file, you can insert the fixes and be done! You can remap audio tracks if they’re incorrect, or even fix an incorrect closed caption. This is, I’m sure, a pretty watered down explanation of some of the things the CineX software is capable of, but I was floored by what I was shown. How more finishing houses aren’t aware of this is beyond me. It seems like it would be a huge time saver for the operator(s) that need to make the fixes.

Amsterdam!
Since we spent the week before the convention in Amsterdam, Mike and I got to do some sightseeing. One of our first stops was the Van Gogh Museum, which was very enlightening and had an impressive collection of his work. We took a canal cruise at night, which offered a unique vantage point of the city. And while the city is beautiful during the day, it's simply magical at night — whether by boat or simply strolling through the streets — with the warm glow from living rooms and streetlights reflected in the water below.

One of my favorite things was a food tour in the Jordaan district, where we were introduced to a fantastic shop called Jwo Lekkernijen. They sell assorted cheeses, delectable deli meats, fresh breads and treats. Our prime focus while in Amsterdam was to taste the cheese, so we made a point of revisiting later in the week so that we could delight in some of the best sandwiches EVER.

I could go on and on about all our wanderings (Red Light District? Been there. Done that. Royal Palace? Check.), but I’ll keep it short and say that Amsterdam is definitely a city that should be explored fully. It’s a vibrant and multicultural metropolis, full of warm and friendly people, eager to show off and share their heritage with you.  I’m so glad I tagged along!


Presenting at IBC vs. NAB

By Mike Nuget

I have been lucky enough to attend NAB a few times over the years, both as an onlooker and as a presenter. In 2004, I went to NAB for the first time as an assistant online editor, mainly just tagging along with my boss. It was awesome! It was very overwhelming and, for the most part, completely over my head.  I loved seeing things demonstrated live by industry leaders. I felt I was finally a part of this crazy industry that I was new to. It was sort of a rite of passage.

Twelve years later, Avid asked me to present on the main stage. Knowing that I would be one of the demo artists that other people would sit down and watch — as I had done just 12 years earlier — was beyond anything I thought I would do back when I first started. The demo showed the Avid and FilmLight collaboration between the Media Composer and the Baselight color system. Two of my favorite systems to work on. (Watch Mike’s presentation here.)

Thanks to my friend and now former co-worker Matt Schneider, who also presented alongside me, I had developed a very good relationship with the Avid developers and some of the people who run the Avid booth at NAB. And at the same time, the FilmLight team was quickly being put on my speed dial, and that relationship strengthened as well.

This past NAB, Avid once again asked me to come back and present on the main stage about Avid Symphony Color and FilmLight’s Baselight Editions plug-in for Avid, but this time I would get to represent myself and my new freelance career change — I had just left my job at Technicolor-Postworks in New York a few weeks prior. I thought that since I was now a full-time freelancer this might be the last time I would ever do this kind of thing. That was until this past July, when I got an email from the FilmLight team asking me to present at IBC in Amsterdam. I was ecstatic.

Preparing for IBC was similar enough as far as my demo, but I was definitely more nervous than I was at NAB. I think there were two reasons: First, presenting in front of many different people in an international setting. Even though I am from the melting pot of NYC, it is a different and interesting feeling being surrounded by so many different nationalities all day long, and pretty much being the minority. On a personal note, I loved it. My wife and I love traveling, and to us this was an exciting chance to be around people from other cultures. On a business level, I guess I was a little afraid that my fast-talking New Yorker side would lose some people, and I didn't want that to happen.

The second thing was that this was the first time that I was presenting strictly for FilmLight and not Avid. I have been an Avid guy for over 15 years. It’s my home, it’s my most comfortable system, and I feel like I know it inside and out. I discovered Baselight in 2012, so to be presenting in front of FilmLight people, who might have been using their systems for much longer, was a little intimidating.

When I walked into the room, they had set up a full-on production, along with spotlights, three cameras, a projector… the nerves rushed once again. The demo was standing room only. Sometimes when you are doing presentations, time seems to fly by, so I am not sure I remember every minute of the 50-minute presentation, but I do remember at one point within the first few minutes my voice actually trembled, which internally I thought was funny, because I do not tend to get nervous. So instead of fighting it, I actually just said out loud, "Sorry guys, I'm a little nervous here," then took a deep breath, gathered myself and fell right into my routine.

I spent the rest of the day watching the other FilmLight demos and running around the convention again saying hello to some new vendors and goodbye to those I had already seen, as Sunday was my last day at the show.

That night I got to hang out with the entire FilmLight staff for dinner and some drinks. These guys are hilarious; what a great tight-knit family vibe they have. At one point they even started to label each other: the uncle, the crazy brother, the funny cousin. I can't thank them enough for being so kind and welcoming. I kind of felt like a part of the family for a few days, and it was tremendously enjoyable and appreciated.

Overall, IBC felt similar enough to NAB, but with a nice international twist. I definitely got lost more since the layout is much more confusing than NAB’s. There are 14 halls!

I will say that the “relaxing areas” at IBC are much better than NAB’s! There is a sandy beach to sit on, a beautiful canal to sit by while having a Heineken (of course) and the food trucks were much, much better.

I do hope I get to come back one day!


Mike Nuget (known to most as just “Nuget”) is a NYC-based colorist and finishing editor. He recently decided to branch out on his own and become a freelancer after 13 years with Technicolor-Postworks. He has honed a skill set across multiple platforms, including FilmLight’s Baselight, Blackmagic’s Resolve, Avid and more. 


IBC 2018: Convergence and deep learning

By David Cox

In the 20 years I've been traveling to IBC, I've tried to seek out new technology, work practices and trends that could benefit my clients and help them be more competitive. One thing that is perennially exciting about this industry is the rapid pace of change. Certainly, from a post production point of view, there is a mini revolution every three years or so. In the past, those revolutions have increased image quality or the efficiency of making those images. The current revolution is to leverage the power and flexibility of cloud computing. But those revolutions haven't fundamentally changed what we do. The images might have gotten sharper, brighter and easier to produce, but TV is still TV. This year though, there are some fascinating undercurrents that could herald a fundamental shift in the sort of content we create and how we create it.

Games and Media Collide
There is a new convergence on the horizon in our industry. A few years ago, all the talk was about the merging of telecommunications companies and broadcasters, as well as the joining of creative hardware and software for broadcast and film, as both moved to digital.

The new convergence is between media content creation as we know it and the games industry. It was subtle, but technology from gaming was present in many applications around the halls of IBC 2018.

One of the drivers for this is a giant leap forward in the quality of realtime rendering by the two main game engine providers: Unreal and Unity. I program with Unity for interactive applications, and their new HDSRP rendering allows for incredible realism, even when being rendered fast enough for 60+ frames per second. In order to create such high-quality images, those game engines must start with reasonably detailed models. This is a departure from the past, where less detailed models were used for games than were used for film CGI shots, to protect realtime performance. So, the first clear advantage created by the new realtime renderers is that a film and its inevitable related game can use the same or similar model data.

NCam

Being able to use the same scene data between final CGI and a realtime game engine allows for some interesting applications. Habib Zargarpour from Digital Monarch Media showed a system based on Unity that allows a camera operator to control a virtual camera in realtime within a complex CGI scene. The resulting camera moves feel significantly more real than if they had been keyframed by an animator. The camera operator chases high-speed action, jumps at surprises and reacts to unfolding scenes. The subtleties that these human reactions deliver via minor deviations in the movement of the camera can convey the mood of a scene as much as the design of the scene itself.

NCam was showing the possibilities of augmenting scenes with digital assets, using their system based on the Unreal game engine. The NCam system provides realtime tracking data to specify the position and angle of a freely moving physical camera. This data was being fed to an Unreal game engine, which was then adding in animated digital objects. They were also using an additional ultra-wide-angle camera to capture realtime lighting information from the scene, which was then being passed back to Unreal to be used as a dynamic reflection and lighting map. This ensured that digitally added objects were lit by the physical lights in the real-world scene.

Even a seemingly unrelated (but very enlightening) chat with StreamGuys president Kiriki Delany about all things related to content streaming still referenced gaming technology. Delany talked about their tests to build applications with Unity to provide streaming services in VR headsets.

Unity itself has further aspirations to move into storytelling rather than just gaming. The latest version of Unity features an editing timeline and color grading. This allows scenes to be built and animated, then played out through various virtual cameras to create a linear story. Since those scenes are being rendered in realtime, tweaks to scenes such as positions of objects, lights and material properties are instantly updated.

Game engines not only offer us new ways to create our content, but they are a pathway to create a new type of hybrid entertainment, which sits between a game and a film.

Deep Learning
Other undercurrents at IBC 2018 were the possibilities offered by machine learning and deep learning software. Essentially, a normal computer program is hard wired to give a particular output for a given input. Machine learning allows an algorithm to compare its output to a set of data and adjust itself if the output is not correct. Deep learning extends that principle by using neural network structures to make a vast number of assessments of input data, then draw conclusions and predictions from that data.
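As a toy illustration of that difference (my own sketch, not anything shown at IBC): a hard-wired program would fix its one parameter in advance, while a learning algorithm nudges that parameter every time its output disagrees with the data.

# Toy example of "compare the output and adjust": fitting a single slope to data.
# (Python; illustrative only; deep learning stacks millions of such parameters.)
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (input, desired output)

slope = 0.0            # the one parameter the algorithm is allowed to adjust
learning_rate = 0.01

for _ in range(1000):
    for x, target in data:
        error = slope * x - target           # compare the output to the data...
        slope -= learning_rate * error * x   # ...and adjust if it is not correct

print(round(slope, 2))   # settles near 2.0, the slope that best explains the data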

Real-world applications are already prevalent and are largely related in our industry to processing viewing metrics. For example, Netflix suggests what we might want to watch next by comparing our viewing habits to others with a similar viewing pattern.

But deep learning offers — indeed threatens — much more. Of course, it is understandable to think that, say, delivery drivers might be redundant in a world where autonomous vehicles rule, but surely creative jobs are safe, right? Think again!

IBM was showing how its Watson Studio has used deep learning to provide automated editing highlights packages for sporting events. The process is relatively simple to comprehend, although considerably more complicated in practice. A DL algorithm is trained to scan a video file and “listen” for a cheering crowd. This finds the highlight moment. Another algorithm rewinds back from that to find the logical beginning of that moment, such as the pass forward, the beginning of the volley etc. Taking the score into account helps decide whether that highlight was pivotal to the outcome of the game. Joining all that up creates a highlight package without the services of an editor. This isn’t future stuff. This has been happening over the last year.
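As a rough sketch of that pipeline (my own illustration, not IBM's code; a crowd-loudness threshold and a score change stand in for the trained models Watson would actually use), the logic looks something like this:

# Rough sketch of the highlights pipeline described above; not IBM's implementation.
def build_highlights(crowd_level, score, loudness_threshold=0.8, pre_roll=3):
    """crowd_level: per-second crowd loudness (0-1); score: per-second total score."""
    highlights = []
    for t, level in enumerate(crowd_level):
        if level < loudness_threshold:            # "listen" for a cheering crowd
            continue
        start = max(0, t - pre_roll)              # rewind to the logical start of the play
        pivotal = t + 1 < len(score) and score[t + 1] != score[t]   # did the score change?
        highlights.append((start, t, pivotal))
    highlights.sort(key=lambda clip: clip[2], reverse=True)   # pivotal moments first
    return highlights

# Toy data: a cheer at second 6 that coincides with a score change.
crowd = [0.1, 0.2, 0.1, 0.3, 0.2, 0.4, 0.9, 0.5]
points = [0, 0, 0, 0, 0, 0, 0, 3]
print(build_highlights(crowd, points))   # prints [(3, 6, True)]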

BBC R&D was talking about their trials to have DL systems control cameras at sporting events, as they could be trained to follow the “two thirds” framing rule and to spot moments of excitement that justified close-ups.

In post production, manual tasks such as rotoscoping and color matching in color grading could be automated. Even styles for graphics, color and compositing could be “learned” from other projects.

It’s certainly possible to see that deep learning systems could provide a great deal of assistance in the creation of day-to-day media. Tasks that are based on repetitiveness or formula would be the obvious targets. The truth is, much of our industry is repetitive and formulaic. Investors prefer content that is more likely to be a hit, and this leads to replication over innovation.

So, are we heading for “Skynet” and need Arnold to save us? I thought it was very telling that IBM occupied the central stand position in Hall 7 — traditionally the home of the tech companies that have driven creativity in post. Clearly, IBM and its peers are staking their claim. I have no doubt that DL and ML will make massive changes to this industry in the years ahead. Creativity is probably, but not necessarily, the only defence for mere humans to keep a hand in.

That said, at IBC2018 the most popular place for us mere humans to visit was a bar area called The Beach, where we largely drank Heineken. If the ultimate deep learning system is tasked to emulate media people, surely it would create digital alcohol and spend hours talking nonsense, rather than try and take over the media world? So perhaps we have a few years left yet.


David Cox is a VFX compositor and colorist with 20-plus years of experience. He started his career with MPC and The Mill before forming his own London-based post facility. Cox recently created interactive projects with full body motion sensors and 4D/AR experiences.


Riding the digital storage bus at the HPA Tech Retreat

By Tom Coughlin

At the 2018 HPA Tech Retreat in Palm Desert there were many panels that spoke to the changing requirements for digital storage to support today’s diverse video workflows. While at the show, I happened to snap a picture of the Maxx Digital bus — these guys supply video storage and RAID. I liked this picture because it had the logos of a number of companies with digital storage products serving the media and entertainment industry. So, this blog will ride the storage bus to see where digital storage in M&E is going.

Director of photography Bill Bennett, ASC, and senior scientist for RealD Tony Davis gave an interesting talk about why it can be beneficial to capture content at high frame rates, even if it will ultimately be shown at a much lower frame rate. They also offered some interesting statistics about Ang Lee's 2016 technically groundbreaking movie, Billy Lynn's Long Halftime Walk, which was shot in 3D at 4K resolution and 120 frames per second.

The image above is a slide from the talk describing the size of the data generated in creating this movie. Single Sony F65 frames with 6:1 compression were 5.2MB in size with 7.5TB of average footage per day over 49 days. They reported that 104-512GB cards were used to capture and transfer the content and the total raw negative size (including test materials) was 404TB. This was stored on 1.5PB of hard disk storage. The actual size of the racks used for storage and processing wasn’t all that big. The photo below shows the setup in Ang Lee’s apartment.
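A quick back-of-the-envelope check shows how those figures hang together (my own arithmetic, not from the talk, and the assumption that both eyes of the 3D capture are counted is mine):

# Back-of-the-envelope check of the figures quoted above (my arithmetic, not the talk's).
frame_mb = 5.2          # one compressed F65 frame
fps = 120               # capture rate
eyes = 2                # assumption: native 3D capture, left and right eye
days = 49
per_day_tb = 7.5        # reported average footage per day

print(round(frame_mb * fps * eyes / 1024, 2))   # ~1.22 GB of picture per second of capture
print(per_day_tb * days)                        # 367.5 TB, in line with the 404TB raw negative (which includes tests)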

Bennett and Davis went on to describe the advantages of shooting at high frame rates. Shooting at high frame rates gives greater on-set flexibility since no motion data is lost during shooting, so things can be fixed in post more easily. Even when shown at a lower frame rate in order to get conventional cinematic aesthetics, a synthetic shutter can be created with different motion sense in different parts of the frame to create effective cinematic effects using models for particle motion, rotary motion and speed ramps.
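To give a feel for the basic idea (a much-simplified sketch of my own, not RealD's actual algorithm, which models motion per region), a synthetic shutter can be approximated by averaging some number of the high-frame-rate source frames behind each delivered frame:

# Much-simplified synthetic shutter sketch: build each 24fps frame by averaging a chosen
# number of the 5 source frames in its 120fps interval. Averaging more frames behaves
# like a longer, more blurred shutter.
def synthetic_shutter(frames_120, frames_open=3):
    """frames_120: source frames (numbers stand in for images); frames_open: 1..5."""
    out = []
    for i in range(0, len(frames_120) - 4, 5):        # every 5 source frames -> one 24fps frame
        window = frames_120[i:i + frames_open]
        out.append(sum(window) / len(window))         # per-pixel average on real images
    return out

source = list(range(20))                 # 20 "frames" of 120fps material
print(synthetic_shutter(source, 1))      # short shutter: crisp, staccato motion
print(synthetic_shutter(source, 5))      # 360-degree shutter: smooth, blurred motion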

During Gary Demos's talk on Parametric Appearance Compensation, he discussed the Academy Color Encoding System (ACES) implementation and testing. He presented an interesting slide on a single master HDR architecture, shown below. A master will be an important element in an overall video workflow that can be part of an archival package, probably using the SMPTE (and now ISO) Archive eXchange Format (AXF) standard, and also used in a SMPTE Interoperable Mastering Format (IMF) delivery package.

The Demo Area
At the HPA Retreat exhibits area we found several interesting storage items. Microsoft had on exhibit one of its Data Boxes, which allow shipping up to 100TB of data to its Azure cloud. The Microsoft Azure Data Box joins Amazon's Snowball and Google's similar bulk ingest box. Like the AWS Snowball, the Azure Data Box includes an e-paper display that also functions as a shipping label. Microsoft did early testing of the Data Box with Oceaneering International, which performs offline sub-sea oil industry inspection and uploaded its data to Azure using the Data Box.

ATTO was showing its Direct2GPU technology, which allows direct transfer from storage to GPU memory for video processing without needing to pass through a system CPU. ATTO is a manufacturer of HBAs and other connectivity solutions for moving data, and it is developing smarter connectors that can reduce overall system overhead.

Henry Gu's GIC company was showing its digital video processor with automatic QC and an IMF tool set enabling conversion of any file type to IMF, transcoding to any file format and playback of all file types, including 4K/UHD. He was doing his demonstration using a DDN storage array.

Digital storage is a crucial element in modern professional media workflows. Digital storage enables higher frame rate, HDR video recording and processing to create a variety of display formats. Digital storage also enables uploading bulk content to the cloud and implementing QC and IMF processes. Even SMPTE standards for AXF, IMF and others are dependent upon digital storage and memory technology in order to make them useful. In a very real sense, in the M&E industry, we are all riding the digital storage bus.


Dr. Tom Coughlin, president of Coughlin Associates, is a storage analyst and consultant. Coughlin has six patents to his credit and is active with SNIA, SMPTE, IEEE and other pro organizations. Additionally, Coughlin is the founder and organizer of the annual Storage Visions Conference as well as the Creative Storage Conference.


HPA Tech Retreat — Color flow in the desert

By Jesse Korosi

I recently had the opportunity to attend the HPA Tech Retreat in Palm Desert, California, not far from Palm Springs. If you work in post but aren’t familiar with this event, I would highly recommend attending. Once a year, many of the top technologists working in television and feature films get together to share ideas, creativity and innovations in technology. It is a place where the most highly credited talent come to learn alongside those that are just beginning their career.

This year, a full day was dedicated to “workflow.” As the director of workflow at Sim, an end-to-end service provider for content creators working in film and TV, this was right up my alley. This year, I was honored to be a presenter on the topic of color flow.

Color flow is a term I like to use when describing how color values created on set translate into each department that needs access to them throughout post. In the past, this process had been very standardized, but over the last few years it has become much more complex.

I kicked off the presentation by showing everyone an example of an offline edit playing back through a projector. Each shot had a slight variance in luminance, color shifts, extended-to-legal range changes, etc. During offline editing, the editor should not be distracted by color shifts like these. It's also not uncommon to have executives come into the room to see the cut. The last thing you want is them questioning VFX shots because they are seeing these color anomalies. The shots coming back from the visual effects team will have the original dailies color baked into them and need to blend into the edit.

So why does this offline edit often look this way? The first thing to really hone in on is the number of options now available for color transforms. If you show people who aren’t involved in this process day to day a Log C image, compared to a graded image, they will tell you, “You applied a LUT, no big deal.” But it’s a misconception to think that if you give all of the departments that require access to this color the same LUT, they are going to see the same thing. Unfortunately, that’s not the case!

Traditionally, LUTs consisted of a few different formats, but now camera manufacturers and software developers have started creating their own color formats, each having their own bit depths, ranges and other attributes to further complicate matters. You can no longer simply use the blanket term LUT, because that is often not a clear definition of what is now being used.

What makes this tricky is that each of these formats is only compatible with certain software or hardware. For example, Panasonic has created its own color transform format called VLT. This color file cannot be put into a Red camera or an Arri; only certain software can read it. Continue down the line through the plethora of other color transform options available, and each can only be used by certain software/departments across the post process.

Aside from all of these competing formats, we also have an ease-of-use issue. A great example of this issue would be a DP coming to me and saying (something I hear often), "I would like to create a set of six LUTs. I will write on the camera report the names of the ones I monitored with on set, and then you can apply them within the dailies process."

For about 50 percent of the jobs we do, we deliver DPX or EXR frames to the VFX facility, along with the appropriate color files they need. However, we give the other 50 percent the master media, and along with doing their own conversion to DPX, this vendor is now on the hook to figure out which of those LUTs the DP used on set go with which shots. This is a manual process for the majority of jobs using this workflow. For my presentation, I broke down why this is not a realistic request to put on vendors, which often leads to them simply not using the LUTs.

Workarounds
For my presentation, I broke down how to get around this LUT issue by staying within CDL compatibility. I also spoke about how to manage these files in post, while the on-set crew uses equivalent LUTs. This led to the discussion of how you should be prepping your color flow at the top of each job, as well as a few case studies on real-world jobs. One of those jobs was a BLG workflow providing secondaries on set that could track through into VFX and to the final colorist, while also giving the final colorist the ability to re-time shots when we needed to do a reprint without the need to re-render new MXFs to be relinked in the Avid.
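For context on why the CDL travels so well between systems: it is a small, well-defined per-channel transform rather than a sampled lookup table. Here is a minimal sketch of the published ASC CDL math (my own illustration, not any vendor's implementation):

# Minimal sketch of the ASC CDL transform (illustrative; not any vendor's implementation).
def apply_cdl(rgb, slope, offset, power, saturation):
    # Slope, offset and power are applied per channel.
    graded = []
    for value, s, o, p in zip(rgb, slope, offset, power):
        v = value * s + o
        v = max(v, 0.0) ** p                  # negatives are clamped before the power step
        graded.append(v)
    # Saturation is applied around a Rec.709-weighted luma.
    luma = 0.2126 * graded[0] + 0.7152 * graded[1] + 0.0722 * graded[2]
    return [luma + saturation * (v - luma) for v in graded]

print(apply_cdl([0.5, 0.4, 0.3],
                slope=[1.1, 1.0, 0.9], offset=[0.01, 0.0, -0.01],
                power=[1.0, 1.0, 1.2], saturation=0.9))

Because those 10 numbers (three slopes, three offsets, three powers and a saturation) mean the same thing everywhere, they can travel in an EDL, ALE or CDL/CCC sidecar and be reproduced exactly by any compliant system, which is what the workaround above relies on.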

After a deep dive into competing formats, compatibility, ease of use, and a few case studies, the big take away I wanted to leave the audience with was this:
– Ensure a workflow call happens, ideally covering color flow with your on-set DIT or DP, dailies vendor, VFX and DI representative
– Ensure a color flow pipeline test runs before day one of the shoot
– Allow enough time to react to issues
– When you aren’t sure how a certain department will get their color, ask!


Jesse Korosi is director of workflow at Sim.


Sight Sound & Story 2017: TV editing and Dylan Tichenor, ACE

By Amy Leland

This year, I was asked to live tweet from Sight Sound & Story on behalf of Blue Collar Post Collective. As part of their mission to make post events as accessible to members of our industry as possible, they often attend events like this one and provide live blogging, tweeting and recaps of the events for their members via their Facebook group. What follows are the recaps that I posted to that group after the event and massaged a bit for the sake of postPerspective.

TV is the New Black
Panelists included Kabir Akhtar, ACE, Suzy Elmiger, ACE, Julius Ramsay and moderator Michael Berenbaum, ACE.

While I haven’t made it a professional priority to break into scripted TV editing because my focus is on being a filmmaker, with editing as “just” a day job, I still love this panel, and every year it makes me reconsider that goal. This year’s was especially lively because two of the panelists, Kabir Akhtar and Julius Ramsay, have known each other from very early on in their careers and each had hilarious war stories to share.

Kabir Akhtar

The panelists were asked how they got into scripted TV editing, and if they had any advice for the audience who might want to do the same. One thing they all agreed on is that a good editor is a good editor. They said having experience in the exact same genre is less important than understanding how to interpret the style and tone of a show correctly. They also all agreed that people who hire editors often don't get that. There is a real danger of being pigeonholed in our industry. If you start out editing a lot of reality TV and want to cross over to scripted, you'll almost definitely have to take a steep pay cut and start lower down on the ladder. There is still the problem in the industry of people assuming if you've cut comedy but not drama, you can't cut drama. The same can be said for film versus TV and half-hour versus hour, etc. They all emphasized the importance of figuring out what kind of work you want to do, and pursuing that. Don't just rush headlong into all kinds of work. Find as much focus as you can. Akhtar said, "You're better off at the bottom of a ladder you want to climb than high up on one that doesn't interest you."

They all also said to seek out the people doing the kind of work you want to do, because those are the people who can help you. Ramsay said the most important networking tool is a membership to IMDB Pro. This gives you contact information for people you might want to find. He said the first time someone contacts him unsolicited he will probably ignore it, but if they contact him more than once, and it’s obvious that it’s a real attempt at personal contact with him, he will most likely agree to meet with that person.

Next they discussed the skills needed to be a successful editor. They agreed that while being a fast editor with strong technical knowledge of the tools isn’t by itself enough to be a successful editor, it is an important part of being one. If you have people in the room with you, the faster and more dexterously you can do what they are asking, the better the process will be for everyone.

There was agreement that, for the most part, they don’t look at things like script notes and circle takes. As an editor, you aren’t hired just for your technical skills, but for your point of view. Use it. Don’t let someone decide for you what the good takes are. You have to look at all of the footage and decide for yourself. They said what can feel like a great take on the set may not be a great take in the context of the cut. However, it is important to understand why something was a circle take for the director. That may be an important aspect of the scene that needs to be included, even if it isn’t on that take.

The panel also spoke about the importance of sound. They’ve all met editors who aren’t as skilled at hearing and creating good sound. That can be the difference between a passable editor and a great editor. They said that a great assistant editor needs to be able to do at least some decent sound mixing, since most producers expect even first cuts to sound good, and that task is often given to the assistant. They all keep collections of music and sound to use as scratch tracks as they cut. This way they don’t have to wait until the sound mix to start hearing how it will all come together.

The entire TV is the New Black panel.

All agreed that the best assistant editors are those who are hungry and want to work. Having a strong artistic sense and drive are more important to them than specific credits or experience. They want someone they know will help them make the show the best. In return, they have all given assistants opportunities that have led to them rising to editor positions.

When talking about changes and notes, they discussed needing that flexibility to show other options, even if you really believe in the choices you’ve made. But they all agreed the best feeling was when you’ve been asked to show other things, and in the long run, the producer or director comes back to what you had in the first place. They said when people give notes, they are pointing out the problems. Be very wary when they start telling you the solutions or how to fix the problems.

Check out the entire panel here. The TV panel begins at about 20:00.

Inside the Cutting Room
This panel focused on editor Dylan Tichenor, ACE, and was moderated by Bobbie O'Steen.

Of all of the Sight Sound & Story panels, this is by far the hardest to summarize effectively. Bobbie O’Steen is a film historian. Her preparation for interviews like this is incredibly deep and detailed. Her subject is always someone with an impressive list of credits. Dylan Tichenor has been Paul Thomas Anderson’s editor for most of his films. He has also edited such films as Brokeback Mountain, The Royal Tenenbaums and Zero Dark Thirty.

With that in mind, I will share some of the observations I wrote down while listening raptly to what was said. From the first moment, we got a great story. Tichenor’s grandfather worked as a film projector salesman. He described the first time he became aware of the concept of editing. When he was nine years old, he unspooled a film reel from an Orson Welles movie that his grandfather had left at the house and looked carefully at all of the frames. He noticed that between a frame of a wide shot and a frame of a close-up, there was a black line. And that was his first understanding of film having “cuts.” He also described an early love for classic films because of those reels his grandfather kept around, especially Murnau’s Nosferatu.

Much of what was discussed was his longtime collaboration with P.T. Anderson. In discussing Anderson’s influences, they described the blend of Martin Scorsese’s long tracking shots with Robert Altman’s complex tapestry of ensemble casts. Through his editing work on those films, Tichenor saw how Anderson wove those two things together. The greatest challenges were combining those long takes with coverage, and answering the question, “Whose story are we telling?” To illustrate this, he showed the party scene in Boogie Nights in which Scotty first meets Dirk Diggler.

Dylan Tichenor and Bobbie O'Steen.

For those complex tapestries of characters, there are frequent transitions from one person’s storyline to another’s. Tichenor said it’s important to transition with the heart and not just the head. You have to find the emotional resonance that connects those storylines.

He echoed the sentiment from one of the other panels (this will be covered in my next recap) about not simply using the director’s circle takes. He agreed with the importance of understanding what they were and what the director saw in them on set, but in the cut, it was important to include that important element, not necessarily to use that specific take.

O'Steen brought up the frequent criticism of Magnolia — that the film is too long. While Tichenor agreed that it was a valid criticism, he stood by the film as one that took chances and had something to say. More importantly, it asked something of the audience. When a movie doesn't take chances or ask the audience to work a little, it's like eating cotton candy. When the audience exerts effort in watching the story, that effort leads to catharsis.

In discussing The Royal Tenenbaums, they talked about the challenge of overlapping dialogue, illustrated by a scene between Gene Hackman and Danny Glover. Of course, what the director and actors want is to have freedom on the set, and let the overlapping dialogue flow. As an editor this can be a nightmare. In discussions with actors and directors, it can help to remind them that sometimes that overlapping dialogue can create situations where a take can’t be used. They can be robbed of a great performance by that overlap.

O'Steen described Wes Anderson as a mathematical editor. Tichenor agreed, and showed a clip with a montage of flashbacks from Tenenbaums. He said that Wes Anderson insisted that each shot in the montage be exactly the same duration. In editing, what Tichenor found was that those moments of breaking away from the mathematical formula, of working slightly against the beat of the music, were what gave it emotional life.

Tichenor described Brokeback Mountain as the best screenplay adaptation of a short story he had ever seen. He talked about a point during the editing when they all felt it just wasn’t working, specifically Heath Ledger’s character wasn’t resonating emotionally the way he should be. Eventually they realized the problem was that Ledger’s natural warmth and affectionate nature were coming through too much in his performance. He had moments of touching someone on the arm or the shoulder, or doing something else gentle and demonstrative.

He went back through and cut out every one of those moments he could find, which he admitted meant in some cases leaving “bad” cuts in the film. To be fair, in some cases that difference was maybe half a second of action and the cuts were not as bad as he feared, but the result was that the character suddenly felt cold and isolated in a way that was necessary. Tichenor also referred back to Nosferatu and how the editing of that film had inspired him. He pointed to the scene in which Jack comes to visit Ennis; he mimicked an editing trick from that film to create a moment of rush and surprise as Ennis ran down the stairs to meet him.

Dylan Tichenor

One thing he pointed out was that it can feel more vulnerable to cut a scene with a slower pace than an action scene. In an action scene, the cuts become almost a mosaic, blending into one another in a way that helps to make each cut a bit more anonymous. In a slower scene, each cut stands out more and draws more attention.

When P.T. Anderson and Tichenor came together again to collaborate on There Will Be Blood, they approached it very differently from Boogie Nights and Magnolia. Instead of the parallel narratives of that ensemble tapestry, this was a much more focused and often operatic story. They decided to approach it, in both shooting and editing, like a horror film. This meant framing shots in an almost gothic way, which allowed for building tension without frequent cutting. He showed an example of this in a clip of Daniel and his adopted son H.W. having Sunday dinner with the family to discuss buying their land.

He also talked about the need to humanize Daniel and make him more relatable and sympathetic. The best path to this was through the character of H.W. Showing how Daniel cared for the boy illuminated a different side to this otherwise potentially brutal character. He asked Anderson for additional shots of him to incorporate into scenes. This even led to additional scenes between the two being added to the story.

After talking about this film, though there were still so many more that could be discussed, the panel sadly ran out of time. One thing that was abundantly clear was that there is a reason Tichenor has worked with some of the finest filmmakers. His passion for and knowledge of film flowed through every moment of this wonderful chat. He is the editor for many films that should be considered modern classics. Undoubtedly between the depth of preparation O’Steen is known for, and the deep well of material his career provided, they could have gone on much longer without running dry of inspirational and entertaining stories to share.

Check out the entire panel here. The interview begins at about 02:17:30.

———————————
Amy Leland is a film director and editor. Her short film, Echoes, is now available on Amazon Video. Her feature doc, Ambassador of Rhythm, is in post. She also has a feature screenplay in development and a new doc in pre-production. She is also an editor for CBS Sports Network. Find out more about Amy on her site http://amyleland.net and follow her on social media on Twitter at @amy-leland and Instagram at @la_directora.


What was new at GTC 2017

By Mike McCarthy

I, once again, had the opportunity to attend Nvidia’s GPU Technology Conference (GTC) in San Jose last week. The event has become much more focused on AI supercomputing and deep learning as those industries mature, but there was also a concentration on VR for those of us from the visual world.

The big news was that Nvidia released the details of its next-generation GPU architecture, code-named Volta. The flagship chip will be the Tesla V100 with 5,120 CUDA cores and 15 teraflops of computing power. It is a huge 815mm² chip, created with a 12nm manufacturing process for better energy efficiency. Most of its unique architectural improvements are focused on AI and deep learning, with specialized execution units for Tensor calculations, which are foundational to those processes.

Tesla V100

Similar to last year’s GP100, the new Volta chip will initially be available in Nvidia’s SXM2 form factor for dedicated GPU servers like their DGX1, which uses the NVLink bus, now running at 300GB/s. The new GPUs will be a direct swap-in replacement for the current Pascal based GP100 chips. There will also be a 150W version of the chip on a PCIe card similar to their existing Tesla lineup, but only requiring a single half-length slot.

Assuming that Nvidia puts similar processing cores into their next generation of graphics cards, we should be looking at a 33% increase in maximum performance at the top end. The intermediate stages are more difficult to predict, since that depends on how they choose to tier their cards. But the increased efficiency should allow more significant increases in performance for laptops, within existing thermal limitations.
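The arithmetic behind those numbers is straightforward (my own estimate; the boost clock is an assumption on my part, and the 33% figure compares against the 3,840 cores of the top Pascal chips):

# Rough arithmetic behind the figures above (my own estimate, not Nvidia's numbers).
v100_cores = 5120
pascal_top_cores = 3840        # GP102 (Titan Xp / Quadro P6000), the current top end
boost_ghz = 1.455              # assumed boost clock for the V100

fp32_tflops = v100_cores * 2 * boost_ghz / 1000    # 2 ops per core per clock (fused multiply-add)
print(round(fp32_tflops, 1))                       # ~14.9, in line with the quoted 15 teraflops
print(round(v100_cores / pascal_top_cores - 1, 2)) # ~0.33, the basis of the 33% estimate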

Nvidia is continuing its pursuit of GPU-enabled autonomous cars with its DrivePX2 and Xavier systems for vehicles. The newest version will have a 512 Core Volta GPU and a dedicated deep learning accelerator chip that they are going to open source for other devices. They are targeting larger vehicles now, specifically in the trucking industry this year, with an AI-enabled semi-truck in their booth.

They also had a tractor showing off Blue River’s AI-enabled spraying rig, targeting individual plants for fertilizer or herbicide. It seems like farm equipment would be an optimal place to implement autonomous driving, allowing perfectly straight rows and smooth grades, all in a flat controlled environment with few pedestrians or other dynamic obstructions to be concerned about (think Interstellar). But I didn’t see any reference to them looking in that direction, even with a giant tractor in their AI booth.

On the software and application front, software company SAP showed an interesting implementation of deep learning that analyzes broadcast footage and other content looking to identify logos and branding, in order to provide quantifiable measurements of the effectiveness of various forms of brand advertising. I expect we will continue to see more machine learning implementations of video analysis, for things like automated captioning and descriptive video tracks, as AI becomes more mature.

Nvidia also released an "AI-enabled" version of Iray that uses image prediction to increase the speed of interactive ray tracing renders. I am hopeful that similar technology could be used to effectively increase the resolution of video footage as well. Basically, a computer sees a low-res image of a car and says, "I know what that car should look like," and fills in the rest of the visual data. The possibilities are pretty incredible, especially in regard to VFX.

Iray AI

On the VR front, Nvidia announced a new SDK that allows live GPU-accelerated image stitching for stereoscopic VR processing and streaming. It scales from HD to 5K output, splitting the workload across one to four GPUs. The stereoscopic version is doing much more than basic stitching, processing for depth information and using that to filter the output to remove visual anomalies and improve the perception of depth. The output was much cleaner than any other live solution I have seen.

I also got to try my first VR experience recorded with a Light Field camera. This not only gives the user a 360 stereo look-around capability, but also the ability to move their head around to shift their perspective within a limited range (based on the size of the recording array). The project they were using to demo the technology didn't highlight the amazing results until the very end of the piece, but when it did, it was the most impressive VR implementation I have had the opportunity to experience yet.
———-
Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been working on new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

The VFX Industry: Where are the women?

By Jennie Zeiher

As anyone in the visual effects industry would know, Marvel's Victoria Alonso was honored earlier this year with the Visual Effects Society Visionary Award. Victoria is an almighty trailblazer, one whom us ladies can admire, aspire to and want to be.

Her acceptance speech was an important reminder to us of the imbalance of the sexes in our industry. During her speech, Victoria stated: “Tonight there were 476 of you nominated. Forty-three of which are women. We can do better.”

Over the years, I’ve had countless conversations with industry people — executives, supervisors and producers — about why there are fewer women in artist and supervisory roles. A recent article in the NY Times suggested that female VFX supervisors made up only five percent of the 250 top-grossing films of 2014. Pretty dismal.

I’ve always worked in male-dominated industries, so I’m possibly a bit blasé about it. I studied IT and worked as a network engineer in the late ‘90s, before moving to the United States where I worked on 4K digital media projects with technologists and scientists. One of a handful of women, I was always just one of the boys. To me it was the norm.

Moving into VFX about 10 years ago, I realized this industry was no different. From my viewpoint, I see about a 1:8 ratio of female to male artists. The same is true from what I've seen through our affiliated training courses. Sadly, I've heard of some facilities that have no women in artist roles at all!

Most of the females in our industry work in other disciplines. At my workplace, Australia's Rising Sun Pictures, half of our executive members are women (myself included), and women generally outnumber men in indirect overhead roles (HR, finance, administration and management), as well as production management.

Women bring unique qualities to the workplace: they’re team players, hard working, generous and empathetic. Copious reports have found that companies that have women on their board of directors and in leadership positions perform better than those that don’t. So in our industry, why do we see such a male-dominated artist, technical and supervisory workforce?

By no means am I undervaluing the women in those other disciplines (we could not have functioning businesses without them); I'm merely trying to understand why there aren't more women inclined to pursue artistic jobs and, ultimately, supervision roles.

I can’t yet say that one of the talented female artists I’ve had the pleasure of working with over the years has risen to the ranks of being a VFX supervisor… and that’s not to say that they couldn’t have, just that they didn’t, or haven’t yet. This is something that disappoints me deeply. I consider myself a (liberal) feminist. Someone who, in a leadership position, wants to enable other women to become the best they can be and to be equal among their male counterparts.
So, why? Where are the women?

Men and Women Are Wired Differently
A study by LiveScience suggests men and women really are wired differently. It says,  “Male brains have more connections within hemispheres to optimize motor skills, whereas female brains are more connected between hemispheres to combine analytical and intuitive thinking.”

Apparently this difference is at its greatest during the adolescent years (13-17 years); however, with age these differences get smaller. So, during the peak of an adolescent girl's education, she's more inclined to be analytical and intuitive. Is that a direct correlation to them not choosing a technical vocation? But then again, I would have thought that STEM/STEAM careers would be something of interest to girls if their brains are wired to be analytical.

This would also explain women having better organizational and management skills and therefore seeking out more “indirectly” associated roles.

Lean Out
For those women already in our industry, are they too afraid to seek out higher positions? Women are often more self-critical and self-doubting. Men will promote themselves and dive right in, even if they're less capable. I have experienced this first hand and didn't actually recognize it in myself until I read Sheryl Sandberg's Lean In.

Or, is it just simply that we’re in a “boys club” — that these career opportunities are not being presented to our female artists, and that we’d prefer to promote men over women?

The Star Wars Factor
Possibly one of the real reasons that there is a lack of women in our industry is what I call "The Star Wars factor." For the most part, my male counterparts grew up watching (and being inspired by) Star Wars and Star Trek, whereas, personally, I was more inclined to watch Girls Just Want to Have Fun and Footloose. Did these adolescent boys want to be Luke or Han, or George for that matter? Were they so inspired by John Dykstra's lightsabers that they wanted to do THAT when they grew up? And if this is true, maybe Jyn, Rey and Captain Marvel — and our own Captain Marvel, Victoria Alonso — will spur on a new generation of women in the industry. Maybe it's a combination of all of these factors. Maybe it's none.

I’m very interested in exploring this further. To address the problem, we need to ask ourselves why, so please share your thoughts and experiences — you can find me at jz@vfxjz.com. At least now the conversation has started.

One More Thing!
I am very proud that one of my female colleagues, Alana Newell (pictured with her fellow nominees), was nominated for a VES Award this year for Outstanding Compositing in a Photoreal Feature for X-Men: Apocalypse. She was one of the few, but hopefully as time goes by that will change.

Main Image: The women of Rising Sun Pictures.
——–

Jennie Zeiher is head of sales & business development at Adelaide, Australia’s Rising Sun Pictures.