

Embracing production in the cloud

By Igor Boshoer

Cloud technology is set to revolutionize film production. That is, if studios can be convinced. Because this century-old industry is reluctant to change, these new technologies and promising innovations are being adopted at a slower pace.

Tried-and-true production methods are steadily becoming outdated, and the cloud puts innovation within reach of studios both small and large. In the not-so-distant future, what is now merely a competitive edge will become standard industry practice. But until then, some studios are apprehensive, and the reasons are mostly myth.

The Need for Transition
Core video production applications, computing, storage and other IT services are moving to the cloud at a rapid pace. A variety of industries and businesses — not just film — are being challenged by new customer expectations, which are heavily influenced by consumer applications powered by the cloud.

In visual effects, film and XR, application vendors such as Autodesk, Avere and Aspera are all updating their software to support these cloud workflows. Studios are recognizing that more focus should be placed on creating high-quality content, and far less on in-house software development and infrastructure maintenance. But to grow the top line and stand apart from the competition, it’s imperative for our industry to be proactive and re-imagine the workflow. Cloud providers innovate at a much faster pace than any studio can internally.

In the grand scheme of things, the industry wants to make studio operations more efficient, cost-effective and quantifiable to better serve its customers. And by taking advantage of cloud-based services, studios can increase agility while decreasing their cost and risk.

Common Misconceptions of the Cloud
Many believe the cloud to be insecure. Yet many successful, thriving startups run in the cloud, even in the finance and healthcare industries, and our industry’s MPAA guidelines are far less stringent than healthcare’s HIPAA compliance requirements. On the contrary, cloud providers typically offer far stronger security than a studio’s own internal measures.

Some studios are reluctant because transferring massive amounts of data into a cloud platform can prove challenging. But there are ways to speed up these transfers, including caching and custom UDP-based transport protocols. While the concern is valid, it’s entirely manageable.
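A rough sense of why transfer protocols dominate this conversation, as a minimal sketch; the link speed, protocol efficiencies and footage volume below are illustrative assumptions, not figures from any studio:

# Back-of-the-envelope: how long does it take to push a day's footage
# into the cloud? All numbers below are illustrative assumptions.
def transfer_hours(data_tb: float, link_gbps: float, efficiency: float) -> float:
    """Hours to move data_tb terabytes over a link_gbps link.

    efficiency models protocol overhead: single-stream TCP over a
    high-latency WAN might achieve ~0.3, while a tuned UDP-based
    transport can approach ~0.9.
    """
    bits = data_tb * 8e12              # terabytes -> bits
    rate = link_gbps * 1e9 * efficiency
    return bits / rate / 3600

day_of_footage_tb = 2.0                # assumption: ~2 TB/day of camera media
print(transfer_hours(day_of_footage_tb, 1.0, 0.3))  # ~14.8 h, naive TCP
print(transfer_hours(day_of_footage_tb, 1.0, 0.9))  # ~4.9 h, tuned UDP transport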

Studios also assume that cloud technology is expensive. It is… however, when you truly break down the costs of maintaining infrastructure — adding internal storage, hardware setup, multi-year equipment leases, not to mention an ongoing support team — the in-house model, in fact, proves more expensive. While the cloud appears costly, it actually saves money and lets you quantify the cost of production. Moreover, studios can scale resources as production demands fluctuate, instead of relying on the typical static, in-house model.
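To make that comparison concrete, here is a minimal cost sketch. Every number is a hypothetical assumption, and it simplifies by using the same per-node cost for both models (real cloud unit prices are often higher, so the crossover depends on utilization):

# Hypothetical cost sketch: a static in-house render farm must be sized
# for peak demand, while cloud capacity can follow the production curve.
peak_nodes = 100                              # nodes needed at crunch time
monthly_demand = [10, 20, 40, 100, 100, 30]   # nodes actually used per month
node_month_cost = 500.0                       # assumed cost per node-month

static_cost = peak_nodes * node_month_cost * len(monthly_demand)
cloud_cost = sum(monthly_demand) * node_month_cost

print(static_cost)  # 300000.0 -- paying for peak capacity all the time
print(cloud_cost)   # 150000.0 -- paying only for what the schedule uses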

How the Cloud Better Serves Customers
While some are still apprehensive about cloud-based integration, studios that have shifted production pipelines to cloud-based platforms — and embraced them — are finding positive results and success. The cloud can serve customers in a variety of ways. It can deliver a richer, more consistent and personalized experience for a studio’s content creators, as well as offer a collaborative community.

The latest digital technologies are guaranteed to reshape the entertainment industry’s economics, production and distribution. But to stay on their game and remain competitive, studios must adapt to these new internet and computing technologies.

If our industry is willing to push itself through these myths and preconceived assumptions, cloud technology can indeed revolutionize film production. When that begins to happen, more and more studios will adopt this competitive edge, and it will make for an exciting shift.


Igor Boshoer is a media technologist with feature film VFX credits, including The Revenant and The Wolf of Wall Street. His experience building studio technology inspired his company, Linc, a studio platform as a service. He also hosts the monthly media technology meetup Filmologic in the Bay Area.

Point 360 grows team with senior colorist Charlie Tucker

Senior colorist Charlie Tucker has joined Burbank’s Point 360. He comes to the facility from Technicolor, and brings with him over 20 years of color grading experience.

The UK-born Tucker’s credits include TV shows such as The Vampire Diaries and The Originals on CW, Wet Hot American Summer and A Futile & Stupid Gesture on Netflix, as well as Amazon’s Lore. He also just completed YouTube Red’s show Cobra Kai. Tucker, who joined the company just last week, will be working on Blackmagic Resolve.

Now at Point 360, Tucker reteams with Jason Kavner, who took the helm as senior VP of episodic sales in 2017. Tucker also joins fellow senior colorist Aidan Stanford, whose recent credits include the Academy Award-winning feature Get Out and the film Happy Death Day. Stanford’s recent episodic work includes the FX series You’re the Worst and ABC’s Fresh Off the Boat.

When prodded to sum up his feelings regarding joining Point 360, Tucker said, “I am chuffed to bits to now be part of and call Point 360 my home. It is a bloody lovely facility that has a welcoming, collaborative feel, which is refreshing to find within this pressure cooker we call Hollywood. The team I am privileged to join is a brilliant, talented and very experienced group of industry professionals who truly enjoy what they do, and I know my clients will love my new coloring bay and the creative vibe that Point 360 has created.”


Creative editorial and post boutique Hiatus opens in Detroit

Hiatus, a full-service post production studio with in-house creative editorial, original music composition and motion graphics departments, has opened in Detroit. Their creative content offerings cover categories such as documentary, narrative, conceptual, music videos and advertising media for all video platforms.

Led by founder/senior editor Shane Patrick Ford, the new company includes executive producer/partner Catherine Pink and executive producer Joshua Magee, who joins Hiatus from the animation studio Lunar North. Additional talent includes editor Josh Beebe, composer/editor David Chapdelaine and animator James Naugle.

The roots of Hiatus reach back to The Factory, a music venue founded by Ford while he was still in college; it gave local Detroit musicians, as well as touring bands, a place to play. Ford, along with a small group of creatives, then formed The Work, a production company focused on commercial and advertising projects. For Ford, the launch of Hiatus is an opportunity to focus solely on his editorial projects and to expand his creative reach, and that of his team, nationally.

Leading up to the launch of Hiatus, the team has worked on projects for brands such as Sony, Ford Motor Company, Acura and Bush’s, as well as recent music videos for Lord Huron, Parquet Courts and the Wombats.

The Hiatus team is also putting the finishing touches on the company’s first original feature film, Dare to Struggle, Dare to Win. The film uncovers the story of STRESS, a Detroit Police decoy unit, and the efforts made to restore civil order in 1970s post-rebellion Detroit. Dare to Struggle, Dare to Win makes its debut at the Indy Film Festival in Indianapolis on Sunday, April 29, and Tuesday, May 1, before it hits the film festival circuit.

“Launching Hiatus was a natural evolution for me,” says Ford. “It was time to give my creative team even more opportunities, to expand our network and to collaborate with people across the country that I’ve made great connections with. As the post team evolved within The Work, we outgrew the original role it played within a production company. We began to develop our own team, culture, offerings and our own processes. With the launch of Hiatus, we are poised to better serve the visual arts community, to continue to grow and to be recognized for the talented creative team we are.”

“Instead of having a post house stacked with people, we’d prefer to stay small and choose the right personal fit for each project when it comes to color, VFX and heavy finishing,” explains Hiatus EP Catherine Pink. “We have a network of like-minded artists that we can call on, so each project gets the right creative attention and touch it deserves. Also, the lower overhead allows us to remain nimble and work with a variety of budget needs and all kinds of clients.”


NAB 2018: A closer look at Firefly Cinema’s suite of products

By Molly Hill

Firefly Cinema, a French company that produces a full set of post production tools, premiered Version 7 of its products at NAB 2018. I visited with co-founder Philippe Reinaudo and head of business development Morgan Angove at the Flanders Scientific booth. They were knowledgeable and friendly, and they helped me to better understand their software.

Firefly’s suite includes FirePlay, FireDay, FirePost and the brand-new FireVision. All the products share the same database and Éclair color management, making for a smooth and complete workflow. However, Reinaudo says each program was designed with its own UI/UX to better support its purpose.

Here is how they break down:
FirePlay: This is an on-set media player that supports nearly any format or file. The player is free to use, but there’s a paid option that adds live color grading.

FireDay: Firefly Cinema’s dailies software includes a render tree for multiple versions and supports parallel processing.

FirePost: This is Firefly Cinema’s proprietary color grading software. Among its features is a set of “digital filters,” which are effects with adjustable parameters (not just preset LUTs). I was also excited to see the inclusion of curve controls similar to Adobe Lightroom’s Vibrance setting, which increases the saturation of only the more muted colors (a rough sketch of that idea follows after this list).

FireVision: This new product is a cloud-based review platform, with smooth integration into FirePost. Not only do tags and comments automatically move between FirePost and FireVision, but if you make a grading change in the former and hit render, the version in FireVision automatically updates. While other products such as Frame.io have this feature, Firefly Cinema offers all of these in the same package. The process was simple and impressive.
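As promised above, here is a minimal sketch of a vibrance-style control. It is not Firefly Cinema’s actual algorithm, just the common idea: scale the saturation boost by how unsaturated a pixel already is, so vivid colors barely move and grays stay gray.

import colorsys

def vibrance(rgb, amount=0.5):
    """Boost saturation, weighted by (1 - s): vivid colors move
    proportionally less, fully saturated colors not at all, and pure
    grays (s == 0) stay gray. Assumes float RGB in [0, 1]."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    s = min(1.0, s + amount * (1.0 - s) * s)
    return colorsys.hls_to_rgb(h, l, s)

print(vibrance((0.55, 0.50, 0.45)))  # muted color: s 0.10 -> ~0.15
print(vibrance((0.90, 0.10, 0.10)))  # vivid red:   s 0.80 -> ~0.88

The (1 - s) weighting is what separates a vibrance control from a plain saturation knob: the push fades out as saturation rises.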

One of the downsides of the software package is its lack of HDR support, but Reinaudo says that’s a work in progress. I believe this will likely begin with ÉclairColor HDR, as Reinaudo and his co-founder Luc Geunard are both former Éclair employees. It’s also interesting that they have products for every step after shooting except editing and audio, but perhaps, given the popularity of Avid Media Composer, Adobe Premiere and Avid Pro Tools, those are less of a priority for a young company.

Overall, their set of products was professional, comprehensive and smooth to operate, and I look forward to seeing what comes next for Firefly Cinema.


Molly Hill is a motion picture scientist and color nerd, soon-to-be based out of San Francisco. You can follow her on Twitter @mollymh4.


AlterMedia rolling out rebuild of its Studio Suite 12 at NAB

At this year’s NAB, AlterMedia is showing Studio Suite 12, a ground-up rebuild of its studio, production and post management application. The rebuilt codebase and streamlined interface have made the application lighter, faster and more intuitive; it functions as a web application yet can still be easily customized to adapt to varying workflows.

“We literally started over with a blank slate with this version,” says AlterMedia founder Joel Stoner. “The goal was really to reconsider everything. We took the opportunity to shed tons of old code and tired interface paradigms. That said, we maintained the basic structure and flow so existing users would feel comfortable jumping right in. Although there are countless new features, the biggest is that every user can now access Studio Suite 12 through a browser from anywhere.”

Studio Suite 12 now provides better integration within the internet ecosystem by connecting with Slack and Twilio (for messaging), as well as Google Calendar, Exchange Calendar, Apple Calendar, IMDb, Google Maps, eBay, QuickBooks and Xero accounting software and more.


Editor Dylan Tichenor to headline SuperMeet at NAB 2018

For those of you heading out to Las Vegas for NAB 2018, the 17th annual SuperMeet will take place on Tuesday, April 10, at the Rio Hotel. Speaking this year will be Oscar-nominated film editor Dylan Tichenor (There Will Be Blood, Zero Dark Thirty). Additionally, there will be presentations from Blackmagic, Adobe, Frame.io, HP/Nvidia, Atomos and filmmaker Bradley Olsen, who will walk the audience through his workflow on Off the Tracks, a documentary about Final Cut Pro X.

Blackmagic Resolve designers Paul Saccone, Mary Plummer, Peter Chamberlain and Rohit Gupta will answer questions on all things DaVinci Resolve, Fusion and Fairlight audio.

Adobe Premiere Pro product manager Patrick Palmer will reveal new features in Adobe’s video solutions for editing, color, graphics and audio workflows.

Frame.io CEO Emery Wells will preview the next generation of its collaboration and workflow tool, which will be released this summer.

Atomos’ Jeromy Young will talk about some of their new partners. He says, “It involves software and camera makers alike.”

As always, the evening will round out with the SuperMeet’s “World Famous Raffle,” where the total value of prizes has now reached over $101,000. Part of that total includes a Blackmagic Advanced Control Panel, worth $29,995.

Doors will open at 4:30pm with the SuperMeet Vendor Showcase, which features 23 software and hardware developers. Those attending can enjoy a few cocktails and mingle with industry peers.

To purchase tickets, and for complete daily updates on the SuperMeet, including agenda updates, directions, transportation options and a current list of raffle prizes, visit the SuperMeet website.


B&H expands its NAB footprint to target multiple workflows

By Randi Altman

In a short time, many in our industry will be making the pilgrimage to Las Vegas for NAB. They will come (if they are smart) with their comfy shoes, Chapstick and the NAB Show app and plot a course for the most efficient way to see all they need to see.

NAB is a big show that spans a large footprint, and typically companies showing their wares need to pick a hall — Central, South Lower, South Upper or North. This year, however, The Studio-B&H made some pros’ lives a bit easier by adding a booth in South Lower in addition to their usual presence in Central Hall.

B&H’s business and services have grown, so it made perfect sense to Michel Suissa, managing director at The Studio-B&H, to grow their NAB presence to include many of the digital workflows the company has been servicing.

We reached out to Suissa to find out more.

This year B&H and its Studio division are in the South Lower. Why was it important for you guys to have a presence in both the Central and South Halls this year?
The Central Hall has been our home for a long time and it remains our home with our largest footprint, but we felt we needed to have a presence in South Hall as well.

Production and post workflows merge and converge constantly, and we need to be knowledgeable in both. The simple fact is that we serve all segments of our industry, not just image acquisition and camera equipment. Our presence in image- and data-centric workflows has grown by leaps and bounds.

This world is a familiar one for you personally.
That’s true. The post and VFX worlds are very dear to me. I was an editor, Flame artist and colorist for 25 years. This background certainly plays a role in expanding our reach and services to these communities. The Studio-B&H team is part of a company-wide effort to grow our presence in these markets. From a business standpoint, the South Hall attendees are also our customers, and we needed to show we are here to assist and support them.

What kind of workflows should people expect to see at both your NAB locations?
At the South Hall, we will show a whole range of solutions demonstrating the breadth and diversity of what we have to offer. That includes VR post workflow, color grading, animation and VFX, editing and high-performance Flash storage.

In addition to the new booth in South Hall, we have two in Central. One is for B&H’s main product offerings, including our camera shootout, which is a pillar of our NAB presence.

This Studio-B&H booth features a digital cinema and broadcast acquisition technology showcase, including hybrid SDI/IP switching, 4K studio cameras, a gyro-stabilized camera car, the most recent full-frame cinema cameras, and our lightweight cable cam, the DynamiCam.

Our other Central Hall location is where our corporate team can discuss all business opportunities with new and existing B2B customers.

How has The Studio-B&H changed along with the industry over the past year or two?
We have changed quite a bit. With our services and tools, we have re-invented our image from equipment provider to solution provider.

Our services now range from system design to installation and deployment. One of the more notable recent examples is our recent collaboration with HBO Sports on World Championship Boxing. The Studio-B&H team was instrumental in deploying our DynamiCam system to cover several live fights in different venues and integrating with NEP’s mobile production team. This is part of an entirely new type of service — something the company had never offered its customers before. It is a true game-changer for our presence in the media and entertainment industry.

What do you expect the “big thing” to be at NAB this year?
That’s hard to say. Markets are in transition with a number of new technology advancements: machine learning and AI, cloud-based environments, momentum for the IP transition, AR/VR, etc.

On the acquisition side, full frame/large sensor cameras have captured a lot of attention. And, of course, HDR will be everywhere. It’s almost not a novelty anymore. If you’re not taking advantage of HDR, you are living in the past.


Netflix’s Altered Carbon: the look, the feel, the post

By Randi Altman

Netflix’s Altered Carbon is a new sci-fi series set in a dystopian future where people are immortal thanks to something called “stacks,” which contain their entire essence — their personalities, their memories, everything. The one setback is that unless you are a Meth (one of the rich and powerful), you need to buy a “sleeve” (a body) for your stack, and it might not have any resemblance to your former self. It could be a different color, a different sex, a different age, a different everything. You have to take what you can get.

Based on a 2002 novel by Richard K. Morgan, it stars Swedish actor Joel Kinnaman.

Jill Bogdanowicz

We reached out to the show’s colorist, Jill Bogdanowicz, as well as post producer Allen Marshall Palmer to find out more about the show’s varied and distinctive looks.

The look has a very Blade Runner-type feel. Was that in homage to the films?
Bogdanowicz: The creators wanted a film noir look. Blade Runner is the same genre, but the show isn’t specifically an homage to Blade Runner.

Palmer: I’ll leave that for fans to dissect.

Jill, can you talk about your process? What tools did you use?
Bogdanowicz: I designed a LUT to create that film noir look before shooting. I actually provided a few options, and they chose my favorite one and used it throughout. After they shot everything and I had all 10 episodes in my bay, I got familiar with the content, wrapped my head around the story and came up with ideas to tell that story with color.

The show covers many different times and places, so scenes needed to be treated visually to show audiences where the story is and what’s happened. I colored both HDR (Dolby Vision) and SDR passes using DaVinci Resolve.

I worked very closely with both DPs — Martin Ahlgren and Neville Kidd — in pre-timing the show, and they gave me a nice idea of what they were looking for, so I had a great starting point. They were very close-knit. The entire team on this project was an absolute pleasure, and it was a great creative collaboration, which comes through in the final product of the show.

The show is shot and posted like a feature and has a feature feel. Was that part of your marching orders?
Bogdanowicz: I’m primarily a features colorist, so I’m very familiar with the film noir look and heavy VFX, and that’s one reason I was included on this project. It was right up my alley.

Palmer: We approached Altered Carbon as a 10-part feature rather than a television series. I coined the term “feature episodic entertainment,” which describes what we were aspiring to — destination viewing instead of something merely disposable. In a world with so many viewing options, we wanted to command the viewer’s full attention, and fans are rewarded for that attention.

We were very concerned about how images, especially VFX, were going to look in HDR so we had weekly VFX approval sessions with Jill, our mastering colorist, in her color timing bay.

The executive producers and studio, along with the VFX and post teams, were able to sit together, adjusting color corrections if needed before giving final approval on shots. This gave us really good technical and creative quality control. Despite our initial concerns about VFX shots in HDR, we found that with vendors like Double Negative and Milk, with their robust 16-bit EXR pipelines, we weren’t “breaking” VFX shots when color correcting for HDR.

How did the VFX affect the workflow?
Bogdanowicz: Because I was brought on so early, the LUT I created was shared with the VFX vendors so they had a good estimation of the show’s contrast. That really helped them visualize the show’s look, so the shots were pretty darn close by the time I got them in my bay.
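For readers who haven’t worked with show LUTs, here is a minimal sketch of the mechanics of applying one. It uses a hypothetical 1D per-channel LUT for brevity (production show LUTs are usually 3D .cube files) and is in no way the show’s actual pipeline:

import numpy as np

# Hypothetical 1D show LUT: a toy filmic S-curve standing in for the
# real grade, so a vendor can preview shots with roughly final contrast.
lut_in = np.linspace(0.0, 1.0, 1024)
lut_out = lut_in ** 0.8 / (lut_in ** 0.8 + 0.25)
lut_out /= lut_out[-1]                       # normalize so 1.0 maps to 1.0

def apply_lut(image):
    """Apply the 1D LUT to every channel of a float RGB image in [0, 1]."""
    return np.interp(image, lut_in, lut_out)

frame = np.random.rand(2160, 4096, 3).astype(np.float32)  # stand-in frame
preview = apply_lut(frame)

In practice, vendors typically load such a LUT into their review tools so comps are judged under the show’s contrast rather than on raw scans.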

Was there a favorite scene or scenes?
Bogdanowicz: There are so many spectacular moments, but the emotional core for me is in episode 104 when we see the beginning of the Kovacs and Quell love story in the past and how that love gives Kovacs the strength to survive in the present day.

Palmer: That’s a tough question! There are so many, it’s hard to choose. I think the episode that really jumps out is the one in which Joel Kinnaman’s character is being tortured and the content skips back and forth in time, changes and alternates between VR and reality. It was fun to create a different visual language for each space.

Can you talk about challenges in the process and how you overcame them?
Bogdanowicz: The show features a lot of VFX and they all need to look as real as possible, so I had to make sure they felt part of the worlds. Fortunately, VFX supervisor Everett Burrell and his team are amazing and the VFX is top notch. Coming up with different ideas and collaborating with producers James Middleton and Laeta Kalogridis on those ideas was a really fun creative challenge. I used the Sapphire VFX plugin for Resolve to heavily treat and texture VR looks in different ways.

Palmer: In addition to the data management challenges on the picture side, we were dealing with mixing in Dolby Atmos. It was very easy to get distracted with how great the Atmos mix sounds — the downmixes generally translated very well, but monitoring in 5.1 and 2.0 did reveal some small details that we wanted to adjust. Generally, we’re very happy with how both the picture and sound are translating into viewers’ homes.

Dolby Vision HDR is great at taking what’s in the color bay into the home viewing environment, but there are still so many variables in viewing setups that you can end up chasing your own tail. It was also great to see firsthand Netflix’s dedication to providing the best picture and sound quality through the service.

The look of the AI hotel was so warm. I wanted to live there. Can you talk about that look?
Bogdanowicz: The AI hotel look was mostly done in design and lighting. I saw the warm practical lights and rich details in the architecture and throughout the hotel and ran with it. I just aimed to keep the look filmic and inviting.

What about the look of where the wealthy people lived?
Bogdanowicz: The Meth houses are above the clouds, so we kept the look very clean and cool with a lot of true whites and elegant color separation.

Seems like there were a few different looks within the show?
Bogdanowicz: The same LUT for the film noir look is used throughout the show, but the VR looks are very different. I used Sapphire to come up with different concepts and textures for the different VR looks, from the rich quality of high-end VR to the cheap VR found underneath a noodle bar.

Allen, can you walk us through the workflow from production to post?
Palmer: With the exception of specialty shots, the show was photographed on Alexa 65 — mostly in 5K mode, but occasionally in 6.5K and 4K for certain lenses. The camera is beautiful and a large part of the show’s cinematic look, but it generates a lot of data (about 1.9TB/hour for 5K), so this was the first challenge. The camera dictates using the Codex Vault system, and Encore Vancouver was up to the task of handling this material. We wanted to get the amount of data down for post, so we generated 4096×2304 ProRes 4444XQ “mezzanine” files, which we used for almost all of the show assembly and VFX pulls.
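Palmer’s 1.9TB/hour figure makes the storage problem easy to quantify. A rough sketch, where the hours of recorded media per day and the ProRes rate are my approximations, not production figures:

# Rough storage math for the figures quoted above.
raw_tb_per_hour = 1.9          # Alexa 65 5K raw, per the article
media_hours_per_day = 4.0      # assumed recorded media per shoot day

print(raw_tb_per_hour * media_hours_per_day)  # ~7.6 TB of originals per day

# ProRes 4444XQ at 4096x2304/24fps runs very roughly 0.75 TB/hour
# (approximate; actual rates vary with frame rate and content), so the
# mezzanine files cut the working data set roughly in half:
print(0.75 * media_hours_per_day)             # ~3.0 TB/day of mezzanine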

During production and post, all of our 4K files were kept online at Efilm using their Portal system. This allowed us fast, automated access to the material so we could quickly do VFX pulls, manage color, generate 16-bit EXR frames and send those off to VFX vendors. We knew that time saved there was going to give us more time on the back end to work creatively on the shots, so the Portal was a very valuable tool.

How many VFX shots did you average per episode? Seems like a ton, especially with the AI characters. Who provided those and what were those turnarounds like?
Palmer: There were around 2,300 visual effects shots during this season — probably fewer than most people would think, because we built a large Bay City street inside a former newspaper printing facility outside of Vancouver. The shot turnaround varied depending on the complexity and where we were in the schedule. We were lucky that something like episode 1’s “limo ride” sequence was started very early on, because it gave us a lot of time to refine our first grand views of Bay City. Our VFX supervisor Everett Burrell and VFX producer Tony Meagher were able to get us out in front of a lot of challenges, like the amount of 3D work in the last two episodes, by starting that work early, since we knew from the script and prep phase that we would need those shots.


Review: HP’s lower-cost DreamColor Z24x display

By Dariush Derakhshani

So, we all know how important a color-accurate monitor is in making professional-level graphics, right? Right?!? Even at the most basic level, when you’re stalking online for the perfect watch band for your holiday present of a smart watch, you want the orange band you see in the online ad to be what you get when it arrives a few days later. Even if your wife thinks orange doesn’t suit you, and makes you look like “you’re trying too hard.”

Especially as a content developer, you want to know that what you’re looking at is an accurate representation of the image. Ever walk into a Best Buy and see multiple screens showing the same content, but with wildly varying color? You can’t have that discrepancy working as a pro, especially in collaboration; you need color accuracy. In my own experience, that position has been filled by HP’s 10-bit DreamColor displays for many years now, but not everyone is awash in bitcoins, and a price tag of over $1,200 is sometimes hard to justify, even for a studio professional.

Enter HP’s DreamColor Z24x display at half the price, coming in around $550 online. Yes, DreamColor for half the cost. That’s pretty significant. For the record, I haven’t used a 24-inch monitor since the dark ages, when Lost was the hot TV show. I’ve been fortunate enough to be running 27 inches and higher, so there was a little shock when I started using the Z24x HP sent me for review. But this is something I quickly got used to.

With my regular 32-inch 4K display still my primary — so I can fit loads of windows all over the place — I used this DreamColor screen as my secondary display, primarily to check output for my Adobe After Effects comps and Adobe Premiere Pro edits, and to hold my render view window as I develop shaders and lighting in Autodesk Maya. I felt comfortable knowing the images I shared with my colleagues across town would be seen as I intended them, leveling the playing field when working collaboratively (as long as everyone is on the same LUT and color space). Speaking of color spaces, the Z24x hits 100% of sRGB, 99% of AdobeRGB and 96% of DCI-P3, which is just slightly under HP’s Z27x DreamColor. It is, however, slightly faster, with a 6ms response rate.

The Z24x has a 24-inch IPS panel from LG that exhibits color in 10-bit, like its bigger 27-inch Z27x sibling. This gives you over a billion colors, which I have personally verified by counting them all — that was one long weekend, I can tell you. Unlike the highest-end DreamColor screens, though, the Z24x dithers up from 8-bit to 10-bit (a scheme called 8-bit+FRC). This means it’s better than an 8-bit color display, for sure, but not quite up to real 10-bit, making it color accurate but not color critical. HP’s implementation of dithering is quite good when subjectively compared to my full 10-bit main display. Frankly, a lot of screens that claim 10-bit may actually be 8-bit+FRC anyway!
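For the curious, here is a minimal sketch of the idea behind FRC temporal dithering. It illustrates the generic technique, not HP’s actual implementation:

import numpy as np

def frc_frame(code_10bit, frame_index):
    """Show a 10-bit image on an 8-bit panel via temporal dithering.

    Each 10-bit code splits into an 8-bit base plus a 2-bit remainder;
    the panel shows base+1 on `remainder` out of every 4 frames, so the
    average over time lands on the true 10-bit level."""
    base, remainder = np.divmod(code_10bit, 4)
    bump = (frame_index % 4) < remainder
    return np.clip(base + bump, 0, 255).astype(np.uint8)

pixel = np.array([513])                              # between 128*4 and 129*4
frames = [frc_frame(pixel, i)[0] for i in range(4)]  # [129, 128, 128, 128]
print(sum(frames) / 4)                               # 128.25 == 513 / 4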

While the Z27x gives you 2560×1440, as you’d expect of most 27-inch displays if not full-on 4K, the Z24x sits at a comfortable 1920×1200, just enough for a full 1080p image and a little room for a slider or info bar. Being the res snob that I am, I had wondered if that was just too low, but at 24 inches I don’t think you would want a higher resolution, even if you’re sitting only 14 inches away from it. And this is a sentiment echoed by the folks at HP, who consulted with many of their professional clients to build this display. That resolution gives a pixel density of about 94 PPI, a bit lower than the 109 PPI of the Z27x. This density is about the same as a 1080p HD display at 27 inches, so it’s still crisp and clean.
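Those pixel-density figures are easy to verify: PPI is just the diagonal resolution in pixels divided by the diagonal size in inches. A quick check:

import math

def ppi(width_px, height_px, diagonal_inches):
    # Diagonal pixel count over diagonal size in inches.
    return math.hypot(width_px, height_px) / diagonal_inches

print(round(ppi(1920, 1200, 24.0)))  # 94  -- Z24x
print(round(ppi(2560, 1440, 27.0)))  # 109 -- Z27x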

Viewing angles are good at about 178 degrees, and the screen is matte, with an anti-glare coating, making it easier to stare at without blinking for 10 hours at a clip, as digital artists usually do. Compared to my primary display, the HP’s coating is more matte, yet it still gave me a richer black, which I liked to see.

Connection options are fairly standard with two DisplayPorts, one HDMI, and one DVI dual link for anyone still living in the past. You also get four USB ports and an analog 3.5mm audio jack if you want to drive some speakers, since you can’t from your phone anymore (Apple, I’m looking at you).

Summing Up
So while 24 inches is a bit small for my taste in a display, I am seriously impressed by the street price of the Z24x, which allows a lot more pros and semi-pros to get the DreamColor accuracy HP offers at half the price. While I wouldn’t recommend color grading a show on the Z24x, this DreamColor does a nice job of bringing a higher level of color confidence at an attractive price. As a secondary display, the Z24x is a nice addition to the workflow of an artist with budget in mind — or one who has a mean, orange-watch-band-hating spouse.


Dariush Derakhshani is a VFX supervisor and educator in Southern California. You can follow his random tweets at @koosh3d.

HPA Tech Retreat: The production budget vs. project struggle

“Executive producers often don’t speak tech language,” said Aaron Semmel, CEO and head of BoomBoomBooya, in addressing the HPA Tech Retreat audience in Palm Springs in late February. “When people come to us with requests and spout all sorts of tech mumbo jumbo, it’s very easy for us to say no,” he continued. “Trust me, you need to speak to us in our language.”

Semmel was part of a four-person HPA panel that included Cirina Catania, The Catania Group; Larry O’Connor, OWC Digital; and Jeff Stansfield, Advantage Video Systems. Moderated by Andy Marken of Marken Communications, the panel explored solutions that can bring the executive and line producers and the production/post teams closer together to implement the right solutions for every project and satisfy everyone, including accounting.

An executive and co-producer on more than a dozen film and TV series projects, Semmel said his job is to bring together the money and then work with the best creative people possible. He added that the team’s job was to make certain the below-the-line items — actual production and post production elements — stay on or below budget.

Semmel noted that most executive producers work off of the top sheet of the budget, typically an overview of the budget. He explained that executive producers may go through all of the budget and play with numbers here and there, but leave the actual handling of the budget to the line producer and supervising producer. In this way, they can “back into” a budget number set by the executive producer.

“I understand the technologies at a higher level and could probably take a highlighter and mark budget areas where we could reduce our costs, but I also know I have very experienced people on the team who know the technologies better than I do to make effective cuts.

L-R: Jeff Stansfield, Aaron Semmel, Cirina Catania

“For example, in talking with many of you in the audience here at the Retreat, I learned that there’s no such thing as an SSD hard drive,” he said. “I now know there are SSDs and there are hard drives and they’re totally different.”

Leaning into her mic, Catania got a laugh when she said, “One of the first things we all have to do is bring our production workflows into the 21st century. But seriously, the production and post teams are occasionally not consulted during the lengthy budgeting process. Our keys can make some valuable contributions if they have a seat at the table during the initial stages. In terms of technology, we have some exciting new tools we’d like to put to work on the project that could save you valuable time, help you organize your media and metadata, and have a direct and immediate positive impact on the budget. What if I told you that you could save endless hours in post if you had software that helped your team enter metadata and prep for post during the early phase — and hardware that worked much faster, more securely and more reliably?”

With wide agreement from the audience, Catania emphasized that it is imperative for all departments involved in prep/production/post and distribution to be involved in the budget process from the outset.

“We know the biggest part of your budget might be above-the-line costs,” she continued. “But production, post and distribution are where much of the critical work also gets done. And if we’re involved at the outset, and that includes with people like Jeff (Stansfield), who can help us come up with creative workflow and financing options that will save you and the investors money, we will surely turn a profit.”

Semmel said the production/post team could probably be of assistance in the early budget stages to pinpoint where work could be done more efficiently to actually improve the overall quality and ensure EPs do what they need to do for their reputation… deliver the best and be under budget.

The Hatfields and the McCoys via History Channel

“But for some items, there seem to be real constraints,” he emphasized. “For example, we were shooting America’s Feud: Hatfields & McCoys, a historical documentary, in Romania — yes, Romania,” he grinned, “and we were behind schedule. We shot the farmhouse attack on day one, shot the burning of the house on day two and on day three we received our dailies to review for day one’s work. We were certain we had everything we needed so we took a calculated risk and burned the building,” he recalled. “But no one exhaled until we had a chance to go through the dailies.”

“What if I told you there’s a solution that will transfer your data at 2800MB/s and enable you to turn around your dailies in a couple of hours instead of a couple of days?” O’Connor asked.

Semmel replied, “I don’t understand the 2800MB/s stuff, but you clearly got my attention by saying dailies in a couple of hours instead of days. If there had been anything wrong with the content we had shot, we would have been faced with the huge added expense of rebuilding and reshooting everything,” he added. “Even accounting can understand the savings in hours vs. days.”
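The “hours instead of days” math is easy to check. The 2800MB/s figure is O’Connor’s; the amount of dailies media and the slow-path rate below are illustrative assumptions:

# Turnaround time scales directly with transfer rate.
dailies_tb = 4.0       # assumed camera media for a shoot day
fast_mb_s = 2800.0     # rate quoted on the panel
slow_mb_s = 80.0       # e.g., a single spinning disk or a slow link

for rate in (fast_mb_s, slow_mb_s):
    hours = dailies_tb * 1e6 / rate / 3600
    print(round(hours, 1))  # ~0.4 h at 2800 MB/s vs ~13.9 h at 80 MB/s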

Semmel pointed out that because films and TV shows start and end digital, there’s always a concern about frames and segments being lost when you’re on location and a long distance from the safety net of your production facilities.

“No one likes that risk, including production/post leaders, integrators or manufacturers,” said O’Connor. “In fact, a lot of crews go to extraordinary lengths to ensure nothing is lost; and frankly, I don’t blame them.”

He recalled a film crew heading to Haiti to shoot a documentary being told by the airline that they were over the baggage limit for the trip.

“They put their clothes in an airport locker and put their three RAID storage systems in their backpacks. They wanted to make certain they could store, back up and back up their work again to ensure they had all of the content they needed when they got back to their production/post facility.”

Stansfield and Catania said they had seen and heard of similar gut-level decisions made by executive and line producers. They encouraged the production/post audience not to simply accept the line item budgets they are given to work with but be more involved at the beginning of the project to explore and define all of the below-the-line budget to minimize risk and provide alternative plans just in case unexpected challenges arise.

“An EP and line producer’s mantra for TV and film projects is you only get two out of three things: time, money and quality,” Semmel said. “If you can deliver all three, then we’ll listen, but you have to approach it from our perspective.

“Our budgets aren’t open purses,” he continued. “You have to make recommendations and deliver products and solutions that enable us to stay under budget; otherwise, no matter how neat they are or how gee-whiz technical they are, they aren’t going to be accepted. We have two very fickle masters — finance and viewers — so you have to give us the tools and solutions that satisfy both of them. Don’t give us bits, bytes and specs, just focus on meeting our needs in words we can understand.

“When you do that, we all win; and we can all work on the next project together,” Semmel concluded. “We only surround ourselves with people who will help us through the project. People who deliver.”

Xytech Dash: Cloud-based management for small studios

Xytech, makers of facility management software, is targeting smaller facilities with its newly launched cloud-based software, Dash. The subscription-based app takes just three days to install and uses security offered by the Microsoft Azure Managed Cloud.

With Dash, Xytech can now manage the end-to-end business cycle for small- and medium-sized studios. These customers range from boutique post facilities to large universities with sophisticated media departments to corporate communication departments.

The monthly subscription model for Dash offers access to all dashboards, graphs and charts, plus customers can manage resources and handle scheduling, cost forecasting, invoicing and reporting, all on one system. Dash also offers the option of a built-in library management program as well as a bidding module, enabling users to bid on a project and have it accepted on the spot.

The new web interface allows users easy access to the Dash applications from any supported web browser. “We listened to our clients and adapted our software into a series of directed workflows allowing users to schedule, raise a bid and generate an invoice,” says Xytech COO Greg Dolan. “Additionally, we’ve made installation support fast and seamless on Dash, so our team can easily teach our clients and get them up and running in just a few days.”

The software has a low per-user price and is available on a monthly subscription basis. The company is offering early adopters of Dash an early-bird discount, which will be announced shortly.

The challenges of creating a shared storage ‘spec’

By James McKenna

The specification — used in a bid, tender, RFQ or simply to provide vendors with a starting point — has been the source of frustration for many a sales engineer. Not because we wish that we could provide all the features that are listed, but because we can’t help but wonder what the author of those specs was thinking.

Creating a spec should be like designing your ideal product on paper and asking a vendor to come as close as they can to that ideal. Unlike most other forms of shopping, you avoid the sales process until the salesperson knows exactly what you want. This is good in some ways, but very limiting in others.

I dislike analogies with the auto industry because cars are personal and subjective, but in this case the comparison shows the difference between writing a spec and doing hands-on evaluation and research. Imagine writing down all the things you want in a car and showing up at the dealership looking for a match. You want power, beauty, technology, sports-car handling and room for five?

Your chances of finding the exact car you want are slim, unless you’re willing to compromise or adjust your budget. The same goes for facility shared storage. Many customers get hung up on the details and refuse to prioritize important aspects, like usability and sustainability, and as a result end up looking at quotes that are two to three times their cost expectations for systems that don’t perform the day-to-day work any better (and often perform worse).

There are three ways to design a specification:

Based On Your Workflow
By far, this is the best method and will result in the easiest path to getting what you want. Go ahead and plan for years down the road and challenge the vendors to keep up with your trajectory. Keep it grounded in what you believe is important to your business. This should include data security, usable administration and efficient management. Lay out your needs for backup strategy and how you’d like that to be automated, and be sure to prioritize these requests so the vendor can focus on what’s most important to you.

Be sure to clearly state the applications you’ll be using, what they will be requiring from the storage and how you expect them to work with the storage. The highest priority and true test of a successful shared storage deployment is: Can you work reliably and consistently to generate revenue? These are my favorite types of specs.

Based On Committee
Some facilities are the victim of their own size or budget. When there’s an active presence from the IT department, or the dollar amounts get too high, it’s not just up to the creative folks to select the right product. The committee can include consultants, system administrators, finance and production management, and everyone wants to justify their existence at the table. People with experience in enterprise storage and “big iron” systems will lean on their past knowledge and add terms like “Five-9s uptime,” “No SPOF,” “single namespace,” “multi-path” and “magic quadrant.”

In the enterprise storage world these would be important, but they don’t force vendors to take responsibility for prioritizing the interactions between the creative applications and the storage, and the usability and sustainability of a solution in the long term. The performance necessary to smoothly deliver a 4K program master, on time and on budget, might not even be considered. I see these types of specifications and I know that there will be a rude awakening when the quotes are distributed, usually leading to some modifications of the spec.

Based On A Product
The most limiting way to design a spec is by copying the feature list of a single product to create your requirements. I should mention that I have helped our customers to do this on some occasions, so I’m guilty here. When a customer really knows the market and wants to avoid being bid an inferior product, this can be justified. However, you had better complete your research beforehand, because there may be something out there that could change your opinion, and you don’t want to find out about it after you’re locked into the status quo. If you choose to do this but want to stay on the lookout for another option, simply prioritize the features list by what’s most important to you.

If you really like something about your storage, prioritize that and see if another vendor has something similar. When I respond to these bid specs, I always provide details on our solution and how we can achieve better results than the one that is obviously being requested. Sometimes it works, sometimes not, but at least now they’re educated.

The primary frustration with specifications that miss the mark is the waste of money and time. Enterprise storage features come with enterprise storage complexity and enterprise storage price tags. This requires training, or reliance upon the IT staff to manage, or in some cases completely control, the network for you. Cost savings in the infrastructure can be repurposed toward revenue-generating workstations, and artists can be employed instead of full-time techs. There’s a reason that scrappy, grassroots facilities produce faster growth while larger facilities tend to stagnate: they focus on generating content, invest only where needed and scale the storage as the bigger jobs and larger formats arrive.

Stick with a company that makes the process easy and ensures that you’ll never be without a support person who knows your daily grind.


James McKenna is VP of marketing and sales at shared storage company Facilis.

DigitalFilm Tree’s Ramy Katrib talks trends and keynoting BMD conference

By Randi Altman

Blackmagic, which makes tools for all parts of the production and post workflow, is holding its very first Blackmagic Design Conference and Expo, produced with FMC and NAB Show. This three-day event takes place on February 11-13 in Los Angeles. The event includes a paid conference featuring over 35 sessions, as well as a free expo on February 12, which includes special guests, speakers and production and post companies.

Ramy Katrib, founder and CEO of Hollywood-based post house and software development company DigitalFilm Tree, is the keynote speaker for the conference. FotoKem DI colorist Walter Volpatto and color scientist Joseph Slomka will be keynoting the free expo on the 12th.

We reached out to Katrib to find out what he’ll be focusing on in his keynote, as well as pick his brains about technology and trends.

Can you talk about the theme of your keynote?
Resolve has grown mightily over the past few years and is the foundation of DigitalFilm Tree’s post finishing efforts. I’ll discuss how Resolve has become an essential post tool. And with Resolve 14, folks who are coloring, editing, conforming and doing VFX and audio work are now collaborating on the same timeline, and that is a huge development for TV, film and every media industry creative and technician.

Why was it important for you to keynote this event?
DaVinci was part of my life when I was a colorist 25 years ago, and today BMD is relevant to me while I run my own post company, DigitalFilm Tree. On a personal note, I’ve known Grant Petty since 1999 and work with many folks at BMD who develop Resolve and the hardware products we use, like I/O cards and Teranex converters. This relationship involves us sharing our post production pain points and workflow suggestions, while BMD has provided very relevant software and hardware solutions.

Can you give us a sample of something you might talk about?
I’m looking forward to providing an overview of how Resolve is now part of our color, VFX, editorial, conform and deliverables effort, while having artists provide micro demos on stage.

You alluded to the addition of collaboration in Resolve. How important is this for users?
Resolve 14’s new collaboration tools are a huge development for the post industry, specifically in this golden age of TV, where binge delivery of multiple episodes at the same time is commonplace. As the complexity of production and post increases, greater collaboration across multiple disciplines is a refreshing turn — it allows multiple artists and technicians to work in one timeline instead of 10 timelines and round-tripping across multiple applications.

Blackmagic has ramped up their NLE offerings with Resolve 14. Do you see more and more editors embracing this tool for editing?
Absolutely. It always takes a little time to ramp up in professional communities. It reminds me of when the editors on Scrubs used Final Cut Pro for the first time and that ushered FCP into the TV arena. We’re already working with scripted TV editors who are in the process of transitioning to Resolve. Also, DigitalFilm Tree’s editors are now using Resolve for creative editing.

What about the Fairlight audio offerings within? Will you guys take advantage of that in any way? Do you see others embracing it?
For simple audio work, like mapping audio tracks and creating multichannel mixes for 5.1 and 7.1 delivery, we are taking advantage of Fairlight and the audio functionality within Resolve. We’re not an audio house, yet it’s great to have a tool like this for convenience and workflow efficiency.

What trends did you see in 2017 and where do you think things will land in 2018?
Last year was about the acceptance of cloud-based production and post process. This year is about the wider use of cloud-based production and post process. In short, what used to be file-based workflows will give way to cloud-based solutions and products.

postPerspective readers can get $50 off of Registration for the Blackmagic Design Conference & Expo by using Code: POST18. Click here to register

Made in NY’s free post training program continues in 2018

New York City’s post production industry continues to grow thanks to the creation of New York State’s Post Production Film Tax Credit, which was established in 2010. Since then, over 1,000 productions have applied for the credit, creating almost a million new jobs.

“While this creates more pathways for New York City residents to get into the industry, there is evidence that this growth is not equally distributed among women and people of color. In response to this need, the NYC Mayor’s Office of Media and Entertainment decided to create the Made in New York Post Production Training Program, which built on the success of the Made in New York PA Training Program, which for the last 11 years has trained over 700 production assistants for work on TV and film sets,” explains Ryan Penny, program director of the Made In NY Post Production Training Program.

The Post Production Training Program seeks to diversify New York’s post industry by training low-income and unemployed New Yorkers in the basics of editing, animation and visual effects. Created in partnership with the Blue Collar Post Collective, BRIC Media Arts and Borough of Manhattan Community College, the course is free to participants and consists of a five-week, full-time skills training and job placement program administered by workforce development non-profit Brooklyn Workforce Innovations.

Trainees take part in classroom training covering the history and theory of post production, as well as technical training in Avid Media Composer, Adobe Premiere, After Effects and Photoshop, and Foundry’s Nuke. “Upon successful completion of the training, our staff will work with graduates to identify job opportunities for a period of two years,” says Penny.

Ryan Penny, far left, with the most recent graduating class.

Launched in June 2017, the Made in New York Post Production Training Program graduated its second cycle of trainees in January 2018 and is now busy establishing partnerships with New York City post houses and productions who are interested in hiring graduates of the program as post PAs, receptionists, client service representatives, media management technicians and more.

“Employers can expect entry-level employees who are passionate about post and hungry to continue learning on the job,” reports Penny. “As an added incentive, the city has created a work-based learning program specifically for MiNY Post graduates, which allows qualified employers to be reimbursed for up to 80% of the first 280 hours of a trainee’s wages. This results in a win-win for employers and employees alike.”
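As a worked example of that incentive (the hourly wage below is a hypothetical assumption; the 80% rate and 280-hour cap are the program’s figures):

# Hypothetical reimbursement for one MiNY Post trainee.
hourly_wage = 18.00         # assumed starting wage
covered_hours = 280         # per the program
reimbursement_rate = 0.80   # per the program

print(hourly_wage * covered_hours * reimbursement_rate)  # 4032.0 back to the employer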

The Made in New York Post Production Training Program will be conducting further cycles throughout the year, beginning with Cycle 3 planned for spring 2018. More information on the program and how to hire program graduates can be found here.

Sim Post LA beefs up with Greg Ciaccio and Paul Chapman

It’s always nice when good things happen to good people. Recently, long-time industry post pros Greg Ciaccio and Paul Chapman joined Sim Post LA — Greg as VP of post and Paul as VP of engineering and technology.

postPerspective has known both Greg and Paul for years and has often called on them to pick their brains about technology, so having them end up working together warms our hearts.

Sim Post is a division of Sim, which provides end-to-end solutions for TV and feature film production and post production in LA, Vancouver, Toronto, New York and Atlanta.

“I’ll be working with the operations, sales, technology and finance teams to ensure tight integration between departments — always in the service of our clients,” reports Ciaccio. “Our ability to offer end-to-end services is a great advantage in the industry. I’ve admired the work produced by the talented group at Sim Post LA (formerly Chainsaw and Bling), and now I’m pleased to be a part of the team.”

Ciaccio’s resume includes executive operations management positions for creative service divisions at Ascent, Technicolor and Deluxe, and he has led product development teams. He also serves as chair of the ASC Motion Imaging Technology Council’s Workflow Committee, currently focused on ACES education and enlightenment, and is a member of the UHD/HDR Committee and the joint ASC/ICG/VES/PGA VR Committee.

Chapman, a Fellow of SMPTE, has held executive technology and engineering positions over the last 30 years, including his long-time role at FotoKem, as well as stints at Unitel Video and others. His skillset includes expertise in storage and networking infrastructure, facility engineering and operations.

“Sim has a lot of potential, and when the opportunity was presented to lead their engineering and technology departments, it really intrigued me,” says Chapman. “The LA facility itself is well constructed from the ground up. I’m looking forward to working with the creative and technical teams across the organization to enhance our technical operations, foster innovation and elevate performance for our clients.”

Greg and Paul are based at Sim’s operations in Hollywood.

Main Caption: (L-R) Greg Ciaccio and Paul Chapman

Industry mainstay Click3X purchased by Industrial Color Studios

Established New York City post house Click3X has been bought by Industrial Color Studios. Click3X is a 25-year-old facility that specializes in new media formats such as VR, AR, CGI and live streaming. Industrial Color Studios is a visual content production company. Founded in 1992, Industrial Color’s services range from full image capture and e-commerce photography to production support and post services, including creative editorial, color grading and CG.

With offices in New York and LA, Industrial Color has developed its own proprietary systems to support online digital asset management for video editing and high-speed file transfers for its clients working in broadcast and print media. The company is an end-to-end visual content production provider, partnering with top brands, agencies and creative professionals to accelerate multi-channel creative content.

Click3X was founded in 1993 by Peter Corbett, co-founder of numerous companies specializing in both traditional and emerging forms of media. These include Media Circus (a digital production and web design company), IllusionFusion, Full Blue, ClickFire Media, Reason2Be, Sound Lounge and Heard City. A long-time member of the DGA as a commercial film director, Corbett emigrated to the US from Australia to pursue a career as a commercial director and, shortly thereafter, segued into integrated and mixed media, becoming one of the first established film directors to do so.

Projects produced at Click3X have been honored with the industry’s top awards, including Cannes Lions, Clios, Andy Awards and others. Click3X also received the Crystal Apple Award, presented by the New York City Mayor’s Office of Media and Entertainment, in recognition of its contributions to the city’s media landscape.

Corbett will remain in place at Click3X and eventually the companies will share the ICS space on 6th Avenue in NYC.

“We’ve seen a growing need for video production capabilities and have been in the market for a partner that would not only enhance our video offering, but one that provided a truly integrated and complementary suite of services,” says Steve Kalalian, CEO of Industrial Color Studios. “And Click3X was the ideal fit. While the industry continues to evolve at lightning speed, I’ve long admired Click3X as a company that’s consistently been on the cutting edge of technology as it pertains to creative film, digital video and new media solutions. Our respective companies share a passion for creativity and innovation, and I’m incredibly excited to share this unique new offering with our clients.”

“When Steve and I first entered into talks to align on the state of our clients’ future, we were immediately on the same page,” says Corbett, president of Click3X. “We share a vision for creating compelling content in all formats. As complementary production providers, we will now have the exciting opportunity not only to collaborate on a robust and highly regarded client roster, but also to expand the company’s creative and new media capabilities, using over 200,000 square feet of state-of-the-art facilities in New York, Los Angeles and Philadelphia.”

The added capabilities Click3X gives Industrial Color in video production and new media mirror its growth in the field of e-commerce photography and image capture. The company recently opened a new 30,000-square-foot studio in downtown Los Angeles designed to produce high-volume, high-quality product photography for advertisers. That studio complements the company’s existing e-commerce photography hub in Philadelphia.

Main Image: (L-R) Peter Corbett and Steve Kalalian

FotoKem posts Star Wars: The Last Jedi

Burbank-based post house FotoKem provided creative and technical services for the Disney/Lucasfilm movie Star Wars: The Last Jedi. The facility built advanced solutions that supported the creative team from production to dailies to color grade. Services included a customized workflow for dailies, editorial and VFX support, conform and a color pipeline that incorporated all camera formats (film and file-based).

The long-established post house worked directly with director Rian Johnson; DP Steve Yedlin, ASC; producer Ram Bergman; Lucasfilm head of post Pippa Anderson; and Lucasfilm director of post Mike Blanchard.

FotoKem was brought on prior to the beginning of principal photography and designed an intricate workflow tailored to the goals of production. A remote post facility was assembled near-set in London, where film technician Simone Appleby operated two real-time Scanity film scanners, digitizing up to 15,000 feet a day of 35mm footage at full-aperture 4K resolution. Supported by a highly secure network, FotoKem NextLab systems ingested the digitized film and file-based camera footage, providing “scan once, instant access” to everything and creating a singular workflow for every unit’s footage. By the end of production, over one petabyte of data was managed by NextLab. This allowed the filmmakers, visual effects teams, editors and studio to securely and easily share large volumes of assets for any part of the workflow.

“I worked with FotoKem previously and knew their capabilities. This project clearly required a high level of support to handle global locations with multiple units and production partners,” says Bergman. “We had a lot of requirements at this scale to create a consistent workflow for all the teams using the footage, from production viewing dailies to the specific editorial deliverables, visual effects plates, marketing and finishing, with no delays or security concerns.”

Before shooting began, Yedlin worked with FotoKem’s film and digital lab to create specialized scanner profiles and custom Look Up Tables (LUTs). FotoKem implemented the algorithms devised by Yedlin into their NextLab software to obtain a seamless match between digital footage and film scans. Yedlin also received full-resolution stills, which served as a communication funnel for color and quality control checks. This color workflow was devised in collaboration with FotoKem color scientist Joseph Slomka, and executed by NextLab software developer Eric Cameron and dailies colorist Jon Rocke, who were on site throughout the entire production.

“As cinematographers, we work hard to create looks, and FotoKem made it possible for me to take control of each step in the process and know exactly what was happening,” says Yedlin. “The color science support I received made true image control a realized concept.”

Calibrated 4K monitoring via the Sony X300, along with the on-site high-availability SAN managed by NextLab, enabled a real-time workflow for dailies. Visual effects and editorial teams were given instant access to full-fidelity footage during and after production, via a high-density NAS, for all VFX and conform pulls. The NAS acted as a backup for all source content and was live throughout production. Through the system’s interface, teams could procure footage, pull shots as needed, and maintain exact color and metadata integration at every step.

For the color grade, FotoKem colorist Walter Volpatto used Blackmagic Resolve to fine-tune raw images, as well as those from ILM, with Johnson and Yedlin using the color and imaging pipeline established from day one. FotoKem also set up remote grading suites at Skywalker Sound and Disney so the teams could work during the sound mix, and later while grading for HDR and other specialty theatrical deliverables. They used a Barco 4K projector for final finishing.

“The film emulation LUT that Steve (Yedlin) created carried nuances he wanted in the final image and he was mindful of this while shooting, lighting both the film and digital scenes so that minimal manipulation was required in the color grade,” Volpatto explains. “Steve’s mastery of lighting for both formats, as well as his extensive understanding of color science, helped to make the blended footage look more cohesive.”

Volpatto also oversaw the HDR pass and IMAX versions. Ultimately, multiple deliverables were created by FotoKem including standard DCP, HDR10, Dolby Vision, HLG, 3D (in standard, stereo Dolby and 2D Dolby HDR) and home video formats. FotoKem worked with IMAX to align the color science pipeline with their Xenon and laser DCPs and 15-perf 70mm prints as well.

“It’s not every day that we would ship scanners to remote locations and integrate a real-time post environment that would rival many permanent installations,” concludes Mike Brodersen, FotoKem’s chief strategy officer.

Behind the Title: Frame of Reference CEO/Chief Creative Twain Richardson

NAME: Twain Richardson

COMPANY: Kingston, Jamaica-based Frame of Reference (@forpostprod)

CAN YOU DESCRIBE YOUR COMPANY?
Frame of Reference is a boutique post production company specializing in TV commercials, digital content, music videos and films.

WHAT’S YOUR JOB TITLE?
CEO and chief creative, but also head cook and bottle washer. At the moment we are a small team, so my roles overlap.

WHAT DOES THAT ENTAIL?
Working on some projects. I’ll jump in and help the team edit or do some color. I’m also making sure clients and employees are happy.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
That it’s fun, or I find it fun. It makes life interesting.

WHAT HAVE YOU LEARNED OVER THE YEARS ABOUT RUNNING A BUSINESS?
It’s hard, very hard. There are always new and improved challenges that keep you up at night. Also, you have to be reliable, and being reliable means that you meet deadlines or answer the phone when a client calls.

WHAT TOOLS DO YOU USE?
We use Adobe Premiere for editing and Blackmagic Resolve for color work.

A LOT OF IT MUST BE ABOUT TRYING TO KEEP EMPLOYEES AND CLIENTS HAPPY. HOW DO YOU BALANCE THAT?
I find that one of the most impactful rules is to remember what it felt like to be an employee, and to always listen to your staff’s concerns. I think I am blessed with the perfect team, so keeping employees happy is not too hard at Frame of Reference. Once employees are happy, we can create and maintain the happiness of our clients.

WHAT’S YOUR FAVORITE PART OF THE JOB?
A happy client.

WHAT’S YOUR LEAST FAVORITE?
I don’t have a least favorite. There are days that I don’t like, of course, but I know that’s a part of running a business so I push on through.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I’m on Twitter and Instagram. I like Twitter for the conversations you can engage in. The #postchat is a great hashtag to follow and a way to meet other post professionals.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
The moment I wake up. There is no greater feeling than opening your eyes, taking your first deep breath of the day and realizing that you’re alive.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I relax. This could mean reading a book, and fortunately we are located in Jamaica where the beach is a stone’s throw away.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Growing up I wanted to be a pilot or a civil engineer, but I can’t picture myself doing something else. I love post production and running a business.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We recently did a TV commercial for the beer company Red Stripe, and a music video for international artist Tres, titled Looking for Love.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My MacBook Pro, my phone and my mechanical watch.

AICP and AICE to merge January 1

The AICP and AICE are celebrating the New Year in a very special way — they are merging into one organization. These two associations represent companies that produce and finish the majority of advertising and marketing content in the moving image. Post merger, AICP and AICE will function as a single association under the AICP brand. They will promote and advocate for independent production and post companies when it comes to producing brand communications for advertising agencies, advertisers and media companies.

The merger comes after months of careful deliberations on the part of each association’s respective boards and final votes of approval by their memberships. Under the newly merged association’s structure, executive director of AICE Rachelle Madden will assume the title of VP, post production and digital production affairs of AICP. She will report to president/CEO of AICP Matt Miller. Madden is now tasked with taking the lead on AICP’s post production offerings, including position papers, best practices, roundtables, town halls and other educational programs. She will also lead a post production council, which is being formed to advise the AICP National Board on post matters.

Former AICE members will be eligible to join the General Member Production companies of AICP, with access to all benefits starting in 2018. These include participation in the Producers’ Health Benefits Plan (PHBP); the AICP Legal Initiative (which provides legal advice on contracts with agencies and advertisers); and access to position papers, guidelines and other tools as they relate to business affairs and employment issues. Other member benefits include access to meetings, roundtables, town halls and seminars, as well as the AICP newsletter, member discounts on services and a listing in the membership directory on the AICP website.

All AICP offerings — including its AICP Week Base Camp for thought leadership — will reflect the expanded membership to include topics and issues pertaining to post production. Previously created AICE documents, position papers and forms will now live on aicp.com.

The AICP was founded in 1972 to protect the interests of independent commercial producers, crafting guidelines and best practices in an effort to help its members run their businesses more effectively. Through its AICP Awards, the organization celebrates creativity and craft in marketing communications.

AICE was founded in 1998, when three independent groups representing editing companies in Chicago, Los Angeles and New York formed a national association to discuss issues and undertake initiatives affecting post production on a broader scale. In addition to editing, it represents the full range of post production disciplines, including color correction, visual effects, audio mixing, and music and sound design.

From AICP’s perspective, says Miller, merging the two organizations has benefits for members of both groups. “As we grow more closely allied, it makes more sense than ever for the organizations to have a unified voice in the industry,” he notes. He points out that there are numerous companies that are members of both organizations, reflecting the blurring of the lines between production and post that’s been occurring as media platforms, technologies and client needs have changed.

For Madden, AICE’s members will be joining an organization that provides them with a firm footing in terms of resources, programs, benefits and initiatives. “There are many reasons why we moved forward on this merger, and most of them involve amplifying the voice of the post production industry by combining our interests and advocacy with those of AICP members. We now become part of a much larger group, which gives us a strength in numbers we didn’t have before while adding critical post production perspectives to key discussions about business practices and industry trends.”

Main Image: Matt Miller and Rachelle Madden

Storage Roundtable

Production, post, visual effects, VR… you can’t do it without a strong infrastructure. This infrastructure must include storage and products that work hand in hand with it.

This year we spoke to a sampling of those providing storage solutions — of all kinds — for media and entertainment, as well as a storage-agnostic company that helps get your large files from point A to point B safely and quickly.

We gathered questions from real-world users — things that they would ask of these product makers if they were sitting across from them.

Quantum’s Keith Lissak
What kind of storage do you offer, and who is the main user of that storage?
We offer a complete storage ecosystem based around our StorNext shared storage and data management solution, including Xcellis high-performance primary storage, Lattus object storage and Scalar archive and cloud. Our customers include broadcasters, production companies, post facilities, animation/VFX studios, NCAA and professional sports teams, ad agencies and Fortune 500 companies.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Xcellis features continuous scalability and can be sized to precisely fit current requirements and scaled to meet future demands simply by adding storage arrays. Capacity and performance can grow independently, and no additional accelerators or controllers are needed to reach petabyte scale.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
We don’t have exact numbers, but a growing number of our customers are using cloud storage. Our FlexTier cloud-access solution can be used with both public (AWS, Microsoft Azure and Google Cloud) and private (StorageGrid, CleverSafe, Scality) storage.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
We offer a range of StorNext 4K Reference Architecture configurations for handling demanding workflows, including 4K, 8K and VR. Our customers can choose systems with small or large form-factor HDDs, up to an all-flash SSD system with the ability to handle 66 simultaneous 4K streams.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might users notice when connecting on these different platforms?
StorNext systems are OS-agnostic and can work with all Mac, Windows and Linux clients with no discernible difference.

Zerowait’s Rob Robinson
What kind of storage do you offer, and who is the main user of that storage?
Zerowait’s SimplStor storage product line provides storage administrators with the scalable, flexible and reliable on-site storage they need for growing storage requirements and workloads. The SimplStor platform can be configured to work in Linux or Windows environments, and we have several customers with multiple petabytes in their data centers. SimplStor systems have been used in VFX production for many years, and we also provide solutions for video creation and many other large-data environments.

Additionally, Zerowait specializes in NetApp service, support and upgrades, and we have provided many companies in the media and VFX businesses with off-lease transferrable licensed NetApp storage solutions. Zerowait provides storage hardware, engineering and support for customers that need reliable and big storage. Our engineers support customers with private cloud storage and customers that offer public cloud storage on our storage platforms. We do not provide any public cloud services to our customers.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Our customers typically need on-site storage for processing speed and security. We have developed many techniques and monitoring solutions that we have incorporated into our service and hardware platforms. Our SimplStor and NetApp customers need storage infrastructures that scale into the multiple petabytes, and often require GigE, 10GigE or NetApp FC connectivity. For customers whose workloads can’t tolerate the bandwidth constraints of the public Internet, Zerowait has the engineering experience to help them get the most out of their on-premises storage.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Many of our customers use public cloud solutions for their non-proprietary data storage while using our SimplStor and NetApp hardware and support services for their proprietary, business-critical, high-speed and regulatory storage solutions where data security is required.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
SimplStor’s density and scalability make it perfect for use in HD and higher resolution environments. Our SimplStor platform is flexible and we can accommodate customers with special requests based on their unique workloads.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might users notice when connecting on these different platforms?
Zerowait’s NetApp and SimplStor platforms are compatible with both Linux (NFS) and Windows (CIFS) environments. OS X is supported in some applications. Every customer has a unique infrastructure and set of applications they are running. Customers will see differences in performance, but our flexibility allows us to customize a solution to maximize the throughput to meet workflow requirements.

Signiant’s Mike Nash
What kind of storage works with your solution, and who is the main user or users of that storage?
Signiant’s Media Shuttle file transfer solution is storage agnostic, and for nearly 200,000 media pros worldwide it is the primary vehicle for sending and sharing large files. Even though Media Shuttle doesn’t provide storage, many users think of their data as living “in Media Shuttle.” In reality, their files are located in whatever storage their IT department has designated. This might be the company’s own on-premises storage, or it could be their AWS or Microsoft Azure cloud storage tenancy. Our users employ a Media Shuttle portal to send and share files; they don’t have to think about where the files are stored.

How are you making sure your products are scalable so people can grow either their use or the bandwidth of their networks (or both)?
Media Shuttle is delivered as a cloud-native SaaS solution, so it can be up and running immediately for new customers, and it can scale up and down as demand changes. The servers that power the software are managed by our DevOps team and monitored 24×7 — and the infrastructure is auto-scaling and instantly available. Signiant does not charge for bandwidth, so customers can use our solutions with any size pipe at no additional cost. And while Media Shuttle can scale up to support the needs of the largest media companies, the SaaS delivery model also makes it accessible to even the smallest production and post facilities.

How many of the people buying your solutions are using them with cloud storage (i.e. AWS or Microsoft Azure)?
Cloud adoption within the M&E industry remains uneven, so it’s no surprise that we see a mixed picture when we look at the storage choices our customers make. Since we first introduced the cloud storage option, there has been constant month-over-month growth in the number of customers deploying portals with cloud storage. It’s not yet at parity with on-prem storage, but the growth trends are clear.

On-premises content storage is very far from going away. We see many Media Shuttle customers taking a hybrid approach, with some portals using cloud storage and others using on-prem storage. It’s also interesting to note that when customers do choose cloud storage, we increasingly see them use both AWS and Azure.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
We can move any size of file. As media files continue to get bigger, the value of our solutions continues to rise. Legacy solutions such as FTP, which lack any file acceleration, will grind things to a halt if 4K, 8K, VR and other huge files need to be moved between locations. And consumer-oriented sharing services like Dropbox and Google Drive become non-starters with these types of files.

What platforms do your system connect to (e.g. Mac OS X, Windows, Linux), and what differences might end-users notice when connecting on these different platforms?
Media Shuttle is designed to work with a wide range of platforms. Users simply log in to portals using any web browser. In the background, a native application installed on the user’s computer provides the acceleration functionality. This app works with Windows and Mac OS X systems.

On the IT side of things, no installed software is required for portals deployed with cloud storage. To connect Media Shuttle to on-premises storage, the IT team will run Signiant software on a computer in the customer’s network. This server-side software is available for Linux and Windows.

NetApp’s Jason Danielson
What kind of storage do you offer, and who is the main user of that storage?
NetApp has a wide portfolio of storage and data management products and services. We have four fundamentally different storage platforms — block, file, object and converged infrastructure. We use these platforms and our data fabric software to create a myriad of storage solutions that incorporate flash, disk and cloud storage.

1. NetApp E-Series block storage platform is used by leading shared file systems to create robust and high-bandwidth shared production storage systems. Boutique post houses, broadcast news operations and corporate video departments use these solutions for their production tier.
2. NetApp FAS network-attached file storage runs NetApp OnTap. This platform supports many thousands of applications for tens of thousands of customers in virtualized, private cloud and hybrid cloud environments. In media, this platform is designed for extreme random-access performance. It is used for rendering, transcoding, analytics, software development and Internet-of-Things pipelines.
3. NetApp StorageGrid Webscale object store manages content and data for back-up and active archive (or content repository) use cases. It scales to dozens of petabytes, billions of objects and currently 16 sites. Studios and national broadcast networks use this system and are currently moving content from tape robots and archive silos to a more accessible object tier.
4. NetApp SolidFire converged and hyper-converged platforms are used by cloud providers and enterprises running large private clouds that need quality-of-service across hundreds to thousands of applications. Global media enterprises appreciate the ease of scaling, the simplicity of QoS quota setting and the overall ease of maintenance for the largest-scale deployments.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
The four platforms mentioned above scale up and scale out well beyond the needs of the largest media operations in the world. So our challenge is not scalability for large environments but appropriate sizing for individual environments. We are careful to design storage and data management solutions that are appropriate to each media operation’s individual needs.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Seven years ago, NetApp set out on a major initiative to build the data fabric. We are well on the path now with products designed specifically for hybrid cloud (a combination of private cloud and public cloud) workloads. While the uptake in media and entertainment is slower than in other industries, we now have hundreds of customers that use our storage in hybrid cloud workloads, from backup to burst compute.

We help customers wanting to stay cloud-agnostic by using AWS, Microsoft Azure, IBM Cloud and Google Cloud Platform flexibly, as project and pricing demands. AWS, Microsoft Azure, IBM, Telstra and ASE, along with another hundred or so cloud storage providers, include NetApp storage and data management products in their service offerings.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
For higher-bandwidth (higher-bitrate) video production, we’ll generally architect a solution with our E-Series storage under either Quantum StorNext or PixitMedia PixStor. Since 2012, when the NetApp E5400 enabled the mainstream adoption of 4K workflows, the E-Series platform has seen three generations of upgrades, and the controllers are now more than 4x faster. The chassis has remained the same through these upgrades, so some customers have chosen to put the latest controllers into their existing chassis to improve bandwidth or to utilize faster network interconnects like 16Gb Fibre Channel. Many post houses continue to run Fibre Channel to the workstation for these higher-bandwidth video formats, while others have chosen to move to Ethernet (40 and 100 Gigabit). As flash (SSD) prices continue to drop, flash is starting to be used for video production in all-flash arrays or in hybrid configurations. We recently showed our new EF570 all-flash array supporting NVMe over Fabrics (NVMe-oF) technology, providing 21GB/s of bandwidth and 1 million IOPS with less than 100µs of latency. This technology is initially targeted at supercomputing use cases, and we will see whether it is adopted over the next couple of years for UHD production workloads.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.), and what differences might end-users notice when connecting on these different platforms?
NetApp maintains a compatibility matrix table that delineates our support of hundreds of client operating systems and networking devices. Specifically, we support Mac OS X, Windows and various Linux distributions. Bandwidth expectations differ between these three operating systems and Ethernet and Fibre Channel connectivity options, but rather than make a blanket statement about these, we prefer to talk with customers about their specific needs and legacy equipment considerations.

G-Technology’s Greg Crosby
What kind of storage do you offer, and who is the main user of that storage?
Western Digital’s G-Technology products provide high-performing and reliable storage solutions for end-to-end creative workflows, from capture and ingest to transfer and shuttle, all the way to editing and final production.

The G-Technology brand supports a wide range of users for both field and in-studio work, with solutions that span from portable handheld drives — oftentimes used to back up content on the go — all the way to in-studio drives that offer capacities up to 144TB. We recognize that each creative has their own unique workflow, and some embrace the use of cloud-based products. We are proud to be a companion to those cloud services, as a central location to store raw content or a conduit to feed cloud features and capabilities.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Our line ranges from small portable and rugged drives to large, multi-bay RAID and NAS solutions, covering all aspects of the media and entertainment industry. By integrating the latest interface technology, such as USB-C or Thunderbolt 3, our storage solutions take advantage of today’s fastest file-transfer speeds.

We make it easy to take a ton of storage into the field. The G-Speed Shuttle XL drive is available in capacities up to 96TB, and an optional Pelican case with handle makes it easy to transport while mitigating any concerns about running out of storage. We also recently launched the G-Drive mobile SSD R-Series; as a solid-state drive, it is built to withstand a three-meter (nine-foot) drop and to endure accidental bumps.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Many of our customers are using cloud-based solutions to complement their creative workflows. We find that most use our solutions as primary storage, or to easily transfer and shuttle their content, since the cloud is not an efficient way to move large amounts of data. We see cloud capabilities as a great way to share project files and low-resolution content, collaborate with others on projects, and distribute and share a variety of deliverables.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Today’s camera technology enables capture not only at higher resolutions but also at higher frame rates, with more dynamic imagery. We have solutions that can easily support multi-stream 4K, 8K and VR workflows, or multi-layer photo and visual effects projects. G-Technology is well positioned to support these creative workflows as we integrate the latest technologies into our storage solutions. From small portable and rugged SSDs to high-capacity, fast multi-drive RAID solutions with the latest Thunderbolt 3 and USB-C interface technology, we are ready to tackle a variety of creative endeavors.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.), and what differences might users notice when connecting on these different platforms?
Our complete portfolio of external storage solutions work for Mac and PC users alike. With native support for Apple Time Machine, these solutions are formatted for Mac OS out of the box, but can be easily reformatted for Windows users. G-Technology also has a number of strategic partners with technology vendors, including Apple, Atomos, Red Camera, Adobe and Intel.

Panasas’ David Sallak
What kind of storage do you offer, and who is the main user of that storage?
Panasas ActiveStor is an enterprise-class easy-to-deploy parallel scale-out NAS (network-attached storage) that combines Flash and SATA storage with a clustered file system accessed via a high-availability client protocol driver with support for standard protocols.

The ActiveStor storage cluster consists of the ActiveStor Director (ASD-100) control engine, the ActiveStor Hybrid (ASH-100) storage enclosure, the PanFS parallel file system, and the DirectFlow parallel data access protocol for Linux and Mac OS.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
ActiveStor is engineered to scale easily. There are no specific architectural limits for how widely the ActiveStor system can scale out, and adding more workloads and more users is accomplished without system downtime. The latest release of ActiveStor can grow either storage or bandwidth needs in an environment that lets metadata responsiveness, data performance and data capacity scale independently.

For example, we quote capacity and performance numbers for a Panasas storage environment containing 200 ActiveStor Hybrid 100 storage node enclosures with five ActiveStor Director 100 units for file system metadata management. This configuration would result in a single 57PB namespace delivering 360GB/s of aggregate bandwidth and in excess of 2.6M IOPS.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Panasas customers deploy workflows and workloads in ways that are well-suited to consistent on-site performance or availability requirements, while experimenting with remote infrastructure components such as storage and compute provided by cloud vendors. The majority of Panasas customers continue to explore the right ways to leverage cloud-based products in a cost-managed way that avoids surprises.

This means that workflow requirements for file-based storage continue to take precedence when processing real-time video assets, while customers also expect that storage vendors will support the ability to use Panasas in cloud environments where the benefits of a parallel clustered data architecture can exploit the agility of underlying cloud infrastructure without impacting expectations for availability and consistency of performance.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Panasas ActiveStor is engineered to deliver superior application responsiveness via our DirectFlow parallel protocol for applications working in compressed UHD, 4K and higher-resolution media formats. Compared to traditional file-based protocols such as NFS and SMB, DirectFlow provides better granular I/O feedback to applications, resulting in client application performance that aligns well with the compressed UHD, 4K and other extreme-resolution formats.

For uncompressed data, Panasas ActiveStor is designed to support large-scale rendering of these data formats via distributed compute grids such as render farms. The parallel DirectFlow protocol results in better utilization of CPU resources in render nodes when processing frame-based UHD, 4K and higher-resolution formats, resulting in less wall clock time to produce these formats.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might users notice when connecting on these different platforms?
Panasas ActiveStor supports macOS and Linux with our higher-performance DirectFlow parallel client software. We support all client platforms via NFS or SMB as well.

Users would notice that when connecting to Panasas ActiveStor via DirectFlow, the I/O experience is as if users were working with local media files on internal drives, compared to working with shared storage where normal protocol access may result in the slight delay associated with open network protocols.

Facilis’ Jim McKenna
What kind of storage do you offer, and who is the main user of that storage?
We have always focused on shared storage for the facility. It’s high-speed attached storage and good for anyone who’s cutting HD or 4K. Our workflow and management features really make us different than basic network storage. We have attachment to the cloud through software that uses all the latest APIs.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Most of our large customers have been with us for several years, and many started pretty small. Our method of scalability is flexible in that you can decide to simply add expansion drives, add another server, or add a head unit that aggregates multiple servers. Each method increases bandwidth as well as capacity.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Many customers use cloud, either through a corporate gateway or directly uploaded from the server. Many cloud service providers have ways of accessing the file locations from the facility desktops, so they can treat it like another hard drive. Alternatively, we can schedule, index and manage the uploads and downloads through our software.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Facilis is known for our speed. We still support Fibre Channel when everyone else, it seems, has moved completely to Ethernet, because it provides better speeds for intense 4K and beyond workflows. We can handle UHD playback on 10Gb Ethernet, and up to 4K full frame DPX 60p through Fibre Channel on a single server enclosure.

What platforms do your systems connect to (e.g. Mac OS X, Windows, Linux, etc.)? And what differences might users notice when connecting on these different platforms?
We have a custom multi-platform shared file system, not NAS (network attached storage). Even though NAS may be compatible with multiple platforms by using multiple sharing methods, permissions and optimization across platforms is not easily manageable. With Facilis, the same volume, shared one way with one set of permissions, looks and acts native to every OS and even shows up as a local hard disk on the desktop. You can’t get any more cross-platform compatible than that.

SwiftStack’s Mario Blandini
What kind of storage do you offer, and who is the main user of that storage?
We offer hybrid cloud storage for media. SwiftStack is 100% software and runs on-premises atop the server hardware you already buy, using local capacity and/or capacity in public cloud buckets. Data is stored in cloud-native format, so there is no need for gateways, which do not scale. Our technology is used by broadcasters for active archive and OTT distribution, by digital animators for distributed transcoding, and by mobile gaming/eSports companies for massive concurrency, among others.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
The SwiftStack software architecture separates access, storage and management, where each function can be run together or on separate hardware. Unlike storage hardware with the mix of bandwidth and capacity being fixed to the ports and drives within, SwiftStack makes it easy to scale the access tier for bandwidth independently from capacity in the storage tier by simply adding server nodes on the fly. On the storage side, capacity in public cloud buckets scales and is managed in the same single namespace.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Objectively, use of capacity in public cloud providers like Amazon Web Services and Google Cloud Platform is still “early days” for many users. Customers in media however are on the leading edge of adoption, not only for hybrid cloud extending their on-premises environment to a public cloud, but also using a second source strategy across two public clouds. Two years ago it was less than 10%, today it is approaching 40%, and by 2020 it looks like the 80/20 rule will likely apply. Users actually do not care much how their data is stored, as long as their user experience is as good or better than it was before, and public clouds are great at delivering content to users.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Arguably, the larger assets produced by a growing number of cameras and computers have driven the need to store those assets differently than in the past. A petabyte is the new terabyte in media storage. Banks have many IT admins, where media shops have few. SwiftStack offers the same consumption experience as public cloud, which is very different from on-premises solutions of the past. Licensing is based on the amount of data managed, not the total capacity deployed, so you pay as you grow. Whether you save four replicas or use erasure coding at 1.5X overhead, the price is the same.
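
To put that overhead math in concrete terms, here is a back-of-the-envelope sketch in Python. The 1 PB figure is purely illustrative, and this is a reading of the overhead factors above, not an official statement of SwiftStack’s pricing.

```python
# Raw capacity required to protect 1 PB of managed data, per the answer above.
data_pb = 1.0

raw_with_replicas = data_pb * 4.0   # four full copies -> 4.0 PB of raw disk
raw_with_erasure  = data_pb * 1.5   # erasure coding at 1.5X -> 1.5 PB of raw disk

# Pay-as-you-grow licensing charges on the 1 PB of managed data either way;
# the protection scheme changes the hardware bill, not the license.
print(raw_with_replicas, raw_with_erasure)
```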

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might end-users notice when connecting on these different platforms?
The great thing about cloud storage, whether it is on-premises or resides with your favorite IaaS provider like AWS or Google, is that the interface is HTTP. In other words, every smartphone, tablet, Chromebook and computer has an identical user experience. For classic applications on systems that do not support the AWS S3 interface, users see the storage as a mount point or folder in their application — either NFS or SMB. The best part is that it is a single namespace, where data can come in as file, be transformed via object, and be read either way, so the user experience does not need to change even though the data is stored in the most modern way.
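
For illustration, here is a minimal sketch of that S3-style HTTP access using Python and boto3. The endpoint, credentials, bucket and object names are all hypothetical; the point is that any S3-compatible store, on-premises or public cloud, is addressed the same way.

```python
# Hypothetical S3-compatible access to an on-premises object store.
# The endpoint, credentials, bucket and key below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",  # hypothetical endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# List a few assets over plain HTTP(S), exactly as any device would.
resp = s3.list_objects_v2(Bucket="dailies", MaxKeys=5)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Fetch one asset; a file-protocol client could read the same data via NFS or SMB.
s3.download_file("dailies", "reel_001.mov", "reel_001.mov")
```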

Dell EMC’s Tom Burns
What kind of storage do you offer, and who is the main user of that storage?
At Dell EMC, we created two storage platforms for the media and entertainment industry: Isilon, a scale-out NAS platform available in all-flash, hybrid and archive configurations that consolidates and simplifies file-based workflows, and Dell EMC Elastic Cloud Storage (ECS), a scalable enterprise-grade private cloud solution that provides extremely high levels of storage efficiency, resiliency and simplicity, and is designed for both traditional and next-generation workloads.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
In the media industry, change is inevitable. That’s why every Isilon system is built to rapidly and simply adapt by allowing the storage system to scale performance and capacity together, or independently, as more space or processing power is required. This allows you to scale your storage easily as your business needs dictate.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Over the past five years, Dell EMC media and entertainment customers have added more than 1.5 exabytes of Isilon and ECS data storage to simplify and accelerate their workflows.

Isilon’s cloud tiering software, CloudPools, provides policy-based automated tiering that lets you seamlessly integrate with cloud solutions as an additional storage tier for the Isilon cluster at your data center. This allows you to address rapid data growth and optimize data center storage resources by using the cloud as a highly economical storage tier with massive storage capacity.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
As technologies that enhance the viewing experience continue to emerge, including higher frame rates and resolutions, uncompressed 4K, UHD, high dynamic range (HDR) and wide color gamut (WCG), underlying storage infrastructures must effectively scale to keep up with expanding performance requirements.

Dell EMC recently launched the sixth generation of the Isilon platform, including our all-flash (F800), which brings the simplicity and scalability of NAS to uncompressed 4K workflows — something that up until now required expensive silos of storage or complex and inefficient push-pull workflows.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc)? And what differences might end-users notice when connecting on these different platforms?
With Dell EMC Isilon, you can streamline your storage infrastructure by consolidating file-based workflows and media assets, eliminating silos of storage. Isilon scale-out NAS includes integrated support for a wide range of industry-standard protocols, allowing the major operating systems to connect using the most suitable protocol for optimum performance and feature support. These include IPv4 and IPv6, NFS, SMB, HTTP, FTP, OpenStack Swift-based object access for your cloud initiatives, and native Hadoop Distributed File System (HDFS).

The ECS software-defined cloud storage platform provides the ability to store, access, and manipulate unstructured data and is compatible with existing Amazon S3, OpenStack Swift APIs, EMC CAS and EMC Atmos APIs.

EditShare’s Lee Griffin
What kind of storage do you offer, and who is the main user of that storage?
Our storage platforms are tailored for collaborative media workflows and post production. They combine the advanced EFS (that’s EditShare File System, in short) distributed file system with intelligent load balancing in a scalable, fault-tolerant architecture that offers cost-effective connectivity. We also have a unique take on current cloud workflows: with the security and reliability concerns of cloud-based technology still prohibiting full migration to cloud storage for production, EditShare AirFlow uses EFS on-premises storage to provide secure access to media from anywhere in the world with a basic Internet connection. Our main users are creative post houses, broadcasters and large corporate companies.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Recently, we upgraded all our platforms to EFS and introduced two new single-node platforms, the EFS 200 and 300. These single-node platforms allow users to grow their storage whilst keeping a single namespace, which eliminates the management of multiple storage volumes. It also enables them to better plan for the future: when their facility requires more storage and bandwidth, they can simply add another node.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
No production happens in one location, so the ability to move media securely and to back it up remains a high priority for our clients. From our Flow media asset management and via our automation module, we offer clients the option to back up their valuable content to places like Amazon S3 servers.

How does your system handle UHD, 4K and other higher-than HD resolutions?
We have many clients working with UHD content who supply programming to broadcasters, film distributors and online subscription media providers. Our solutions are designed to work effortlessly with high-data-rate content, with bandwidth expanding as more EFS nodes are added to the intelligent storage pool. So our system is ready and working now for 4K content, and is future-proofed for even higher data rates.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might end-users notice when connecting on these different platforms?
EditShare supplies native client EFS drivers for all three platforms, allowing clients to pick and choose which platform they want to work on. Whether it is Autodesk Flame for VFX, Resolve for grading or our own Lightworks for editing on Linux, we don’t mind. In fact, EFS offers a considerable bandwidth improvement when using our EFS drivers over the existing AFP and SMB protocols. Improved bandwidth and speed on all three platforms makes for happy clients!

And there are no differences when clients connect. We work with all three platforms the same way, offering a unified workflow to all creative machines, whether on Mac, Windows or Linux.

Scale Logic’s Bob Herzan
What kind of storage do you offer, and who is the main user of that storage?
Scale Logic has developed an ecosystem (Genesis Platform) that includes servers, networking, metadata controllers, single and dual-controller RAID products and purpose-built appliances.

We have three different file systems that allow us to use the storage mentioned above to build SAN, NAS, scale-out NAS, object storage and gateways for private and public cloud. We use a combination of disk, tape and flash technology to build tiers of storage that allow us to manage media content efficiently, with the ability to scale seamlessly as our customers’ requirements change over time.

We work with customers that range from small to enterprise and everything in between. We have a global customer base that includes broadcasters, post production, VFX, corporate, sports and house of worship.

In addition to the Genesis Platform we have also certified three other tier 1 storage vendors to work under our HyperMDC SAN and scale-out NAS metadata controller (HPE, HDS and NetApp). These partnerships complete our ability to consult with any type of customer looking to deploy a media-centric workflow.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Great question — it’s actually built into the name and culture of our company. When we bring a solution to market, it has to scale seamlessly, and it needs to be logical when taking the customer’s environment into consideration. We focus on being able to start small but scale any system into a high-availability solution with limited to no downtime. Our solutions can scale independently if clients are looking to add capacity, performance or redundancy.

For example, a customer looking to move to 4K uncompressed workflows could add a Genesis Unlimited as a new workspace focused on the 4K workflow, keeping all existing infrastructure in place alongside it, avoiding major adjustments to their facility’s workflow. As more and more projects move to 4K, the Unlimited can scale capacity, performance and the needed HA requirements with zero downtime.

Customers can then start to migrate their content from their legacy storage over to Unlimited, and then repurpose that legacy storage onto the HyperFS file system as second-tier storage. Finally, once we have moved the legacy storage onto the new file system, we are also more than happy to bring the legacy storage and networking hardware under our global support agreements.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Cloud adoption continues to ramp up in our industry, and we have many customers using cloud solutions for various aspects of their workflow. As it pertains to content creation, manipulation and long-term archive, we have not seen much adoption within our customer base. The economics just do not support the level of performance or capacity our clients demand.

However, private cloud or cloud-like configurations are becoming more mainstream for our larger customers. Working with on-premise storage while having DR (disaster recovery) replication offsite continues to be the best solution at this point for most of our clients.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Our solutions are built not only for the current resolutions but completely scalable to go beyond them. Many of our HD customers are now putting in UHD and 4K workspaces on the same equipment we installed three years ago. In addition to 4K we have been working with several companies in Asia that have been using our HyperFS file system and Genesis HyperMDC to build 8K workflows for the Olympics.

We have a number of solutions designed to meet our customers’ requirements. Some are built with spinning disk, others with all flash, and still others take a hybrid approach that seamlessly combines the technologies.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might end-users notice when connecting on these different platforms?
All of our solutions are designed to support Windows, Linux and Mac OS. However, how they support the various operating systems depends on the protocol (block or file) we are designing for the facility. If we are building a SAN that is strictly block-level access (8/16/32Gb Fibre Channel or 1/10/25/40/100Gb iSCSI), we would use our HyperFS file system and universal client drivers across all operating systems. If our clients are also looking for network protocols in addition to the block-level clients, we can support SMB and NFS while allowing access to block and file folders and files at the same time.

For customers that are not looking for block-level access, we would focus our design work around our Genesis NX or ZX product line. Both of these solutions are based on a NAS operating system and simply present themselves with the appropriate protocol over 1/10/25/40 or 100Gb Ethernet. The Genesis ZX solution is actually a software-defined clustered NAS with enterprise feature sets such as unlimited snapshots, metro clustering and thin provisioning, and it will scale beyond 5 petabytes.

Sonnet Technologies’ Greg LaPorte
What kind of storage do you offer, and who is the main user of that storage?
We offer a portable, bus-powered Thunderbolt 3 SSD storage device that fits in your hand. Primary users of this product include video editors and DITs who need a “scratch drive” fast enough to support editing 4K video at 60fps while on location or traveling.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
The Fusion Thunderbolt 3 PCIe Flash Drive is currently available with 1TB capacity. With data transfer of up to 2,600 MB/s supported, most users will not run out of bandwidth when using this device.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might end-users notice when connecting on these different platforms?
Computers with Thunderbolt 3 ports running either macOS Sierra or High Sierra, or Windows 10 are supported. The drive may be formatted to suit the user’s needs, with either an OS-specific format such as HFS+, or cross-platform format such as exFAT.

Post Supervisor: Planning an approach to storage solutions

By Lance Holte

Like virtually everything in post production, storage is an ever-changing technology. Camera resolutions and media bitrates are constantly growing, requiring higher storage bitrates and capacities. Productions are increasingly becoming more mobile, demanding storage solutions that can live in an equally mobile environment. Yesterday’s 4K cameras are being replaced by 8K cameras, and the trend does not look to be slowing down.

Yet, at the same time, productions still vary greatly in size, budget, workflow and schedule, which has necessitated more storage options for post production every year. As a post production supervisor, when deciding on a storage solution for a project or set of projects, I always try to have answers to a number of workflow questions.

Let’s start at the beginning with production questions.

What type of video compression is production planning on recording?
Obviously, more storage will be required if the project is recording to Arriraw rather than H.264.

What camera resolution and frame rate?
Once you know the bitrate from the video compression specs, you can calculate the data size on a per-hour basis. If you don’t feel like sitting down with a calculator or spreadsheet for a few minutes, there are numerous online data size calculators, but I particularly like AJA’s DataCalc application, which has tons of presets for cameras and video and audio formats.

How many cameras and how many hours per day is each camera likely to be recording?
Data size per hour, multiplied by hours per day, multiplied by shoot days, multiplied by number of cameras gives a total estimate of the storage required for the shoot. I usually add 10-20% to this estimate to be safe.
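
To make that arithmetic concrete, here is a minimal sketch in Python. The bitrate, schedule and camera count are illustrative assumptions rather than figures from a real production (ProRes 422 HQ at UHD 23.98 runs roughly 707 Mb/s):

```python
# Rough shoot-storage estimate following the formula above.
# All inputs are illustrative assumptions, not from a real production.

def shoot_storage_tb(bitrate_mbps, hours_per_day, shoot_days, cameras, safety=0.15):
    """Total storage in TB for a shoot, padded by a 10-20% safety margin."""
    gb_per_hour = bitrate_mbps / 8 * 3600 / 1000   # Mb/s -> GB per recorded hour
    total_gb = gb_per_hour * hours_per_day * shoot_days * cameras
    return total_gb * (1 + safety) / 1000          # GB -> TB, with headroom

# Example: two cameras recording ~707 Mb/s ProRes 422 HQ UHD,
# four hours a day across a 30-day shoot:
print(f"{shoot_storage_tb(707, 4, 30, 2):.1f} TB")  # ~87.8 TB
```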

Let’s move on to post questions…

Is it an online/offline workflow?
The simplicity of editing online is awesome, and I’m holding out for the day when all projects can be edited with online media. In the meantime, most larger projects require online/offline editorial, so keep in mind the extra storage space for offline editorial proxies. The upside is that raw camera files can be stored on slower, more affordable (even archival) storage through editorial until the online process begins.

On numerous shows I’ve elected to keep the raw camera files on portable external RAID arrays (cloned and stored in different locations for safety) until picture lock. G-Tech, LaCie, OWC and Western Digital all make 48+ TB external arrays on which I’ve stored raw media during editorial. When you start the online process, copy the necessary media over to your faster online or grading/finishing storage, and finish the project with only the raw files that are used in the locked cut.

How many editorial staff need to be working on the project simultaneously?
On smaller projects that only require an editorial staff of two or three people who need to access the media at the same time, you may be able to get away with the editors and assistants network sharing a storage array and working in different projects. I’ve done numerous smaller projects in which a couple of editors connected to an external RAID (I’ve had great success with Proavio and QNAP arrays) that is plugged into one workstation and shared over the network. Of course, the network must have enough bandwidth for both machines to play back the media from the storage array, but that’s the case for any shared storage system.

For larger projects that employ five, 10 or more editors and staff, storage that is designed for team sharing is almost a certain requirement. Avid has opened up integrated shared storage to outside storage vendors over the past few years, but Avid’s Nexis solution still remains an excellent option. Aside from providing a solid solution for Media Composer and Symphony, Nexis can also be used with basically any other NLE, ranging from Adobe Premiere Pro to Blackmagic DaVinci Resolve to Final Cut Pro and others. The project sharing abilities within the NLEs vary depending on the application, but the clear trend is moving toward multiple editors and post production personnel working simultaneously in the same project.

Does editorial need to be mobile?
Increasingly, editorial tends to begin near the start of physical production, which can require editors to be on or near set. This is a pretty simple question to answer, but it is worth keeping in mind so that a shoot doesn’t end up without enough storage in a place where additional storage isn’t easily available — or where the power requirements can’t be met. It’s also a good moment to plan simple things like the number of shuttle or transfer drives that may be needed to ship media back to home base.

Does the project need to be compartmentalized?
For example, should proxy media be on a separate volume or workspace from the raw media/VFX/music/etc.? Compartmentalization is good. It’s safe. Accidents happen, and it’s a pain if someone accidentally deletes everything on the VFX volume or workspace on the editorial storage array. But it can be catastrophic if everything is stored in the same place and they delete all the VFX, graphics, audio, proxy media, raw media, projects and exports.

Split up the project onto separate volumes, and only give write access to the necessary parties. The bigger the project and team, the bigger the risk for accidents, so err on the side of safety when planning storage organization.

Finally, we move to finishing, delivery and archive questions…

Will the project color and mix in-house? What are the delivery requirements? Resolution? Delivery format? Media and other files?
Color grading and finishing often require the fastest storage speeds of the whole pipeline. By this point, the project should be conformed back to the camera media, and the colorist is often working with high bitrate, high-resolution raw media or DPX sequences, EXRs or other heavy file types. (Of course, there are as many workflows as there are projects, many of which can be very light, but let’s consider the trend toward 4K-plus and the fact that raw media generally isn’t getting lighter.) On the bright side, while grading and finishing arrays need to be fast, they don’t need to be huge, since they won’t house all the raw media or editorial media — only what is used in the final cut.

I’m a fan of using an attached SAS or Thunderbolt array, which is capable of providing high bandwidth to one or two workstations. Anything over 20TB shouldn’t be necessary, since the media will be removed and archived as soon as the project is complete, ready for the next project. Arrays like the Areca ARC-5028T2 or Proavio EB800MS give read speeds of 2,000+ MB/s, which can play back 4K DPXs in realtime.
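
As a sanity check on those numbers, here is a rough sketch of the bandwidth math behind realtime DPX playback. It assumes 10-bit RGB DPX, which packs three 10-bit components into four bytes per pixel, and ignores header overhead:

```python
# Approximate sustained read rate needed for one realtime DPX stream.
def dpx_stream_mb_per_sec(width=4096, height=2160, bytes_per_pixel=4, fps=24):
    frame_mb = width * height * bytes_per_pixel / 1e6  # one frame in MB
    return frame_mb * fps

print(f"{dpx_stream_mb_per_sec():.0f} MB/s")  # ~849 MB/s for 4K at 24fps
```

At roughly 850MB/s per 4K/24 stream, a 2,000+ MB/s array leaves comfortable headroom, though multiple simultaneous streams or higher frame rates will eat into it quickly.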

How should the project be archived?
There are a few follow-up questions to this one, like: Will the project need to be accessed with short notice in the future? LTO is a great long-term archival solution, but pulling large amounts of media off LTO tape isn’t exactly quick. For projects that I suspect will be reopened in the near future, I try to keep an external hard drive or RAID with the necessary media onsite. Sometimes it isn’t possible to keep all of the raw media onsite and quickly accessible, so keeping the editorial media and projects onsite is a good compromise. Offsite, in a controlled, safe, secure location, LTO-6 tapes house a copy of every file used on the project.

Post production technology changes in the blink of an eye, and storage is no exception. Once these questions have been answered, if you are spending any serious amount of money, get an opinion from someone who is intimately familiar with the cutting edge of post production storage. Emphasis on the “post production” part of that sentence, because video I/O is not the same as, say, a bank with the same storage size requirements. The more money devoted to your storage solutions, the more opinions you should seek. Not all storage is created equal, so be 100% positive that the storage you select is optimal for the project’s particular workflow and technical requirements.

There is more than one good storage solution for any workflow, but the first step is always answering as many storage- and workflow-related questions as possible to start taking steps down the right path. Storage decisions are perhaps one of the most complex technical parts of the post process, but like the rest of filmmaking, an exhaustive, thoughtful, and collaborative approach will almost always point in the right direction.

Main Image: G-Tech, QNAP, Avid and Western Digital all make a variety of storage solutions for large and small-scale post production workflows.


Lance Holte is an LA-based post production supervisor and producer. He has spoken and taught at such events as NAB, SMPTE, SIGGRAPH and Createasphere. You can email him at lance@lanceholte.com.

Panasas intros faster, customizable storage solutions for M&E

Panasas has introduced three new products targeting those working in media and entertainment, a world that requires fast and customizable workflows that offer a path for growth.

Panasas’s ActiveStor is now capable of scaling capacity to 57PB and offering 360GB/s of bandwidth. According to the company, this system doubles metadata performance to cut data access time in half, scales performance and capacity independently and seamlessly adapts to new technology advancements.

The new ActiveStor Director 100 (ASD-100) control-plane engine and the new ActiveStor Hybrid 100 (ASH-100) configurable plug-and-play storage system allow users to design storage systems that meet their exact specifications and workflow requirements, as well as grow the system if needed.

For the first time, Panasas is offering a disaggregated Director Blade — the ASD-100, the brain of the Panasas storage system — to provide flexibility. Customers can now add any number of ASD-100s to drive exactly the level of metadata performance they need. With double the raw CPU power and RAM capacity of previous Director Blades, the ASD-100 offers double the metadata performance on metadata-intensive workloads.

Based on industry-standard hardware, the ASD-100 manages metadata and the global namespace; it also acts as a gateway for standard data-access protocols such as NFS and SMB. The ASD-100 uses non-volatile dual in-line memory modules (NVDIMMs) to store metadata transaction logs, and Panasas is contributing its NVDIMM driver to the FreeBSD community.

The ASH-100 and ASD-100 rack

The ASH-100 hardware platform offers high-capacity HDDs (12TB) and SSDs (1.9TB) in a parallel hybrid storage system. A broad range of HDD and SSD capacities can be paired as needed to meet specific workflow needs. The ASH-100 can be configured with ASD-100s or can be delivered with integrated traditional ActiveStor Director Blades (DBs), depending on user requirements.

The latest version of this plug-and-play parallel file system features an updated FreeBSD operating foundation and a GUI that supports asynchronous “push” notification of system changes without user interaction.

Panasas’ updated DirectFlow parallel data access protocol offers a 15 percent improvement in throughput thanks to enhancements to memory allocation and readahead. All ActiveStor models will benefit from this performance increase after upgrading to the new release of PanFS.

Using the ASD-100 and the ASH-100 with the updated PanFS 7.0 parallel file system and the enhanced DirectFlow parallel data-access protocol offers these advantages:

• Performance – Users can scale metadata performance, data bandwidth and data capacity independently for faster time-to-results.

• Flexibility – The ability to mix and match HDD and SSD configurations under a single global namespace enables users to best match the system performance to their workload requirements.

• Productivity – The new ActiveStor solution doubles productivity by cutting data access time in half, regardless of the number of users.

• Investment Protection – The solution is backward and forward compatible with the ActiveStor product portfolio.

The ASH-100 is shipping now. The ASD-100 and PanFS 7.0 will be available in Q1 2018.

Autodesk Flame family updates offer pipeline enhancements

Autodesk has updated its Flame 2018 family of 3D visual effects and finishing software, which includes Flame, Flare, Flame Assist and Lustre. Flame 2018.3 offers more efficient ways of working in post, with feature enhancements that offer greater pipeline flexibility, speed and support for emerging formats and technology.

Flame 2018.3 highlights include:

• Action Selective: Apply FX color to an image surface or the whole action scene via the camera

• Motion Warp Tracking: Organically distort objects that are changing shape, angle and form with new 32-bit motion vector-based tracking technology

• 360-degree VR viewing mode: View LatLong images in a 360-degree VR viewing mode in the Flame player or any viewport during compositing and manipulate the field of view

• HDR waveform monitoring: Set viewport to show luminance waveform; red, green, blue (RGB) parade; color vectorscope or 3D cube; and monitor a range of HDR and wide color gamut (WCG) color spaces including Rec2100 PQ, Rec2020 and DCI P3

• Shotgun Software Loader: Load assets for a shot and build custom batches via Flame’s Python API, and browse a Shotgun project for a filtered view of individual shots

• User-requested improvements for Action, Batch, Timeline and Media Hub

“The new standalone Python console in Flame 2018.3 is great,” says Treehouse Edit finishing artist John Fegan, a Flame family beta tester. “We’re also excited about the enhanced FBX export with physically based renderer (PBR) for Maya and motion analysis updates. Using motion vector maps, we can now achieve things we couldn’t with a planar tracker or 3D track.”

Flame Family 2018.3 is available today at no additional cost to customers with a current Flame Family 2018 subscription.

Ten Questions: SpeedMedia’s Kenny Francis

SpeedMedia is a bicoastal post studio whose headquarters are in Venice Beach, California. They offer editorial, color grading, finishing, mastering, closed captions/subtitles, encoding and distribution. This independently-owned facility, which has 15 full-time employees, turns 10 this month.

We recently asked a few questions of Kenny Francis, president of the company, in an effort to find out how he has not only survived in a tough business, but thrived over the years.

WHAT DOES MAKING IT 10 YEARS IN THIS INDUSTRY MEAN?
This industry has a high turnover rate. We have been able to maintain a solid brand and studio relationships, building our own brand equity in the process. At the time we started the company, high-def television content was new to the marketplace; there were only a handful of vendors that had updated to that technology and could cater to the larger file sizes. Most existing vendors were using antiquated machines and methodology to distribute HD, causing major bottlenecks at the station level. We built the company in anticipation of this new trend, which allowed us to properly manage our clients’ post production and distribution needs.

HOW HAS THE POST PIPELINE CHANGED IN A DECADE?
Now everything is needed “immediately.” Lightning fast is now the new norm. Ten years ago there was a decent amount of time in production schedules for editing, spot tagging, trafficking, clearance, every part of the post process… these days everything is expected to happen now. There’s been a huge sense of time compression because the exception has now become the rule.

WHAT DO YOU SEE AS THE BIGGEST CHALLENGE IN THE FUTURE?
Staying relevant as a company and trying to evolve with the times and our clients’ needs. What worked 10 years ago creatively or productively doesn’t hold the same weight today. We’re living in an age of online and guerrilla marketing campaigns where advertising has already become wildly diversified, so staying relevant is key. To be successful, we’ve had to anticipate these trends and stay nimble enough to reconfigure our equipment to cater to them. We were early adopters of 3D content, and now we are gearing up for UHD finishing and distribution.

WHAT DO YOU SEE FOR THE FUTURE OF YOUR COMPANY AND THE INDUSTRY?
We’re constantly accruing new business, so we’re looking forward to building onto our list of accounts. As a new technology launches, emerging companies compete, one acquires them all and becomes a monopoly, and then the cycle repeats itself. We have been through a few of these cycles, but plan to see many more in the years ahead.

HOW DID YOU ESTABLISH THAT FOUNDATION?
Well, aside from just building a business, it’s been about building a home for our team — giving them a platform to grow. Our employees are family. My uncle used to tell me, “If you concentrate on building a business and not the person, you will not achieve, but if you concentrate on building the person, you achieve both.” SpeedMedia has been focused on building that kind of team — we pride ourselves on supporting one another.

HOW WOULD YOU DESCRIBE THE SPACE AT SPEEDMEDIA STUDIO?
As comfy as possible. We’ve been in the same place for 10 years — a block away from those iconic Venice letters. It’s a great place to be, and why we’ve never left. It’s a home away from home for our employees, so we’ve got big couches, a kitchen, televisions and even our own bar for the monthly company mixers.

Stop by and you’ll see a little bit of Matrix code scrolling down some of the walls, as this historic building was actually Joel Silver’s production office back in the day. If these walls could talk…

HOW HAS VENICE CHANGED SINCE YOU OPENED?
Venice is a living and breathing city, now more than ever. Despite Silicon Beach moving into the area and putting a serious premium on real estate, we’re staying put. It would have been cheaper to move inland, but then that’s all it would have been — an office, not a second home. We’d lose some of our identity for sure.

WHO ARE SOME OF YOUR CLIENTS?
It all started with Burger King. I’ve had a long-standing relationship with the company since my days back at Amoeba, a Santa Monica-based advertising agency. I held a number of positions there and learned the business inside and out. The experience and relationships cultivated there helped me bring Burger King in as an anchor account to help launch SpeedMedia back in 2007. We now work with a wide variety of brands, from Adidas to Old Navy to Expedia to Jaguar Land Rover.

WHAT’S IT LIKE RUNNING A BICOASTAL BUSINESS?
In our business, it’s important to have a presence on both coasts. We have some great clients in NYC, and it’s nice to actually be local for them. Styles of business on the east coast are a bit different than in LA. It actually used to make more sense back in the tape-based workflow days for national logistics. We had a realtime exchange between coasts, creating physical handoffs.

Now we’re basically hard-lined together, operators in Soho working remotely with Venice Beach and vice-versa, sharing assets and equipment and collaborating 24 hours a day. This is all possible thanks to our proprietary order management software system, Matrix. This system gives SpeedMedia the ability to seamlessly integrate with every digital distribution network globally via API tap-ins with our various technology partners.

WHEN DID YOU KNOW IT WAS TIME TO START YOUR OWN BUSINESS?
Well, we were at the end of one of these cycles in the marketplace and many of our brand relationships did not want to go along with the monopoly that was forming. That’s when we created SpeedMedia. We listened to our clients and made sure they had a logical and reliable alternative in the marketplace for post, distribution and asset management. And here we are 10 years later.

Making 6 Below for Barco Escape

By Mike McCarthy

There is a new movie coming out this week that is fairly unique. Telling the true story of Eric LeMarque surviving eight days lost in a blizzard, 6 Below: Miracle on the Mountain is the first film shot and edited in its entirety for the new Barco Escape theatrical format. If you don’t know what Barco Escape is, you are about to find out.

This article is meant to answer just about every question you might have about the format and how we made the film, on which I was post supervisor, production engineer and finishing editor.

What is Barco Escape?
Barco Escape is a wraparound visual experience — it consists of three projection screens filling the width of the viewer’s vision with a total aspect ratio of 7.16:1. The exact field of view will vary depending on where you are sitting in the auditorium, but it is usually 120-180 degrees. Similar to IMAX, it is not about filling the entire screen with your main object, but leaving that in front of the audience and letting the rest of the image surround them and fill their peripheral vision for a more immersive experience. Three separate 2K scope theatrical images play at once, resulting in 6144×858 pixels of imagery to fill the room.
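
The arithmetic behind those numbers is easy to verify: a 2K scope frame is 2048×858, and three of them side by side make up the full canvas. A quick sketch:

```python
# Escape canvas: three 2K scope (2048x858) screens side by side.
screens, screen_w, height = 3, 2048, 858
canvas_w = screens * screen_w
print(canvas_w, height)              # 6144 858
print(f"{canvas_w / height:.2f}:1")  # 7.16:1, the quoted aspect ratio
```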

Is this the first Barco Escape movie?
Technically, four other films have screened in Barco Escape theaters, the most popular one being last year’s release of Star Trek Beyond. But none of these films used the entire canvas offered by Escape throughout the movie. They had up to 20 minutes of content on the side screens, but the rest of the film was limited to the center screen that viewers are used to. Every shot in 6 Below was framed with the surround format in mind, and every pixel of the incredibly wide canvas is filled with imagery.

How are movies created for viewing in Escape?
There are two approaches that can be used to fill the screen with content. One is to place different shots on each screen in the process of telling the story. The other is to shoot a wide enough field of view and high enough resolution to stretch a single image across the screens. For 6 Below, director Scott Waugh wanted to shoot everything at 6K, with the intention of filling all the screens with main image. “I wanted to immerse the viewer in Eric’s predicament, alone on the mountain.”

Cinematographer Michael Svitak shot with the Red Epic Dragon. He says, “After testing both spherical and anamorphic lens options, I chose to shoot Panavision Primo 70 prime lenses because of their pristine quality across the entire imaging frame.” He recorded in 6K-WS (2.37:1 aspect ratio at 6144×2592), framing with both 7:1 Barco Escape and a 2.76:1 4K extraction in mind. Red does have an 8:1 option and a 4:1 option that could work if Escape were your only deliverable. But since there are very few Escape theaters at the moment, you would literally be painting yourself into a corner. Having more vertical resolution available in the source footage opens up all sorts of workflow possibilities.

This still left a few challenges in post: to adjust the framing for the most comfortable viewing and to create alternate framing options for other deliverables that couldn’t use the extreme 7:1 aspect ratio. Other projects have usually treated the three screens separately throughout the conform process, but we treated the entire canvas as a single unit until the very last step, breaking out three 2K streams for the DCP encode.

What extra challenges did Barco Escape delivery pose for 6 Below’s post workflow?
Vashi Nedomansky edited the original 6K R3D files in Adobe Premiere Pro, without making proxies, on some maxed-out Dell workstations. We did the initial edit with curved ultra-wide monitors and 4K TVs. “Once Mike McCarthy optimized the Dell systems, I was free to edit the source 6K Red RAW files and not worry about transcodes or proxies,” he explains. “With such a quick turnaround every day, and so much footage coming in, it was critical that I could jump on the footage, cut my scenes, see if they were playing well and report back to the director that same day if we needed additional shots. This would not have been possible time-wise if we were transcoding and waiting for footage to cut. I kept pushing the hardware and software, but it never broke or let me down. My first cut was 2 hours and 49 minutes long, and we played it back on one Premiere Pro timeline in realtime. It was crazy!”

All of the visual effects were done at the full shooting resolution of 6144×2592, as was the color grade. Once Vashi had the basic cut in place, there was no real online conform, just some cleanup work to do before sending it to color as an 8TB stack of 6K frames. At that point, we started examining it from the three-screen perspective with three TVs to preview it in realtime, courtesy of the Mosaic functionality built into Nvidia’s Quadro GPU cards. Shots were realigned to avoid having important imagery in the seams, and some areas were stretched to compensate for the angle of the side screens from the audience’s perspective.

DP Michael Svitak and director Scott Waugh

Once we had the final color grade completed (via Mike Sowa at Technicolor using Autodesk Lustre), we spent a day in an Escape theater analyzing the reflections between the screens and their effect on contrast. We made a lot of adjustments to keep the luminance of the side screens from washing out the darks on the center screen, which you can’t simulate on TVs in the edit bay. “It was great to be able to make the final adjustments to the film in realtime in that environment. We could see the results immediately on all three screens and how they impacted the room,” says Waugh.

Once we added the 7.1 mix, we were ready to export assets for our delivery in many different formats and aspect ratios. Making the three streams for Escape playback was as simple as using the crop tool in Adobe Media Encoder to trim the sides in 2K increments.
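
For illustration only (the actual delivery used Adobe Media Encoder’s crop tool, as noted above), here is a sketch of how those three crops fall out of the 6144×858 master; the screen names are hypothetical:

```python
# Divide the 6144x858 Escape master into three 2K-wide crop windows.
CANVAS_W, SCREEN_W, HEIGHT = 6144, 2048, 858

for name, x in zip(("left", "center", "right"),
                   range(0, CANVAS_W, SCREEN_W)):
    print(f"{name:>6}: crop {SCREEN_W}x{HEIGHT} at x offset {x}")
```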

How can you see movies in the Barco Escape format?
Barco maintains a list of theaters that have Escape screens installed, which can be found at ready2escape.com. But for readers in the LA area, the only opportunity to see a film in Barco Escape in the foreseeable future is to attend one of the Thursday night screenings of 6 Below at the Regal LA Live Stadium or the Cinemark XD at Howard Hughes Center. There are other locations available to see the film in standard theatrical format, but as a new technology, Barco Escape is only available in a limited number of locations. Hopefully, we will see more Escape films and locations to watch them in the future.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

The A-List: Director Marc Webb on The Only Living Boy in New York

By Iain Blair

Marc Webb has directed movies both big and small. He made his feature film debut in 2009 with the low-budget indie rom-com (500) Days of Summer, which was nominated for two Golden Globes. He then went on to helm the two recent Amazing Spider-Man blockbusters, the fourth and fifth films in the multi-billion-dollar-grossing franchise.

Webb isn’t just about the big screen. He directed and executive produced the TV series Limitless for CBS, based on the film starring Bradley Cooper, and is currently an executive producer and director of the CW’s Golden Globe-winning series Crazy Ex-Girlfriend.

Marc Webb

Now Webb, whose last film was the drama Gifted, released earlier this year, has again returned to his indie roots with the film The Only Living Boy in New York, starring Jeff Bridges, Kate Beckinsale, Pierce Brosnan, Cynthia Nixon, Callum Turner and Kiersey Clemons.

Set in New York City, the sharp and witty coming-of-age story focuses on a privileged young man, Thomas Webb (Turner) — the son of a publisher and his artistic wife — who has just graduated from college. After moving from his parents’ Upper West Side apartment to the Lower East Side, he befriends his neighbor W.F. (Bridges), an alcoholic writer who dispenses worldly wisdom alongside healthy shots of whiskey.

Thomas’ world begins to shift when he discovers that his long-married father (Brosnan) is having an affair with a seductive younger woman (Beckinsale). Determined to break up the relationship, Thomas ends up sleeping with his father’s mistress, launching a chain of events that will change everything he thinks he knows about himself and his family.

Collaborating with Webb from behind the scenes were director of photography Stuart Dryburgh (Gifted, The Secret Life of Walter Mitty, Alice Through the Looking Glass) and editor Tim Streeto (The Squid and the Whale, Greenberg, Vinyl).

I recently talked with Webb about making the film, and if there is another superhero movie in his future.

What was the appeal of making another small film on the heels of Gifted?
They were both born out of a similar instinct, an impulse to simplify after doing two blockbusters. I had them lined up after Spider-Man and the timing worked out.

What sort of themes were you interested in exploring through this?
I think of it as a fable, with a very romantic image of New York as the backdrop, and on some levels it’s an examination of honesty or coming clean. I think people often cover a lot in trying to protect others, and that’s important in life where you have various degrees of truth-telling. But at some point you have to come clean, and that can be very hard. So it’s about that journey for Thomas, and regardless of the complex nature of his desires, he tries to be honest with himself and those close to him.

Can you talk about the look of New York in this film and working with your DP, who also shot your last film?
It was the same DP, but we had the opposite approach and philosophy on this. Gifted was very naturalistic with a diverse color palette and lots of hand-held stuff. On this we mostly kept the camera at eye level, as if it was a documentary, and it has more panache and “style” and more artifice. We restrained the color palette since New York has a lot of neutral tones and people wear a lot of black, and I wanted to create a sort of tribute to the classic New York films I love. So we used a lot of blacks and grays, and almost no primary colors, to create an austere look. I wanted to push that but without becoming too stylized; that way when you do see a splash of red or some bright color, it has more impact and it becomes meaningful and significant. We also tried to do a lot of fun shots, like high-angle stuff that gives you this objective POV of the city, making it a bit more dramatic.

Why did you shoot 35mm rather than digital?
I’ve always loved film and shooting in film, and it also suited this story as it’s a classic medium. And when you’re projecting digital, sometimes there’s an aliasing in the highlights that bothers me. It can be corrected, but aesthetically I just prefer film. And everyone respects film on set. The actors know you’re not just going to redo takes indefinitely. They feel a little pressure about the money.

Doesn’t that affect the post workflow nowadays?
Yes, it does, as most post people are now used to working in a purely digital format, but I think shooting analog still works better for a smaller film like this, and I’ve had pretty good experiences with film and the labs. There are more labs now than there were two years ago, and there are still a lot of films being shot on film. TV is almost completely digital now, with the odd exception of Breaking Bad. So the post workflow for film is still very accessible.

Where did you do the post?
We did the editing at Harbor Picture Company, and all the color correction at Company 3 with Stefan Sonnenfeld, who uses Blackmagic Resolve. C5’s Ron Bochar was the supervising sound editor and did a lot of it at Harbor. (For the mix at Harbor he employed D-Command using Avid Pro Tools as a mix engine.)

Do you like the post process?
I really love post… going through all the raw footage and then gradually molding it and shaping it. And because of my music video background I love working on all the sound and music in particular.  I started off as an editor, and my very first job in the business was re-cutting music videos for labels and doing documentaries and EPKs. Then I directed a bunch of music videos and shorts, so it’s a process that I’m very familiar with and understand the power of. I feel very much at home in an edit bay, and I edit the movie in my head as I shoot.

You edited with Tim Streeto. Tell us how it worked.
I loved his work on The Squid and the Whale, and I was anxious to work with him. We had a cool relationship. He wasn’t on the set, and he began assembling as I shot, as we had a fairly fast post schedule. I knew what I wanted, so it wasn’t particularly dramatic. We made some changes as we went, but it was pretty straightforward. We had our cut in 10 weeks, and the whole post was just three or four months.

What were the main challenges of editing this?
Tracking the internal life of the character and making sure the tone felt playful. We tried several different openings to the film before we settled on the voiceover, which had this organic raison d’être, and that all evolved in the edit.

The Spider-Man films obviously had a huge number of very complex visual effects shots. Did you do many on this film?
Very few. Phosphene in New York did them. We had the opening titles and then we did some morphing of actors from time to time in order to speed things up. (Says Phosphene CEO/EP Vivian Connolly, “We designed and animated the graphic opening sequence of the film — using Adobe Photoshop and After Effects — which was narrated by Jeff Bridges. We commissioned original illustrations by Tim Hamilton, and animated them to help tell the visual story of the opening narration of the film.”)

It has a great jazzy soundtrack. Can you talk about the importance of music and sound?
The score had to mingle with all the familiar sounds of the concrete jungle, and we used a bit of reverb on some of the sounds to give it more of a mystical quality. I really love the score by Rob Simonsen, and my favorite bit is the wedding toast sequence. We’d temped in waltzes, but it never quite worked. Then Rob came up with this tango, and it all just clicked.

I also used some Dave Brubeck, some Charlie Mingus and some Moondog — he was this well-known blind New York street musician I’ve been listening to a lot lately — and together it all evoked the mood I wanted. Music is so deeply related to how I started off making movies, so music immediately helps me understand a scene and how to tell it the best way, and it’s a lot of fun for me.

How about the DI? What look did you go for?
It was all about getting a very cool look and palette. We’d sometimes dial up a bit of red in a background, but we steered away from primary colors and kept it a bit darker than most of my films. Most of the feel comes from the costumes and sets and locations, and Stefan did a great job, and he’s so fast.

What’s next? Another huge superhero film?
I’m sure I’ll do another at some point, but I’ve really enjoyed these last two films. I had a ball hanging out with the actors. Smaller movies are not such a huge risk, and you have more fun and can be more experimental.

I just did a TV pilot, Extinct, for CBS, which was a real fun murder mystery, and I’ll probably do more TV next.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Creative nominees named for HPA Awards

Nominees in the creative categories for the 2017 HPA Awards have been announced. Drawing a record-breaking number of entries this year, the HPA Awards creative categories recognize the outstanding work done by individuals and teams who bring compelling content to a global audience.

Launched in 2006, the HPA Awards recognize outstanding achievement in editing, sound, visual effects and color grading for work in television, commercials and feature films. The winners of the 12th Annual HPA Awards will be announced on November 16 at the Skirball Cultural Center in Los Angeles.

The 2017 HPA Award nominees are:

Outstanding Color Grading – Feature Film
The Birth of a Nation
Steven J. Scott // Technicolor – Hollywood

Ghost in the Shell
Michael Hatzer // Technicolor – Hollywood

Hidden Figures
Natasha Leonnet // Efilm

Doctor Strange
Steven J. Scott // Technicolor – Hollywood

Beauty and the Beast
Stefan Sonnenfeld // Company 3

Fences
Michael Hatzer // Technicolor – Hollywood

Outstanding Color Grading – Television
The Last Tycoon – Burying the Boy Genius
Timothy Vincent // Technicolor – Hollywood

Game of Thrones – Dragonstone
Joe Finley // Chainsaw

Genius – Einstein: Chapter 1
Pankaj Bajpai // Encore Hollywood

The Crown – Smoke and Mirrors
Asa Shoul // Molinare

The Man in the High Castle – Detonation
Roy Vasich // Technicolor

Outstanding Color Grading – Commercial
Land O’ Lakes – The Farmer
Billy Gabor // Company 3

Pennzoil – Joyride Tundra
Dave Hussey // Company 3

Jose Cuervo – Last Days
Tom Poole // Company 3

Nedbank – The Tale of a Note
Sofie Borup // Company 3

Squarespace – John’s Journey
Tom Poole // Company 3

Outstanding Editing – Feature Film
Hidden Figures
Peter Teschner

Dunkirk
Lee Smith, ACE

The Ivory Game
Verena Schönauer

Get Out
Gregory Plotkin

Lion
Alexandre de Franceschi

Outstanding Editing – Television
Game of Thrones – Stormborn
Tim Porter, ACE

Stranger Things – Chapter 1: The Vanishing of Will Byers
Dean Zimmerman

Game of Thrones – The Queen’s Justice
Jesse Parker

Narcos – Al Fin Cayo!
Matthew V. Colonna, Trevor Baker

Westworld – The Original
Stephen Semel, ACE, Marc Jozefowicz

Game of Thrones – Dragonstone
Crispin Green

Outstanding Editing – Commercial
Nespresso – Comin’ Home
Chris Franklin // Big Sky Edit

Bonafont – Choices
Doobie White // Therapy Studios

Optum – Heroes
Chris Franklin // Big Sky Edit

SEAT – Moments
Doobie White // Therapy Studios

Outstanding Sound – Feature Film
Fate of the Furious
Peter Brown, Mark Stoeckinger, Paul Aulicino, Steve Robinson, Bobbi Banks // Formosa Group

Guardians of the Galaxy Vol. 2
Addison Teague, Dave Acord, Chris Boyes, Lora Hirschberg // Skywalker Sound

Sully
Alan Murray, Bub Asman, John Reitz, Tom Ozanich // Warner Bros. Post Production Creative Services

John Wick: Chapter 2
Mark Stoeckinger, Alan Rankin, Andy Koyama, Martyn Zub, Gabe Serano // Formosa Group

Doctor Strange
Shannon Mills, Tom Johnson, Juan Peralta, Dan Lauris // Skywalker Sound

Outstanding Sound – Television
Underground – Soldier
Larry Goeb, Mark Linden, Tara Paul // Sony Pictures Post

Stranger Things – Chapter 8: The Upside Down
Craig Henigham // FOX
Joe Barnett, Adam Jenkins, Jordan Wilby, Tiffany Griffith // Technicolor – Hollywood

Game of Thrones – The Spoils of War
Tim Kimmel, MPSE, Paula Fairfield, Mathew Waters, CAS, Onnalee Blank, CAS, Bradley C. Katona, Paul Bercovitch // Formosa Group

The Music of Strangers: Yo-Yo Ma and the Silk Road Ensemble
Pete Horner // Skywalker Sound
Dimitri Tisseyre // Envelope Music + Sound
Dennis Hamlin // Hamlin Sound

American Gods – The Bone Orchard
Bradley North, Joseph DeAngelis, Kenneth Kobett, David Werntz, Tiffany S. Griffith // Technicolor

Outstanding Sound – Commercial
Honda – Up
Anthony Moore, Neil Johnson, Jack Hallett // Factory
Sian Rogers // SIREN

Virgin Media – This Is Fibre
Anthony Moore // Factory

Kia – Hero’s Journey
Nathan Dubin // Margarita Mix Santa Monica

SEAT – Moments
Doobie White // Therapy Studios

Rio 2016 Paralympic Games – We’re the Superhumans
Anthony Moore // Factory

Outstanding Visual Effects – Feature Film
Pirates of the Caribbean: Dead Men Tell No Tales
Gary Brozenich, Sheldon Stopsack, Patrick Ledda, Richard Clegg, Richard Little // MPC

War for the Planet of the Apes
Dan Lemmon, Anders Langlands, Luke Millar, Erik Winquist, Daniel Barrett // Weta Digital

Beauty and the Beast
Kyle McCulloch, Glen Pratt, Richard Hoover, Dale Newton, Neil Weatherley // Framestore

Guardians of the Galaxy Vol. 2
Guy Williams, Kevin Andrew Smith, Charles Tait, Daniel Macarin, David Clayton // Weta Digital

Ghost in the Shell
Guillaume Rocheron, Axel Bonami, Arundi Asregadoo, Pier Lefebvre, Ruslan Borysov // MPC

Outstanding Visual Effects – Television
Black Sails – XXIX
Erik Henry
Yafei Wu, Nicklas Andersson, David Wahlberg // Important Looking Pirates
Martin Lippman // Rodeo

The Crown – Windsor
Ben Turner, Tom Debenham, Oliver Cubbage, Lionel Heath, Charlie Bennett // One of Us

Taboo – Episode One
Henry Badgett, Nic Birmingham, Simon Rowe, Alexander Kirichenko, Finlay Duncan // BlueBolt VFX

Ripper Street – Occurrence Reports
Ed Bruce, Nicholas Murphy, Denny Cahill, Piotr Swigut, Mark Pinheiro // Screen Scene

Westworld – The Bicameral Mind
Jay Worth // Deep Water FX
Bobo Skipper, Gustav Ahren, Jens Tenland // Important Looking Pirates
Paul Ghezzo // Cosa VFX

Outstanding Visual Effects – Commercial
Walmart – Lost & Found
Morgan MacCuish, Michael Ralla, Aron Hjartarson, Todd Herman // Framestore

Honda – Keep the Peace
Laurent Ledru, Georgia Tribuiani, Justin Booth-Clibborn, Ellen Turner // Psyop

Nespresso – Comin’ Home
Martin Lazaro, Murray Butler, Nick Fraser, Callum McKevney // Framestore

Kia – Hero’s Journey
Robert Sethi, Chris Knight, Tom Graham, Jason Bergman // The Mill

Walmart – The Gift
Mike Warner, Kurt Lawson, Charles Trippe, Robby Geis // Zero VFX

In other awards news, Larry Chernoff has been named recipient of the Lifetime Achievement Award. Winners of the coveted Engineering Excellence Award include Colorfront Engine by Colorfront, Dolby Vision Post Production Tools by Dolby, Mistika VR by SGO and the Weapon 8K Vista Vision by Red Digital Cinema. These special awards will be bestowed at the HPA Awards gala as well.

The HPA Awards gala ceremony is expected to be a sold-out affair and early ticket purchase is encouraged. Tickets for the HPA Awards are on sale now and can be purchased online at www.hpaawards.net.

Updating the long-running Ford F-150 campaign

Giving a very successful decade-long campaign a bit of a goose presents unique challenges, including maintaining tone and creative continuity while bringing a fresh perspective. To help launch the new 2018 Ford F-150, Big Block director Paul Trillo brought all of his tools to the table, offering an innovative spin on the campaign.

Big Block worked closely with agency GTB, from development to previz, live-action, design, editorial, all the way through color and finish.

Trillo wanted to maintain the tone and voice of the original campaign while adding a distinct technical style and energy. Dynamic camera movement and quick editing helped bring new vitality to the “Built Ford Tough” concept.

Technically challenging camera moves help guide the audience through distinct moments. While previous spots relied largely on motion graphics, Trillo used custom camera rigs on real locations.

Typography remained a core of the spots, all underscored by an array of stop-motion, hyperlapse, dolly zooms, drone footage, camera flips, motion control and match frames.

We reached out to Big Block’s Paul and VFX supervisor John Cherniack to find out more…

How early did Big Block get involved in this F-150 campaign?
We worked with Detroit agency GTB starting in May 2017.

How much creative input did you have on the campaign? In terms of both original concept and execution?
Trillo: What was so original about this pitch was that they gave us a blank canvas and a VO script to work with, and that’s it. I was building off a campaign that had been running for nearly 10 years, and I knew what the creatives were looking for in terms of some sort of kinetic, constantly transitioning energy. However, it was essentially up to me to design each moment of the spot and how we get from A to B to C.

Typically, car commercials can be pretty prescriptive and sensitive to how the car is depicted. This campaign functions a lot differently than your typical car commercial. There was a laundry list of techniques, concepts, tricks and toys I’ve wanted to implement, so we seized the opportunity to throw the kitchen sink at this. Then, by breaking down the script and pairing it with the different tricks I wanted to try out, I sort of formed the piece. It was through the development of the scripts, boards and animatics that certain ideas fell to the wayside and the best rose to the top.

Cherniack: Paul had some great ideas from the very beginning, and the whole team got to help contribute to the brainstorming. We took the best ideas and started to put them all together in a previz to see which ones would stitch together seamlessly.

Paul, Justin Trask (production designer) and I all spent a very long time together going through each board and shot, determining which elements we could build and what we would make in CG. As much as we wanted to build a giant gantry to raise the bar, some elements were cost-prohibitive. This is where we were able to get creative on what we would be able to achieve between practical and CG elements.

How much creative input did you have on set?
Trillo: The only creative decisions we were left to make on set were coming up with creative solutions for logistical challenges. We’d done all the pre-production work, mapping out the camera moves and transitions down to the frame, so the heavy lifting was finished. Of course, you always look to make it better on set and find the right energy in the moment, but that’s all icing.

Cherniack: By the time we started shooting, we had gone through a good amount of planning, and I had a good feeling about everything that Paul was trying to achieve. One area that we both worked together on set was to get the most creative shot, while also maintaining our plans for combining the shots in post.

What challenges did you face?
Trillo: I think I have a sort of addictive personality when it comes to logistical and creative challenges. Before this thing was fully locked in, before we had any storyboards or a single location, I knew what I had written out was going to be super challenging, if not impossible, especially because I wanted to shoot as much as we could practically. However, what you write down on a piece of paper and what you animate in a 3D environment doesn’t always align with the physics of the real world. Each shot provided its own unique challenge, whether it was an art department build or deciding which type of camera rig to use to move the camera in an unusual way. Fortunately, I had a top-notch crew in both camera (DP Dan Mindel) and production design (Justin Trask), so there were always a couple of ways to solve each problem.

Cherniack: In order to have all of the measurements, HDRI, set surveys and reference photography, I had to always be on the move, while being close enough should any VFX questions come up. Doing this in 110+ degree heat, in the quarry, during three of the hottest days of the summer was quite a challenge. We also had very little control of lake currents, and had to modify the way we shot the boat scene in Brainiac on the fly. We had a great crew who was able to change directions quickly.

What was your favorite part of working on this campaign? What aspect are you most proud of?
Trillo: It was pretty spectacular to see each of these pieces evolve from chicken scratch into a fully-realized image. There was little creative compromise in that entire process. But I have to say I think I’m proudest of dropping 400lbs of french fries out of a shipping container.

Any major differences between automotive campaigns and ads for other industries?
The main difference is there aren’t any rules here. The only thing you need to keep in mind when doing this campaign is stay true to the F-150’s brand and ethos. As long as you remain true to the spirit, there are no other guidelines to follow in terms of how a car commercial needs to function. What appeals to me about this campaign is it combines a few of my interests of design, technical camera work and a dash of humor.

What tools did you use?
Cherniack: We used Maya, 3ds Max, Nuke, Flame and PFTrack for post production.

Heidi Netzley joins We Are Royale as director of biz dev

Creative agency We Are Royale has added Heidi Netzley as director of business development. She will be responsible for helping to evolve the company’s business development process and building its direct-to-brand vertical.

Most recently, Netzley held a similar position at Digital Kitchen, where she expanded and diversified the company’s client base and also led projects like a digital documentary series for Topgolf and show launch campaigns for CBS and E! Network. Prior to that, she was business development manager at Troika, where she oversaw brand development initiatives and broadcast network rebrands for the agency’s network clients, including ABC, AwesomenessTV, Audience Network and Sundance Cinemas.

Netzley launched her career at Disney/ABC Television Group within the entertainment marketing division. During her seven-year tenure, she held various roles ranging from marketing specialist to manager of creative services, where she helped manage the brand across multi-platform marketing campaigns for all of ABC’s primetime properties.

“With our end-to-end content creation capabilities, we can be both a strategic and creative partner to other types of brands, and I look forward to helping make that happen,” says Netzley.

When Netzley isn’t training for the 2018 LA Marathon, she’s busy fundraising for causes close to her heart, including the Leukemia & Lymphoma Society, for which she was nominated as the organization’s 2016 Woman of the Year. She currently sits on the society’s leadership committee.

LumaForge offering support for shared projects in Adobe Premiere

LumaForge, which designs and sells high-performance servers and shared storage appliances for video workflows, will be at IBC this year showing full support for new collaboration features in Adobe Premiere Pro CC. When combined with LumaForge’s Jellyfish or ShareStation post production servers, the new Adobe features — including multiple open projects and project locking — allow production groups and video editors to work more effectively with shared projects and assets. This is something that feature film and TV editors have been asking for from Adobe.

Project locking allows multiple users to work with the same content. In a narrative workflow, an editing team can divide their film into shared projects per reel or scene. An assistant editor can get to work synchronizing and logging one scene, while the editor begins assembling another. Once the assistant editor is finished with their scene, the editor can refresh their copy of the scene’s Shared Project and immediately see the changes.

An added benefit of using Shared Projects on productions with large amounts of footage is the significantly reduced load time of master projects. When a master project is broken into multiple shared project bins, footage from those shared projects is only loaded once that shared project is opened.

“Adobe Premiere Pro facilitates a broad range of editorial collaboration scenarios,” says Sue Skidmore, partner relations for Adobe Professional Video. “The LumaForge Jellyfish shared storage solution complements and supports them well.”

All LumaForge Jellyfish and LumaForge ShareStation servers will support the Premiere Pro CC collaboration features for both Mac OS and Windows users, connecting over 10Gb Ethernet.

Check out their video on the collaboration here.

Sony Imageworks’ VFX work on Spider-Man: Homecoming

By Daniel Restuccio

With Sony’s Spider-Man: Homecoming getting ready to release digitally on September 26 and on 4K Ultra HD/Blu-ray, Blu-ray 3D, Blu-ray and DVD on October 17, we thought this was a great opportunity to talk about some of the film’s VFX.

Sony Imageworks has worked on every single Spider-Man movie in some capacity since the 2002 Sam Raimi version. On Spider-Man: Homecoming, Imageworks worked mostly on the “third act,” which encompasses the warehouse, hijacked plane and beach destruction scenes. This meant delivering over 500 VFX shots, created by a team of over 30 artists and compositors (which at one point peaked at 200), and rendering out finished scenes at 2K.

All of the Imageworks artists used Dell R7910 workstations with Intel Xeon CPU E5-2620 24 cores, 64GB memory and Nvidia Quadro P5000 graphics cards. They used Cinesync for client reviews and internally they used their in-house Itview software. Rendering technology was SPI Arnold (not the commercial version) and their custom shading system. Software used was Autodesk 2015, Foundry’s Nuke X 10.0 and Side Effects Houdini 15.5. They avoided plug-ins so that their auto-vend, breaking of comps into layers for the 3D conversion process, would be as smooth as possible. Everything was rendered internally on their on-premises renderfarm. They also used the Sony “Kinect” scanning technique that allowed their artists to do performance capture on themselves and rapidly prototype ideas and generate reference.

We sat down with Sony Imageworks VFX supervisor Theo Bailek, who talks about the studio’s contribution to this latest Spidey film.

You worked on The Amazing Spider-Man in 2012 and The Amazing Spider-Man 2 in 2014. From a visual effects standpoint, what was different?
You know, not a lot. Most of the changes have been iterative improvements. We used many of the same technologies that we developed on the first few movies. How we do our city environments is a specific example of how we build off of our previous assets and techniques, leveraging off the library of buildings and props. As the machines get faster and the software more refined, it allows our artists increased iterations. This alone gave our team a big advantage over the workflows from five years earlier. As the software and pipeline here at Sony has gotten more accessible, it has allowed us to more quickly integrate new artists.

It’s a lot of very small, incremental improvements along the way. The biggest technological change between now and the early Spider-Mans is our rendering technology. We use a more physically accurate incarnation of our global illumination Arnold renderer. As the shaders and rendering algorithms become more naturalistic, we’re able to conform our assets and workflows. In the end, this translates to a more realistic image out of the box.

The biggest thing on this movie was the inclusion of Spider-Man in a Marvel Universe: a different take on this film and how they wanted it to go. That would be probably the biggest difference.

Did you work directly with director Jon Watts, or did you work with production VFX supervisor Janek Sirrs in terms of the direction on the VFX?
During the shooting of the film I had the advantage of working directly with both Janek and Jon. The entire creative team pushed for open collaboration, and Janek was very supportive toward this goal. He would encourage and facilitate interaction with both the director and Tom Holland (who played Spider-Man) whenever possible. Everything moved so quickly on set that oftentimes if you waited to suggest an idea you’d lose the chance, as they would have to set up for the next scene.

The sooner Janek could get his vendor supervisors comfortable interacting, the bigger our contributions. While on set I often had the opportunity to bring our asset work and designs directly to Jon for feedback. There were times on set when we’d iterate on a design three or four times over the span of the day. Getting this type of realtime feedback was amazing. Once post work began, most of our reviews were directly with Janek.

When you had that first meeting about the tone of the movie, what was Jon’s vision? What did he want to accomplish in this movie?
Early on, it was communicated from him through Janek. It was described as, “This is sort of like a John Hughes, Ferris Bueller’s take on Spider-Man. Being a teenager, he’s not meant to be fully in control of his powers or the responsibility that comes with them. This translates to not always being super-confident or proficient in his maneuvers.” That was the basis of it.

Their goal was a more playful, relatable character. We accomplished this by being more conservative in our performances, of what Spider-Man was capable of doing. Yes, he has heightened abilities, but we never wanted every landing and jump to be perfect. Even superheroes have off days, especially teenage ones.

This being part of the Marvel Universe, was there a pool of common assets that all the VFX houses used?
Yes. With the Marvel movies, they’re incredibly collaborative and always use multiple vendors. We’re constantly sharing the assets. That said, there are a lot of things you just can’t share because of the different systems under the hood. Textures and models are easily exchanged, but how the textures are combined in the material and shaders… that makes them not reusable given the different renderers at companies. Character rigs are not reusable across vendors as facilities have very specific binding and animation tools.

It is typical to expect only models, textures, base joint locations and finished turntable renders for reference when sending or receiving character assets. As an example, we were able to leverage the Avengers Tower model we received from ILM to some extent. We did supply our Toomes costume model and Spider-Man character and costume models to other vendors as well.

The scan data of Tom Holland, was it a 3D body scan of him or was there any motion capture?
Multiple sessions were done throughout the production process. A large volume of stunt and test footage was shot with Tom before filming, which proved to be invaluable to our team. He’s incredibly athletic and can do a lot of his own stunts, so the mocap takes we came away with were often directly usable. Given that Tom could do backflips and somersaults in the air, we were able to use this footage as reference for how to instruct our animators later on down the road.
Toward the latter end of filming we did a second capture session, focusing on the shots we wanted to acquire using specific mocap performances. Then, several months later, we followed up with a third mocap session to get any new performances required as the edit solidified.

As we were trying to create a signature performance that felt like Tom Holland, we stuck to his performances whenever possible. On the rare occasions when a stunt was too dangerous, a stuntman was used. Other times we resorted to our own in-house method of performance capture, using a modified Xbox Kinect system to record our own animators as they acted out performances.

In the end, performance capture accounted for roughly 30% of the character animation for Spider-Man and Vulture in our shots, with the remaining 70% completed using traditional keyframed methods.

How did you approach the fresh take on this iconic film franchise?
It was clear from our first meeting with the filmmakers that Spider-Man in this film was intended to be a more relatable and light-hearted take on the genre. Yes, we wanted to take the characters and their stories seriously, but not at the expense of having fun with Peter Parker along the way.

For us that meant that despite Spider-Man’s enhanced abilities, how we displayed those abilities on screen needed to always feel grounded in realism. If we faltered on this goal, we ran the risk of eroding the sense of peril and therefore any empathy toward the characters.

When you’re animating a superhero it’s not easy to keep the action relatable. When your characters possess abilities that you never see in the real world, it’s a very thin line between something that looks amazing and something that is amazingly silly and unrealistic. Over-extend the performances and you blow the illusion. Given that Peter Parker is a teenager and he’s coming to grips with the responsibilities and limits of his abilities, we really tried to key into the performances from Tom Holland for guidance.

The first tool at our disposal, and the most direct representation of Tom as Spider-Man, was, of course, motion capture of his performances. On three separate occasions we recorded Tom running through stunts and other generic motions. For the more dangerous stunts, wires and a stuntman were employed as we pushed the limit of what could be recorded. Even though the cables allowed us to record huge leaps, you couldn’t easily disguise the augmented feel of the actor’s weight and motion. Even so, every session provided us with amazing reference.

Though the bulk of the shots were keyframed, they were always informed by reference. We looked at everything that was remotely relevant for inspiration. For example, we have a scene in the warehouse where the Vulture’s wings are racing toward you as Spider-Man leaps into the air, stepping on top of the wings before flipping to avoid the attack. We found this amazing reference of people who leap over cars racing in excess of 70mph. It’s absurdly dangerous and hard to justify why someone would attempt a stunt like that, and yet it was the perfect inspiration for our shot.

In trying to keep the performances grounded and stay true to the goals of the filmmakers, we also found it was better to err on the side of simplicity when possible. Typically, when animating a character, you look for opportunities to create strong silhouettes so the actions read clearly, but we tended to ignore these rules in favor of keeping everything dirty, with an unscripted feel. We let his legs cross over and knees knock together. Our animation supervisor, Richard Smith, pushed our team to follow the guidelines of “economy of motion.” If Spider-Man needed to get from point A to B he’d take the shortest route — there’s no time to strike an iconic pose in between!

Let’s talk a little bit about the third act. You had previsualizations from The Third Floor?
Right. All three of the main sequences we worked on in the third act had extensive previs completed before filming began. Janek worked extremely closely with The Third Floor and the director throughout the entire process of the film. In addition, Imageworks was tapped to help come up with ideas and takes. From early on it was a very collaborative effort on the part of the whole production.
The previs for the warehouse sequence was immensely helpful in the planning of the shoot. Given we were filming on location and the VFX shots would largely rely on carefully choreographed plate photography and practical effects, everything had to be planned ahead of time. In the end, the previs for that sequence resembled the final shots in most cases.

The digital performances of our CG Spider-Man varied at times, but the pacing and spirit remained true to the previs. As our plane battle sequence was almost entirely CG, the previs stage was more of an ongoing process for this section. Given that we weren’t locked into plates for the action, the filmmakers were free to iterate and refine ideas well past the time of filming. In addition to The Third Floor’s previs, Imageworks’ internal animation team also contributed heavily to the ideas that eventually formed the sequence.

For the beach battle, we had a mix of plate and all-CG shots. Here the previs was once again invaluable in informing the shoot and subsequent reshoots. As there were several all-CG beats in the fight, we again had sections where we continued to refine and experiment until late into post. As with the plane battle, Imageworks’ internal team contributed extensively to the pre- and postvis of this sequence.

One scene you mentioned — the fight in the warehouse — is described in the production notes as being inspired by an actual scene from the comic The Amazing Spider-Man #33.
Yes, in our warehouse sequence there is a series of shots directly inspired by the comic book’s panels. Different circumstances in the comic and our sequence lead to Spider-Man being trapped under debris, but Tom’s performance and the camera angles that were shot pay homage to the comic as he escapes. As a side note, many of those shots were added later in the production and filmed as reshoots.

What sort of CG enhancements did you bring to that scene?
For the warehouse sequence, we added digital Spider-Man, Vulture wings and CG destruction, enhanced the practical effects, and extended or repaired the plate as needed. The columns that the Vulture wings slice through as they circle Spider-Man were practically destroyed with small detonation charges. These explosives were rigged within cement that encased the warehouse’s actual steel girder columns. Fans on set were used to help mimic the turbulence that would be present from a flying wingsuit powered by turbines. These practical effects were immensely helpful for our effects artists, as they provided the best possible in-camera reference. We kept much of what was filmed, adding our fully reactive FX on top to help tie it into the motion of our CG wings.

There’s quite a bit of destruction when the Vulture wings blast through walls as well. For those shots we relied entirely on CG rigid body dynamics simulations, as filming the destruction would have been prohibitive and unreliable. Though most of the shots in this sequence had photographed plates, a few still required the background to be generated in CG. One shot in particular, with Spider-Man sliding back and rising up, stands out. As the shot was conceived later in the production, there was no footage for us to use as our main plate. We did, however, have many tiles shot of the environment, which we were able to use to quickly reconstruct the entire set in CG.

I was particularly proud of our team for their work on the warehouse sequence. The quality of our CG performances and the look of the rendering are difficult to distinguish from the live action. Even the rare all-CG shots blend seamlessly with the scenes around them.

When you were looking at that ending plane scene, what sort of challenges were there?
Since over 90 shots within the plane sequence were entirely CG, we faced many challenges, for sure. With such a large number of shots free of the typical constraints that practical plates impose, we knew a turnkey pipeline was needed; there just wouldn’t be time to build custom workflows for each shot type. This was something Janek, our client-side VFX supervisor, stressed from the onset: “show early, show often and be prepared to change constantly!”

To accomplish this, we developed a balance of 3D and 2D techniques to make shot production as flexible as possible. Using the 3D capabilities of Nuke, our compositing software, we were able to offload significant portions of the shot production into the compositors’ hands. For example: the city ground plane you see through the clouds, the projections of imagery on the plane’s cloaking LEDs and the damaged, flickering LEDs were all handled in the composite.

A unique challenge that stands out in the sequence is definitely the cloaking. Making an invisible jet was only half of the equation. The LEDs that formed the basis of the effect also needed to be able to illuminate our characters, in both wide and extreme close-up shots. We’re talking about millions of tiny light sources, which is a particularly expensive rendering problem to tackle. Mix in the fact that the design of these flashing light sources is highly subjective, and thus prone to many revisions before the look is right.

Painting control texture maps for the locations of these LEDs wouldn’t have been feasible at the detail needed for our extreme close-up shots. Modeling them in would have been prohibitive as well, resulting in excessive geometric complexity. Instead, using Houdini, our effects software, we built algorithms to automatically distribute point clouds of data that intelligently represent each LED position. This technique could be reprocessed as necessary without incurring the large amounts of time a texture or model solution would have required. That was a real factor, as the plane’s base model often needed adjustments to accommodate design or performance changes. The point cloud data was then used by our rendering software to instance geometric approximations of inset LED compartments on the surface.

Interestingly, this was a technique we adopted from rendering technology we use to create room interiors for our CG city buildings. When rendering large CG buildings we can’t afford to model the hundreds and sometimes thousands of rooms you see through the office windows. Instead of modeling the complex geometry you see through the windows, we procedurally generate small inset boxes for each window that have randomized pictures of different rooms. This is the same underlying technology we used to create the millions of highly detailed LEDs on our plane.
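Imageworks’ actual tools live inside Houdini and their proprietary renderer, so purely to illustrate the underlying idea (scatter a point cloud across a surface, then let the renderer instance geometry at each point), here is a minimal NumPy sketch of an area-weighted surface scatter. All names are hypothetical; Houdini’s scatter tools do this, and far more, natively.

```python
import numpy as np

def scatter_points(verts, faces, count, seed=0):
    """Area-weighted random scatter over a triangle mesh.

    verts: (V, 3) float array of vertex positions
    faces: (F, 3) int array of triangle vertex indices
    Returns a (count, 3) array of points, each one standing in
    for an LED position the renderer could instance geometry onto.
    """
    rng = np.random.default_rng(seed)
    tris = verts[faces]                                   # (F, 3, 3)
    # Triangle areas via the cross product, used as sampling weights
    cross = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    tri_idx = rng.choice(len(faces), size=count, p=areas / areas.sum())

    # Uniform barycentric coordinates inside each chosen triangle
    u, v = rng.random(count), rng.random(count)
    flip = (u + v) > 1.0                                  # fold back inside
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    t = tris[tri_idx]
    return (t[:, 0] + u[:, None] * (t[:, 1] - t[:, 0])
                    + v[:, None] * (t[:, 2] - t[:, 0]))
```

Because the distribution is procedural, it can simply be re-run whenever the base model changes, which is exactly the advantage the interview calls out over painted maps or modeled-in geometry.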

First our lighters supplied base renders for our compositors to work with inside Nuke. The compositors quickly animated flashing damage to the LEDs by projecting animated imagery onto the plane using Nuke’s 3D capabilities. Once we got buy-off on the animation of the imagery, we’d pass this work back to the lighters as 2D layers that could be used as texture maps for our LED lights in the renderer. These images told each LED when it was on and what color it needed to be. This back-and-forth technique allowed us to iterate rapidly on the look of the LEDs in 2D before committing to final 3D renders with all of the expensive interactive lighting.
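To make that 2D-to-3D handoff concrete: once the compositors’ animated layers come back as texture maps, each LED needs only a lookup into that map at its own UV coordinate. Below is a minimal sketch of the per-LED lookup, with all names hypothetical (in the real pipeline this would happen inside the renderer):

```python
import numpy as np

def sample_led_controls(led_uvs, control_frame, off_threshold=0.02):
    """Look up each LED's state for one frame from a 2D control map.

    led_uvs:       (N, 2) per-LED UV coordinates in [0, 1)
    control_frame: (H, W, 3) float image painted/animated in comp
    Returns per-LED RGB colors and a boolean "lit" flag.
    """
    h, w, _ = control_frame.shape
    # Nearest-neighbor lookup: UV coordinates -> pixel coordinates
    px = np.clip((led_uvs[:, 0] * w).astype(int), 0, w - 1)
    py = np.clip((led_uvs[:, 1] * h).astype(int), 0, h - 1)
    colors = control_frame[py, px]
    on_mask = colors.max(axis=1) > off_threshold          # near-black = off
    return colors, on_mask
```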

Is that a proprietary system?
Yes, this is a shading system that was actually developed for our earlier Spider-Man films back when we used RenderMan. It has since been ported to work in our proprietary version of Arnold, our current renderer.

Behind the Title: Park Road Post’s Anthony Pratt

NAME: Anthony Pratt

COMPANY: Park Road Post Production

CAN YOU DESCRIBE YOUR COMPANY?
Park Road is a bespoke post production facility and is part of the Weta Group of Companies, based on the Miramar Peninsula in Wellington, New Zealand.

We are internationally recognized for our award-winning sound and picture finishing for TV and film. We walk alongside all kinds of storytellers, supporting them from shoot through to final delivery.

WHAT’S YOUR JOB TITLE?
Workflow Architect — Picture

WHAT DOES THAT ENTAIL?
I get to think about how we can work with a production to wrap process and people around a project, all with a view to achieving the best result at the end of that process. It’s about taking a step back and challenging our current view while thinking about what’s next.

We spend a lot of time working with the camera department and dailies team, and integrating their work with editorial and VFX. I work alongside our brilliant director of engineering for the picture department, and our equally skilled systems technology team — they make me look good!

From a business development perspective, I try to integrate the platforms and technologies we advance into new opportunities for Park Road as a whole.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I quite like outreach around the company and the group, so presenting and sharing is fun — and it’s certainly not always directly related to the work in the picture department. Our relationships with film festivals, symposia, the local industry guilds and WIFT always excite me.

WHAT’S YOUR FAVORITE PART OF THE JOB?
My favorite time of all is when we get to see our clients work in a cinema with an audience for the first time — then the story is really real.

It’s great when our team is actively engaged as a creative partner, especially during the production phase. I enjoy learning from our technical team alongside our creative folk, and there’s always something to learn.

We have fantastic coffee and baristas; I get to help QC that throughout my day!

WHAT’S YOUR LEAST FAVORITE?
It’s always hard when a really fantastic story we’ve helped plan for isn’t greenlit. That’s the industry, of course, but there are some stories we really want to see told! Like everyone, there are a few Monday mornings that really need to start later.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I get a huge kick on the days we get to sign off the final DCP for a theatrical release. It’s always inspiring seeing all that hard work come together in our cinema.

I am also particularly fond of summer days where we can get away from the facility for a half hour and eat lunch on a beach somewhere with the crew — in Miramar a beach is only 10 minutes away.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I’d be building my own business and making my own work — if it wasn’t strictly film related it would still be narrative — and I’m always playing with technology, so no doubt I’d be asking questions about what that meant from a lived perspective, regardless of the medium. I’d quite probably be distilling a bit more gin as well!

WHY DID YOU CHOOSE THIS PROFESSION? HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I think it kind of chose me in the end… I’ve always loved the movies and experimented with work in various media types from print and theatre through animation and interactivity — there was always a technology overtone — before landing where I needed to be: in cinema.

I came to high-end film post somewhat obliquely, having built an early tapeless TV pipeline; I was able to bring that comfort with digital acquisition to an industry transitioning from 35mm in the mid 2000s.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I’m profoundly privileged to work for a company owned by Peter Jackson, and I have worked on every project of his since The Lovely Bones. We are working on Christian Rivers’ Mortal Engines at present. We recently supported the wonderful Jess Hall shooting on the Alexa 65 for Ghost in the Shell. He’s a really smart DOP.

I really enjoy our offshore clients. As well as the work we do with our friends in the USA, we’ve done some really great work recently with clients in China and the Middle East. Cultural fusion is exhilarating.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
We worked with director Geoff Murphy to restore and revisit his seminal 1983 New Zealand feature Utu. The resulting UTU Redux was the opening night feature for the 2013 NZ International Film Festival. It was incredibly good fun, it was honorable, and it is a true taonga in our national narrative.

A Park Road Mistika grading suite.

The Hobbit films were a big chunk of the last decade for us, and our team was recognized with multiple awards. The partnerships we built with SGO, Quantum, Red and Factorial are strong to this day. I was very fortunate to collect some of those awards on our team’s behalf, and was delighted to have that honor.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
I rely on clean water and modern medicine to help keep myself, and our wider community, safe from harm. And I am really conscious that to keep that progress moving forward we’re going to have to shepherd our natural world one hell of a lot better.

Powerful computing and fast Internet transformed not only our work, but time and distance for me. I’ve learned more about film, music and art because of the platforms running without friction on the Internet than I would have dared dream in the ‘90s.

I hold in my hand a mobile access point that can not only access a mind-bogglingly large world of knowledge and media, but can also dynamically represent that information for my benefit and allow me to acknowledge the value of trust in that connection — there’s hope for us in the very large being accessible by way of the very small.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I kind of abandoned Facebook a few years ago, but Film Twitter has some amazing writers and cinematographers represented. I tend to be a bit of a lurker most other places — sometimes the most instructive exercise is to observe! Our private company Slack channels supplement the rest of my social media time.

To be honest, most of our world is respectfully private, but I do follow @ParkRoadPost on Instagram and Twitter.

DO YOU LISTEN TO MUSIC WHILE YOU WORK? CARE TO SHARE YOUR FAVORITE MUSIC TO WORK TO?
Our team has a very broad range of musical tastes, and we tend to try and share that with each other… and there is always Radiohead. I have a not-so-secret love of romantic classical music and lush film scores. My boss and I agree very much on what rock (and a little alt-country) should sound like, so there’s a fair bit of that!

When my headphones are on there is sometimes old-school liquid or downbeat electronica, but mostly I am listening to the best deep house that Berlin and Hamburg have to offer while I work.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
A quick walk around the peninsula is a pretty good way of chilling out, especially if it’s dusk or dawn — then I can watch some penguins on the rocks while ships come in and out of the harbor!

My family (including the furry ones!) are incredible, and they help provide perspective in all things.

Wellington is the craft beer capital of New Zealand, so there’s always an opportunity for some food and an interesting drop from Garage Project (or Liberty brewing out of Auckland) with mates in town. I try and hang out with a bunch of my non-industry friends every month or so — those nights are definitely my favorite for music, and are good for my soul!

Colorist Stephen Nakamura on grading Stephen King’s It

By Randi Altman

A scary clown can be thanked for helping boost what had been a lackluster summer box office. In its first weekend, Stephen King’s It opened with an impressive $125 million. Not bad!

Stephen Nakamura

This horror film takes place in a seemingly normal small town, but of course things aren’t what they seem. And while most horror films set most of the action in shadowy darkness, the filmmakers decided to let a lot of this story unfold in the bright glow of daylight in order to make the most of the darkness that eventually takes over. That presented some interesting opportunities for Deluxe’s Company 3 veteran colorist Stephen Nakamura.

How early did you get involved on It?
We came onboard early to do the first trailer. The response on YouTube and other places was enormous. I can’t speak for the filmmakers, but that was when I first realized how much excitement there was out there for this movie.

Had you worked with director Andy Muschietti before? What kind of direction were you given and how did he explain the look he wanted?
One of the concepts about the look that evolved during production, and we continued it in the DI, was this idea that a lot of the film takes place in fairly high-key situations, not the kind of dark, shadowy world some horror films exist in. It’s a period piece. It’s set in a small town that sort of looks like this pleasant place to be, but all this wild stuff is happening! You see these scary movies and everything’s creepy and it’s overcast outside and it’s clearly a horror movie from the outset. Naturally, that can work, but it can be even scarier when you play against that. The violence and everything feels more shocking.

How would you describe the look of the film?
You have the parts that are like I just described and then it does get very dark and shadowy as the action goes into dark spaces and into the sewer. And all that is particularly effective because we’ve kind of gotten to know all the kids who are in what’s called the Losers’ Club, and we’re rooting for them and scared about what might happen to them.

Can you talk about the Dolby Cinema pass? People generally talk about how bright you can get something with HDR, but I understand you were more interested in how dark the image can look.
Right. When you’re working in HDR, like Dolby lets you do, you have a lot more contrast to work with than you do in the normal digital cinema version. I worked on some of the earliest movies to do a Dolby Cinema version, and when I was working with Brad Bird and Claudio Miranda on Tomorrowland, we experimented with how much brighter we could make portions of the frame than what would be possible with normal digital cinema projection, without making the image into something that had a completely different feel from the P3 version. But when you’re in that space, you can also make things appear much, much darker too. So the overall level in the theater can get really dark, but because of that contrast you can actually see more detail on a person’s face, or a killer clown’s face, even when the overall level is so low. It’s more like you’re really in that dark space.

It doesn’t make it a whole different movie or anything, but it’s a good example of where Dolby can add something to the experience. I’d tell people to see it in Dolby Cinema if they could.
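As background on why so much shadow detail survives at such low levels: Dolby Cinema masters are encoded with the SMPTE ST 2084 (PQ) curve, which devotes a disproportionate share of code values to the darkest luminances. A small illustrative sketch follows; the constants are the published ST 2084 values, while the sample luminances are chosen only for comparison.

```python
# SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance in nits -> signal [0, 1]
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(nits):
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

# Roughly the bottom 15% of the signal range encodes everything below
# 1 nit, which is exactly where a dark horror grade lives.
for nits in (0.005, 0.1, 1, 48, 108):
    print(f"{nits:>8} nits -> signal {pq_encode(nits):.3f}")
```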

There was obviously a lot of VFX work that helped the terrifying shapeshifting clown, Pennywise, do what he does, but you also did some work on him in the DI, correct?
Yes. We had alpha channel mattes cut around his eyes for every shot he’s in and we used the color corrector to make changes to his eyes. Sometimes the changes were very subtle — making them brighter or pushing the color around — and sometimes we went more extreme, but I don’t want to talk about that too much. People can see for themselves when they see the movie.

What system do you use, and why? How does that tool allow you to be more creative?
I use Blackmagic’s DaVinci Resolve. I’ve been a colorist since the ‘90s and I’ve used Resolve pretty much my whole career. There are other systems out there that are also very good, but for the kinds of projects I do and the way I like to work, I find it the fastest and most intuitive and every time there’s a new upgrade, I find some new tool that helps me be even more efficient.

Quantum targets smaller post houses with under $25K NAS storage

Quantum is now offering an entry-level NAS storage solution targeting post houses and corporate video departments. Xcellis Foundation is a high-performance, entry-level workflow storage system specifically designed to address the technical and budgetary requirements of small- to medium-sized studios.

Based on Quantum’s StorNext shared file system and data management platform, this new product offers enterprise-class Xcellis storage, including high performance and scalability, in a NAS appliance for under $25,000.

The 3U Xcellis Foundation system includes Quantum’s QXS disk storage chassis and Workflow Director appliance, which provides NAS connectivity and support for billions of files across up to 64 virtual file systems. Xcellis Foundation comes standard with 48TB of raw capacity, and users can upgrade to 72TB or 96TB. When the user is ready to scale the system, adding performance and capacity can be done cost-effectively and non-disruptively by simply connecting more storage. Connectivity is via dual 10 GbE or optional 40 GbE, and NAS protocol support is included with no per-seat licensing.

Here are some additional details about the new system:
• works with higher video resolutions, including 1080p and 4K, without introducing complexity or unnecessary cost to the workflow
• cost-effective IP connectivity over standard NAS protocols
• advanced data management capabilities that optimize performance and maximize capacity across different storage tiers while assuring that content is always in the right place at the right time
• seamless integration into a multi-tier storage infrastructure that includes flash, disk, nearline object storage, public cloud and tape archive
• the ability to scale up and scale out through readily extended capacity, connectivity and redundancy
• simple installation and setup via a web-based GUI

Quantum will be showing Xcellis Foundation at the upcoming IBC Show in Amsterdam, and the new appliance will be available through Quantum and its reseller partners later this month.

Speaking of resellers, here is what one — Nick Smith, director of technology at JB&A Distribution — had to say about the new system: “Xcellis Foundation gives our reseller community exactly what it’s been wanting ― a Quantum StorNext-powered shared storage solution designed specifically for smaller video production environments. [It combines] easy NAS connectivity, 4K-ready performance and simplified setup and management, all at a cost-effective price point.”

Sim Group purchases Vancouver’s The Crossing Studios

Sim Group, a family of companies offering production and post services across TV, feature film and commercials, has strengthened its place in the industry with the acquisition of Vancouver-based The Crossing Studios. This full-service studio and production facility adds approximately 400,000 square feet to Sim’s footprint.

With proximity to downtown Vancouver, the city’s international airport and all local suppliers, The Crossing Studios has been home to many television series, specials and feature films. In addition to providing full-service studio rentals, mill/paint/lockup space and production office space, The Crossing Studios also offers post production services, including Avid suite rentals, dailies, color correction and high-speed connectivity.

The Crossing Studios was founded by Dian Cross-Massey in 2015 and is the second-largest studio facility in the Lower Mainland, comprising nine buildings in Vancouver, all located just 30 minutes from downtown. Cross-Massey has over 25 years of experience in the industry, having worked as a writer, executive producer, visual effects supervisor, director, producer and production manager. Thanks to this experience, Cross-Massey prides herself on knowing first-hand how to anticipate client needs and contribute to the success of her clients’ projects.

“When I was a producer, I worked with Sim regularly and always felt they had the same approach to fair, honest work as I did, so when the opportunity presented itself to combine resources and support our shared clients with more offerings, the decision to join together felt right,” says Cross-Massey.

The Crossing Studios clients include Viacom, Fox, Nickelodeon, Lifetime, Sony Pictures, NBCUniversal and ABC.

“The decision to add The Crossing Studios to the Sim family was a natural one,” says James Haggarty, CEO, Sim Group. “Through our end-to-end services, we pride ourselves on delivering streamlined solutions that simplify the customer experience. Dian and her team are extremely well respected within the entertainment industry, and together, we’ll not only be able to support the incredible growth in the Vancouver market, but clients will have the option to package everything they need from pre-production through post for better service and efficiencies.”

My Passion Project: We Call Her Yolanda

By Anthony Bari Jr.

For the past couple of years, I’ve been producing a documentary called We Call Her Yolanda. After volunteering on disaster relief in the Philippines in the aftermath of 2013’s super typhoon, I was taken with the people’s positivity and resiliency even though they had lost everything, including loved ones and livelihoods. I was inspired to go back and start filming a documentary, the shooting for which just wrapped.

While the rest of the world knew the devastating storm as Typhoon Haiyan, Filipinos had their own name for it — Super Typhoon Yolanda. As such, We Call Her Yolanda was an apt title for the film.

Production
For We Call Her Yolanda, we completed four shoots over two years on a mix of cameras and formats. We used two GoPro Hero4 Black cameras (one was mounted on a drone and the other was first-person view), two Canon C300s, a Sony FS7 and a Canon 5D Mark II. We always travelled with at least two laptops for transcoding and media management. We also carried G-Technology hard drives in our backpacks. I relied heavily on software presets for this project, setting up a bunch of them before we left for the Philippines so we could bag and tag all files during the trip.

Just one of Bari’s shooting setups.

For those who are still dragging and dropping hundreds of gigabytes of media from card to drive, beware. That method is wide open to error. ShotPut Pro, Imagine Products’ offloading app, is my go-to tool for safely offloading media. Computers and technology aren’t perfect, so offloading camera cards and making multiple backups is incredibly important. Version 6 has a new interface that looks just like the Finder window on my Mac.

The software’s checksumming capability verifies the integrity of every data transfer and raises a flag if things don’t add up. This feature is not only important for ensuring complete backups, but it also helps pinpoint problems with hardware or systems — and gives me the visual tools to explain the problems to clients.
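ShotPut Pro’s internals aren’t documented in this piece, but the idea behind checksum-verified offloading is easy to show. Below is a minimal Python sketch of a verified offload (hash the source, copy to each destination, re-hash and compare). MD5 is chosen purely for illustration; offload tools typically offer several hash types.

```python
import hashlib
import shutil
from pathlib import Path

def md5sum(path, chunk=1 << 20):
    """Stream a file through MD5 so large camera files never load whole."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def offload(card_file, destinations):
    """Copy one camera file to multiple drives, verifying each copy."""
    source_hash = md5sum(card_file)
    for dest_dir in destinations:
        dest = Path(dest_dir) / Path(card_file).name
        shutil.copy2(card_file, dest)
        if md5sum(dest) != source_hash:        # flag silent corruption
            raise IOError(f"Checksum mismatch on {dest}")
    return source_hash
```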

Rather than just sticking a camera in people’s faces and asking them for their stories during the Yolanda shoots, we spent a lot of time getting to know people and making them comfortable with our team and the technology. Meanwhile, we shot lots of B-roll. Between the relationship building, the filming, the travel and other rigors of the shoot, it was a busy project that kept our whole team going nonstop — which meant I couldn’t always take care of media management myself like I would prefer.

Another critical tool in my data-wrangling workflow also happens to be from Imagine Products — ProxyMill transcoding software, which they recently revamped into PrimeTranscoder. I use this software’s presets a lot. By digging into the tools on the preset menu, flipping switches, or checking/unchecking boxes in the interface, I can program all sorts of functionality and even map certain functions to specific scenarios. For example, I can merge multiple interviews into a single low-res file and program the tool to apply timecode and/or a LUT file to it before sending to a producer or client for review. The fact that I can kick out a low-resolution, color corrected clip that has everything on it and send it off immediately is a big deal. I just dial it in, save it, and it’s ready to go.
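PrimeTranscoder’s preset format isn’t shown in the article, so purely as a stand-in, here is roughly what one such preset boils down to, expressed as a Python-driven ffmpeg command: concatenate several interview clips, apply a LUT, burn in timecode and write a low-res review file. The file names and LUT are hypothetical, the concat demuxer assumes the clips share codec settings, and drawtext assumes an ffmpeg build with fontconfig.

```python
import subprocess

# Hypothetical inputs: interviews.txt is a concat list with lines like
#   file 'interview_A.mov'
# and show_grade.cube is the LUT supplied for review copies.
cmd = [
    "ffmpeg",
    "-f", "concat", "-safe", "0", "-i", "interviews.txt",
    # Apply the LUT, scale down for review and burn a timecode window
    "-vf", ("lut3d=show_grade.cube,scale=1280:-2,"
            "drawtext=timecode='01\\:00\\:00\\:00':rate=24:"
            "fontcolor=white:box=1:boxcolor=black@0.5:x=20:y=20"),
    "-c:v", "libx264", "-crf", "23", "-c:a", "aac",
    "review_proxy.mp4",
]
subprocess.run(cmd, check=True)
```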

Street view of San Joaquin.

The best part about this is that I don’t have to man the station the whole time. I’m ultimately responsible for the data, and I get very nervous when I don’t have control over it, but this workflow lets me delegate the media management duties when needed and trust that it will be done right, even by people with no post experience.

I like to work with native formats whenever possible, but sometimes you have to rely on proxies, especially when some of the footage is shot in data-heavy 4K. With this project, I used Imagine Products’ HD-VU2. This quality-check tool allowed me to preview footage in its native format after a shoot and decide which footage to pull. Then we’d apply ProxyMill to color correct it or add timecode as needed, and then transcode it into one massive ProRes clip using the clip-stitch feature. This capability came in handy when merging all interviews into one file for the translator and when selecting and stabilizing “best-of” drone footage to get it ready for editing later in Adobe Premiere.

Upon returning from the Philippines after each shoot, I made a strict practice of cloning the data from the portable drives onto multiple 4TB G-Technology desktop drives that are more suitable for editing. (We aim never to edit from the portable drives!) During the shoot, there were a handful of moments when we were literally sitting under a coconut tree with a long cable connected to a generator. That made for very unconventional (and nerve-wracking) media management, so I always go for gear with a dedicated power source whenever possible.

Post
Back in Los Angeles working on post for Yolanda, I turned my home into a post production studio. I worked with a carefully chosen team of eight pro editors who operated in rotation at my house, often late into the night. I supplied the food and drinks (you’ve got to keep up morale!), and they showed up and got to work. Some editors brought their own laptops, while others used my two spare MacBook Pros. All computers were equipped with Adobe Premiere CC.

The G-Technology desktop drives each contained the same set of footage, so whenever someone picked up a project, they simply ripped away at the footage from one of those drives. There were also two smaller G-Technology drives floating around with a total of about 600GB of extra footage (such as 4K drone footage) that people could select as needed. I used Basecamp to track the project and assign the work, and CalDigit Thunderbolt stations helped with connectivity.


Anthony Bari is a director/engineer/editor/post consultant. In addition to his freelance and consulting roles, he has worked on major sporting events, TV shows, reality shows and documentaries. He earned an Emmy Award as part of the technical team for the 2015 FIFA Women’s World Cup on FS1.


Evoking the beauty and power of Dunkirk with 65mm

FotoKem worked to keep Christopher Nolan’s 65mm source natively photochemical and to provide the truest-to-film digital cinema version possible

By Adrian Pennington

Tipped for Oscar glory, Christopher Nolan’s intense World War II masterpiece, Dunkirk, has pushed the boundaries further than any film before it. Having shot sequences of his previous films (including Interstellar) on IMAX, this time the director made the entire picture on 65mm negative. Approximately 75% of the film was captured on 65mm/15-perf IMAX (1.43:1) and the rest on 65mm/5-perf (2.2:1) on Panavision cameras.

Christopher Nolan on set.

Nolan’s vision and passion for the true film experience was carried out by Burbank-based FotoKem in what became the facility’s biggest and most complex large format project to date. In addition to the array of services that went into creating two 65mm master negatives and 70mm release prints in both 15p and 5p formats, FotoKem also provided the movie’s DCP deliverables based on in-house color science designed to match the film master. With the unique capability to project 70mm film (on a Century JJ projector) side by side with the digital projection of 65mm scans, FotoKem meticulously replicated the organic film look shot by Hoyte van Hoytema, ASC, NSC, FSF, and envisioned by Nolan.

In describing the large format film process, Andrew Oran, FotoKem’s VP of large format services, explains, “Hoyte was in contact with FotoKem’s Dan Muscarella (the movie’s color timer) throughout production, providing feedback on the 70mm contact and 35mm reduction dailies being screened on location. The pipeline was devised so that the IMAX (65mm/15p) footage was timed on a customized 65mm Colormaster by FotoKem color timer Kristen Zimmermann, under Muscarella’s supervision. Her timing lights were provided to IMAX Post, who used those for producing 35mm reduction prints. Those prints were screened in Los Angeles by IMAX, Muscarella and editorial, who in turn provided feedback to production on location. Prints and files travelled securely back and forth between FotoKem and IMAX throughout each day by in-house delivery personnel and via FotoKem’s proprietary globalDATA e-delivery platform.”

A similar route was taken for the Panavision (65mm/5p) footage — also under Muscarella’s keen eye — prior to FotoKem producing 70mm/5p contact daily prints. A set of both prints (35mm and 70mm) was transported for screening in a trailer on location thousands of miles away in England, France (including shooting on Dunkirk beach itself) and The Netherlands. Traveling with editorial during principal photography was a 70mm projector on which editor Lee Smith, ACE, and Nolan could view dailies in 70mm/5 perf. A 35mm Arri LocPro was also used to watch reduction prints on location.

Oran adds, “Zimmermann also applied color timing lights to the 65mm/5p negatives for contact printing to 70mm at FotoKem. Ultimately, prints from every reel of film negative in both formats were screened by Dan at FotoKem before shipping to production. This way, Dan ensured that the color was as Nolan and van Hoytema envisioned. Later, the goal for the DCP was to give the audience the same feel as if they were watching the film version.”

HD deliverables for editorial and studio viewing were created on a customized Millennium telecine. Warner Bros. and Nolan required the quality be high at this step of the process — which can be challenging for 65mm formats. To do this, FotoKem made improvements to the 65mm Millennium telecine machine’s optical and light path, and fed the scans through a custom keycode and metadata workflow in the company’s nextLAB media management platform. Scans for the film’s digital cinema mastering were done at 8K on FotoKem’s Imagica 65mm scanners.


Then, to produce the DCPs, FotoKem’s principal color scientist, Joseph Slomka, says, “We created color modeling tools using the negative, interpositive and print process to match the digital image to the film as precisely as technically possible. We sat down with film prints and verified that the modeling data matched a printed original negative in our DI suite with side by side projection.”

Walter Volpatto

This is where FotoKem colorist Walter Volpatto says he determined “how much” and “how close” to match the colors. “We did this by using a special machine — called a Harrahscope Minimax Comparator Projector, developed by Mark Harrah and on loan from the Walt Disney Studios — to project still IMAX frames on the screen,” Volpatto elaborates. “We did this for 400 images from the movie and looked at single frames of digital (projected from a Barco 4K DLP) versus film from Harrahscope, and compared, using the data created by the modeling tools.”

Volpatto worked mainly with RGB offsets in Resolve after each single-frame verification to maintain a similarity to traditional color timing. “We also modified the DLP white point settings of the projector for purposes of maintaining the closest match,” he says. “Then, once all the tweaks were made with the stills, we moved to motion picture film reels. Everything described in the printer lights at the film stage was translated to digital based on modeling data.”
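As background on what “translating printer lights to digital” involves: on the common additive printer scale, one light point is a 0.025 log10-exposure step, so a timing offset maps to a simple per-channel gain in linear light. The sketch below captures only that first-order idea; FotoKem’s modeling tools also account for the characteristic curves of the negative and print stocks, and sign conventions vary between labs.

```python
# One printer point on the common additive (Bell & Howell) scale is a
# 0.025 log10-exposure step; 25-25-25 is a typical neutral mid-point.
POINT_LOG_E = 0.025
NEUTRAL = (25, 25, 25)

def printer_lights_to_gain(r, g, b, neutral=NEUTRAL):
    """First-order translation of RGB timing lights into linear gains."""
    return tuple(10 ** ((pts - mid) * POINT_LOG_E)
                 for pts, mid in zip((r, g, b), neutral))

# Two extra points of red and one less of blue:
print(printer_lights_to_gain(27, 25, 24))   # (~1.122, 1.0, ~0.944)
```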

In addition to working with Dan (Muscarella) on the film screenings to see the quality he would need to match, Volpatto says that working on Interstellar also helped inform him how to approach this process. “It’s about getting the look that Nolan wants — I just had to replicate it with tremendous accuracy on Dunkirk.”

Joseph Slomka

Aside from the standard DCP, two further digital masters were created for distribution: one for digital IMAX, built from the IMAX scans, and a Dolby Digital Cinema HDR master from the same source material.

“For the Dolby pass, we had to create another set of color science tools — that still represented Nolan’s vision — to exactly replicate the look of film to HDR,” says Slomka. “Because we had all the computer modeling tools used earlier in the process to identify how the film behaved, we were able to build on that for the HDR version.”

Adds Volpatto, “The whole pipeline was designed to preserve the original viewing experience of print film – everything had to integrate purely and unnoticeably. Having this film and color science knowledge here at FotoKem, it’s hard to see that anybody else could achieve what we did at this level.”

Mistika Ultima offering storage connectivity via ATTO HBAs

SGO has certified ATTO’s 12Gb ExpressSAS host bus adapters (HBAs) for use with its high-end post system, the Mistika Ultima. The new addition helps post teams better manage large data transfers and supports realtime editing of uncompressed 4K video.

The latest addition to the ATTO ExpressSAS family, the 12Gb SAS/SATA HBA provides users with fast storage connectivity while allowing scalability for next-gen platforms and infrastructures. Optimized for extremely low latency and high-bandwidth data transfer, ExpressSAS HBAs offer a wide variety of port configurations and support for RAID-0, -1 and -1e.

“Projects that our customers are working on are becoming incredibly data heavy and the integration of ATTO products into a Mistika solution will help smooth and speed up data transfers, shortening production times,” said Miguel Angel Doncel, CEO of SGO.

Chatting up IBC’s Michael Crimp about this year’s show

Every year, many from our industry head to Amsterdam for the International Broadcasting Convention. With IBC’s start date coming fast, what better time for the organization’s CEO, Michael Crimp, to answer questions about the show, which runs from September 15-19.

IBC is celebrating its 50th anniversary this year. How will you celebrate?
In addition to producing a commemorative book, and our annual party, IBC is starting a new charitable venture, supporting an Amsterdam group that provides support through sport for disadvantaged and disabled children. If you want to play against former Ajax players in our Saturday night match, bid now to join the IBC All-Stars.

It’s also about keeping the conversation going. We are 50 years on and have a huge amount to talk about — from Ultra HD to 5G connectivity, from IP to cyber security.

How has IBC evolved over the past 10 years?
The simple answer is that IBC has evolved along with the industry, or rather IBC has strived to identify the key trends which will transform the industry and ensure that we are ahead of the curve.

Looking back 10 years, digital cinema was still a work in progress: the total transition we have now seen was just beginning. We had dedicated areas focused on mobile video and digital signage, things that we take for granted today. You can see the equivalents in IBC2017, like the IP Showcase and all the work done on interoperability.

Five years ago we started our Leaders’ Summit, the behind-closed-doors conference for CEOs from the top broadcasters and media organizations, and it has proved hugely successful. This year we are adding two more similar, invitation-only events, this time aimed at CTOs. We have a day focusing on cyber security and another looking at the potential for 5G.

We are also trying a new business matchmaking venue this year, the IBC Startup Forum. Working with Media Honeypot, we are aiming to bring startups and scale-ups together with the media companies that might want to use their talents and the investors who might back the deals.

Will IBC and annual trade shows still be relevant in another 50 years?
Yes, I firmly believe they will. Of course, you will be able to research basic information online — and you can do that now. We have added to the online resources available with our IBC365 year-round online presence. But it is much harder to exchange opinions and experiences that way. Human nature dictates that we learn best from direct contact, from friendly discussions, from chance conversations. You cannot do that online. It is why we regard the opportunity to meet old friends and new peers as one of the key parts of the IBC experience.

What are some of the most important decisions you face in your job on a daily basis?
IBC is an interesting business to head. In some ways, of course, my job as CEO is the same as the head of any other company: making sure the staff are all pulling in the same direction, the customers are happy and the finances are secure. But IBC is unlike any other business because our focus is on spreading and sharing knowledge, and because our shareholders are our customers. IBC is organized by the industry for the industry, and at the top of our organization is the Partnership Board, which contains representatives of the six leading professional and trade bodies in the industry: IABM, IEEE, IET, RTS, SCTE and SMPTE.

Can you talk a bit about the conference?
One significant development from that first IBC 50 years ago is the nature of the conference. The founders were insistent that an exhibition needed a technical conference, and in 1967 it was based solely on papers outlining the latest research.

Today, the technical papers program still forms the centerpiece of the conference. But our conference is now much broader, speaking to the creative and commercial people in our community as well as the engineering and operational.

This year’s conference is subtitled “Truth, Trust and Transformation,” and has five tracks running over five days. Session topics range from the deeply technical, like new codec design, to fake news and alternative facts. Speakers range from Alberto Duenas, the principal video architect at chipmaker ARM, to Dan Danker, the product director at Facebook.

How are the attendees and companies participating in IBC changing?
The industry is so much broader than it once was. Consumers used to watch television, because that was all that the technology could achieve. Today, they expect to choose what they want to watch, when and where they want to watch it, and on the device and platform which happen to be convenient at the time.

As the industry expands, so does the IBC community. This year, for example, we have the biggest temporary structure we have ever built for an IBC, to house Hall 14, dedicated to content everywhere.

Given that international travel can be painful, what should those outside the EU consider?
Amsterdam is, in truth, a very easy place for visitors in any part of the world to reach. Its airport is a global hub. The EU maintains an open attitude and a practical approach to visas when required, so there should be no barriers to anyone wanting to visit IBC.

The IBC Innovation Awards are always a draw. Can you comment on the calibre of entries this year?
When we decided to add the IBC Innovation Awards to our program, our aim was to reflect the real nature of the industry. We wanted to reward the real-world projects, where users and technology partners got together to tackle a real challenge and come up with a solution that was much more than the sum of its parts.

Our finalists range from a small French-language service based in Canada to Google Earth; from a new approach to transmitters in the USA to an online service in India; and from Asia’s biggest broadcaster to the Spanish national railway company.

The Awards Ceremony on Sunday night is always one of my highlights. This year there is a special guest presenter: the academic and broadcaster Dr. Helen Czerski. The show lasts about an hour and is free to all IBC visitors.

What are the latest developments in adding capacity at IBC?
There is always talk of the need to move to another venue, and of course as a responsible business we keep this continually under review. But where would we move to? There is nowhere that offers the same combination of exhibition space, conference facilities and catering and networking under one roof. There is nowhere that can provide the range of hotels at all prices that Amsterdam offers, nor its friendly and welcoming atmosphere.

Talking of hotels, visitors this year may notice a large building site between hall 12 and the station. This will be a large on-site hotel, scheduled to be open in time for IBC in 2019.

And regulars who have resigned themselves to walking around the hoardings covering up the now not-so-new underground station will be pleased to hear that the North-South metro line is due to open in July 2018. Test trains are already running, and visitors to IBC next year will be able to speed from the centre of the city in under 10 minutes.

As you mentioned earlier, the theme for IBC2017 is “Truth, Trust and Transformation.” What is the rationale behind this?
Everyone has noticed that the terms “fake news” and “alternative facts” are ubiquitous these days. Broadcasters have traditionally been the trusted brand for news: is the era of social media and universal Internet access changing that?

It is a critical topic to debate at IBC, because the industry’s response to it is central to its future, commercially, as well as technically. Providing true, accurate and honest access to news (and related genres like sport) is expensive and demanding. How do we address this key issue? Also, one of the challenges of the transition to IP connectivity is the risk that the media industry will become a major target for malware and hackers. As the transport platform becomes more open, the more we need to focus on cyber security and the intrinsic design of safe, secure systems.

OTT and social media delivery is sometimes seen as “disruptive,” but I think that “transformative” is the better word. It brings new challenges for creativity and business, and it is right that IBC looks at them.

Will VR and AR be addressed at this year’s conference?
Yes, in the Future Zone, and no doubt on the show floor. Technologies in this area are tumbling out, but the business and creative case seems to be lagging behind. We know what VR can do, but how can we tell stories with it? How can we monetize it? IBC can bring all the sides of the industry together to dig into all the issues. And not just in debate, but by seeing and experiencing the state of the art.

Cyber security and security breaches are becoming more frequent. How will IBC address these challenges?
Cyber security is such a critical issue that we have devoted a day to it in our new C-Tech Forum. Beyond that, we have an important session on cyber security on Friday in the main conference with experts from around the world and around the industry debating what can and should be done to protect content and operations.

Incidentally, we are also looking at artificial intelligence and machine learning, with conference sessions in both the technology and business transformation strands.

What is the Platform Futures — Sport conference aiming to address?
Platform Futures is one of the strands running through the conference. It looks at how the latest delivery and engagement technologies are opening new opportunities for the presentation of content.

Sport has always been a major driver – perhaps the major driver – of innovation in television and media. For many years now we have had a sport day as part of the conference. This year, we are dedicating the Platform Futures strand to sport on Sunday.

The stream looks at how new technology is pushing boundaries for live sports coverage; the increasing importance of fan engagement; and the phenomenon of “alternative sports formats” like Twenty20 cricket and Rugby 7s, which provide lucrative alternatives to traditional competitions. It will also examine the unprecedented growth of eSports, and the exponential opportunities for broadcasters in a market that is now pushing towards the half-billion-dollar size.


Michael Kammes’ 5 Things – Video editing software

By Randi Altman

Technologist Michael Kammes is back with a new episode of 5 Things, which focuses on simplifying film, TV and media technology. The web series answers, according to Kammes, the “five burning tech questions” people might have about technologies and workflows in the media creation space. This episode tackles professional video editing software being used (or not used) in Hollywood.

Why is now the time to address this segment of the industry? “The market for NLEs is now more crowded than it has been in over 20 years,” explains Kammes. “Not since the dawn of modern NLEs have there been this many questions over what tools should be used. In addition, the massive price drop of NLEs, coupled with the pricing shift (monthly/yearly, as opposed to outright) has created more confusion in the market.”

In his video, Kammes focuses on Avid Media Composer, Adobe Premiere, Apple Final Cut Pro, Lightworks, Blackmagic Resolve and others.

Considering its history and use on some major motion pictures (such as The Wolf of Wall Street), why hasn’t Lightworks made more strides in the Hollywood community? “I think Lightworks has had massive product development and marketing issues,” shares Kammes. “I rarely see the product pushed online, at user groups or in forums. EditShare, the parent company of Lightworks, also deals heavily in storage, so one can only assume the marketing dollars are being spent on larger-ticket items like professional and enterprise storage over a desktop application.”

What about Resolve, considering its updated NLE tools and the acquisition of audio company Fairlight? Should we expect to see more Resolve being used as a traditional NLE? “I think in Hollywood, adoption will be very, very slow for creative editorial, and unless something drastic happens to Avid and Adobe, Resolve will remain in the minority. For dailies, transcodes or grading, I can see it only getting bigger, but I don’t see larger facilities adopting Resolve for creative editorial. Outside of Hollywood, I see it gaining more traction. Those outlets have more flexibility to pivot and try different tools without the locked-in TV and feature film machine in Hollywood.”

Check it out:

Industry vets open NYC post boutique Twelve

Colorist Lez Rudge and veteran production and post executives Marcelo Gandola, Axel Ericson and Ed Rilli have joined forces to launch New York City-based Twelve, a high-end post boutique for the advertising, film and television industries. Twelve has already been working on campaigns for Jagermeister, Comcast, Maybelline and the NY Rangers.

Twelve’s 4,500-square-foot space in Manhattan’s NoMad neighborhood features three Blackmagic Resolve color rooms, two Autodesk Flame suites and a 4K DI theater with a 7.1 Dolby surround sound system and 25-person seating capacity. Here, clients also have access to a suite of film and production services — editorial, mastering, finishing and audio mixing — as part of a strategic alliance with Ericson and his team at Digital Arts. Ericson, who brings 25 years of experience in film and television, also serves as managing partner of Twelve.

From Twelve’s recent Avion tequila campaign.

Managing director Rilli will handle client relations, strategy, budgets and deadlines, among other deliverables for the business. He was previously head of production at Nice Shoes for 17 years. His long list of agency clients includes Hill Holliday, Publicis, Grey and Saatchi & Saatchi, and projects for Dunkin’ Donuts, the NFL, Maybelline and Ford.

Gandola was most recently chief operations officer at Harbor Picture Company. Other positions include EVP at Hogarth, SVP of creative services at Deluxe, VP of operations at Company 3 and principal of Burst @ Creative Bubble, a digital audio and sound design company.

On the creative side, Rudge was formerly a colorist and partner at Nice Shoes. Since 2015, Rudge has also been focusing on his directorial career. His most recent campaign for the NY Rangers and Madison Square Garden — a concept-to-completion project via Twelve — garnered more than 300,000 Facebook hits on its first day.

While Twelve is currently working on short-form content, such as commercials and marketing campaigns, the company is making a concerted effort to extend its reach into film and television. Meanwhile, the partners also have a significant roster expansion in the works.

“After all of these years on both the vendor and client side, we’ve learned how best to get things done,” concludes Gandola. “In a way, technology has become secondary, and artistry is where we keep the emphasis. That’s the essence of what we want to provide clients, and that’s ultimately what pushed us to open our own place.”

Main Image (L-R): Ed Rilli, Axel Ericson, Lez Rudge & Marcelo Gandola

Millennium Digital XL camera: development to delivery

By Lance Holte and Daniel Restuccio

Panavision’s Millennium DXL 8K may be one of today’s best digital cinema cameras, but it might also be one of the most misunderstood. Conceived and crafted to the exacting tradition of the company whose cameras captured such films as Lawrence of Arabia and Inception, the Millennium DXL challenges expectations. We recently sat down with Panavision to examine the history, workflow, some new features and how that all fits into a 2017 moviemaking ecosystem.

Announced at Cine Gear 2016, and released for rent through Panavision in January 2017, the Millennium DXL stepped into the digital large format field as, at first impression, a competitor to the Arri Alexa 65. The DXL was the collaborative result of a partnership of three companies: Panavision developed the optics, accessories and some of the electronics; Red Digital Cinema designed the 8K VV (VistaVision) sensor; and Light Iron provided the features, color science and general workflow for the camera system.

The collaboration for the camera first began when Light Iron was acquired by Panavision in 2015. According to Michael Cioni, Light Iron president/Millennium DXL product manager, the increase in 4K and HDR television and theatrical formats like Dolby Vision and Barco Escape created the perfect environment for the three-company partnership. “When Panavision bought Light Iron, our idea was to create a way for Panavision to integrate a production ecosystem into the post world. The DXL rests atop Red’s best tenets, Panavision’s best tenets and Light Iron’s best tenets. We’re partners in this — information can flow freely between post, workflow, color, electronics and data management into cameras, color science, ergonomics, accessories and lenses.”

HDR OLED viewfinder

Now, one year after the first announcement, with projects like the Lionsgate feature adventure Robin Hood, the Fox Searchlight drama Can You Ever Forgive Me?, the CBS crime drama S.W.A.T. and a Samsung campaign shot by Oscar-winner Linus Sandgren under the DXL’s belt, the camera sports an array of new upgrades, features and advanced tools. They include an HDR OLED viewfinder (which they say is the first), wireless control software for iOS, and a new series of lenses. According to Panavision, the new DXL offers “unprecedented development in full production-to-post workflow.”

Preproduction Considerations
With so many high-resolution cameras on the market, why pick the DXL? According to Cioni, cinematographers and their camera crews are no longer the only people who directly interact with cameras. Panavision examined the impact a camera has on each production department — camera assistants, operators, data managers, DITs, editors and visual effects supervisors. In response to this feedback, they designed the DXL to offer custom toolsets for every department. In addition, Panavision wanted to leverage the benefits of its heritage lenses and make the same glass that photographed Lawrence of Arabia available to a wider range of today’s filmmakers on the DXL.

When Arri first debuted the Alexa 65 in 2014, there were questions about whether such a high-resolution, data-heavy image was necessary or beneficial. But cinematographers jumped on it, leaning on large format sensors and glass to lens pictures ranging from Doctor Strange to Rogue One, delivering greater immersiveness, detail and range. It seems that the large format trend is only accelerating, particularly among filmmakers who are interested in the optical magnification, depth of field and field-of-view characteristics that only large format photography offers.

Kramer Morgenthau

“I think large format is the future of cinematography for the big screen,” says cinematographer Kramer Morgenthau, who shot with the DXL in 2016. “[Large format cinematography] gives more of a feeling of the way human vision is. And so, it’s more cinematic. Same thing with anamorphic glass — anamorphic does a similar thing, and that’s one of the reasons why people love it. The most important thing is the glass, and then the support, and then the user-friendliness of the camera to move quickly. But these are all important.”

The DXL comes to market offering a myriad of creative choices for filmmakers. Among the large format cameras, the Millennium DXL aims to be the crème de la crème — it’s built around a 46mm 8192×4320 Red VV sensor and custom Panavision large format spherical and anamorphic lenses, wrapped in camera department-friendly electronics, using proprietary color science — all of which complements a mixed camera environment.

“The beauty of digital, and this camera in particular, is that DXL actually stands for ‘digital extra light.’ With a core body weight of only 10 pounds, and with its small form factor, I’ve seen DXL used in the back seat of a car as well as to capture the most incredible helicopter scenes,” Cioni notes.

With the help of Light Iron, Panavision developed a tool to match DXL footage to Panavised Red Weapon cameras. Guardians of the Galaxy Vol. 2 used Red Weapon 8K VV cameras with Panavision Primo 70 lenses. “There are shows like Netflix’s 13 Reasons Why [Season Two] that combined this special matching of the DXL and the Red Helium sensor based on the workflow of the show,” Cioni notes. “They’re shooting [the second season] with two DXLs as their primary camera, and they have two 8K Red cameras with Helium sensors, and they match each other.”

If you are thinking the Millennium DXL will bust your budget, think again. Like many Panavision cameras, the DXL is exclusively leasable through Panavision, but Cioni says they’re happy to help filmmakers to build the right package and workflow. “A lot of budgetary expense can be avoided with a more efficient workflow. Once customers learn how DXL streamlines the entire imaging chain, a DXL package might not be out of reach. We always work with customers to build the right package at a competitive price,” he says.

Using the DXL in Production
The DXL could be perceived as a classic dolly Panavision camera, especially with the large format moniker. “Not true,” says Morgenthau, who shot test footage with the camera slung over his shoulder in the back seat of a car.

He continues, “I sat in the back of a car and handheld it — in the back of a convertible. It’s very ergonomic and user-friendly. I think what’s exciting about the Millennium: its size and integration with technology, and the choice of lenses that you get with the Panavision lens family.”

Panavision’s fleet of large format lenses, many of which date back to the 1950s, made the company uniquely equipped to begin development on the new series of large format optics. To be available by the end of 2017, the Primo Artiste lenses are a full series of T/1.8 Primes — the fastest optics available for large format cinematography — with a completely internalized motor and included metadata capture. Additionally, the Primo Artiste lenses can be outfitted with an anamorphic glass attachment that retains the spherical nature of the base lens, yet induces anamorphic artifacts like directional flares and distorted bokeh.

Another new addition to the DXL is the previously mentioned HDR OLED Primo viewfinder. Offering 600-nit brightness, image smoothing and optics to limit eye fatigue, the viewfinder also boasts a theoretical contrast ratio of 1,000,000:1. Like other elements on the camera, the Primo viewfinder was the result of extensive polling and camera operator feedback. “Spearheaded by Panavision’s Haluki Sadahiro and Dominick Aiello, we went to operators and asked them everything we could about what makes a good viewfinder,” notes Cioni. “Guiding an industry game-changing product meant we went through multiple iterations. We showed the first Primo HDR prototype version in November 2016, and after six months of field testing, the final version is both better and simpler, and it’s all thanks to user feedback.”

Michael Cioni

In response to the growing popularity of HDR delivery, Light Iron also provides a powerful on-set HDR viewing solution. The HDR Village cart is built with a 4K HDR Sony monitor with numerous video inputs. The system can simultaneously display A and B camera feeds in high dynamic range and standard dynamic range on four different split quadrants. This enables cinematographers to evaluate their images and better prepare for multi-format color grading in post, given that most HDR projects are also required to deliver in SDR.

Post Production
The camera captures R3D files, the same as any other Red camera, but does have metadata that is unique to the DXL, ranging from color science to lens information. It also uses Light Iron’s set of color matrices designed specifically for the DXL: Light Iron Color.

Designed by Light Iron supervising colorist Ian Vertovec, Light Iron Color deviates from traditional digital color matrices by following in the footsteps of film stock philosophy instead of direct replication of how colors look in nature. Cioni likens Light Iron Color to Kodak’s approach to film. “Kodak tried to make different film stocks for different intentions. Since one film stock cannot satisfy every creative intention, DXL is designed to allow look transforms that users can choose, export and integrate into the post process. They come in the form of cube lookup tables and are all non-destructive.”
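Because those look transforms travel as standard .cube files, they can be applied (or bypassed) at any point downstream. As a rough illustration of how lightweight the transform itself is, here is a minimal Python sketch that parses a 3D cube LUT and applies it to a float RGB frame. The file name is hypothetical, and real grading tools use smoother trilinear interpolation rather than this nearest-neighbor shortcut.

```python
# Minimal sketch: apply a 3D .cube LUT non-destructively to a float RGB image.
# "example_look.cube" is a hypothetical file name; production looks come from
# the camera or the colorist.
import numpy as np

def load_cube(path):
    """Parse a .cube file into (size, table); table shape is (N, N, N, 3)."""
    size, rows = 0, []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts or parts[0].startswith("#"):
                continue
            if parts[0] == "LUT_3D_SIZE":
                size = int(parts[1])
            elif len(parts) == 3:
                try:
                    rows.append([float(p) for p in parts])
                except ValueError:
                    pass  # keyword lines such as TITLE
    # In .cube files the red index varies fastest, so axis order is (b, g, r).
    return size, np.asarray(rows).reshape(size, size, size, 3)

def apply_lut(image, size, table):
    """image: float RGB array in [0, 1]; the source pixels stay untouched."""
    idx = np.clip((image * (size - 1)).round().astype(int), 0, size - 1)
    return table[idx[..., 2], idx[..., 1], idx[..., 0]]

size, table = load_cube("example_look.cube")
graded = apply_lut(np.random.rand(2160, 4096, 3), size, table)  # dummy frame
```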

Light Iron Color can be adjusted and tweaked by the user or by Light Iron, which Cioni says has been done on many shows. The ability to adjust Light Iron Color to fit a particular project is also useful on shows that shoot with multiple camera types. Though Light Iron Color was designed specifically for the Millennium DXL, Light Iron has used it on other cameras — including the Sony A7, and Reds with Helium and Dragon sensors — to ensure that all the footage matches as closely as possible.

While it’s possible to edit online with high-resolution media on a blazing-fast workstation and storage solution, it’s a lot trickier to edit online with 8K media in a post production environment that often requires multiple editors, assistants, VFX editors, post PAs and more. The good news is that the DXL records onboard low-bitrate proxy media (ProRes or DNx) for offline editorial while simultaneously recording R3Ds, without requiring an external recorder.

Cioni’s optimal camera recording setup for editorial is 5:1 compression for the R3Ds alongside 2K ProRes LT files. He explains, “My rule of thumb is to record super high and super low. And if I have high-res and low-res and I need to make something else, I can generate that somewhere in the middle from the R3Ds. But as long as I have the bottom and the top, I’m good.”

Storage is also a major post consideration. An hour of 8192×4320 R3Ds at 23.976fps lands in the 1TB/hour range (the exact number varies with the R3D compression ratio). Compared to an hour of 6560×3100 Arriraw footage, which lands at 2.6TB, the Millennium DXL’s lighter R3D workflow can be very attractive.
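Those figures are easy to rough out from first principles. The sketch below recomputes both rates from resolution, frame rate, bit depth and compression ratio; the 8:1 RedCode ratio and 12-bit Arriraw packing are ballpark assumptions for illustration, not published specs.

```python
# Back-of-envelope data rates for the two formats above. The 8:1 R3D ratio
# and 12-bit ARRIRAW packing are ballpark assumptions.
def tb_per_hour(width, height, fps, bits_per_photosite, compression):
    bytes_per_frame = width * height * bits_per_photosite / 8 / compression
    return bytes_per_frame * fps * 3600 / 1e12  # decimal terabytes per hour

print(f"8K R3D (8:1): ~{tb_per_hour(8192, 4320, 23.976, 16, 8):.1f} TB/hr")  # ~0.8
print(f"6.5K Arriraw: ~{tb_per_hour(6560, 3100, 23.976, 12, 1):.1f} TB/hr")  # ~2.6
```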

Conform and Delivery
One significant aspect of the Millennium DXL workflow is that even though the camera’s sensor, body, glass and other pipeline tools are all recently developed, R3D conform and delivery workflows remain tried and true. The onboard proxy media exactly matches the R3Ds by name and timecode, and since Light Iron Color is non-destructive, the conform and color-prep process is simple and adjustable, whether the conform is done with Adobe, Blackmagic, Avid or other software.
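Because that name-and-timecode relationship holds, relinking proxies to camera originals is essentially a lookup problem. Here is a minimal sketch of the idea, assuming hypothetical directory names and extensions; a real conform would also verify timecode as a tiebreaker.

```python
# Illustrative relink pass: match offline proxies to camera-original R3Ds by
# shared clip name. Directory names and extensions are assumptions.
from pathlib import Path

def build_relink_map(proxy_dir, online_dir):
    originals = {p.stem.upper(): p for p in Path(online_dir).rglob("*.R3D")}
    relink, missing = {}, []
    for proxy in sorted(Path(proxy_dir).rglob("*.mov")):
        target = originals.get(proxy.stem.upper())
        if target:
            relink[proxy] = target
        else:
            missing.append(proxy)  # flag for eyes-on checking before the grade
    return relink, missing

relink, missing = build_relink_map("offline/proxies", "online/r3d")
print(f"matched {len(relink)} clips, {len(missing)} unresolved")
```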

Additionally, since Red media can be imported into almost all major visual effects applications, it’s possible to work with the raw R3Ds as VFX plates. This retains the lens and camera metadata for better camera tracking and optical effects, and it provides the flexibility of working with Light Iron Color turned on or off. Even so, the 8K R3Ds are still lighter than the 4K DPX or EXR plates that are the current VFX trend. The resolution also affords enormous space for opticals and stabilization in a 4K master.

4K is the increasingly common delivery resolution among studios, networks and over-the-top content distributors, but in a world of constant remastering and an exponential increase in television and display resolutions, the benefit in future-proofing a picture is easily apparent. Baselight, Resolve, Rio and other grading and finishing applications can handle 8K resolutions, and even if the final project is only rendered at 4K now, conforming and grading in 8K ensures the picture will be future-proofed for some time. It’s a simple task to re-export a 6K or 8K master when those resolutions become the standard years down the line.

After playing with DXL footage provided by Light Iron, we were surprised by how straightforward the workflow is. For a very small production, the trickiest part is the requirement of a powerful workstation — or set of workstations — to conform and play 8K Red media, with a mix of (likely) 4K VFX shots, graphics and overlays. Michael Cioni notes, “[Everyone] already knows a RedCode workflow. They don’t have to learn it. I could show the DXL to anyone who has a Red Raven, and in 30 seconds they’ll confidently say, ‘I got this.’”

Choosing the right workstation set-up for the job

By Lance Holte

Like virtually everything in the world of filmmaking, the number of available options for a perfect editorial workstation is almost infinite. The vast majority of systems can be greatly customized and expanded, whether by custom order, upgraded internal hardware, or expansion chassis and I/O boxes. In a time when many workstations are purchased, leased or upgraded for a specific project, the buying process is largely determined by the project’s workflow and budget.

One of Harbor Picture Company’s online rooms.

In my experience, no two projects have identical workflows. Even if two projects are very similar, there are usually some slight differences — a different editor, a new camera, a shorter schedule, bigger storage requirements… the list goes on and on. The first step for choosing the optimal workstation(s) for a project is to ask a handful of broad questions that are good starters for workflow design. I generally start by requesting the delivery requirements, since they are a good indicator of the size and scope of the project.

Then I move on to questions like:

What are the camera/footage formats?
How long is the post production schedule?
Who is the editorial staff?

Often there aren’t concrete answers to these questions at the beginning of a project, but even rough answers point the way to follow-up questions. For instance, Q: What are the video delivery requirements? A: It’s a commercial campaign — HD and SD ProRes 4444 QTs.

Simple enough. Next question.

Christopher Lam from SF’s Double Fine Productions. Courtesy of Wacom.

Q: What is the camera format? A: Red Weapon 6K, because the director wants to be able to do optical effects and stabilize most of the shots. This answer makes it very clear that we’re going to be editing offline, since the commercial budget doesn’t allow for the purchase of a blazing system with a huge, fast storage array.

Q: What is the post schedule? A: Eight weeks. Great. This should allow enough time to transcode ProRes proxies for all the media, followed by offline and online editorial.

At this point, it’s looking like there’s no need for an insanely powerful workstation, and the schedule looks like we’ll only need one editor and an assistant. Q: Who is the editorial staff? A: The editor is an Adobe Premiere guy, and the ad agency wants to spend a ton of time in the bay with him. Now, we know that agency folks really hate technical slowdowns that can sometimes occur with equipment that is pushing the envelope, so this workstation just needs to be something that’s simple and reliable. Macs make agency guys comfortable, so let’s go with a Mac Pro for the editor. If possible, I prefer to connect the client monitor directly via HDMI, since there are no delay issues that can sometimes be caused by HDMI to SDI converters. Of course, since that will use up the Mac Pro’s single HDMI port, the desktop monitors and the audio I/O box will use up two or three Thunderbolt ports. If the assistant editor doesn’t need such a powerful system, a high-end iMac could suffice.

(And for those who don’t mind waiting until the new iMac Pro ships in December, Apple’s latest release of the all-in-one workstation seems to signal a committed return to the professional creative world, and is an encouraging sign for the Mac Pro overhaul in 2018. The iMac Pro addresses its non-upgradability by futureproofing itself as the most powerful all-in-one machine Apple has released. The base model starts at a hefty $4,999, but it comes with a 5K display and options for up to an 18-core Xeon processor, 128GB of RAM and an AMD Radeon Vega GPU. As more applications add OpenCL acceleration for AMD GPUs, the iMac Pro should stay relevant for a number of years.)

Now, our workflow would be very different if the answer to the first question had instead been A: It’s a feature film. Technicolor will handle the final delivery, but we still want to be able to make in-house 4K DCPs for screenings, EXR and DPX sequences for the VFX vendors, Blu-ray screeners, as well as review files and create all the high-res deliverables for mastering.

Since this project is a feature film, likely with a much larger editorial staff, the workflow might be better suited to editorial in Avid (to use project sharing/bin locking/collaborative editing). And since it turns out that Technicolor is grading the film in Blackmagic Resolve, it makes sense to online the film in Resolve and then pass the project over to Technicolor. Resolve will also cover any in-house temp grading and DCP creation and can handle virtually any video file.

PCs
For the sake of comparison, let’s build out some workstations on the PC side that will cover our editors, assistants, online editors, VFX editors and artists, and temp colorist. PC vs. Mac will likely be a hotly debated topic in this industry for some time, but there is no denying that a PC will return more cost-effective power than a Mac with similar specs, at the expense of increased complexity (and the potential for more technical issues). I also appreciate the longer lifespan of machines that can be easily upgraded and expanded without requiring expansion chassis or external GPU enclosures.

I’ve had excellent success with the HP Z line — using Z840s for serious finishing machines and Z440s and Z640s for offline editorial workstations. There are almost unlimited options for desktop PCs, but only certain workstations and components are certified for various post applications, so it pays to do certification research when building a workstation from the ground up.

The Molecule’s artist row in NYC.

It’s also important to keep the workstation components balanced. A system is only as strong as its weakest link, so a workstation with an insanely powerful GPU but only a handful of CPU cores will be outperformed by a workstation with 16-20 cores and a moderately high-end GPU. Make sure the CPU, GPU and RAM are similarly matched to get the best bang for your buck and a more stable workstation.

Relationships!
Finally, in terms of getting the best bang for your buck, there’s one trick that reigns supreme: build great relationships with hardware companies and vendors. Hardware companies are always looking for quality input, advice and real-world testing. They are often willing to lend (or give) new equipment in exchange for case studies, reviews, workflow demonstrations and press. Creating relationships is not only a great way to stay up to date with cutting-edge equipment, it also expands your support options and technical network, and it’s the best opportunity to be directly involved with development. So go to trade shows, be active on forums, teach, write and generally be as involved as possible, and your equipment will thank you.

Main image courtesy of editor/compositor Fred Ruckel.

Lance Holte is an LA-based post production supervisor and producer. He has spoken and taught at such events as NAB, SMPTE, SIGGRAPH and Createasphere. You can email him at lance@lanceholte.com.

Doing more with Thunderbolt 3

Streamlined speed on set or in the studio

By Beth Marchant

It was only six years ago that Thunderbolt, the high-speed data transfer and display port standard co-developed by Apple and Intel, first appeared in Apple’s MacBook Pros and iMacs. Since then, the blended PCI Express, DisplayPort and power plug cable has jolted its way toward ubiquity, giving computers and peripherals increased speed and functionality with every iteration.

Content creators were the first to discover its potential, and gamers quickly followed. Intel, which now owns the sole rights to the spec, announced in late May it would put Thunderbolt 3 into all of its future CPUs and release the spec to the industry in 2018. In a related blog post, Intel VP Chris Walker called Thunderbolt 3 “one of the most significant cable I/O updates since the advent of USB.” The company envisions not just a faster port, but “a simpler and more versatile port, available for everyone, coming to approximately 150 different PCs, Macs and peripherals by the end of this year,” said Walker.

So what can it do for you on set or in the studio? First, some thumbnail facts about what it does: Thunderbolt 3 clocks 40Gbps transfer speeds, double the video bandwidth of Thunderbolt 2 and eight times faster than USB 3.0. T3 also includes USB-C connectivity, which finally makes it usable with Windows-based workstations as well as with Macs. On top of those gains, a T3 port now lets you daisy-chain up to six devices and two 4K monitors — or one 5K monitor — to a laptop through a single connection. According to Intel’s Walker, “We envision a future where high-performance single-cable docks, stunning photos and 4K video, lifelike VR, and faster-than-ever storage are commonplace.” That’s an important piece of the puzzle for filmmakers who want their VR projects and 4K+ content to reach the widest possible audience.
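Those ratios are easier to appreciate as offload times than as gigabits. A quick back-of-envelope sketch, using theoretical peak rates (real-world throughput always lands lower):

```python
# Time to offload a 1TB camera mag at each interface's theoretical peak rate.
INTERFACES_GBPS = {"Thunderbolt 3": 40, "Thunderbolt 2": 20, "USB 3.0": 5}

for name, gbps in INTERFACES_GBPS.items():
    minutes = (1e12 * 8) / (gbps * 1e9) / 60  # 1TB in bits / line rate
    print(f"{name:>13}: {minutes:4.1f} min per TB")
# Thunderbolt 3:  3.3 min; Thunderbolt 2:  6.7 min; USB 3.0: 26.7 min
```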

The specification for Thunderbolt 3, first released in 2015, gave rise to a smattering of products in 2016, most importantly the MacBook Pro with Thunderbolt 3. At NAB this year, many more flexible RAID storage systems and improved T3 devices that connect directly to Mac and Windows computers joined their ranks. In June, Apple released iMacs with TB3.

For directors Jason and Josh Diamond, a.k.a. The Diamond Brothers, upgrading to new TB3-enabled laptops is their first priority. “When we look at the data we’re pushing around, be it 24 cameras from a VR shoot, or many TBs of 8K R3Ds from a Red Helium multicam shoot, one of the most important things in the end is data transfer speed. As we move into new computers, drives and peripherals, USB-C and TB3 finally have ubiquity across our Mac and PC systems that we either own or are looking to upgrade to. This makes for much easier integrations and fewer headaches as we design workflows and pathways for our projects,” says Jason Diamond of The Diamond Bros./Supersphere.

If you are also ready to upgrade, here is a sampling of recently released products that can add Thunderbolt 3 performance to your workflow.

CalDigit docking station

Clean Up the Clutter
CalDigit was one of the first to adopt the Thunderbolt interface when it came out in 2011, so it’s no surprise that the first shipment of the CalDigit Thunderbolt Station 3 (TS3) docking station introduced at NAB 2017 sold out quickly. The preorders taken at the show are expected to ship soon. The TS3 is designed to be a streamlined, central charging hub for the MacBook Pro, delivering 85W of laptop charging along with two Thunderbolt 3 ports, two eSATA ports, two USB 3.1 Type-A ports, audio in and out, Gigabit Ethernet and a DisplayPort. DisplayPort lets users connect to a range of monitors with a DisplayPort to HDMI, DVI or VGA cable.

CalDigit also introduced the TS3 Lite, shipping now, which will work with any Thunderbolt 3 computer from PCs to iMacs or MacBook Pros and features two Thunderbolt 3 ports, Gigabit Ethernet, audio in and out, an AC power adapter and DisplayPort. It includes two USB 3.1 Type-A ports — one on the back and one on its face — that let you charge your iPhone even when the dock isn’t connected to your computer.

The Need for Speed
Like the other new T3 products on the market, LaCie’s 6big and 12big Thunderbolt 3 RAID arrays feature both Thunderbolt 3 and USB 3.1 interfaces for Mac- or Windows-based connections.

LaCie 12big

But as their names imply, the relatively compact “big” line ramps up to 120TB in the 12big desktop tower. The hardware RAID controller and 7200RPM drives inside the 12big will give you speeds of up to 2600MB/s, or 2400MB/s in RAID 5. This will significantly ramp up how quickly you ingest footage or move through an edit or grade in the course of your day (or late night!). Thanks to Thunderbolt 3, multiple streams of ProRes 422 (HQ), ProRes 4444 XQ and uncompressed HD 10-bit and 12-bit video are now much easier to handle at once. Preview render rates also get a welcome boost.
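To put that RAID 5 figure in context, here is a rough stream-count estimate against Apple’s published ProRes target rates for UHD at 24p. The codec numbers are approximate, and real systems need headroom for seeks and filesystem overhead.

```python
# Rough simultaneous-stream estimate for a 2400MB/s array. Codec rates are
# approximate Apple-published UHD 24p figures; leave real-world headroom.
ARRAY_MBPS = 2400
CODEC_MBPS = {"ProRes 422 HQ": 88, "ProRes 4444 XQ": 250}

for codec, rate in CODEC_MBPS.items():
    print(f"{codec} (UHD 24p): ~{ARRAY_MBPS // rate} streams")
# ProRes 422 HQ: ~27 streams; ProRes 4444 XQ: ~9 streams
```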

The new Pegasus3 R4, R6 and R8 RAIDs from Promise debuted at Apple’s WWDC 2017 in early June and were designed to integrate seamlessly with Apple’s latest Thunderbolt 3-enabled offerings, which will include the iMac Pro coming in December. They deliver 16TB to 80TB of desktop storage and can also sync with the company’s Apollo Cloud personal storage device, which lets you share small clips or low-res review files with a group via mobile devices while in transit. When used with Promise’s SANLink Series, the new Pegasus3 models can also be shared over a LAN.

Lighten the Load on Set
If you regularly work with large media files on set, more than one G-Technology G-Drive ev series drive is likely on your cart. The latest version of the series so popular with DITs has a Thunderbolt 3-enabled drive for improved transfer speeds and an HDMI port, so you can daisy-chain the drive and a monitor through a single connection on a laptop. Users of G-Tech ev series drives who need even more robust Thunderbolt 3 RAID on location — say, to support multistream 8K and VR — now have another option: the eight-bay G|Speed Shuttle XL with ev Series Bay Adapters that G-Tech introduced at NAB. Shipping this month, it comes in RAID-0, -1, -5, -6 and -10 configurations, includes two T3 ports and ranges in price from $3,999.95 (24TB) to $6,599.95 (60TB).

Sonnet CFast 2.0 Pro card reader

Transfer Faster on Location
One of the first card readers with a Thunderbolt 3 interface is the SF3 Series — CFast 2.0 Pro, launched in May by Sonnet Technologies. Dual card slots let the reader ingest files simultaneously from Canon, Arri and Blackmagic cameras at concurrent data transfer speeds of up to 1,000MB/s, twice as fast as a USB 3.0 reader. The lightweight, extruded aluminum shell is made to handle as much abuse as you can throw at it.

Stereoscopic-Ready
The Thunderbolt 3 version of Blackmagic’s UltraStudio 4K Extreme resolved two critical obstacles when it began shipping last year: it was finally fast enough to support RGB and stereoscopic footage while working in 4K, and it could be connected directly to color correction systems like DaVinci Resolve via its new Thunderbolt 3 port.

Blackmagic UltraStudio 4K Extreme

The 40Gbps transfer speeds are “fast enough for the most extreme, high bit-depth uncompressed RGB 4K and stereoscopic formats,” says Blackmagic’s Grant Petty.

Blackmagic introduced the UltraStudio HD Mini with Thunderbolt 3 at NAB this year. It adds 3G-SDI and HDMI along with analog connections for 10-bit recording up to 1080p60 and 2K DCI, likely making it the first of its kind. It’s aimed at live broadcast graphics, editing and archiving.

Connect Back to PCI-E and Be Eco-Friendly
OWC makes little black boxes that do two very important things: restore your PCI-Express card options while also helping the planet. The zero-emissions Mac and PC technology company began shipping the updated OWC Mercury Helios with Thunderbolt 3 expansion chassis in May. The box includes two Thunderbolt 3 ports, a PCI-E slot and a Mini DisplayPort, which lets you connect high-bandwidth NIC cards, HBAs and RAID controllers, and add video capture and processing cards and audio production PCIe cards. An energy saver mode also powers it on and off with your computer.

Boxx Apexx 4 features i9 X-Series procs, targets post apps

Boxx’s new Apexx 4 6201 workstation features the new 10-core Intel Core i9 X-Series processor. Part of Intel’s most scalable desktop platform ever, the X-Series processors offer significant performance increases over previous Intel technology.

“The Intel Core X-Series is the ultimate workstation platform,” reports Boxx VP of engineering Tim Lawrence. “The advantages of the new Intel Core i9, combined with Boxx innovation, will provide architects, engineers and motion media creators with an unprecedented level of performance.”

One of those key Intel X-Series advantages is Intel Turbo Boost 3.0. This technology identifies the two best cores to boost, making the new CPUs ideal for multitasking and virtual reality, as well as editing and rendering high-res 4K/VR video and effects with fast video transcode, image stabilization, 3D effects rendering and animation.

When comparing previous-generation Intel processors to X-Series processors (10-core vs. 10-core), the X-Series is up to 14% faster in multi-threaded performance and up to 15% faster in single-threaded performance.

The first in a series of Boxx workstations featuring the new Intel X-Series processors, Apexx 4 6201 also includes up to three professional-grade Nvidia or AMD Radeon Pro graphics cards, and up to 128GB of system memory. The highly configurable Apexx 4 series workstations provide support for single-threaded applications, as well as multi-threaded tasks in applications like 3ds Max, Maya and Adobe CC.

“Professionals choose Boxx because they want to spend more time creating and less time waiting on their compute-intensive workloads,” says Lawrence. “Boxx Apexx workstations featuring new Intel X-Series processors will enable them to create without compromise, to megatask, support a bank of 4K monitors and immerse themselves in VR — all faster than before.”


BCPC names Kylee Peña president, expands leadership

The Blue Collar Post Collective (BCPC) has revised and expanded its leadership. Kylee Peña has been upped to president, having served as Los Angeles vice president. Ryan Penny will now serve as New York VP and Chris Visser will take over as LA VP.

“In the time since I’ve been directly involved with the leadership of BCPC, we’ve seen continued exponential growth, both in our membership and our scope,” says Peña (our main image). “Inclusiveness and accessibility are incredibly important to people, and they want to be involved with our mission.”

Chris Visser

The shift in leadership was prompted by the upcoming departure of co-president Janis Vogel, who will resign after nearly three years at the helm of the organization that consists entirely of full-time working professionals who volunteer their time to run its operations. Vogel will remain in the organization as an active member and sit on the Board. She will be spending the remainder of 2017 in London, where she will co-host a BCPC meet-up, marking the first extension of official on-the-ground activity for the organization outside of the US.

Co-president Felix Cabrera, who has served BCPC for the last year focusing on an “Intro to Post” training course in collaboration with the New York City Mayor’s Office of Media and Entertainment and Brooklyn Workforce Industries, will be stepping down from his role as well.

Peña has been at the helm of BCPC West for the last year, recruiting a committee and building the BCPC community from the ground up in Los Angeles. She is also an associate editor for Creative COW, active with SMPTE, and an outspoken advocate for gender equality and mental health in post production. By day, she is a workflow supervisor for Bling Digital, working on feature films and television.

Ryan Penny

Penny is an editor and currently serves as director of the newly launched “Made in NY Post Production Training Program,” a partnership with the NYC Mayor’s Office of Media and Entertainment to train and provide job placement in the post production industry for low-income and unemployed New Yorkers.

Visser is an assistant editor in Los Angeles, currently working on season two of Shooter for USA Network. Eager to expand his role on the original West planning committee, he took the lead on #TipJar, a weekly discussion he leads on BCPC’s Facebook page about important topics in the industry.

Peña says, “I’m excited about what’s on the horizon for BCPC. Funneling all this momentum into our mission to make all of post production more inclusive will have an explosive impact on the industry for years to come. People in our industry are ready to open their doors and help us change the face of what an expert looks like in post. They want to look outside their bubble, learn from people who don’t look like them, and mentor or hire emerging talent. We’re rising to meet that demand with action.”

Sound — Wonder Woman’s superpower

By Jennifer Walden

When director Patty Jenkins first met with supervising sound editor James Mather to discuss Warner Bros.’ Wonder Woman, they had a conversation about the physical effects of low-frequency sound energy on the human body, and how it could be used to manipulate an audience.

“The military spent a long time investigating sound cannons that could fire frequencies at groups of people and debilitate them,” explains Mather. “They found that the lower frequencies were far more effective than the very high frequencies. With the high frequencies, you can simply plug your ears and block the sound. The low-end frequencies, however, impact the fluid content of the human body. Frequencies around 5Hz-9Hz can’t be heard, but can have physiological, almost emotional effects on the human body. Patty was fascinated by all of that. So, we had a very good sound-nerd talk at our first meeting — before we even talked about the story of the film.”
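For anyone curious what such a layer looks like on a timeline, the toy sketch below renders a 7Hz tone (squarely in that 5Hz-9Hz band, well below the roughly 20Hz threshold of hearing) to a WAV file. It is purely illustrative, not Mather’s method, and you would need a full-range theater system rather than laptop speakers to feel it.

```python
# Toy sketch: render a 7Hz infrasonic layer to a WAV file. Illustrative only,
# not the film's actual sound design.
import numpy as np
from scipy.io import wavfile

SR = 48000                                   # film-standard sample rate
t = np.arange(SR * 10) / SR                  # ten seconds of samples
rumble = 0.5 * np.sin(2 * np.pi * 7.0 * t)   # 7Hz sits in the 5Hz-9Hz band
rumble *= np.minimum(t / 2.0, 1.0)           # two-second fade-in
wavfile.write("rumble_7hz.wav", SR, (rumble * 32767).astype(np.int16))
```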

Jenkins was fascinated by the idea of sound playing a physical role as well as a narrative one, and that direction informed all of Mather’s sound editorial choices for Wonder Woman. “I was amazed by Patty’s intent, from the very beginning, to veer away from very high-end sounds. She did not want to have those featured heavily in the film. She didn’t want too much top-end sonically,” says Mather, who handled sound editorial at his Soundbyte Studios in West London.

James Mather (far right) and crew take to the streets.

Soundbyte Studios offers creative supervision, sound design, Foley and dialog editing. The facility is equipped with Pro Tools 12 systems and Avid S6 and S3 consoles. Their client list includes top studios like Warner Bros., Disney, Fox, Paramount, DreamWorks, Aardman and Pathe. Mather’s team includes dialog supervisor Simon Chase, and sound effects editors Jed Loughran and Samir Fočo. When Mather begins a project, he likes to introduce his team to the director as soon as possible “so that they are recognized as contributors to the soundtrack,” he says. “It gives the team a better understanding of who they are working with and the kind of collaboration that is expected. I always find that if you can get everyone to work as a collaborative team and everyone has an emotional investment or personal investment in the project, then you get better work.”

Following Jenkins’s direction, Mather and his team designed a tranquil sound for the Amazonian paradise of Themyscira. They started with ambience tracks that the film’s sound recordist Chris Munro captured while they were on location in Italy. Then Mather added Mediterranean ambiences that he and his team had personally collected over the years. Mather embellished the ambience with songbirds from Asia, Australasia and the Amazon. Since there are white peacocks roaming the island, he added in modified peacock sounds. Howler monkeys and domestic livestock, like sheep and goats, round out the track. Regarding the sheep and goats, Mather says, “We pitched them and manipulated them slightly so that they didn’t sound quite so ordinary, like a natural history film. It was very much a case of keeping the soundtrack relatively sparse. We did not use crickets or cicadas (although there were lots there while they were filming) because we wanted to stay away from the high-frequency sounds.”

Waterfalls are another prominent feature of Themyscira, according to Mather, but thankfully they weren’t actually at the location, so the sound recordings were relatively clean. The post sound team had complete control over the volume, distance and frequency range of the waterfall sounds. “We very much wanted the low-end roar and rumble of the waterfalls rather than high-end hiss and white noise.”

The sound of paradise is serene in contrast to London and the front lines of World War I. Mather wanted to exaggerate that difference by overplaying the sound of boats, cars and crowds as Steve [Chris Pine] and Diana [Gal Gadot] arrived in London. “This was London at its busiest and most industrial time. There were structures being built on a major scale, so the environment was incredibly active. There were buses still being drawn by horses, but there were also cars. So, you have this whole mishmash of old and new. We wanted to see Diana’s reaction to being somewhere that she has never experienced before, with sounds that she has never heard and things she has never seen. The world is a complete barrage of sensory information.”

They recorded every vehicle they could in the film, from planes and boats to the motorcycle that Steve uses to chase after Diana later on in the film. “This motorcycle was like nothing we had ever seen before,” explains Mather. “We knew that we would have to go and record it because we didn’t have anything in our sound libraries for it.”

The studio spent days preparing the century-old motorcycle for the recording session. “We got about four minutes of recording with it before it fell apart,” admits Mather. “The chain fell off, the sprockets broke and then it went up in smoke. It was an antique and probably shouldn’t have been used! The funny thing is that it sounded like a lawnmower. We could have just recorded a lawnmower and it would’ve sounded the same!”

(Mather notes that the motorcycle Steve rides on-screen was a modern version of the century-old one they got to record.)

Goosing Sounds
Mather and his sound team have had numerous opportunities to record authentic weapons, cars, tanks, planes and other specific war-era machines and gear for projects they’ve worked on. While they always start with those recordings as their sound design base, Mather says the audience’s expectation of a sound is typically different from the real thing. “The real sound is very often disappointing. We start with the real gun or real car that we recorded, but then we start to work on them, changing the texture to give them a little bit more punch or bite. We might find that we need to add some gun mechanisms to make a gun sound a bit snappier or a bit brighter and not so dull. It’s the same with the cars. You want the car to have character, but you also want it to be slightly faster or more detailed than it actually sounds. By the nature of filmmaking, you will always end up slightly embellishing the real sound.”

Take the gun battles in Wonder Woman, for instance. They unfold in an obvious sequence: the gun fires, the bullet travels toward its target and then there is a noticeable impact. “This film has a lot of slow-motion bullets firing, so we had to amp up the sense of what was propelling that very slow-motion bullet. Recording the sound of a moving bullet is very hard. All of that had to be designed for the film,” says Mather.

In addition to the real era-appropriate vehicles, Wonder Woman has imaginary, souped-up creations too, like a massive bomber. For the bomber’s sound, Mather sought out artist Joe Rush who builds custom Mad Max-style vehicles. They recorded all of Rush’s vehicles, which had a variety of different V8, V12 and V6 engines. “They all sound very different because the engines are on solid metal with no suspension,” explains Mather. “The sound was really big and beefy, loud and clunky and it gave you a sense of a giant war monster. They had this growl and weight and threat that worked well for the German machines, which were supposed to feel threatening. In London, you had these quaint buses being drawn by horses, and the counterpoint to that were these military machines that the Germans had, which had to be daunting and a bit terrifying.

“One of the limitations of the WWI-era soundscapes is the lack of some very useful atmospheric sounds. We used tannoy (loudspeaker) effects on the German bomb factory to hint at the background activity, but had to be very sparing as these were only just invented in that era. (Same thing with the machine guns — a far more mechanical version than the ‘retatatat’ of the familiar WWII versions).”

One of Mather’s favorite scenes to design starts on the frontlines as Diana makes her big reveal as Wonder Woman. She crosses No Man’s Land and deflects the enemies’ fire with her bulletproof bracelets and shield. “We played with that in so many different ways because the music was such an important part of Patty’s vision for the film. She very much wanted the music to carry the narrative. Sound effects were there to be literal in many ways. We were not trying to overemphasize the machismo of it. The story is about the people and not necessarily the action they were in. So that became a very musical-based moment, which was not the way I would have normally done it. I learned a lot from Patty about the different ways of telling the story.”

The Powers
Following that scene, Wonder Woman recaptures the Belgian village they were fighting for by running ahead and storming into the German barracks. Mather describes it as a Guy Ritchie-style fight, with Wonder Woman taking on 25 German soldiers. “This is the first time that we really get to see her use all of her powers: the lasso, her bracelets, her shield and even her shin guards. As she dances her way around the room, it goes from realtime into slow motion and back into realtime. She is repelling bullets, smashing guns with her back, using her shield as a sliding mat and doing slow-motion kicks. It is a wonderfully choreographed scene, and it is her first real action scene.”

The scene required a fluid combination of realistic sounds and subdued, slow-motion sounds. “It was like pushing and pulling the soundtrack as things slowed down and then sped back up. That was a lot of fun.”

The Lasso
Where would Wonder Woman be without her signature lasso of truth? In the film, she often uses the lasso as a physical weapon, but there was an important scene where the lasso was called upon for its truth-finding power. Early in the film, Steve’s plane crashes and he’s washed onto Themyscira’s shore. The Amazonians bind Steve with the lasso and interrogate him. Eventually the lasso of truth overpowers him and he divulges his secrets. “There is quite a lot of acting on Chris Pine’s part to signify that he’s uncomfortable and is struggling,” says Mather. “We initially went by his performance, which gave the impression that he was being burned. He says, ‘This is really hot,’ so we started with sizzling and hissing sounds as if the rope was burning him. Again, Patty felt strongly about not going into the high-frequency realm because it distracts from the dialogue, so we wanted to keep the sound in a lower, more menacing register.”

Mather and his team experimented with adding a multitude of different elements, including low whispering voices, to see if they added a sense of personality to the lasso. “We kept the sizzling, but we pitched it down to make it more watery and less high-end. Then we tried a dozen or so variations of themes. Eventually we stayed with this blood-flow sound, which is like an arterial blood flow. It has a slight rhythm to it and if you roll off the top end and keep it fairly muted then it’s quite an intriguing sound. It feels very visceral.”

The last elements Mather added to the lasso were recordings he captured of two stone slabs grinding against each other in a circular motion, like a mill. “It created this rotating, undulating sound that almost has a voice. So that created this identity, this personality. It was very challenging. We also struggled with this when we did the Harry Potter films, to make an inert object have a character without making it sound a bit goofy and a bit sci-fi. All of those last elements we put together, we kept that very low. We literally raised the volume as you see Steve’s discomfort and then let it peel away every time he revealed the truth. As he was fighting it, the sound would rise and build up. It became a very subtle, but very meaningful, vehicle to show that the rope was actually doing something. It wasn’t burning him but it was doing something that was making him uncomfortable.”
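Two of the moves Mather describes — pitching the sizzle away from the high end and riding the level against the performance — map directly onto basic DSP. Here is a hedged sketch with scipy; the file names, the 400Hz cutoff and the five-second swell are invented for illustration, not production settings.

```python
# Hedged sketch: roll off the top end of a texture layer, then swell the
# level over time. Assumes a hypothetical mono-ish 16-bit source WAV.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

sr, sizzle = wavfile.read("sizzle_layer.wav")        # hypothetical source layer
sizzle = sizzle.astype(np.float64) / 32768.0         # 16-bit WAV assumed
if sizzle.ndim > 1:
    sizzle = sizzle.mean(axis=1)                     # fold to mono for the sketch

sos = butter(4, 400, btype="lowpass", fs=sr, output="sos")
darkened = sosfilt(sos, sizzle)                      # menacing, not hissy

t = np.arange(len(darkened)) / sr
swell = 0.2 + 0.8 * np.clip(t / 5.0, 0.0, 1.0)       # rise with the discomfort
wavfile.write("lasso_layer.wav", sr,
              (darkened * swell * 32767).astype(np.int16))
```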

The Mix
Wonder Woman was mixed at De Lane Lea (Warner Bros. London) by re-recording mixers Chris Burdon and Gilbert Lake. Mather reveals that the mixing process was exhausting, but not because of the people involved. “Patty is a joy to work with,” he explains. “What I mean is that working with frequencies that are so low and so loud is exhausting. It wasn’t even the volume; it was being exposed to those low frequencies all day, every day for nine weeks or so. It was exhausting, and it really took its toll on everybody.”

In the mix, Jenkins chose to have Rupert Gregson-Williams’s score lead nearly all of the action sequences. “Patty’s sensitivity and vision for the soundtrack was very much about the music and the emotion of the characters,” says Mather. “She was very aware of the emotional narrative that the music would bring. She did not want to lean too heavily on the sound effects. She knew there would be scenes where there would be action and there would be opportunities to have sound design, but I found that we were not pushing those moments as hard as you would expect. The sound design highs weren’t so high that you felt bereft of momentum and pace when those sound design heavy scenes were finished. We ended up maintaining a far more interesting soundtrack that way.”

With superhero films like Batman v Superman: Dawn of Justice and Spider-Man, the audience expects a sound design-heavy track, but Jenkins’s music-led approach to Wonder Woman provides a refreshing spin on superhero film soundtracks. “The soundtrack is less supernatural and more down to earth,” says Mather. “I don’t think it could’ve been any other way. It’s not a predictable soundtrack and I really enjoyed that.”

Mather really enjoys collaborating with people who have different ideas and different approaches. “What was exciting about doing this film was that I was able to work with someone who had an incredibly strong idea about the soundtrack and yet was very happy to let us try different routes and options. Patty was very open to listening to different ideas, and willing to take the best from those ideas while still retaining a very strong vision of how the soundtrack was going to play for the audience. This is Patty’s DC story, her opportunity to open up the DC universe and give the audience a new look at a character. She was an extraordinary person to work with and for me that was the best part of the process. In the time of remakes, it’s nice to have a film that is fresh and takes a different approach.”


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter at @AudioJeney.

Arcade grows with creative editor Graham Chisholm

Edit house Arcade, with offices in New York and Santa Monica, has hired creative editor Graham Chisholm. He will be based in the LA studio, but is available to work on either coast.

Chisholm’s career began in Montreal, where he worked for three years before moving to Toronto. For over a decade, he worked with a variety of advertising agencies and brands, including Gatorade, Land Rover, Budweiser, Ford, Chevrolet and the Toronto Raptors, to name a few. He has earned several awards for his work, including multiple Cannes Lions and Best in Show at the AICE Awards. According to Arcade, Chisholm has become best known for his ability to tell compelling and persuasive stories, regardless of the brand or medium he’s working with.

“Graham’s influence and dedication on a project extend beyond the edit and into the finishing of the film,” notes Michael Lawrence, director of a Powerade spot that Chisholm edited. “In our case, he is involved in everything, a true collaborator on an intellectual level, as well as a gifted craftsman. Graham has earned my trust and heartfelt praise through our time working together and becoming friends along the way. He is a gifted storyteller and a great man.”

Chisholm is in the midst of working on a new project at Arcade for Adidas via ad agency 72andSunny. He recently completed his first Arcade project, a short film called LA2024, also via 72andSunny, promoting Los Angeles’ bid for the 2024 Olympic Games.

John Hughes, Helena Packer, Kevin Donovan open post collective

Three industry vets have combined to launch PHD, a Los Angeles-based full-service post collective. Led by John Hughes (founder of Rhythm & Hues), Helena Packer (VFX supervisor/producer) and Kevin Donovan (film/TV/commercials director), PHD works across VR/AR, independent films, documentaries, TV (including limited series) and commercials. In addition to post production services, including color grading, offline and online editorial, visual effects and final delivery, PHD also offers live-action production services. Beyond Los Angeles, PHD has locations in India, Malaysia and South Africa.

Hughes was the co-founder of the legendary VFX shop Rhythm & Hues (R&H) and led that studio for 26 years, earning three Academy Awards for “Best Visual Effects” (Babe, The Golden Compass, Life of Pi) as well as four scientific and engineering Academy Awards.

Packer was inducted into the Academy of Motion Picture Arts and Sciences (AMPAS) in 2008 for her creative contributions to filmmaking as an accomplished VFX artist, supervisor and producer. Her expertise extends beyond feature films to episodic TV, stereoscopic 3D and animation. Packer has been the VFX supervisor and Flame artist for hundreds of commercials and over 20 films, including 21 Jump Street and Charlie Wilson’s War.

Director Kevin Donovan is particularly well-versed in action and visual effects. He directed the feature film The Tuxedo and is currently producing the TV series What Would Trejo Do? He has shot over 700 commercials during the course of his career and is the winner of six Cannes Lions.

Since the company’s launch, PHD has worked on a number of projects: two PSAs for the climate change organization 5 To Do Today featuring Arnold Schwarzenegger and James Cameron, called Don’t Buy It and Precipice; a PSA for the international animal advocacy group WildAid, shot in Tanzania and Oregon, called Talking Elephant; another for WildAid, shot in Cape Town, South Africa, called Talking Rhino; and two additional WildAid PSAs featuring actor Josh Duhamel, called Souvenir and Situation.

“In a sense, our new company is a reconfigured version of R&H, but now we are much smarter, much more nimble and much more results driven,” says Hughes about PHD. “We have very little overhead to deal with. Our team has worked on hundreds of award-winning films and commercials…”

Main Photo (L-R): John Hughes, Helena Packer and Kevin Donovan.

The long, strange trip of Amir Bar-Lev’s new Dead doc

Deadheads take note — Long Strange Trip, director Amir Bar-Lev’s four-hour documentary on rock’s original jam band, the Grateful Dead, is now available for viewing. While the film had a theatrical release in New York and Los Angeles on May 26, the doc was made available on Amazon Video as a six-episode series.

L-R: Jack Lewars and Keith Jenson.

Encompassing the band’s rise and decades-long career, the film, executive produced by Martin Scorsese, was itself 14 years in the making. That included three months of final post at Technicolor PostWorks New York, where colorist Jack Lewars and online editor Keith Jenson worked with Bar-Lev to finalize the film’s form and look.

The documentary features scores of interviews conducted by Bar-Lev with band members and their associates, as well as a mountain of concert footage and other archival media. All that made editorial conforming complex as Jenson (using Autodesk Flame) had to keep the diverse source material organized and make it fit properly into a single timeline. “We had conversions that were made from old analog tapes, archival band footage, DPX scans from film and everything in between,” he recalls. “There was a lot of cool stuff, which was great, but it required attention to detail to ensure it came out nice and smooth.”

The process was further complicated as creative editorial was ongoing throughout post. New material was arriving constantly. “We do a lot of documentary work here, so that’s something we’re used to,” Jenson says. “We have workflows and failsafes in place for all formats and know how to translate them for the Lustre platform Jack uses. Other than the sheer amount, nothing took us by surprise.”

Lewars faced a similar challenge during grading as he was tasked with bringing consistency to material produced over a long period of time by varying means. The overall visual style, he says, recalls the band’s origins in the psychedelic culture of the 1960s. “It’s a Grateful Dead movie, so there are a lot of references to their experiments with drugs,” he explains. “Some sections have a trippy feel where the visuals go in and out of different formats. It almost gives the viewer the sense of being on acid.”

The color palette, too, has a psychedelic feel, reflecting the free-spirited essence of the band and its co-founder. “Jerry Garcia’s life, his intention and his outlook, was to have fun,” Lewars observes. “And that’s the look we embraced. It’s very saturated, very colorful and very bright. We tried to make the movie as fun as possible.”

The narrative is frequently punctuated by animated sequences where still photographs, archival media and other elements are blended together in kaleidoscopic patterns. Finalizing those sequences required a few extra steps. “For the animation sequences, we had to cut in the plates and get them to Jack to grade,” explains Jenson. “We’d then send the color-corrected plates to the VFX and animation department for treatment. They’d come back as completed elements that we’d cut into the conform.”

The documentary climaxes with the death of Garcia and its aftermath. The guitarist suffered a heart attack in 1995 after years of struggling with diabetes and drug addiction. As those events unfold, the story undergoes a mood change that is mirrored in shifts in the color treatment. “There is a four-minute animated sequence in the last reel where Jerry has just passed and they are recapping the film,” Lewars says. “Images are overlaid on top of images. We colored those plates in hyper saturation, pushing it almost to the breaking point.

“It’s a very emotional moment,” he adds. “The earlier animated sequences introduced characters and were funny. But it’s tied together at the end in a way that’s sad. It’s a whiplash effect.”

Despite the length of the project and the complexity of its parts, it came together with few bumps. “Supervising producer Stuart Macphee and his team were amazing,” says Jenson. “They were very well organized, incredibly so. With so many formats and conversions coming from various sources, it could have snowballed quickly, but with this team it was a breeze.”

Lewars concurs. Long Strange Trip is an unusual documentary, both in its narrative style and its looks, and that’s what makes it fascinating for Deadheads and non-fans alike. “It’s not a typical history doc,” Lewars notes. “A lot of documentaries go with a cold, bleach-bypass look and gritty feel. This was the opposite. We were bumping the saturation in parts where it felt unnatural, but, in the end, it was completely the right thing to do. It’s like candy.”

You can binge it now on Amazon Video.