Category Archives: post production

AI for M&E: Should you take the leap?

By Nick Gold

In Hollywood, the promise of artificial intelligence is all the rage. Who wouldn’t want a technology that adds the magic of AI to smarter computers for an instant solution to tedious, time-intensive problems? With artificial intelligence, anyone with abundant rich media assets can easily churn out more revenue or cut costs, while simplifying operations … or so we’re told.

If you attended IBC, you probably already heard the pitch: “It’s an ‘easy’ button that’s simple to add to the workflow and foolproof to operate, turning your massive amounts of uncategorized footage into metadata.”

But should you take the leap? Before you sign on the dotted line, take a closer look at the technology behind AI and what it can — and can’t — do for you.

First, it’s important to understand the bigger picture of artificial intelligence in today’s marketplace. Taking unstructured data and generating relevant metadata from it is something that other industries have been doing for some time. In fact, many of the tools we embrace today started off in other industries. But unlike banking, finance or healthcare, our industry prioritizes creativity, which is why we have always shied away from tools that automate. The idea that we can rely on the same technology as a hedge fund manager just doesn’t sit well with many people in our industry, and for good reason.

Nick Gold talks AI for a UCLA Annex panel.

In the media and entertainment industry, we’re looking for various types of metadata that could include a transcript of spoken words, important events within a period of time or information about the production (e.g., people, location, props), and currently there’s no single machine-learning algorithm that will solve for all these types of metadata parameters. For that reason, the best starting point is to define your problems and identify which machine learning tools may be able to solve them. Expecting to parse reams of untagged, uncategorized and unstructured media data is unrealistic until you know what you’re looking for.

What works for M&E?
AI has become pretty good at solving some specific problems for our industry. Speech-to-text is one of them. Extracting data from a generally accurate AI transcription offers an automated solution that saves time. However, it’s important to note that AI tools still have limitations. An AI tool known as “sentiment analysis” could theoretically look for the emotional undertones in spoken words, but it first requires another tool to generate a transcript for analysis.
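
To make that dependency concrete, here is a minimal sketch in plain Python of the two-stage pipeline, with invented stand-in functions rather than any real vendor API: the sentiment stage can only run on what the speech-to-text stage produces.

```python
# Sketch only: transcribe() and score_sentiment() are hypothetical
# stand-ins for licensed engines, not real vendor APIs.

POSITIVE = {"great", "love", "wonderful"}
NEGATIVE = {"terrible", "hate", "awful"}

def transcribe(audio_path: str) -> str:
    # Stage 1: speech-to-text. A real engine would decode the audio file.
    return "what a wonderful shot, I love this location"

def score_sentiment(transcript: str) -> float:
    # Stage 2: naive keyword scorer standing in for a sentiment engine:
    # +1 per positive word, -1 per negative word, normalized by length.
    words = transcript.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)

transcript = transcribe("interview_take3.wav")  # stage 1 must run first...
print(score_sentiment(transcript))              # ...before stage 2 can score it
```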

But no matter how good the algorithms are, they won’t give you the qualitative data that a human observer would provide, such as the emotions expressed through body language. They won’t capture the facial expressions of the people being spoken to, the tone of voice, pacing and volume level of the speaker, or what is conveyed by a sarcastic tone or a wry expression. There are sentiment analysis engines that try to do this, but breaking the problem down into its components ensures the parameters you actually need will be addressed and solved.

Another task at which machine learning has progressed significantly is logo recognition. Certain engines are good at finding, for example, all the images with a Coke logo in 10,000 hours of video. That’s impressive and quite useful, but it’s another story if you want to also find footage of two people drinking from what are clearly Coke-shaped bottles where the logo is obscured. That’s because machine-learning engines tend to have a narrow focus, which goes back to the need to define very specifically what you hope to get from them.

There is a bevy of algorithms and engines out there. If you license a service that finds a specific logo, you still haven’t solved the problem of finding objects that merely represent the product. Even with the right engine, you’ve got to think about how this information fits into your pipeline, and there are a lot of workflow questions to be explored.

Let’s say you’ve generated speech-to-text from your audio media, but have you figured out how someone can search the results? There are several options. Some vendors have their own front end for searching. Others may offer an export option from one engine into a MAM that you either already have on-premises or plan to purchase. There are also vendors that don’t provide machine learning themselves but act as a third-party service that organizes the various engines.
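
If none of those options fit, even a crude inverted index makes transcripts searchable. The sketch below (plain Python, with invented clip data rather than any real engine’s output) maps each word to the clips and rough timecodes where it occurs:

```python
# Minimal inverted index over speech-to-text results; the transcript
# data here is invented for illustration.
from collections import defaultdict

transcripts = {
    "clip_001": [(0, "we open on the harbor"), (12, "cut to the market")],
    "clip_002": [(3, "the harbor at night")],
}

index = defaultdict(list)
for clip, lines in transcripts.items():
    for seconds, text in lines:
        for word in text.lower().split():
            index[word].append((clip, seconds))

print(index["harbor"])  # -> [('clip_001', 0), ('clip_002', 3)]
```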

It’s important to remember that none of these AI solutions are accurate all the time. You might get a nudity detection filter, for example, but these vendors rely on probabilistic results. If having one nude image slip through is a huge problem for your company, then machine learning alone isn’t the right solution for you. It’s important to understand whether occasional inaccuracies will be acceptable or deal breakers. Testing samples of your core content against the scenarios you need to solve for becomes another crucial step, and many vendors are happy to test footage in their systems.
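
As one hedged illustration of what “probabilistic results” means in practice: rather than trusting a single yes/no from the engine, you can bucket each detection score and route the uncertain middle band to human review. The thresholds and frame scores below are invented for the example.

```python
# Triage probabilistic detector output; thresholds are illustrative.
REJECT_ABOVE = 0.90   # confident enough to auto-flag
REVIEW_ABOVE = 0.40   # the uncertain middle band goes to a human

def triage(frame_scores):
    buckets = {"flagged": [], "human_review": [], "passed": []}
    for frame, score in frame_scores.items():
        if score >= REJECT_ABOVE:
            buckets["flagged"].append(frame)
        elif score >= REVIEW_ABOVE:
            buckets["human_review"].append(frame)
        else:
            buckets["passed"].append(frame)
    return buckets

print(triage({"f0012": 0.97, "f0108": 0.55, "f0240": 0.05}))
```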

Although machine learning is still in its nascent stages, there is a lot of interest in learning how to make it work in the media workflow. It can do some magical things, but it’s not a magic “easy” button (yet, anyway). Exploring the options and understanding in detail what you need goes hand-in-hand with finding the right solution to integrate with your workflow.


Nick Gold is lead technologist for Baltimore’s Chesapeake Systems, which specializes in M&E workflows and solutions for the creation, distribution and preservation of content. Active in both SMPTE and the Association of Moving Image Archivists (AMIA), Gold speaks on a range of topics. He also co-hosts the Workflow Show Podcast.
 

Behind the Title: Pace Pictures owner Heath Ryan

NAME: Heath Ryan

COMPANY: Pace Pictures (@PacePictures)

CAN YOU DESCRIBE YOUR COMPANY?
We are a dailies-to-delivery post house, including audio mixing.

Pace’s Dolby Atmos stage.

WHAT’S YOUR JOB TITLE?
Owner and editor.

WHAT DOES THAT ENTAIL?
As owner, I need to make sure everyone is happy.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Psychology. I deal with a lot of producers, directors and artists that all have their own wants and needs. Sometimes what that entails is not strictly post production but managing personalities.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Editing. My company grew out of my love for editing. It’s the final draft of any film. In the over 30 years I have been editing, the power of what an editor can do has only grown.

WHAT’S YOUR LEAST FAVORITE?
Chasing unpaid invoices. It’s part of the job, but it’s not fun.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
Late, late in the evening when there are no other people around and you can get some real work done.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Not by design but through sheer single-mindedness, I have no other skill set but film production. My sense of direction is so bad that, even armed with the GPS supercomputer in my phone, driving for Uber is not an option.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I started making films in the single digit years. I won a few awards for my first short film in my teens and never looked back. I’m lucky to have found this passion early.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
This year I edited the reboot of Daddy Day Care, called Grand-Daddy Day Care (2019), for Universal. I got to work with director Ron Oliver and actor Danny Trejo, and it meant a lot to me. It deals with what we do with our elders as time creeps up on us all. Sadly, we lost Ron’s mom while we were editing the film, so it took on extra special meaning for us both.

Lawless Range

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Lawless Range and The Producer. I produced and edited both projects with my dear friend and collaborator Sean McGinly. A modern-day Western and a behind-the-scenes look at a Hollywood pilot. They were very satisfying projects because there was no one to blame but ourselves.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My Meridian Sound system, the Internet and TV.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Yes, I love it. I have always set the tone in the edit bay with music. Especially during dailies – I like to put music on, sometimes film scores, to set the mood of what we are making.

Behind the Title: Post supervisor Chloe Blackwell

NAME: Chloe Blackwell

COMPANY: UK-based Click Post Production

CAN YOU DESCRIBE YOUR COMPANY?
I provide bespoke post solutions, which include consultancy and development courses for production companies. I’m also currently working on an online TV series full time. More on that later!

WHAT’S YOUR JOB TITLE?
Post Production Supervisor

WHAT DOES THAT ENTAIL?
Each job that I take on is quite different, so my role will evolve to suit each company’s needs.

Usually my job starts at the early stages of production, so I will meet with the editorial team to work out what they are looking to achieve visually. From this I can ascertain how their post will work most effectively, and work back from their delivery dates to put an edit and finishing schedule together.

For every shoot I will oversee the rushes being ingested and investigate any technical issues that crop up. Once the post production phase starts, I will be in charge of managing the offline. This includes ensuring editors are aware of deadlines and working with executives and/or directors and producers to ensure smooth running of their show.

This also requires me to liaise with the post house, keeping them informed of production’s requirements and schedules, and troubleshooting any obstacles that inevitably crop up along the way.

I also deal directly with the broadcaster, ensuring delivery requirements are clear, ironing out any technical queries from both sides and ensuring the final masters are delivered in a timely manner. This also means that I have to be meticulous about quality control of the final product, as any errors can cause huge delays. As the post supervisor, managing the post production budget efficiently is vital. I keep a constant eye on spending and keep the production team up to date with cost reports.

Alternatively, I also offer my services as a consultant if all a production needs is some initial support. I’m also in the process of setting up courses for production teams that will help them gain a better understanding of the new 4K HDR world, and how they can work to realistic timings and budgets.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Probably the amount of decisions I have to make on a daily basis. There are so many different ways of doing things, from converting frame rates and working with archive material to creating the workflows editorial will work with.

WHAT’S YOUR FAVORITE PART OF THE JOB?
I think I have the best job in the world! I am one of the very few people on any production that sees the show from early development, right through to delivery. It’s a very privileged position.

WHAT’S YOUR LEAST FAVORITE?
My role can be quite intensive, so there is usually a real lack of downtime.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
As I have quite a long commute, I find that first thing in the morning is my most productive time. From about 6am I have a few hours of uninterrupted work I can do to set my day up to run smoothly.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I would have joined the military!

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
As cheesy as it sounds, post production actually found me! I was working for a production company very early in my career, and I was going to be made redundant. Luckily, I was a valued member of the company and was re-drafted into their post production team. At first I thought it was a disaster; however, with lots of help, I hit my stride and fell in love with the job.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
For the last three years I have been working on The Grand Tour for Amazon Prime.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
That’s a hard question as I have worked on so many.

But The Grand Tour has been the most technically challenging. It was the first-ever 4K HDR factual entertainment show, coupled with the fact that it was all shot at 23.98, with elements shot as live. It was one of those jobs where you couldn’t really ask people for advice because it just hadn’t been done.

However, I am also really proud of some of the documentaries I have made, including Born to be Different, Power and the Women’s World and VE Day.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My coffee machine, my toaster and the Avid Media Composer.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
All of them…I have to! Part of being in post is being aware of all the new technologies, shows and channels/online platforms out there. You have to keep ahead of the times.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Yes, I love music! I have an eclectic, wide-ranging taste, which means I have a million playlists on Spotify! I love finding new music and playing it for Jess (Jessica Redman, my post production coordinator). We are often shimmying around the office. It keeps the job light, especially during the most demanding days.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I am fortunate enough to be able to take my dog Mouse with me to work. She keeps me sane and calm, whilst also bringing a little joy to those I work with!

I am also an obsessive reader, so any down time I get I am often found curled up under a blanket with a good book.

My passion for television really knows no bounds, so I watch TV a lot too! I try to watch at least the first episode of all new TV programs. I rarely get to go to the cinema, but when I do it’s such a treat to watch films on the big screen.


Encore adds colorist Andrea Chlebak, ups Genevieve Fontaine to director of production

Encore has added colorist Andrea Chlebak to its roster and promoted veteran post producer Genevieve Fontaine to director of production. Chlebak brings a multidisciplinary background in feature films, docu-series and commercials across a range of aesthetics. Fontaine has been a post producer since joining the Encore team in early 2010.

Chlebak’s credits include award-winning indies Mandy and Prospect, Neill Blomkamp features Elysium and Chappie and animated adaptation Kahlil Gibran’s “The Prophet.” Having worked primarily in the digital landscape, her experience as an artist, still photographer, film technician, editor and compositor is evident in both her work and how she’s able to streamline communication with directors and cinematographers in delivering their vision.

In her new role, Fontaine’s responsibilities shift toward ensuring organized, efficient and future-proof workflows. Fontaine began her career as a telecine and dailies producer at Riot before moving to Encore, where she managed post for up to 11 shows at a time, including Marvel’s The Defenders series for Netflix. She understands all the building blocks necessary to keep a facility running smoothly and has been instrumental in establishing Encore, a Deluxe company, as a leader in advanced formats, helping coordinate 4K, HDR and IMF-based workflows.

Main Image: (L-R) Genevieve Fontaine and Andrea Chlebak.


A Conversation: 3P Studio founder Haley Stibbard

Australia’s 3P Studio is a post house founded and led by artisan Haley Stibbard. The company’s portfolio of work includes commercials for brands such as Subway, Allianz and Isuzu Motor Company as well as iconic shows like Sesame Street. Stibbard’s path to opening her own post house was based on necessity.

After going on maternity leave to have her first child in 2013, she returned to her job at a content studio to find that her role had been made redundant. She was subsequently let go. Needing and wanting to work, she began freelancing as an editor — working seven days a week and never turning down a job. Eventually she realized that she couldn’t keep up with that type of schedule and took her fate into her own hands. She launched 3P Studio, one of Brisbane’s few women-led post facilities.

We reached out to Stibbard to ask about her love of post and her path to 3P Studio.

What made you want to get into post production? School?
I had a strong love of film, which I got from my late dad, Ray. He was a big film buff and would always come home from work when I was a kid with a shopping bag full of $2 movies from the video store and he would watch them. He particularly liked the crime stories and thrillers! So I definitely got my love of film and television from him.

We did not have any film courses at high school in the ‘90s, so the closest I could get was photography. Without a show reel it was hard to get a place at university in the college of art; a portfolio was a requirement and I didn’t have one. I remember I had to talk my way into the film program, and in the end I think they just got sick of me and let me into the course through the back door without a show reel — I can be very persistent when I want to be. I had always enjoyed editing and I was good at it, so in group tasks I was always chosen as the editor, and then my love of post came from there.

What was your first job?
My very first job was quite funny, actually. I was working in both a shoe store and a supermarket at the time, and two post positions became available one day: an in-house editor role at a big furniture chain and a production assistant job at a large VFX company at Movie World on the Gold Coast. Anyone who knows me knows that I would be the worst PA in the world. So, luckily for that company’s director, I didn’t get the PA job and became the in-house editor for the furniture chain.

I’m glad that I took that job, as it taught me so much: how to work under pressure, how to use an Avid, how to work with deadlines, what a key number was, how to dispatch TVCs to the stations, how to be quick and accurate, and how to take constructive feedback.

I made every mistake known to man, including one weekend when I forgot to remove the 4×3 safe bars from a TVC and my boss saw it on TV. I ended up having to drive to the office, climb the locked fence to get in and pull the spot off air. So I’ve learned a lot of things the hard way, but my boss was a very patient and forgiving man, and 18 years later he is now a client of mine!

What job did you hold when you went out on maternity leave?
Before I left on maternity leave to have my son Dashiell, I was an editor for a small content company. I have always been a jack-of-all-trades and I took care of everything from offline to online, grading in Resolve, motion graphics in After Effects and general design. I loved my job and I loved the variety that it brought. Doing something different every day was very enjoyable.

After leaving that job, you started freelancing as an editor. What systems did you edit on at the time and what types of projects? How difficult a time was that for you? New baby, working all the time, etc.
I started freelancing when my son was just past seven months old. I had a mortgage and had just come off six months of unpaid maternity leave, so I needed to make a living and I needed to make it quickly. I also had the added pressure of looking after a young child under the age of one who still needed his mother.

So I started contacting advertising agencies and production companies that I thought may be interested in my skill set. I just took every job that I could get my hands on, as I was always worried that every job that I took could potentially be my last for a while. I was lucky that I had an incredibly well-behaved baby! I never said “no” to a job.

As my client base started to grow, my clients would always book me since they knew that I would never say “no” (they know I still don’t say no!). It got to the point where I was working seven days a week. I worked all day when my son was in childcare and all night after he would go to bed. I would take the baby monitor downstairs where I worked out of my husband’s ‘man den.’

As my freelance business grew, I was so lucky that I had the most supportive husband in the world, who was doing everything for me — the washing, the cleaning, the cooking, bath time — as well as holding down his own full-time job as an engineer. I wouldn’t have been able to do what I did for that period of time without his support and encouragement. This time really proved to be a huge stepping stone for 3P Studio.

Do you remember the moment you decided you would start your own business?
There wasn’t really a specific moment where I decided to start my own business. It was something that seemed to just naturally come together. The busier I became, the more opportunities came about, like having enough work through the door to build a space and hire staff. I have always been very strategic in regard to the people that I have brought on at 3P, and the timing in which they have come on board.

Can you walk us through that bear of a process?
At the start of 2016, I made the decision to get out of the house. My work life was starting to blend in with my home life and I needed to have that separation. I worked out of a small office for 12 months, and about six months into that it came to a point where I was able to purchase an office space that would become our studio today.

I went to work planning the fit out for the next six months. The studio was an investment in the business and I needed a place that my clients could also bring their clients for approvals, screenings and collaboration on jobs, as well as just generally enjoying the space.

The office space was an empty white shell, but the beauty of coming into a blank canvas was that I was able to create a studio that was specifically built for post production. I was lucky in that I had worked in some of the best post houses in the country as an editor, and this being a custom build I was able to take all the best bits out of all the places I had previously worked and put them into my studio without the restriction of existing walls.

I built up the walls, ripped down the ceilings and was able to design the edit suites and infrastructure, all the way down to designing and laying the cable runs myself, runs I knew would work for us down the line. Then we saved money and added more equipment to the studio bit by bit. It wasn’t 0 to 100 overnight; I had to work at the business development side of the company a lot, and I spent a lot of long days sitting by myself in those edit suites doing everything. Soon, word of mouth started to circulate and the business started to grow on the back of some nice jobs from my existing loyal clients.

What type of work do you do, and what gear do you call on?
3P Studio is a boutique studio that specializes in full-service post production; we also shoot content when required.

Our clients range anywhere from small content videos for the web all the way up to large commercial campaigns and everything in between.

There are currently six of us working full time in the studio, and we handle everything in-house from offline editing to VFX to videography and sound design. We work primarily in the Adobe Creative Suite, with offline editing in Premiere, mixed with Maxon Cinema 4D/Autodesk Maya for 3D work, Autodesk Flame and Side Effects Houdini for online compositing and VFX, Blackmagic Resolve for color grading and Pro Tools HD for sound mixing. We use EditShare EFS shared storage nodes for collaborative working and sharing of content between the mix of creative platforms we use.

This year we have invested in a Red Digital Cinema camera as well as an EditShare XStream 200 EFS scale-out single-node server so we can become that one-stop shop for our clients. We have been able to create an amazing creative space for our clients to come and work with us, be it from the bespoke design of our editorial suites or the high level of client service we offer.

How did you build 3P Studios to be different from other studios you’ve worked at?
From a personal perspective, the culture that we have been able to build in the studio is unlike anywhere else I have worked in that we genuinely work as a team and support each other. On the business side, we cater to clients of all sizes and budgets while offering uncompromising services and experience whether they be large or small. Making sure they walk away feeling that they have had great value and exemplary service for their budget means that they will end up being a customer of ours for life. This is the mantra that I have been able to grow the business on.

What is your hiring process like, and how do you protect employees who need to go out on maternity or family leave?
When I interview people to join 3P, attitude and willingness to learn is everything to me — hands down. You can be the most amazing operator on the planet, but if your attitude stinks then I’m really not interested. I’ve been incredibly lucky with the team that I have, and I have met them along the journey at exactly the right times. We have an amazing team culture and as the company grows our success is shared.

I always make it clear that it’s swings and roundabouts and that family is always number one. I am there to support my team if they need me to be, not just inside of work but outside as well, and I receive the same support in return. We have flexible working hours, and I have team members with young families who, at times, are able to work both in the studio and from home so that they can be there for their kids when they need to be. This flexibility works fine for us. Happy team members make for a happy, productive workplace, and I like to think that 3P is forward thinking in that respect.

Any tips for young women either breaking into the industry or in it that want to start a family but are scared it could cost them their job?
Well, for starters, we have laws in Australia that make it illegal for any woman in this country to be discriminated against for starting a family. 3P also supports the 18 weeks of paid maternity leave available to women heading out to start a family. I would love to see more women in post production, especially in operator roles. We aren’t just going to be the coffee and tea girls; we are directors, VFX artists, sound designers, editors and cinematographers — the future is female!

Any tips for anyone starting a new business?
Work hard, be nice to people and stay humble because you’re only as good as your last job.

Main Image: Haley Stibbard (second from left) with her team.


IBC 2018: Convergence and deep learning

By David Cox

In the 20 years I’ve been traveling to IBC, I’ve tried to seek out new technology, work practices and trends that could benefit my clients and help them be more competitive. One thing that is perennially exciting about this industry is the rapid pace of change. Certainly, from a post production point of view, there is a mini revolution every three years or so. In the past, those revolutions have increased image quality or the efficiency of making those images. The current revolution is to leverage the power and flexibility of cloud computing. But those revolutions haven’t fundamentally changed what we do. The images might have gotten sharper, brighter and easier to produce, but TV is still TV. This year though, there are some fascinating undercurrents that could herald a fundamental shift in the sort of content we create and how we create it.

Games and Media Collide
There is a new convergence on the horizon in our industry. A few years ago, all the talk was about the merge between telecommunications companies and broadcasters, as well as the joining of creative hardware and software for broadcast and film, as both moved to digital.

The new convergence is between media content creation as we know it and the games industry. It was subtle, but technology from gaming was present in many applications around the halls of IBC 2018.

One of the drivers for this is a giant leap forward in the quality of realtime rendering by the two main game engine providers: Unreal and Unity. I program with Unity for interactive applications, and its new HDRP (High Definition Render Pipeline) allows for incredible realism, even when rendering fast enough for 60+ frames per second. In order to create such high-quality images, those game engines must start with reasonably detailed models. This is a departure from the past, where less detailed models were used for games than for film CGI shots, to protect realtime performance. So, the first clear advantage created by the new realtime renderers is that a film and its inevitable related game can use the same or similar model data.

NCam

Being able to use the same scene data between final CGI and a realtime game engine allows for some interesting applications. Habib Zargarpour from Digital Monarch Media showed a system based on Unity that allows a camera operator to control a virtual camera in realtime within a complex CGI scene. The resulting camera moves feel significantly more real than if they had been keyframed by an animator. The camera operator chases high-speed action, jumps at surprises and reacts to unfolding scenes. The subtleties that these human reactions deliver via minor deviations in the movement of the camera can convey the mood of a scene as much as the design of the scene itself.

NCam was showing the possibilities of augmenting scenes with digital assets, using their system based on the Unreal game engine. The NCam system provides realtime tracking data to specify the position and angle of a freely moving physical camera. This data was being fed to an Unreal game engine, which was then adding in animated digital objects. They were also using an additional ultra-wide-angle camera to capture realtime lighting information from the scene, which was then being passed back to Unreal to be used as a dynamic reflection and lighting map. This ensured that digitally added objects were lit by the physical lights in the real-world scene.

Even a seemingly unrelated (but very enlightening) chat with StreamGuys president Kiriki Delany about all things related to content streaming still referenced gaming technology. Delany talked about their tests to build applications with Unity to provide streaming services in VR headsets.

Unity itself has further aspirations to move into storytelling rather than just gaming. The latest version of Unity features an editing timeline and color grading. This allows scenes to be built and animated, then played out through various virtual cameras to create a linear story. Since those scenes are being rendered in realtime, tweaks to scenes such as positions of objects, lights and material properties are instantly updated.

Game engines not only offer us new ways to create our content, but they are a pathway to create a new type of hybrid entertainment, which sits between a game and a film.

Deep Learning
Other undercurrents at IBC 2018 were the possibilities offered by machine learning and deep learning software. Essentially, a normal computer program is hard-wired to give a particular output for a given input. Machine learning allows an algorithm to compare its output to a set of data and adjust itself if the output is not correct. Deep learning extends that principle by using neural network structures to make a vast number of assessments of input data, then draw conclusions and predictions from that data.
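
As a toy illustration of that compare-and-adjust loop (not any particular product’s algorithm), here is a one-weight model that nudges itself toward the correct answer each time its output is wrong:

```python
# The bare machine-learning mechanism: compare output to data, adjust.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x, targets y = 2x
w = 0.0                                      # the single learnable weight
lr = 0.05                                    # learning rate

for _ in range(200):                         # repeated compare-and-adjust
    for x, y in data:
        error = (w * x) - y                  # compare output to the data
        w -= lr * error * x                  # adjust toward less error

print(round(w, 3))  # converges near 2.0, the rule hidden in the data
```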

Real-world applications are already prevalent and, in our industry, largely relate to processing viewing metrics. For example, Netflix suggests what we might want to watch next by comparing our viewing habits to those of others with a similar viewing pattern.

But deep learning offers — indeed threatens — much more. Of course, it is understandable to think that, say, delivery drivers might be redundant in a world where autonomous vehicles rule, but surely creative jobs are safe, right? Think again!

IBM was showing how its Watson Studio has used deep learning to provide automated editing highlights packages for sporting events. The process is relatively simple to comprehend, although considerably more complicated in practice. A DL algorithm is trained to scan a video file and “listen” for a cheering crowd. This finds the highlight moment. Another algorithm rewinds back from that to find the logical beginning of that moment, such as the pass forward, the beginning of the volley etc. Taking the score into account helps decide whether that highlight was pivotal to the outcome of the game. Joining all that up creates a highlight package without the services of an editor. This isn’t future stuff. This has been happening over the last year.
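
A heavily simplified sketch of that logic, on invented data, might look like the following; the threshold, pre-roll and loudness values are made up for illustration, and Watson’s actual pipeline is far more sophisticated.

```python
# Find crowd-noise peaks, rewind each to a plausible play start,
# and emit (start, end) highlight clips. All values illustrative.
CHEER_THRESHOLD = 0.8   # normalized loudness that counts as a cheer
PRE_ROLL = 8            # seconds to rewind to catch the build-up

def find_highlights(loudness_per_sec):
    clips = []
    for t, level in enumerate(loudness_per_sec):
        if level >= CHEER_THRESHOLD:
            start = max(0, t - PRE_ROLL)      # "rewind to the pass/volley"
            if not clips or start > clips[-1][1]:
                clips.append((start, t + 2))  # run a beat past the peak
    return clips

# Fake one-value-per-second loudness track containing two cheers.
track = [0.2] * 30 + [0.95] + [0.3] * 40 + [0.9] + [0.2] * 10
print(find_highlights(track))  # -> [(22, 32), (63, 73)]
```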

BBC R&D was talking about their trials to have DL systems control cameras at sporting events, as they could be trained to follow the “rule of thirds” framing convention and to spot moments of excitement that justified close-ups.

In post production, manual tasks such as rotoscoping and color matching in color grading could be automated. Even styles for graphics, color and compositing could be “learned” from other projects.

It’s certainly possible to see that deep learning systems could provide a great deal of assistance in the creation of day-to-day media. Tasks that are based on repetitiveness or formula would be the obvious targets. The truth is, much of our industry is repetitive and formulaic. Investors prefer content that is more likely to be a hit, and this leads to replication over innovation.

So, are we heading for “Skynet” and need Arnold to save us? I thought it was very telling that IBM occupied the central stand position in Hall 7 — traditionally the home of the tech companies that have driven creativity in post. Clearly, IBM and its peers are staking their claim. I have no doubt that DL and ML will make massive changes to this industry in the years ahead. Creativity is probably, but not necessarily, the only defence for mere humans to keep a hand in.

That said, at IBC 2018 the most popular place for us mere humans to visit was a bar area called The Beach, where we largely drank Heineken. If the ultimate deep learning system is tasked to emulate media people, surely it would create digital alcohol and spend hours talking nonsense, rather than trying to take over the media world? So perhaps we have a few years left yet.


David Cox is a VFX compositor and colorist with 20-plus years of experience. He started his career with MPC and The Mill before forming his own London-based post facility. Cox recently created interactive projects with full body motion sensors and 4D/AR experiences.


Panavision, Sim, Saban Capital agree to merge

Saban Capital Acquisition Corp., a publicly traded special purpose acquisition company, Panavision and Sim Video International have agreed to combine their businesses to create a premier global provider of end-to-end production and post production services to the entertainment industry. Under the terms of the business combination agreement, Panavision and Sim will become wholly owned subsidiaries of Saban Capital Acquisition Corp. Upon completion, Saban Capital Acquisition Corp. will change its name to Panavision Holdings Inc. and is expected to continue to trade on the Nasdaq stock exchange. Kim Snyder, president and chief executive officer of Panavision, will serve as chairman and chief executive officer. Bill Roberts, chief financial officer of Panavision, will serve in that role for the combined company.

Panavision designs, manufactures and provides high-precision optics and camera technology for the entertainment industry and is a leading global provider of production equipment and services. Sim is a leading provider of production and post production solutions with facilities in Los Angeles, Vancouver, Atlanta, New York and Toronto.

“This acquisition will leverage the best of Panavision’s and Sim’s resources by providing comprehensive products and services to best address the ever-adapting needs of content creators globally,” says Snyder.

“We’re combining the talent and integrated services of Sim with two of the biggest names in the business, Panavision and Saban,” adds James Haggarty, president and CEO of Sim. “The resulting scale of the new combined enterprise will better serve our clients and help shape the content-creation landscape.”

The respective boards of directors of Saban Capital Acquisition Corp., Panavision and Sim have unanimously approved the merger with completion subject to Saban Capital Acquisition Corp. stockholder approval, certain regulatory approvals and other customary closing conditions. The parties expect that the process will be completed in the first quarter of 2019.


HPA Tech Retreat 2019 opens call for proposals

The Hollywood Professional Association has issued the call for proposals for the 2019 HPA Tech Retreat, the annual gathering of professionals from around the world who work at the intersection of technology and content creation. The main conference is determined by the proposals submitted during this process.

The HPA Tech Retreat comprises Tech Retreat Extra (TR-X), the Supersession, breakfast roundtables, an Innovation Zone and the main conference. Also open now are submissions for the breakfast roundtables.

Now in its 24th year, the HPA Tech Retreat will take place February 11-15, 2019 at the JW Marriott Desert Springs Resort & Spa in Palm Desert, California, near Palm Springs.

The main program presentations are set for Wednesday, February 13 through Friday, February 15. These presentations are strictly reserved for marketing-free content. Mark Schubin, who has programmed the Tech Retreat since its inception, notes that main program sessions can include a wide range of content. “We are looking for the most interesting, thought-provoking, challenging and important ideas, diving into almost anything that is related to moving images and associated sounds. That includes, but is not limited to: alternative content for cinema, AR, broadcast in the age of broadband, content protection, dynamic range, enhanced cinema, frame rate, global mastering, higher immersion, international law, joke generation, kernel control, loss recovery, media management, night vision, optical advances, plug-‘n’-play, queasiness in VR, robo-post, surround imagery, Terabyte thumb drives, UHD II, verification, wilderness production, x-band Internet access, yield strength of lighting trusses and zoological holography.”

It is a far-ranging and creative call to the most innovative thinkers exploring the most interesting ideas and work. He concludes with his annual salvo, “Anything from scene to seen and gear to ear is fair game. So are haptic/tactile, olfactory and gustatory applications.”

Proposals, which are informal in nature and can be as short as a few sentences in length, must be submitted by the would-be presenter. Submitters will be contacted if the topic is of interest. Presentations in the main program are typically 30 minutes long, including set-up and Q&A. The deadline to submit main program proposals is end of day, Friday, October 26, 2018. Submissions should be sent to tvmark@earthlink.net.

Breakfast roundtables take place Wednesday to Friday, beginning at 7:30am. Unlike the main program, moderator-led breakfast roundtables can include marketing information. Schubin comments, “Table moderators are free to teach, preach, inquire, ask, call-to-task, sell or do anything else that keeps conversation flowing for an hour.”

There is no vetting process for breakfast roundtables. All breakfast roundtable moderators must be registered for the retreat, and there is no retreat registration discount conveyed by moderating a breakfast roundtable. Proposals for breakfast roundtables must be submitted by their proposed moderators, and once the maximum number of tables is reached (32 per day) no more can be accepted.

Further details for the 2019 HPA Tech Retreat will be announced in the coming weeks, including TR-X focus, supersession topics and Innovation Zone details, as well as seminars and meetings held in advance of the Tech Retreat.


Roundtable Post tackles HFR, UHD and HDR image processing

If you’re involved in post production, especially episodic TV, documentaries and feature films, then it’s highly probable that High Frame Rate (HFR), Ultra High Definition (UHD) and High Dynamic Range (HDR) have come your way.

“On any single project, the combination of HFR, UHD and HDR image-processing can be a pretty demanding, cutting-edge technical challenge, but it’s even more exacting when particular specs and tight turnarounds are involved,” says Jack Jones, digital colorist and CTO of full-service boutique facility Roundtable Post Production.

Among the central London facility’s credits are online virals for brands including Kellogg’s, Lurpak, Rolex and Ford, music films for Above & Beyond and John Mellencamp, plus broadcast TV series and feature documentaries for ITV, BBC, Sky, Netflix, Amazon, Discovery, BFI, Channel 4, Showtime and film festivals worldwide. These include Sean McAllister’s A Northern Soul, Germaine Bloody Greer (BBC) and White Right: Meeting The Enemy (ITV Exposure/Netflix).

“Yes, you can render-out HFR/UHD/HDR deliverables from a variety of editing and grading systems, but there are not many that can handle the simultaneous combination of these formats, never mind the detailed delivery stipulations and crunching deadlines that often accompany such projects,” says Jones.

Rewinding to the start of 2017, Jones says, “Looking forward to the future landscape of post, the proliferation of formats, resolutions, frame rates and color spaces involved in modern screened entertainment seemed an inevitability for our business. We realized that we were going to need to tackle the impending scenario head-on. Having assessed the alternatives, we took the plunge and gambled on Colorfront Transkoder.”

Transkoder is a standalone, automated system for fast digital file conversion. Roundtable Post’s initial use of Colorfront Transkoder turned out to be the creation of encrypted DCP masters and worldwide deliverables of a variety of long-form projects, such as Nick Broomfield’s Whitney: Can I Be Me, Noah Media Group’s Bobby Robson: More Than a Manager, Peter Medak’s upcoming feature The Ghost of Peter Sellers, and the Colombian feature-documentary To End A War, directed by Marc Silver.

“We discovered from these experiences that, along with incredible quality in terms of image science, color transforms and codecs, Transkoder is fast,” says Jones. “For example, the deliverables for To End A War involved 10 different language versions, plus subtitles. It would have taken several days to complete these straight out of an Avid, but rendering in Transkoder took just four hours.”

More recently, Roundtable Post was faced with the task of delivering country-specific graphics packages, designed and created by production agency Noah Media Group, for use by FIFA rights holders and broadcasters during the 2018 World Cup.

The project involved delivering a mix of HFR, UHD, HDR and HD SDR formats, resulting in 240 bespoke animations, and the production of a mammoth 1,422 different deliverables. These included: 59.94p UHD HDR, 50p UHD HDR, 59.94p HD SDR, 50p HD SDR, 59.94i HD SDR and 50i HD SDR with a variety of clock, timecode, pre-roll, soundtrack, burn-in and metadata requirements as part of the overall specification. Furthermore, the job encompassed the final QC of all deliverables, and it had to be completed within a five-day work week.

“For a facility of our size, this was a significant job in terms of its scale and deadline,” says Jones. “Traditionally, projects like these would involve throwing a lot of people and time at them, and there’s always the chance of human error creeping in. Thankfully, we already had positive experiences with Transkoder, and were eager to see how we could harness its power.”

Using technical data from FIFA, Jones built an XML file containing timelines with all of the relevant timecode, clock, image metadata, WAV audio and file-naming information for the required deliverables. He also liaised with Colorfront’s R&D team and was quickly provided with an initial set of Python script templates that would help automate the various requirements of the job in Transkoder.
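
As an illustration of that kind of automation, the sketch below expands a deliverables matrix from an XML spec into individual render-job definitions. The element names and job fields are invented for the example; Colorfront’s actual script templates and Transkoder’s API differ.

```python
# Expand an XML deliverables spec into per-version render jobs.
# Element/attribute names are hypothetical, not FIFA's or Colorfront's.
import xml.etree.ElementTree as ET

SPEC = """
<deliverables>
  <version id="BRA" language="pt-BR">
    <format res="UHD" range="HDR" rate="59.94p"/>
    <format res="HD"  range="SDR" rate="50i"/>
  </version>
</deliverables>
"""

jobs = []
for version in ET.fromstring(SPEC).iter("version"):
    for fmt in version.iter("format"):
        jobs.append({
            "name": f"{version.get('id')}_{fmt.get('res')}_"
                    f"{fmt.get('range')}_{fmt.get('rate')}",
            "language": version.get("language"),
        })

for job in jobs:
    print(job)  # each dict would drive one automated render
```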

Roundtable Post was able to complete the FIFA 2018 World Cup job, including the client-attend QC of the 1,422 different UHD HDR and HD SDR assets, in under three days.

The Meg: What does a giant shark sound like?

By Jennifer Walden

Warner Bros. Pictures’ The Meg has everything you’d want in a fun summer blockbuster. There are explosions, submarines, gargantuan prehistoric sharks and beaches full of unsuspecting swimmers. Along with the mayhem, there is comedy and suspense and jump-scares. Best of all, it sounds amazing in Dolby Atmos.

The team at E² Sound, led by supervising sound editors Erik Aadahl, Ethan Van der Ryn and Jason Jennings, created a soundscape that wraps around the audience like a giant squid around a submersible. (By the way, that squid vs. submersible scene is so fun for sound!)

L-R: Ethan Van der Ryn and Erik Aadahl.

We spoke to the E² Sound team about the details of their recording sessions for the film. They talk about how they approached the sound for the megalodons, how they used the Atmos surround field to put the audience underwater and much more.

Real sharks can’t make sounds, but Hollywood sharks do. How did director Jon Turteltaub want to approach the sound of the megalodon in his film?
Erik Aadahl: Before the film was even shot, we were chatting with producer Lorenzo di Bonaventura, and he said the most important thing in terms of sound for the megalodon was to sell the speed and power. Sharks don’t have any organs for making sound, but they are very large and powerful and are able to displace water. We used some artistic sonic license to create the quick sound of them moving around and displacing water. Of course, when they breach the surface, they have this giant mouth cavity that you can have a lot of fun with in terms of surging water and creating terrifying, guttural sounds out of that.

Jason Jennings: At one point, director Turteltaub did ask the question, “Would it be appropriate for The Meg to make a growl or roar?”

That opened up the door for us to explore that avenue. The megalodon shouldn’t make a growling or roaring sound, but there’s a lot that you can do with the sound of water being forced through the mouth or gills, whether you are above or below the water. We explored sounds that the megalodon could be making with its body. We were able to play with sounds that aren’t animal sounds but could sound animalistic with the right amount of twisting. For example, if you have the sound of a rock being moved slowly through the mud, and you process that a certain way, you can get a sound that’s almost vocal but isn’t an animal. It’s another type of organic sound that can evoke that idea.

Aadahl: One of my favorite things about the original Jaws was that when you didn’t see or hear Jaws it was more terrifying. It’s the unknown that’s so scary. One of my favorite scenes in The Meg was when you do not see or hear it, but because of this tracking device that they shot into its fin, they are able to track it using sonar pings. In that scene, one of the main characters is in this unbreakable shark enclosure just waiting out in the water for The Meg to show up. All you hear are these little pings that slowly start to speed up. To me, that’s one of the scariest scenes because it’s really playing with the unknown. Sharks are these very swift, silent, deadly killers, and the megalodon is this silent killer on steroids. So it’s this wonderful, cinematic moment that plays on the tension of the unknown — where is this megalodon? It’s really gratifying.

Since sharks are like the ninjas of the ocean (physically, they’re built for stealth), how do you use sound to help express the threat of the megalodon? How were you able to build the tension of an impending attack, or to enhance an attack?
Ethan Van der Ryn: It’s important to feel the power of this creature, so there was a lot of work put into feeling the effect that The Meg had on whatever it’s coming into contact with. It’s not so much about the sounds that are emitting directly from it (like vocalizations) but more about what it’s doing to the environment around it. So, if it’s passing by, you feel the weight and power of it passing by. When it attacks — like when it bites down on the window — you feel the incredible strength of its jaws. Or when it attacks the shark cage, it feels incredibly shocking because that sound is so terrifying and powerful. It becomes more about feeling the strength and power and aggressiveness of this creature through its movements and attacks.

Jennings: In terms of building tension leading up to an attack, it’s all about paring back all the elements beforehand. Before the attack, you’ll find that things get quiet and calmer and a little sparse. Then, all of a sudden, there’s this huge explosion of power. It’s all about clearing a space for the attack so that it means something.

The attack on the window in the underwater research station, how did you build that sequence? What were some of the ways you were able to express the awesomeness of this shark?
Aadahl: That’s a fun scene because you have the young daughter of a scientist on board this marine research facility located in the South China Sea and she’s wandered onto this observation deck. It’s sort of under construction and no one else is there. The girl is playing with this little toy — an iPad-controlled gyroscopic ball that’s rolling across the floor. That’s the featured sound of the scene.

You just hear this little ball skittering and rolling across the floor. It kind of reminds me of Danny’s tricycle from The Shining. It’s just so simple and quiet. The rhythm creates this atmosphere and lulls you into a solitary mood. When the shark shows up, you’re coming out of this trance. It’s definitely one of the big shock-scares of the movie.

Jennings: We pared back the sounds there so that when the attack happened it was powerful. Before the attack, the rolling of the ball and the tickety-tick of it going over the seams in the floor really does lull you into a sense of calm. Then, when you do see the shark, there’s this cool moment where the shark and the girl are having a staring contest. You don’t know who’s going to make the first move.

There’s also a perfect handshake there between sound design and music. The music is very sparse, just a little bit of violins to give you that shiver up your spine. Then, WHAM!, the sound of the attack just shakes the whole facility.

What about the sub-bass sounds in that scene?
Aadahl: You have the mass of this multi-ton creature slamming into the window, and you want to feel that in your gut. It has to be this visceral body experience. By the way, effects re-recording mixer Doug Hemphill is a master at using the subwoofer. So during the attack, in addition to the glass cracking and these giant teeth chomping into this thick plexiglass, there’s this low-end “whoomph” that just shakes the theater. It’s one of those moments where you want everyone in the theater to just jump out of their seats and fling their popcorn around.

To create that sound, we used a number of elements, including some recordings we had done a while ago of glass breaking. My parents were replacing this 8’ x 12’ glass window in their house, and before they demolished the old one, I told them not to throw it out because I wanted to record it first.

So I mic’d it up with my “hammer mic,” which I’m very willing to beat up. It’s an Audio-Technica AT825, which has a fixed stereo polar pattern of 110 degrees and a large diaphragm, so it captures a really nice low-end response. I did several bangs on the glass before finally smashing it with a sledgehammer. When you have a surface that big, you can get a super low-end response because the surface acts like a membrane. So that was one of the many elements that comprised that attack.

Jennings: Another custom-recorded element for that sound came from a recording session where we tried to simulate the sound of The Meg’s teeth on a plastic cylinder for the shark cage sequence later in the film. We found a good-sized plastic container that we filled with water and we put a hydrophone inside the container and put a contact mic on the outside. From that point, we proceeded to abuse that thing with handsaws and a hand rake — all sorts of objects that had sharp points, even sharp rocks. We got some great material from that session, sounds where you can feel the cracking nature of something sharp on plastic.

For another cool recording session, in the editorial building where we work, we set up all the sound systems to play the same material through all of the subwoofers at once. Then we placed microphones throughout the facility to record the response of the building to all of this low-end energy. So for that moment where the shark bites the window, we have this really great punching sound we recorded from the sound of all the subwoofers hitting the building at once. Then after the bite, the scene cuts to the rest of the crew who are up in a conference room. They start to hear these distant rumbling sounds of the facility as it’s shaking and rattling. We were able to generate a lot of material from that recording session to feel like it’s the actual sound of the building being shaken by extreme low-end.

L-R: Emma Present, Matt Cavanaugh and Jason (Jay) Jennings.

The film spends a fair amount of time underwater. How did you handle the sound of the underwater world?
Aadahl: Jay [Jennings] just put a new pool in his yard and that became the underwater Foley stage for the movie, so we had the hydrophones out there. In the film, there are these submersible vehicles that Jay did a lot of experimentation for, particularly for their underwater propeller swishes.

The thing about hydrophones is that you can’t just put them in water and expect there to be sound. Even if you are agitating the water, you often need air displacement underwater, pushing over the mics, to create that surge sound we associate with being underwater. Over the years, we’ve done a lot of underwater sessions, and we found that you need waves or agitation, or you need to take a high-powered hose into the water and hold it near the surface with the hydrophones to really get that classic, powerful water rush or water surge sound.

Jennings: We had six different hydrophones for this particular recording session. We had a pair of Aquarian Audio H2a hydrophones, a pair of JrF hydrophones and a pair of Ambient Recording ASF-1 hydrophones. These are all different quality mics — some are less expensive and some are extremely expensive, and you get a different frequency response from each pair.

Once we had the mics set up, we had several different props available to record. One of the most interesting was a high-powered drill that you would use to mix paint or sheetrock compound. Connected to the drill, we had a variety of paddle attachments because we were trying to create new source for all the underwater propellers for the submersibles, ships and jet skis — all of which we view from underneath the water. We recorded the sounds of these different attachments in the water churning back and forth. We recorded them above the water, below the water, close to the mic and further from the mic. We came up with an amazing palette of sounds that didn’t need any additional processing. We used them just as they were recorded.

We got a lot of use out of these recordings, particularly for the glider vehicles, which are these high-tech, electrically-propelled vehicles with two turbine cyclone propellers on the back. We had a lot of fun designing the sound of those vehicles using our custom recordings from the pool.

Aadahl: There was another hydrophone recording mission that the crew, including Jay, went on. They set out to capture the migration of humpback whales. One of our hydrophones got tangled up in the boat’s propeller because we had a captain who was overly enthusiastic to move to the next location. So there was one casualty in our artistic process.

Jennings: Actually, it was two hydrophones. But the best part is that we got the recording of that happening, so it wasn’t a total loss.

Aadahl: “Underwater” is a character in this movie. One of the early things that the director and the picture editor Steven Kemper mentioned was that they wanted to make a character out of the underwater environment. They really wanted to feel the difference between being underwater and above the water. There is a great scene with Jonas (Jason Statham) where he’s out in the water with a harpoon and he’s trying to shoot a tracking device into The Meg.

He’s floating on the water and it’s purely environmental sounds, with the gentle lap of water against his body. Then he ducks his head underwater to see what’s down there. We switch perspectives there and it’s really extreme. We have this deep underwater rumble, like a conch shell feeling. You really feel the contrast between above and below the water.

Van der Ryn: Whenever we go underwater in the movie, Turteltaub wanted the audience to feel extremely uncomfortable, like that was an alien place and you didn’t want to be down there. So anytime we are underwater the sound had to do that sonic shift to make the audience feel like something bad could happen at any time.

How did you make being underwater feel uncomfortable?
Aadahl: That’s an interesting question, because it’s very subjective. To me, the power of sound is that it can play with emotions in very subconscious and subliminal ways. In terms of underwater, we had many different flavors for what that underwater sound was.

In that scene with Jonas going above and below the water, it’s really about that frequency shift. You go into a deep rumble under the water, but it’s not loud. It’s quiet. But sometimes the scariest sounds are the quiet ones. We learned this from A Quiet Place recently and the same applies to The Meg for sure.

Van der Ryn: Whenever you go quiet, people get uneasy. It’s a cool shift because when you are above the water you see the ripples of the ocean all over the place. When working in 7.1 or the Dolby Atmos mix, you can take these little rolling waves and pan them from center to left or from the right front wall to the back speakers. You have all of this motion and it’s calming and peaceful. But as soon as you go under, all of that goes away and you don’t hear anything. It gets really quiet and that makes people uneasy. There’s this constant low-end tone and it sells pressure and it sells fear. It is very different from above the water.
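For readers curious about the mechanics, the move Van der Ryn describes rests on the standard equal-power pan law, which keeps perceived loudness constant as a sound crossfades between two speaker feeds. A minimal Python sketch of that textbook technique (not the actual console automation used on the film):

```python
# Equal-power pan: crossfade a mono element between two speaker feeds
# (say, right front wall to the back speakers) while keeping the
# combined power (cos^2 + sin^2 = 1) constant across the sweep.
import numpy as np

SR = 48000  # sample rate in Hz

def equal_power_pan(mono, pos):
    """pos runs 0.0 (fully speaker A) to 1.0 (fully speaker B)."""
    theta = pos * np.pi / 2.0
    return np.cos(theta) * mono, np.sin(theta) * mono

# Sweep a one-second "rolling wave" bed from speaker A to speaker B.
bed = np.random.randn(SR) * 0.1
pos = np.linspace(0.0, 1.0, SR)             # pan position per sample
feed_a, feed_b = equal_power_pan(bed, pos)  # two speaker sends
```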

Aadahl: Turteltaub described this feeling of pressure, so it’s something that’s almost below the threshold of hearing. It’s something you feel; this pressure pushing against you, and that’s something we can do with the subwoofer. In Atmos, all of the speakers around the theater are extended-frequency range so we can put those super-low frequencies into every speaker (including the overheads) and it translates in a way that it doesn’t in 7.1. In Atmos, you feel that pressure that Turteltaub talked a lot about.

The Meg is an action film, so there are shootings, explosions, ships getting smashed up and other mayhem. What was the most fun action scene for sound? Why?
Jennings: I like the scene in the submersible shark cage where Suyin (Bingbing Li) is waiting for the shark to arrive. This turns into a whole adventure of her getting thrashed around inside the cage. The boat that is holding the cable starts to get pulled along. That was fun to work on.

Also, I enjoyed the end of the film where Jonas and Suyin are in their underwater gliders and they are trying to lure The Meg to a place where they can trap and kill it. The gliders were very musical in nature. They had some great tonal qualities that made them fun to play with using Doppler shifts. The propeller sounds we recorded in the pool… we used those for when the gliders go by the camera. We hit them with these churning sounds, and there’s the sound of the bubbles shooting by the camera.

Aadahl: There’s a climactic scene in the film with hundreds of people on a beach and a megalodon in the water. What could go wrong? There’s one character inside a “zorb” ball — an inflatable hamster ball for humans that’s used for scrambling around on top of the water. At a certain point, this “zorb” ball pops and that was a sound that Turteltaub was obsessed with getting right.

We went through so many iterations of that sound. We wound up doing this extensive balloon popping session on Stage 10 at Warner Bros., where we had enough room to inflate a 16-foot weather balloon. We popped a bunch of different balloons there, and we accidentally popped the weather balloon, but fortunately we were rolling and we got it. So a combination of those sounds created the “zorb” ball pop.

That scene was one of my favorites in the film because that’s where the shit hits the fan.

Van der Ryn: That’s a great moment. I revisited that to do something else in the scene, and when the zorb popped it made me jump back because I forgot how powerful a moment that is. It was a really fun and funny moment.

Aadahl: That’s what’s great about this movie. It has some serious action and really scary moments, but it’s also fun. There are some tongue-in-cheek moments that made it a pleasure to work on. We all had so much fun working on this film. Jon Turteltaub is also one of the funniest people that I’ve ever worked with. He’s totally obsessed with sound, and that made for an amazing sound design and sound mix experience. We’re so grateful to have worked on a movie that let us have so much fun.

What was the most challenging scene for sound? Was there one scene that evolved a lot?
Aadahl: There’s a rescue scene that takes place in the deepest part of the ocean, and the rescue is happening from this nuclear submarine. They’re trying to extract the survivors, and at one point there’s this sound from inside the submarine, and you don’t know what it is but it could be the teeth of a giant megalodon scraping against the hull. That sound, which takes place over this one long tracking shot, was one that the director focused on the most. We kept going back and forth and trying new things. Massaging this and swapping that out… it was a tricky sound.

Ultimately, it ended up being a combination of sounds. Jay and sound effects editor Matt Cavanaugh went out and recorded this huge metal cargo container. They set up mics inside and took all sorts of different metal tools and did some scraping, stuttering, chittering and other friction sounds. We got all sorts of material from that session and that’s one of the main featured sounds there.

Jennings: Turteltaub at one point said he wanted it to sound like a shovel being dragged across the top of the submarine, and so we took him quite literally. We went to record that container on one of the hottest days of the year. We had to put Matt (Cavanaugh) inside and shut the door! So we did short takes.

I was on the roof dragging shovels, rakes, a garden hoe and other tools across the top. We generated a ton of great material from that.

As with every film we do, we don’t want to rely on stock sounds. Everything we put together for these movies is custom made for them.

What about the giant squid? How did you create its sounds?
Aadahl: I love the sound that Jay came up with for the suction cups on the squid’s tentacles as they’re popping on and off of the submersible.

Jennings: Yet another glorious recording session that we did for this movie. We parked a car in a quiet location here at WB, and we put microphones inside of the car — some stereo mics and some contact mics attached to the windshield. Then, we went outside the car with two or three different types of plungers and started plunging the windshield. Sometimes we used a dry plunger and sometimes we used a wet plunger. We had a wet plunger with dish soap on it to make it slippery and slurpy. We came up with some really cool material for the cups of this giant squid. So we would do a hard plunge onto the glass, and then pull it off. You can stutter the plunger across the glass to get a different flavor. Thankfully, we didn’t break any windows, although I wasn’t sure that we wouldn’t.

Aadahl: I didn’t donate my car for that recording session because I have broken my windshield recording water in the past!

Van der Ryn: In regards to perspective in that scene, when you’re outside the submersible, it’s a wide shot and you can see the arms of the squid flailing around. There we’re using the sound of water motion but when we go inside the submersible it’s like this sphere of plastic. In there, we used Atmos to make the audience really feel like those squid tentacles are wrapping around the theater. The little suction cup sounds are sticking and stuttering. When the squid pulls away, we could pinpoint each of those suction cups to a specific speaker in the theater and be very discrete about it.

Any final thoughts you’d like to share on the sound of The Meg?
Van der Ryn: I want to call out Ron Bartlett, the dialogue/music re-recording mixer and Doug Hemphill, the re-recording mixer on the effects. They did an amazing job of taking all the work done by all of the departments and forming it into this great-sounding track.

Aadahl: Our music composer, Harry Gregson-Williams, was pretty amazing too.

Pixelogic adds d-cinema, Dolby audio mixing theaters to Burbank facility

Pixelogic, which provides localization and distribution services, has opened post production content review and audio mixing theaters within its facility in Burbank. The new theaters extend the company’s end-to-end services to include theatrical screening of digital cinema packages as well as feature and episodic audio mixing in support of its foreign language dubbing business.

Pixelogic now operates a total of six projector-lit screening rooms within its facility. Each room was purpose-built from the ground up to include HDR picture and immersive sound technologies, including support for Dolby Atmos and DTS:X audio. The main theater is equipped with a Dolby Vision projection system and supports Dolby Atmos immersive audio. The facility will enable the creation of more theatrical content in Dolby Vision and Dolby Atmos, which consumers can experience at Dolby Cinema theaters, as well as in their homes and on the go. The four larger theaters are equipped with Avid S6 consoles in support of the company’s audio services. The latest 4D motion chairs are also available for testing and verification of 4D capabilities.

“The overall facility design enables rapid and seamless turnover of production environments that support Digital Cinema Package (DCP) screening, audio recording, audio mixing and a range of mastering and quality control services,” notes Andy Scade, SVP/GM of Pixelogic’s worldwide digital cinema services.

MSI’s new Intel Core i9 ultra-thin WS65 mobile workstation, curved monitors

MSI has introduced its new WS65 mobile workstation and announced the availability of its PS42 professional laptop and Optix MAG241C and MAG271C gaming monitors.

The WS65 mobile workstation features a chassis similar to that of the GS65 Stealth Thin, with attractive styling and 15.6-inch, ultra-thin bezel display. With up to Intel’s 8th Generation Core i9 processor and up to Nvidia Quadro P4200 graphics, the WS65 is up to 40 percent faster than the previous-generation model. Although it is designed for portability, the WS65 also incorporates an 82Whr battery for up to eight hours of battery life.

The WS65 features a 15.6-inch Full HD IPS display with 72 percent coverage of the NTSC color gamut. For storage, the workstation offers one PCI-e SSD / SATA combo and one PCI-e SSD. Ports include three USB 3.1 Type-A, one USB 3.1 Type-C, one HDMI 2.0, one mDP 1.4, one mic-in and a headphone out. The WS65 will be available this September, and it will bear the new elegant and minimalistic MSI workstation logo tailored to the business environment.

The PS42 notebook is the newest member of the MSI Prestige series. Measuring 0.63 inches thick, weighing 2.6 pounds and featuring a nearly bezel-free screen, the notebook offers high performance. The PS42 is powered by an Intel 8th Generation Core i7 processor and an Nvidia MX150 GPU and provides 10 hours of battery life, plus a Windows Hello Certified fingerprint sensor. It is now available at major e-tailers, starting at $899.

The Optix MAG271C and MAG241C feature a 144Hz curved VA LED display and a fast response time. The series also uses MSI’s Gaming On-Screen Display software to allow users to control monitor settings, including contrast ratio and brightness, from their Windows desktops. The software also supports hotkey options, so users can switch profiles while in-game or use the MSI remote display app on their Android phones. The MAG271C and MAG241C are now available on Amazon for $299.99 and $229.99, respectively.

Alkemy X joins forces with Quietman, adds CD Megan Oepen

Creative content studio Alkemy X has entered into a joint venture with long-time New York City studio Quietman. In addition, Alkemy X has brought on director/creative director Megan Oepen.

The Quietman deal will see founder and creative director Johnnie Semerad moving the operations of his company into Alkemy X, where both parties will share all creative talent, resources and capabilities.

“Quietman’s reputation of high-end, award-winning work is a tribute to Johnnie’s creative and entrepreneurial spirit,” says Justin B. Wineburgh, Alkemy X president/CEO. “Over the course of two decades, he grew and evolved Quietman from a fledgling VFX boutique into one of the most renowned production companies in advertising and branded content. By joining forces with Alkemy X, we’ll no doubt build on each other’s legacies collectively.”

Semerad co-founded Quietman in 1996 as a Flame-based visual effects company. Since then, it has expanded into the full gamut of production and post production services, producing more than 100 Super Bowl spots, and earning a Cannes Grand Prix, two Emmy Awards and other honors along the way.

“What I’ve learned over the years is that you have to constantly reinvest and reinvent, especially as clients increasingly demand start-to-finish projects,” says Semerad. “Our partnership with Alkemy X will elevate how we serve existing and future clients together, while bolstering our creative and technical resources to reach our potential as commercial filmmakers. The best part of this venture? I’ve always been listed with the Qs, but now, I’m with the As!”

Alkemy X is also teaming up with Oepen, an award-winning creative director and live-action director with 20 years of broadcast, sports and consumer brand campaign experience. Notable clients include Google, the NBA, MLB, PGA, NASCAR, Dove Beauty, Gatorade, Sprite, ESPN, Delta Air Lines, Home Depot, Regal Cinemas, Chick-Fil-A and Yahoo! Sports. Oepen was formerly the executive producer and director for Red Bull’s Non-Live/Long Format Productions group, and headed Under Armour’s Content House. She was also the creator behind Under Armour Originals.

Behind the Title: Trollbäck+Company’s David Edelstein

NAME: David Edelstein

COMPANY: Trollbäck+Company (@trollback)

CAN YOU DESCRIBE YOUR COMPANY?
We are a creative agency that believes in the power of communication, craft and collaboration.
Our mission is to promote innovation, create beauty and foster a lasting partnership. We believe that the brands of the future will thrive on the constant spirit of invention. We apply the same principle to our work, always evolving our practice and reaching across disciplines to produce unexpected, original results.

WHAT’S YOUR JOB TITLE?
Executive Director of Client Partnerships

WHAT DOES THAT ENTAIL?
I’m responsible for building on current client relationships and bringing in new ones. I work closely with the team on our strategic approach to presenting us to a wide array of clients.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I think you need to be in a position of doing business development to really understand that question. The goal is to land work that the company wants to do and balance that with the needs of running a business. It is not an easy task to juggle.

WHAT’S YOUR FAVORITE PART OF THE JOB?
I love working with a talented team, and being in a position to present a company with such a strong legacy.

WHAT’S YOUR LEAST FAVORITE?
Even after all these years, rejection still isn’t easy, but it’s something you deal with on a daily, sometimes hourly, basis.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I’m a morning person, so I find it’s the perfect time to reach out to people when they’re fresh — and before their day gets chaotic.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Are you trying to tell me something? (laughs) I actually think I’d be doing the same thing, but perhaps for a different industry. I truly enjoy the experience of developing relationships and the challenge of solving creative problems with others. I think it’s a valuable skill set that can be applied to other types of jobs.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
This career came about pretty organically for me. I had a traditional production background and grew up in LA. When I moved to New York, I wound up at Showtime as a producer and discovered motion graphics. When I left there, I was fortunate enough to launch a few small studios. Being an owner makes you the head of business development from the start. These experiences have certainly prepared me for where I’ve been and where I am today.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I’m only a few months in, but we are currently spearheading branding for a Fortune 500 company. Trollbäck is also coming off a fantastic title sequence and package for the final episode of the Motion Conference, which just took place in June.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
It’s tough to call out one particular project, but some career highlights have been a long relationship with Microsoft, as well as traveling the world with Marriott and Hilton.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Cell phone, computer/email and iPad.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Twitter, Facebook, LinkedIn and Instagram.

DO YOU LISTEN TO MUSIC WHILE YOU WORK? CARE TO SHARE YOUR FAVORITE MUSIC TO WORK TO?
I try to give different types of music a go, so Spotify works well for me. But, honestly, I’m still a Springsteen guy.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I go home to relax and then come back the next day and try to be positive and grateful. Repeat!

HP intros new entry-level HP Z lineup

HP is offering new entry-level workstations in its HP Z lineup, which is designed to help accelerate performance and secure pros’ workflows.

The HP Z2 Mini, HP Z2 Small Form Factor and HP Z2 Tower, as well as the HP EliteDesk 800 Workstation Edition, feature built-in end-to-end HP security services, providing protection from evolving malware threats with self-healing BIOS and an HP endpoint security controller. Users get protection from hardware-enforced security solutions, including HP Sure Start Gen4 and HP Sure Run, which help keep critical processes running, even if malware tries to stop them. Additionally, HP’s Manageability Kit Gen 2 manages multiple devices.

All HP Z2 workstations can now connect with Thunderbolt for fast device connections and offer an array of certifications for the apps pros are using in their day-to-day work lives. HP Performance Advisor is available to optimize software and drivers, and users can deploy Intel Xeon processors and ECC memory for added reliability. The customization, expandability, performance upgradeability and I/O options help future-proof HP Z workstation purchases.

Here are some details about the fourth-generation entry HP Z workstation family:

The HP Z2 Mini G4 workstation features what HP calls “next-level performance” in a small form factor (2.7 liters in total volume). Compared to the previous-generation HP Z2 Mini, it offers twice the graphics power. Users can choose either the Nvidia Quadro P600 or Nvidia Quadro P1000 GPU. In addition, there is the option for AMD Radeon Pro WX4150 graphics.

Thanks to its size, users can mount it under a desk, behind a display or in a rack — up to 56 HP Z2 Mini workstations will fit in a standard 42U rack with the custom rackmount bracket accessory. With its flexible I/O, users can configure the system for connectivity of legacy serial ports, as well as support for up to six displays for peripheral and display connectivity needs. The HP Z2 G4 Mini comes with six-core Intel Xeon processors.

The HP Z2 Small Form Factor (SFF) G4 workstation offers 50 percent more processing power than the previous generation in the exact same compact size. The six-core CPU provides significant performance boosts. The HP Z2 SFF takes customization to the next level with flexible I/O options that free up valuable PCIe slots, while providing customization for legacy or specialized equipment, and for changing display needs.

The HP Z2 G4 SFF ships with four PCIe slots and dual M.2 storage slots. Its flexible I/O option enables users to customize networking, I/O or display needs without taking up PCIe slots or adding external adapters.

The HP Z2 Tower G4 workstation is designed for complex workloads like rendering with up to Ultra 3D graphics and the latest Intel Core or Intel Xeon processors. The HP Z2 tower can handle demanding 3D projects with over 60 percent more graphics power than the previous generation. With high clock speeds, users can get full, unthrottled performance, even with heavy workloads.

The HP EliteDesk 800 Workstation Edition targets users who want to upgrade to a workstation-class desktop with an ISV-certified application experience.

Designed for 2D/3D design, it is also optimized out of the box for leading VR engines and features the Nvidia GeForce GTX 1080.

All four models are expected to be available later this month: the HP Z2 Mini at a starting price of $799, the HP Z2 Small Form Factor at $749, the HP Z2 Tower at $769 and the HP EliteDesk 800 at $642, including Nvidia Quadro P400 graphics.

Sony creates sounds for Director X’s Superfly remake

Columbia Pictures’ Superfly is a reimagining of Gordon Parks Jr.’s classic 1972 blaxploitation film of the same name. Helmed by Director X and written by Alex Tse, this new version transports the story of Priest from Harlem to modern-day Atlanta.

Steven Ticknor

Superfly’s sound team from Sony Pictures Post Production Services — led by supervising sound editor Steven Ticknor, supervising sound editor and re-recording mixer Kevin O’Connell, re-recording mixer Greg Orloff and sound designer Tony Lamberti — was tasked with bringing the sonic elements of Priest’s world to life. That included everything from building soundscapes for Atlanta’s neighborhoods and nightclubs to supplying the sounds of fireworks, gun battles and car chases.

“Director X and Joel Silver — who produced the movie alongside hip-hop superstar Future, who also curated and produced the film’s soundtrack — wanted the film to have a big sound, as big and theatrical as possible,” says Ticknor. “The film is filled with fights and car chases, and we invested a lot of detail and creativity into each one to bring out their energy and emotion.”

One element that received special attention from the sound team was the Lexus LC500 that Priest (Trevor Jackson) drives in the film. As the sports car was brand new, no pre-recorded sounds were available, so Ticknor and Lamberti dispatched a recording crew and professional driver to the California desert to capture every aspect of its unique engine sounds, tire squeals, body mechanics and electronics. “Our job is to be authentic, so we couldn’t use a different Lexus,” Ticknor explains. “It had to be that car.”

In one of the film’s most thrilling scenes, Priest and the Lexus LC500 are involved in a high-speed chase with a Lamborghini and a Cadillac Escalade. Sound artists added to the excitement by preparing sounds for every screech, whine and gear shift made by the cars, as well as explosions and other events happening alongside them and movements made by the actors behind the wheels.

It’s all much larger than life, says Ticknor, but grounded in reality. “The richness of the sound is a result of all the elements that go into it, the way they are recorded, edited and mixed,” he explains. “We wanted to give each car its own identity, so when you cut from one car revving to another car revving, it sounds like they’re talking to each other. The audience may not be able to articulate it, but they feel the emotion.”

Fights received similarly detailed treatment. Lamberti points to an action sequence in a barber shop as one of several scenes rendered partially in extreme slow motion. “It starts off in realtime before gradually shifting to slo-mo through the finish,” he says. “We had fun slowing down sounds, and processing them in strange and interesting ways. In some instances, we used sounds that had no literal relation to what was happening on the screen but, when slowed down, added texture. Our aim was to support the visuals with the coolest possible sound.”

Re-recording mixing was accomplished in the 125-seat Anthony Quinn Theater on an Avid S6 console with O’Connell handling dialogue and music and Orloff tackling sound effects and Foley. Like its 1972 predecessor, which featured an iconic soundtrack from Curtis Mayfield, the new film employs music brilliantly. Atlanta-based rapper Future, who shares producer credit, assembled a soundtrack that features Young Thug, Lil Wayne, Miguel, H.E.R. and 21 Savage.

“We were fortunate to have, in Kevin and Greg, a pair of Academy Award-winning mixers who did a brilliant job of blending music, dialogue and sound effects,” says Ticknor. “The mix sessions were very collaborative, with a lot of experimentation to build intensity and make the movie feel bigger than life. Everyone was contributing ideas and challenging each other to make it better, and it all came together in the end.”

The score for YouTube Red’s Cobra Kai pays tribute to original Karate Kid

By Jennifer Walden

In the YouTube Red comedy series Cobra Kai, Daniel LaRusso (Ralph Macchio), the young hero of the Karate Kid movies, has grown up to be a prosperous car salesman, while his nemesis Johnny Lawrence (William Zabka) just can’t seem to shake that loser label he earned long ago. Johnny can’t hold down his handyman job. He lives alone in a dingy apartment, and his personality hasn’t benefited from maturity at all. He lives a very sad reality until one day he finds himself sticking up for a kid being bullied, and that redeeming bit of character makes you root for him. It’s an interesting dynamic that the series writers/showrunners have crafted, and it works.

L-R: Composers Leo Birenberg and Zach Robinson

Fans of the 1980s film franchise will appreciate the soundtrack of the new Cobra Kai series. Los Angeles-based composers Leo Birenberg and Zach Robinson were tasked with capturing the essence of both composer Bill Conti’s original film scores and the popular music tracks that also defined the sound of the films.

To find that Karate Kid essence, Birenberg and Robinson listened to the original films and identified what audiences were likely latching onto sonically. “We concluded that it was mostly a color palette connection that people have. They hear a certain type of orchestral music with a Japanese flute sound, and they hear ‘80s rock,” says Birenberg. “It’s that palette of sounds that people connect with more so than any particular melody or theme from the original movies.”

Even though Conti’s themes and melodies for Karate Kid don’t provide the strongest sonic link to the films, Birenberg and Robinson did incorporate a few of them into their tracks at appropriate moments to create a feeling of continuity between the films and the series. “For example, there were a couple of specific Japanese flute phrases that we redid. And we found a recurring motif of a simple pizzicato string melody,” explains Birenberg. “It’s so simple that it was easy to find moments to insert it into our cues. We thought that was a really cool way to tie everything together and make it feel like it is all part of the same universe.”

Birenberg and Robinson needed to write a wide range of music for the show, which can be heard en masse on the Cobra Kai OST. There are the ’80s rock tracks that take over for licensed songs by bands like Poison and The Alan Parsons Project. This direction, as heard on the tracks “Strike First” and “Quiver,” covered the score for Johnny’s character.

The composers also needed to write orchestral tracks that incorporated Eastern influences, like the Japanese flutes, to cover Daniel as a karate teacher and to comment on his memories of Miyagi. A great example of this style is called, fittingly, “Miyagi Memories.”

There’s a third direction that Birenberg and Robinson covered for the new Cobra Kai students. “Their sound is a mixture of modern EDM and dance music with the heavier ‘80s rock and metal aesthetics that we used for Johnny,” explains Robinson. “So it’s like Johnny is imbuing the new students with his musical values. This style is best represented in the track ‘Slither.’”

Birenberg and Robinson typically work as separate composers, but they’ve collaborated on several projects before Cobra Kai. What makes their collaborations so successful is that their workflows and musical aesthetics are intrinsically similar. Both use Steinberg’s Cubase as their main DAW, while running Ableton Live in ReWire mode. Both like to work with MIDI notes while composing, as opposed to recording and cutting audio tracks.

Says Birenberg, “We don’t like working with audio from the get-go because TV and film are such a notes-driven process. You’re not writing music as much as you are re-writing it to specification and creative input. You want to be able to easily change every aspect of a track without having to dial in the same guitar sound or re-record the toms that you recorded yesterday.”

Virtual Instruments
For Cobra Kai, they first created demo songs using MIDI and virtual instruments. Drums and percussion sounds came from XLN Audio’s Addictive Drums. Spectrasonics Trilian was used for bass lines and Keyscape and Omnisphere 2 provided many soft-synth and keyboard sounds. Virtual guitar sounds came from MusicLab’s RealStrat and RealLPC, Orange Tree, and Ilya Efimov virtual instrument libraries. The orchestral sections were created using Native Instruments Kontakt, with samples coming from companies such as Spitfire, Cinesamples, Cinematic Strings, and Orchestral Tools.

“Both Zach and I put a high premium on virtual instruments that are very playable,” reports Birenberg. “When you’re in this line of work, you have to work superfast and you don’t want a virtual instrument that you have to spend forever tweaking. You want to be able to just play it in so that you can write quickly.”

For the final tracks, they recorded live guitar, bass and drums on every episode, as well as Japanese flute and small percussion parts. For the season finale, they recorded a live orchestra. “But,” says Birenberg, “all the orchestra and some Japanese percussion you hear earlier in the series, for the most part, are virtual instruments.”

Live Musicians
For the live orchestra, Robinson says they wrote 35 minutes of music in six days and immediately sent that to get orchestrated and recorded across the world with the Prague Radio Symphony Orchestra. The composing team didn’t even have to leave Los Angeles. “They sent us a link to a private live stream so we could listen to the session as it was going on, and we typed notes to them as we were listening. It sounds crazy but it’s pretty common. We’ve done that on numerous projects and it always turns out great.”

When it comes to dividing up the episodes — deciding who should score what scenes — the composing team likes to “go with gut and enthusiasm,” explains Birenberg. “We would leave the spotting session with the showrunners, and usually each of us would have a few ideas for particular spots.”

Since they don’t work in the same studio, the composers would split up and start work on the sections they chose. Once they had an idea down, they’d record a quick video of the track playing back to picture and share that with the other composer. Then they would trade tracks so they each got an opportunity to add in parts. Birenberg says, “We did a lot of sending iPhone videos back and forth. If it sounds good over an iPhone video, then it probably sounds pretty good!”

Both composers have different and diverse musical backgrounds, so they both feel comfortable diving right in and scoring orchestral parts or writing bass lines, for instance. “For the scope of this show, we felt at home in every aspect of the score,” says Birenberg. “That’s how we knew this show was for both of us. This score covers a lot of ground musically, and that ground happened to fit things that we understand and are excited about.” Luckily, they’re both excited about ‘80s rock (particularly Robinson) because writing music in that style effectively isn’t easy. “You can’t fake it,” he says.

Recreating ‘80s Rock
A big part of capturing the magic of ‘80s rock happened in the mix. On the track “King Cobra,” mix engineer Sean O’Brien harnessed the ‘80s hair metal style by crafting a drum sound that evoked Mötley Crüe and Bon Jovi. “I wanted to make the drums as bombastic and ‘80s as possible, with a really snappy kick drum and big reverbs on the kick and snare,” says O’Brien.

Using Massey DRT, a drum sample replacement plug-in for Avid Pro Tools, he swapped out the live drum parts for drum samples. Then on the snare, he added a gated reverb using Valhalla VintageVerb. He also used Valhalla Room to add a short plate sound to thicken up the kick and snare drums.
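The gated-reverb trick O’Brien describes is easy to prototype outside a plug-in. Here is a minimal NumPy sketch of the general technique (a long synthetic reverb tail that a gate slams shut shortly after the hit), with illustrative decay and gate times rather than Valhalla VintageVerb’s algorithm or O’Brien’s settings:

```python
# Minimal gated-reverb sketch: a big synthetic reverb tail that a gate
# slams shut shortly after the hit. Parameters are illustrative guesses,
# not the Valhalla VintageVerb algorithm or O'Brien's settings.
import numpy as np
from scipy.signal import fftconvolve

SR = 48000  # sample rate in Hz

def gated_reverb(dry, decay_s=2.0, gate_open_s=0.18):
    t = np.arange(int(SR * decay_s)) / SR
    ir = np.random.randn(t.size) * np.exp(-4.0 * t / decay_s)  # decaying-noise IR
    wet = fftconvolve(dry, ir)[: dry.size + int(SR * gate_open_s)]
    gate = np.ones(wet.size)
    close = int(SR * gate_open_s)
    release = int(SR * 0.01)  # 10 ms release so the cutoff doesn't click
    gate[close : close + release] = np.linspace(1.0, 0.0, release)
    gate[close + release :] = 0.0
    return wet * gate

# Usage: a synthetic snare-like burst, dry mixed on top of the gated tail.
snare = np.random.randn(SR // 10) * np.exp(-np.arange(SR // 10) / (0.01 * SR))
out = 0.5 * gated_reverb(snare)
out[: snare.size] += snare
```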

To get the toms to match the cavernous punchiness of the kick and snare, O’Brien augmented the live toms with compression and EQ. “I chopped up the toms so there wasn’t any noise in between each hit and then I sent those to the nonlinear short reverbs in Valhalla Room,” he says. “Next, I did parallel compression using the Waves SSL E-Channel plug-in to really squash the tom hits so they’re big and in your face. With EQ, I added more top end than I normally would to help the toms compete with the other elements in the mix. You can make the close mics sound really crispy with those SSL EQs.”
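Parallel (or “New York”) compression is also simple to demonstrate in code: compress a copy of the drums hard, then mix it back under the untouched signal. A rough sketch, assuming a generic feed-forward compressor rather than a model of the Waves SSL E-Channel:

```python
# Parallel ("New York") compression: squash a copy of the drums hard,
# then blend it back under the dry signal. Generic feed-forward design,
# not a model of the Waves SSL E-Channel.
import numpy as np

SR = 48000

def compress(x, threshold_db=-30.0, ratio=8.0, attack_ms=1.0, release_ms=80.0):
    atk = np.exp(-1.0 / (SR * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (SR * release_ms / 1000.0))
    env, gain = 0.0, np.ones(x.size)
    for i, s in enumerate(np.abs(x)):
        coeff = atk if s > env else rel          # fast attack, slow release
        env = coeff * env + (1.0 - coeff) * s    # envelope follower
        over = 20.0 * np.log10(max(env, 1e-9)) - threshold_db
        if over > 0.0:                           # gain reduction above threshold
            gain[i] = 10.0 ** (-over * (1.0 - 1.0 / ratio) / 20.0)
    return x * gain

def parallel_compress(dry, blend=0.5):
    """Blend the squashed copy back under the dry drums."""
    return dry + blend * compress(dry)
```

Because the dry hits pass through untouched, the transients keep their snap while the squashed copy brings up the body of each hit, which is why the toms can sit big and in your face without sounding flattened.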

Next, he bussed all the drum tracks to a group aux track, which had a Neve 33609 plug-in by UAD and a Waves C4 multi-band compressor “to control the whole drum kit after the reverbs were laid in to make sure those tracks fit in with the other instruments.”

Sean O’Brien

On “Slither,” O’Brien also focused on the drums, but since this track is more ‘80s dance than ‘80s rock, O’Brien says he was careful to emphasize the composers’ ‘80s drum machine sounds (rather than the live drum kit), because that is where the character of the track was coming from. “My job on this track was to enhance the electric drum sounds; to give the drum machine focus. I used UAD’s Neve 1081 plug-in on the electronic drum elements to brighten them up.”

“Slither” also features taiko drums, which make the track feel cinematic and big. O’Brien used Soundtoys Devil-Loc to make the taiko drums feel more aggressive, and added distortion using Decapitator from Soundtoys to help them cut through the other drums in the track. “I think the drums were the big thing that Zach [Robinson] and Leo [Birenberg] were looking to me for because the guitars and synths were already recorded the way the composers wanted them to sound.”

The Mix
Mix engineer Phil McGowan, who was responsible for mixing “Strike First,” agrees. He says, “The ‘80s sound for me was really based on drum sounds, effects and tape saturation. Most of the synth and guitar sounds that came from Zach and Leo were already very stylized so there wasn’t a whole lot to do there. Although I did use a Helios 69 EQ and Fairchild compressor on the bass along with a little Neve 1081 and Kramer PIE compression on the guitars, which are all models of gear that would have been used back then. I used some Lexicon 224 and EMT 250 on the synths, but otherwise there really wasn’t a whole lot of processing from me on those elements.”

Phil McGowan’s ‘Strike First’ Pro Tools session.

To get an ‘80s gated reverb sound for the snare and toms on “Strike First,” McGowan used an AMS RMX16 nonlinear reverb plug-in in Pro Tools. For bus processing, he mainly relied on a Pultec EQ, adding a bit of punch with the classic “Pultec Low End Trick” — which involves boosting and attenuating at the same frequency — plus adding a little bump at 8k for some extra snap. Next in line, he used an SSL G-Master buss compressor before going into UAD’s Studer A800 tape plug-in set to 456 tape at 30 ips and calibrated to +3 dB.
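The “Pultec Low End Trick” is easy to visualize in code: cascade a low-shelf boost and a cut placed at the same corner frequency, and because the two curves have different shapes they don’t cancel, leaving a net lift at the very bottom with a dip around the corner. A sketch using the standard RBJ “Audio EQ Cookbook” biquads; the 60Hz corner and gain values are assumptions for illustration, not McGowan’s actual settings:

```python
# Sketch of the "Pultec Low End Trick": a low-shelf boost plus a cut at
# the same corner frequency. Filter math follows the RBJ "Audio EQ
# Cookbook"; the 60 Hz corner and gains are illustrative assumptions.
import numpy as np
from scipy.signal import freqz

FS = 48000  # sample rate in Hz

def low_shelf(f0, gain_db):
    """RBJ cookbook low-shelf biquad (slope S = 1)."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / FS
    alpha = np.sin(w0) / 2 * np.sqrt(2.0)
    cosw = np.cos(w0)
    b = [A * ((A + 1) - (A - 1) * cosw + 2 * np.sqrt(A) * alpha),
         2 * A * ((A - 1) - (A + 1) * cosw),
         A * ((A + 1) - (A - 1) * cosw - 2 * np.sqrt(A) * alpha)]
    a = [(A + 1) + (A - 1) * cosw + 2 * np.sqrt(A) * alpha,
         -2 * ((A - 1) + (A + 1) * cosw),
         (A + 1) + (A - 1) * cosw - 2 * np.sqrt(A) * alpha]
    return np.array(b) / a[0], np.array(a) / a[0]

def peaking(f0, gain_db, Q=0.7):
    """RBJ cookbook peaking biquad (negative gain_db = cut)."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / FS
    alpha = np.sin(w0) / (2 * Q)
    cosw = np.cos(w0)
    b = [1 + alpha * A, -2 * cosw, 1 - alpha * A]
    a = [1 + alpha / A, -2 * cosw, 1 - alpha / A]
    return np.array(b) / a[0], np.array(a) / a[0]

# Boost and cut at the same 60 Hz corner, as in the trick.
boost = low_shelf(60.0, +6.0)
cut = peaking(60.0, -4.0)

for f in [30, 60, 120, 250, 1000]:
    _, h1 = freqz(*boost, worN=[f], fs=FS)
    _, h2 = freqz(*cut, worN=[f], fs=FS)
    print(f"{f:5d} Hz: {20 * np.log10(abs(h1[0] * h2[0])):+.2f} dB net")
```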

“I did end up using some parallel compression using a Distressor plug-in by Empirical Labs, which was not around back then, but it’s my go-to parallel compressor and it sounded fine, so I left it in my template. I also used a little channel EQ from FabFilter Pro-Q2 and the Neve 88RS Channel Strip,” concludes McGowan.


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter at @audiojeney.

Quick Chat: Technicolor’s new finishing artist, VP Pankaj Bajpai

By Randi Altman

Veteran colorist Pankaj Bajpai will be joining Technicolor’s Los Angeles studio in August as VP, finishing artist and business development. He comes to Technicolor from his long-tenured position at Encore.

Bajpai’s long list of television credits includes House of Cards, Sex and the City, Carnivàle, The Newsroom, True Detective, Justified, Fear the Walking Dead, Genius: Einstein and Picasso, Snowfall and many more. He brings with him a background in both film cinematography and digital post.

Bajpai joins Technicolor’s roster of episodic colorists in Los Angeles who include Sparkle, Tim Vincent, Tony Dustin, Tom Forletta, Roy Vasich and Doug Delaney.

“I’m thrilled to start a new chapter at such a vibrant time in our industry’s landscape,” says Bajpai on joining Technicolor. “With the support of Sherri Potter (Technicolor’s president of worldwide post production), and the team of artists and engineers at Technicolor, I’m excited to continue to push the boundaries of technology and creativity to bring our clients’ vision and passion to all screens, in all formats, for all to enjoy.”

We reached out to Bajpai to find out more:

Why was now the right time to make this change, especially after being at one place for so long?
Consumers’ relationship with content has been disrupted, the entertainment industry has shifted, and as a result the dynamics of post are changing dramatically. Lines are blurring between “feature” and “episodic” content — the quality of the story and the production, the craft, the expectation by all stakeholders, etc. is now almost universally the same for all pieces of content regardless of distribution platform. I believe Technicolor understands this dynamic shift and is supporting the singular demand for stunning content regardless of distribution “genre,” and that made it the right time for me to join.

How do you divide your time between your colorist duties and your biz dev duties?
I believe that the role of the colorist is no longer a singular duty. It is my responsibility to be the center of collaboration across the post process — from a client perspective, a craft perspective and a workflow perspective. We no longer live in a siloed industry with clear hand-offs. I must understand the demands that 4K, HDR and beyond have on workflows, the craft and the ever-tightening delivery deadlines.

I believe in being the catalyst for collaboration across the post process, uniting the technology and artistry to serve our clients’ visions. It’s not about wearing one hat at a time. It’s about taking my role as both artist and client ambassador seriously, ultimately ensuring that the experience is as flawless as possible, and the picture is stunning.

You are an artist first, but what do you get from doing the other parts as well?
We no longer work within independent processes. Being that center of collaboration that I referenced earlier influences my approach to color finishing as much as my role as an artist helps to bring perspective to the technology and operational demands of projects these days.

How does your background in cinematography inform your color work?
My work will always be informed by my clients, but my background in cinematography allows us to speak the same language — the language of lens and light, the language of photography. I find it is a very easy way of communicating visual ideas and gets us on the same page much faster. For instance, when a DP shares with me that they will be using a particular set of lenses and filters in combination with specific gels and lights, I’m able to visualize their creative intent quickly. Instinctively, we know what that image needs to be from the start without talking about it too much. Establishing such trust on demanding episodic shooting and finishing schedules is critical to stay true to my clients’ creative ideas.

Understanding and respecting the nuances of a cinematographer’s work in this way goes far in my ability to create a successful color finishing process in the end.

The world of color is thriving right now. How has the art changed since you started?
Art at its essence will always be about creative people seeing something come to life from within their own unique perspective. What has changed is the fact that the tools we now have at our disposal allow me as a finishing artist to create all new approaches to my craft. I can go deeper into an image and its color space now; it’s freeing and exciting because it allows for collaboration with cinematographers and directors on a continually deeper level.

What is the most exciting thing going on in color right now? HDR? Something else?
It really feels like the golden age of content across all platforms. Consumers’ expectations are understandably high across any type of content consumed in any environment or any screen. I think everyone involved on a show feels that and feels the excitement and continues to raise the bar for the quality of the storytelling, the craft and the overall consumer engagement. To be a contributor to work that is now easily seen globally is very exciting.

Has the new technology changed the way you work or is your creative process essentially the same?
Technology will continue to change, workflows will be impacted and, as an industry, we’ll always be looking to challenge what is possible. My creative process continues to be influenced by the innovative tools that I get to explore.

For instance, it’s vital for me to understand an array of new digital cameras and the distinctive images they are capable of producing. I frequently use my toolset for creative options that can be deployed right within those cameras. To be able to help customize images non-destructively from the beginning of the shoot and to collaborate with directors and cinematographers to aid storytelling with a unique visual style all the way to the finish, is hugely satisfying. For innovation in the creative process today, the sky is the limit.

Review: HP DreamColor Z31x studio display for cinema 4K

By Mike McCarthy

Not long ago, HP sent me their newest high-end monitor to review, and I was eager to dig in. The DreamColor Z31x studio display is a 31-inch true 4K color-critical reference monitor. It has many new features that set it apart from its predecessors, which I have examined and will present here in as much depth as I can.

It is challenging to communicate the nuances of color quality through writing or any other form on the Internet, as some things can only be truly appreciated firsthand. But I will attempt to communicate the experience of using the new DreamColor as best I can.

First, we will start with a little context…

Some DreamColor History
HP revolutionized the world of color-critical displays with the release of the first DreamColor in June 2008. The LP2480zx was a 24-inch 1920×1200 display that had built-in color processing with profiles for standard color spaces and the ability to calibrate it to refine those profiles as the monitor aged. It was not the first display with any of these capabilities, but the first one that was affordable, by at least an order of magnitude.

It became very popular in the film industry, both sitting on desks in post facilities — as it was designed — and out in the field as a live camera monitor, which it was not designed for. It had a true 10-bit IPS panel and the ability to reproduce incredible detail in the darks. It could only display 10-bit sources from the brand-new DisplayPort input or the HDMI port, and the color gamut remapping only worked for non-interlaced RGB sources.

So many people using the DreamColor as a “video monitor” instead of a “computer monitor” weren’t even using the color engine — they were just taking advantage of the high-quality panel. It wasn’t just the color engine but the whole package, including the price, that led to its overwhelming success. This was helped by the lack of better options, even at much higher price points, since this was the period after CRT production ended but before OLED panels had reached the market. This was similar to (and in the same timeframe as) Canon’s 5D MarkII revolutionizing the world of independent filmmaking with its HDSLRs. The combination gave content creators amazing tools for moving into HD production at affordable price points.

It took six years for HP to release an update to the original model DreamColor in the form of the Z27x and Z24x. These had the same color engine but different panel technology. They never had the same impact on the industry as the original, because the panels didn’t “wow” people, and the competition was starting to catch up. Dell has PremierColor and Samsung and BenQ have models featuring color accuracy as well. The Z27x could display 4K sources by scaling them to its native 2560×1440 resolution, while the Z24x’s resolution was decreased to 1920×1080 with a panel that was even less impressive.

Fast forward a few more years, and the Z24x was updated to Gen2, and the Z32x was released with UHD resolution. This was four times the resolution of the original DreamColor and at half the price. But with lots of competition in the market, I don’t think it has had the reach of the original DreamColor, and the industry has matured to the point where people aren’t hooking them to 4K cameras because there are other options better suited to that environment, specifically battery powered OLED units.

DreamColor at 4K
Fast forward a bit and HP has released the Z31x DreamColor studio display. The big feature that this unit brings to the table is true cinema 4K resolution. The label 4K gets thrown around a lot these days, but most “4K” products are actually UHD resolution, at 3840×2160, instead of the full 4096×2160. This means that true 4K content is scaled to fit the UHD screen, or in the case of Sony TVs, cropped off the sides. When doing color critical work, you need to be able to see every pixel, with no scaling, which could hide issues. So the Z31x’s 4096×2160 native resolution will be an important feature for anyone working on modern feature films, from editing and VFX to grading and QC.
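The arithmetic behind that concern is simple, and a short sketch makes it concrete:

```python
# Why scaled "4K" viewing can hide issues: fitting a true 4K image onto
# a UHD panel resamples every pixel, and cropping discards columns.
src_w, dst_w = 4096, 3840
print(src_w / dst_w)         # ~1.067: each panel pixel blends >1 source pixel
print((src_w - dst_w) // 2)  # -> 128 columns cropped from each side instead
```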

The 10-bit 4K Panel
The true 10-bit IPS panel is the cornerstone of what makes a DreamColor such a good monitor. IPS monitor prices have fallen dramatically since they were first introduced over a decade ago, and some of that is the natural progression of technology, but some of that has come at the expense of quality. Most displays offering 10-bit color are accomplishing that by flickering the pixels of an 8-bit panel in an attempt to fill in the remaining gradations with a technique called frame rate control (FRC). And cheaper panels are as low as 6-bit color with FRC to make them close to 8-bit. There are a variety of other ways to reduce cost with cheaper materials, and lower-quality backlights.
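Frame rate control is essentially temporal dithering. A toy sketch of the idea, showing how an 8-bit panel can approximate one of the four 10-bit levels that sit between two of its native codes:

```python
# Toy sketch of frame rate control (FRC): an 8-bit panel approximates a
# 10-bit level by alternating nearby 8-bit codes across frames so the
# time-average matches what a true 10-bit panel would show steadily.
def frc_frames(level_10bit, n_frames=4):
    lo, frac = divmod(level_10bit, 4)  # one 10-bit step = 1/4 of an 8-bit step
    # Show the higher code on `frac` of every four frames, the lower on the rest.
    return [min(lo + (1 if i < frac else 0), 255) for i in range(n_frames)]

frames = frc_frames(514)          # 10-bit code 514 sits between 8-bit 128 and 129
print(frames)                     # -> [129, 129, 128, 128]
print(sum(frames) / len(frames))  # -> 128.5, the level the eye integrates
```

That per-frame flicker is exactly what a true 10-bit panel like the Z31x’s avoids.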

HP claims that the underlying architecture of this panel returns to the quality of the original IPS panel designs, but then adds the technological advances developed since then, without cutting any corners in the process. In order to fully take advantage of the 10-bit panel, you need to feed it 10-bit source content, which is easier than it used to be but not a foregone conclusion. Make sure you select 10-bit output color in your GPU settings.

In addition to a true 10-bit color display, it also natively refreshes at the rate of the source image, from 48Hz to 60Hz, because displaying every frame at the right time is as important as displaying it in the right color. They say that the darker blacks are achieved by better crystal alignment in the LCD (Liquid Crystal Display) blocking out the backlight more fully. This also gives a wider viewing angle, since washing out the blacks is usually the main issue with off-axis viewing. I can move about 45 degrees off center, vertically or horizontally, without seeing any shift in the picture brightness or color. Past that I start to see the mid levels getting darker.

Speaking of brighter and darker, the backlight gives the display a native brightness of 250 nits. That is over twice the brightness needed to display SDR content, but this is not an HDR display. It can be adjusted anywhere from 48 to 250 nits, depending on the usage requirements and environment. It is not designed to be the brightest display available; it is aiming to be the most accurate.

Much effort was put into the front surface, to get the proper balance of reducing glare and reflections as much as possible. I can’t independently verify some of their other claims without a microscope and more knowledge than I currently have, but I can easily see that the matte surface of the display is much better than other monitors at reducing reflections and glare from the surrounding environment, allowing you to better see the image on the screen. That is one of the most apparent strengths of the monitor, obviously visible at first glance.

Color Calibration
The other new headline feature is an integrated colorimeter for display calibration and verification, located in the top of the bezel. It can swing down and measure the color parameters of the true 10-bit IPS panel, to adjust the color space profiles, allowing the monitor to more accurately reproduce colors. This is a fully automatic feature, independent of any software or configuration on the host computer system. It can be controlled from the display’s menu interface, and the settings will persist between multiple systems. This can be used to create new color profiles, or optimize the included ones for DCI P3, BT.709, BT.2020, sRGB and Adobe RGB. It also includes some low-blue-light modes for use as an interface monitor, but this negates its color accurate functionality. It can also input and output color profiles and all other configuration settings through USB and its network connection.

The integrated color processor also supports using external colorimeters and spectroradiometers to calibrate the display, and even allows the integrated XYZ colorimeter itself to be calibrated by those external devices. And this is all accomplished internally in the display, independent of using any software on the workstation side. (A sketch of the colorimetry behind this kind of calibration follows the list below.) The supported external devices currently include:
– Klein Instruments: K10, K10-A (colorimeters)
– Photo Research: PR-655, PR-670, PR-680, PR-730, PR-740, PR-788 (spectroradiometers)
– Konica Minolta: CA-310 (colorimeter)
– X-Rite: i1Pro 2 (spectrophotometer), i1Display (colorimeter)
– Colorimetry Research: CR-250 (spectroradiometer)
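Whatever the instrument, the core colorimetry behind this kind of calibration is the same: measure the panel’s primaries and white point, then derive the matrix that maps RGB to CIE XYZ and compare it against the target color space. A minimal sketch of that standard math, using hypothetical measured chromaticities rather than anything read from a Z31x:

```python
# Sketch of the colorimetry behind display calibration: build the
# RGB -> XYZ matrix from measured primary chromaticities and white point.
# The xy values below are hypothetical measurements, not Z31x data.
import numpy as np

def rgb_to_xyz_matrix(xy_r, xy_g, xy_b, xy_w):
    """Derive the 3x3 RGB->XYZ matrix from CIE xy chromaticities."""
    def xyz(xy):  # xy chromaticity -> XYZ column with Y normalized to 1
        x, y = xy
        return np.array([x / y, 1.0, (1.0 - x - y) / y])
    prim = np.column_stack([xyz(xy_r), xyz(xy_g), xyz(xy_b)])
    white = xyz(xy_w)
    scale = np.linalg.solve(prim, white)  # channel gains that hit the white point
    return prim * scale

# Hypothetical measurements: primaries near DCI-P3 with a D65 white point.
M = rgb_to_xyz_matrix((0.680, 0.320), (0.265, 0.690),
                      (0.150, 0.060), (0.3127, 0.3290))
print(M.round(4))
# Comparing M against the target color space's reference matrix shows how
# far the panel has drifted and what correction the color engine should load.
```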

Inputs and Ports
There are five main display inputs on the monitor: two DisplayPort 1.2, two HDMI 2.0 and one DisplayPort over USB-C. All support HDCP and full 4K resolution at up to 60 frames per second. It also has a 1/8-inch sound jack and a variety of USB options. There are four USB 3.0 ports that are shared via KVM switching technology between the USB-C host connection and a separate USB-B port to a host system. These are controlled by another dedicated USB keyboard port, giving the monitor direct access to the keystrokes. There are two more USB ports that connect to the integrated DreamColor hardware engine, for connecting external calibration instruments, and for loading settings from USB devices.

My only complaint is that while the many USB ports are well labeled, the video ports are not. I can tell which ones are HDMI without the existing labels, but what I really need is to know which one the display views as HDMI1 and which is HDMI2. The Video Input Menu doesn’t tell you which inputs are active, which is another oversight, given all of the other features they added to ease the process of sharing the display between multiple inputs. So I recommend labeling them yourself.

Full-Screen Monitoring Features
I expect the Z31x will most frequently be used as a dedicated full-resolution playback monitor, and HP has developed a bunch of new features that are very useful and applicable for that use case. The Z31x can overlay mattes (with variable opacity) for Flat and Scope cinema aspect ratios (1.85 and 2.39). It can also display onscreen markers for those sizes, as well as 16×9 or 4×3, including action- and title-safe, with further options for center and thirds markers in various colors. The markers can be further customized with HP’s StudioCal.XML files. I created a preset that gives you 2.76:1 aspect ratio markers that you are welcome to download and use or modify. These customized XMLs are easy to create and are loaded automatically when you insert a USB stick containing them into the color engine port.
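I won’t reproduce the StudioCal.XML schema here, but the geometry behind a marker preset like that 2.76:1 one is plain letterbox math: center the target aspect ratio inside the 4096×2160 raster. A quick sketch:

```python
# Letterbox math behind a custom aspect-ratio marker preset: center a
# target aspect ratio inside the Z31x's 4096x2160 raster.
def marker_rect(target_ratio, width=4096, height=2160):
    """Return (x, y, w, h) of a centered marker for the given aspect ratio."""
    if target_ratio >= width / height:   # wider than the panel: letterbox
        w, h = width, round(width / target_ratio)
    else:                                # narrower: pillarbox
        w, h = round(height * target_ratio), height
    return ((width - w) // 2, (height - h) // 2, w, h)

print(marker_rect(2.76))  # -> (0, 338, 4096, 1484), the 2.76:1 preset above
print(marker_rect(2.39))  # -> (0, 223, 4096, 1714), Scope
print(marker_rect(1.85))  # -> (50, 0, 3996, 2160), Flat
```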

The display also gives users full control over the picture scaling, and has a unique 2:1 pixel scaling for reviewing 2K and HD images at pixel-for-pixel accuracy. It also offers compensation for video levels and overscan and controls for de-interlacing, cadence detection, panel overdrive and blue-channel-only output. You can even control the function of each bezel button, and their color and brightness. These image control features will definitely be significant to professional users in the film and video space. Combined with the accurate reproduction of color, resolution and frame rate, this makes for an ideal display for monitoring nearly any film or video content at the highest level of precision.

Interface Display Features
Most people won’t be using this as an interface monitor, due to the price and because the existing Z32x should suffice when not dealing with film content at full resolution. Even more than the original DreamColor, I expect it will primarily be used as a dedicated full-screen playback monitor and users will have other displays for their user interface and controls. That said, HP has included some amazing interface and sharing functionality in the monitor, integrating a KVM switch for controlling two systems on any of the five available inputs. They also have picture-in-picture and split screen modes that are both usable and useful. HD or 2K input can be displayed at full resolution over any corner of the 4K master shot.

The split view supports two full-resolution 2048×2160 inputs side by side and from separate sources. That resolution has been added as a default preset for the OS to use in that mode, but it is probably only worth configuring for extended use. (You won’t be flipping between full screen and split very easily in that mode.) The integrated KVM is even more useful in these configurations. It can also scale any other input sizes in either mode but at a decrease in visual fidelity.

HP has included every option that I could imagine needing for sharing a display between two systems. The only problem is that I need that functionality on my “other” monitor for the application UI, not on my color critical review monitor. When sharing a monitor like this, I would just want to be able to switch between inputs easily to always view them at full screen and full resolution. On a related note, I would recommend using DisplayPort over HDMI anytime you have a choice between the two, as HDMI 2.0 is pickier about 18Gb cables, occasionally preventing you from sending RGB input and other potential issues.

Other Functionality
The monitor has an RJ-45 port allowing it to be configured over the network. Normally, I would consider this to be overkill but with so many features to control and so many sub-menus to navigate through, this is actually more useful than it would be on any other display. I found myself wishing it came with a remote control as I was doing my various tests, until I realized the network configuration options would offer even better functionality than a remote control would have. I should have configured that feature first, as it would have made the rest of the tests much easier to execute. It offers simple HTTP access to the controls, with a variety of security options.

I also had some issues when using the monitor on a switched power outlet on my SmartUPS battery backup system, so I would recommend using an un-switched outlet whenever possible. The display goes to sleep automatically when the source feed is shut off, so power saving should be less of an issue than with other peripherals.

Pricing and Options
The DreamColor Z31x is expected to retail for $4,000 in the US market. If that is a bit out of your price range, the other option is the new Z27x G2 for half of that price. While I have not tested it myself, I have been assured that the newly updated 27-inch model has all of the same processing functionality, just in a smaller form-factor, with a lower-resolution panel. The 2560×1440 panel is still 10-bit, with all of the same color and frame rate options, just at a lower resolution. They even plan to support scaling 4K inputs in the next firmware update, similar to the original Z27x.

The new DreamColor studio displays are top-quality monitors, and probably the most accurate SDR monitors in their price range. It is worth noting that with a native brightness of 250 nits, this is not an HDR display. While HDR is an important consideration when selecting a forward-looking display solution, there is still a need for accurate monitoring in SDR, regardless of whether your content is HDR compatible. And the Z31x would be my first choice for monitoring full 4K images in SDR, regardless of the color space you are working in.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Behind the Title: Sim VP of Post LA Greg Ciaccio

Name: Greg Ciaccio

Company: Sim

Can you describe your company?
We’re a full-service company providing studio space, lighting and grip, cameras, dailies and finishing in Los Angeles, New York, Toronto, Vancouver and Atlanta with outposts in New Mexico and Texas.

What’s your job title?
VP, Post Los Angeles

What does that entail?
Essentially, I’m the GM of our dailies, rentals and finishing businesses — the 2nd and 3rd floors of our building, formerly Kodak Cinesite. The first floor houses our camera rental business.

What would surprise people the most about what falls under that title?
I coproduce our SimLab industry events with Bill Russell in our camera department.

What’s your favorite part of the job?
Having camera, dailies, editorial and finishing under one roof — the workflows that tie them all together provide meaningful solutions for our clients.

What’s your least favorite?
Like most facility heads, business constraints. There aren’t many, which is great, but running any successful company relies on managing the magic.

What is your favorite time of the day?
The early mornings when I can power through management work so I can spend time with staff and clients.

If you didn’t have this job, what would you be doing instead?
Probably a post sound mixer. I teach post production management one night a week at CSUN, so that provides a fresh perspective on my role in the industry.

How early on did you know this would be your path?
I really started back in the 4th grade in lighting. I then ran and designed lighting in high school and college, moving into radio-TV-film halfway through. I then moved into production sound. The move from production to post came out of a desire for (fairly) regular hours and consistent employment.

Can you name some recent projects you have worked on?
TV series: Game of Thrones, The Gifted, Krypton, The Son, Madam Secretary, Jane the Virgin. On the feature dailies and DI side: Amy Poehler’s Wine Country.

We’re also posting Netflix’s Best Worst Weekend Ever in ACES (Academy Color Encoding System) in UHD/Dolby Vision HDR.

Game of Thrones

What is the project that you are most proud of?
Game of Thrones. The quality bar which HBO has set is evident in the look of the show. It’s so well-produced — the production design, cinematography, editing and visual effects are stunning.

Name three pieces of technology that you can’t live without.
My iPhone X, my Sony Z9D HDR TV and my Apple Watch.

What social media channels do you follow?
Instagram for DP and other creative photography interests; LinkedIn for general social/influencer-driven news; Facebook for peripheral news and personal insights; and channels including ETCentric (USC ETC), ACES Central for ACES-related community info and Digital Cinema Society for industry events.

Do you listen to music while you work? Care to share your favorite music to work to?
I listen to Pandora. The Thievery Corporation station.

What do you do to de-stress from it all?
Getting out for lunch and walking when possible. I visit our staff and clients throughout the day. Morning yoga. And the music helps!

Understanding and partnering on HDR workflows

By Karen Moltenbrey

Every now and then a new format or technology comes along that has a profound effect on post production. Currently, that tech is high dynamic range, or HDR, which offers a heightened visual experience through a greater dynamic range of luminosity.

Michel Suissa

So why is HDR important to the industry? “That is a massive question to answer, but to make a pretty long story relatively short, it is by far one of the recent technologies to emerge with the greatest potential to change how images are affecting audiences,” says Michel Suissa, manager of professional solutions at The Studio–B&H. “Regardless of the market and the medium used to distribute programming, irrelevant to where and how these images are consumed, it is a clearly noticeable enhancement, and at the same time a real marketing gold mine for manufacturers as well as content producers, since a premium can be attached to offering HDR as a feature.”

And he should know. Suissa has been helping a multitude of post studios navigate the HDR waters in their quest for the equipment necessary to meet their high dynamic range needs.

Suissa started seeing a growing appetite for HDR roughly three years ago, both in the consumer and professional markets and at about the same time. “Three years ago, if someone had said they were creating HDR content, a very small percentage of the community would have known what they were talking about,” he notes. “Now, if you don’t know what HDR is and you’re in the industry, then you are probably behind the times.”

Nevertheless, HDR is demanding in terms of the knowledge one needs to create HDR content and distribute it, as well as make sure people can consume it in a way that’s satisfying, Suissa points out. “And there’s still a lot of technical requirements that people have to carefully navigate through because it is hardly trivial,” he says.

How does a company like B&H go about helping a post studio select the right tools for their individual workflow needs? “The basic yet critically important task is understanding their workflow, their existing tool set and what is expected of them in terms of delivery to their clients,” says Suissa.

To assist studios and content creators working in post, The Studio–B&H team follows a blueprint that’s based on engaging customers about the nature of the work they do, asking questions like: Which camera material do they work from? In which form is the original camera material used? What platform do they use for editing? What is the preferred application to master HDR images? What is the storage and network infrastructure? What are the master delivery specifications they must adhere to (what flavor of HDR)?

“People have the most difficulty understanding the nature of the workflow: Do the images need to be captured differently from a camera? Do they need to be ingested in the post system differently? Do they need to be viewed differently? Do they need to be formatted differently? Do they need to be mastered differently? All those things created a new set of specifications that people have to learn, and this is where it has changed the way people handle post production,” Suissa contends. “There’s a lot of intricacies, and you have to understand what it is you’re looking at in order to make sure you’re making the correct decisions — not just technically, but creatively as well.”

When adding an HDR workflow, studios typically approach B&H looking for equipment across their entire pipeline. However, Suissa states that similar parameters apply for HDR work as for other high-performance environments. People will continue to need decent workstations, powerful GPUs, professional storage for performance and increased capacity, and an excellent understanding of monitoring. “Other aspects of a traditional pipeline can sometimes remain in play, but it is truly a case-by-case analysis,” he says.

The most critical aspect of working with HDR is the viewing experience, Suissa says, so selecting an appropriate monitoring solution is vital — as is knowing the output specifications that will be used for final delivery of the content.
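
Those output specifications ultimately come down to a transfer function and a peak luminance. As a concrete reference, here is a short Python sketch of the SMPTE ST 2084 (PQ) curve used by HDR10 and Dolby Vision, which maps absolute luminance in nits to a normalized signal value; the constants come from the ST 2084 specification.

```python
# SMPTE ST 2084 (PQ) inverse EOTF: absolute nits -> normalized signal [0, 1].
# Constants are from the ST 2084 specification.
M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_encode(nits: float) -> float:
    """Encode an absolute luminance (0-10,000 nits) as a PQ signal value."""
    y = max(nits, 0.0) / 10000.0
    return ((C1 + C2 * y**M1) / (1 + C3 * y**M1)) ** M2

# A 1,000-nit highlight -- the reference-monitor level mentioned later in
# this piece -- lands at roughly 75% of the PQ signal range:
print(f"{pq_encode(1000):.3f}")   # ~0.75
```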

Without question, Suissa has seen an increase in the number of studios asking about HDR equipment of late. “Generally speaking, the demand by people wanting to at least understand what they need in order to deliver HDR content is growing, and that’s because the demand for content is growing,” he says.

Yes, there are compromises that studios are making in terms of HDR that are based on budget. Nevertheless, there is a tipping point that can lead to the rejection of a project if it is not up to HDR standards. In fact, Suissa foresees in the next six months or so the tightening of standards on the delivery side, whether for Amazon, Netflix or the networks, and the issuance of mandates by over-the-air distribution channels in order for content to be approved as HDR.

B&H/Light Iron Collaboration
Among the studios that have purchased HDR equipment from B&H is Light Iron, a Panavision company with six facilities spanning the US that offer a range of post solutions, including dailies and DI. According to Light Iron co-founder Katie Fellion, the number of their clients requesting HDR finishing has increased in the past year. She estimates that one out of every three clients is considering HDR finishing, and in some cases, they are doing so even if they don’t have distribution in place yet.

Suissa and Light Iron SVP of innovation Michael Cioni gradually began forging a fruitful collaboration during the last few years, partnering a number of times at various industry events. “At the same time, we doubled up on our relationship of providing technology to them,” Suissa adds, whether for demonstrations or for Light Iron’s commercial production environment.

Katie Fellion

For some time, Light Iron has been moving toward HDR, purchasing equipment from various vendors along the way. In fact, Light Iron was one of the very first vendors to become involved with HDR finishing when Amazon introduced HDR-10 mastering for the second season of one of its flagship shows, Transparent, in 2015.

“Shortly after Transparent, we had several theatrical releases that also began to remaster in both HDR-10 and Dolby Vision, but the requests were not necessarily the norm,” says Fellion. “Over the last three years, that has steadily changed, as more studios are selling content to platforms that offer HDR distribution. Now, we have several shows that started their Season 1 with a traditional HD finish, but then transitioned to 4K HDR finishes in order to accommodate these additional distribution platform requirements.”

Some of the more recent HDR-finished projects at Light Iron include Glow (Season 2) and Thirteen Reasons Why (Season 2) for Netflix, Uncle Drew for Lionsgate, Life Itself for Amazon, Baskets (Season 3) and Better Things (Season 2) for FX and Action Point for Paramount.

Without question, HDR is important to today’s finishing, but one cannot just step blindly into this new, highly detailed world. There are important factors to consider. For instance, the source requirements for HDR mastering — 4K 16-bit files — require more robust tools and storage. “A show that was previously shot and mastered in 2K or HD may now require three or four times the amount of storage in a 4K HDR workflow. Since older post facilities had been previously designed around a 2K/HD infrastructure, newer companies that had fewer issues with legacy infrastructure were able to adopt 4K HDR faster,” says Fellion. Light Iron was designed around a 4K+ infrastructure from day one, she adds, allowing the post house to much more easily integrate HDR at a time when other facilities were still transitioning from 2K to 4K.

Nevertheless, this adoption required changes to the post house’s workflow. Fellion explains: “In a theatrical world, because HDR color is set in a much larger color gamut than P3, the technically correct way to master is to start with the HDR color first and then trim down for P3. However, since HDR theatrical exhibition is still in its infancy, there are not options for most feature films to monitor in a projected environment — which, in a feature workflow, is an expected part of the finishing process. As a result, we often use color-managed workflows that allow us to master first in a P3 theatrical projection environment and then to version for HDR as a secondary pass.”

Light Iron NY colorist Steven Bodner grading the music video Picture Day in HDR on a Sony BVM X300.

In the episodic world, if a project is delivering in HDR, unless creative preference determines otherwise, Light Iron will typically start with the HDR version first and then trim down for the SDR Rec.709 versions.

For either, versioning and delivery have to be considered. For Dolby Vision, this starts with an analysis of the timeline to output an XML for the 709 derivative, explains Fellion of Light Iron’s workflow. And then from that 709 derivative, the colorist will review and tweak the XML values as necessary, sometimes going back to the HDR version and re-analyzing if a larger adjustment needs to be made for the Rec.709 version. For an HDR-10 workflow, this usually involves a different color pass and delivered file set, as well as analysis of the final HDR sequence, to create metadata values, she adds.
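
The metadata values in question for HDR-10 are static figures such as MaxCLL (maximum content light level) and MaxFALL (maximum frame-average light level), defined in CTA-861.3 and measured in nits across the finished program. Here is a deliberately simplified Python sketch of that analysis pass; real analyzers compute per-pixel light levels from the decoded image per the spec, while the frames below are placeholder arrays.

```python
# Simplified sketch of an HDR-10 static-metadata analysis pass.
# MaxCLL = brightest single pixel in the program; MaxFALL = brightest
# frame-average. Real analyzers follow the CTA-861.3 definitions; the
# frames here are placeholder arrays of per-pixel luminance in nits.
import numpy as np

def analyze_hdr10(frames):
    max_cll = 0.0   # maximum content light level
    max_fall = 0.0  # maximum frame-average light level
    for frame in frames:                 # frame: 2D array of nits
        max_cll = max(max_cll, float(frame.max()))
        max_fall = max(max_fall, float(frame.mean()))
    return round(max_cll), round(max_fall)

# Toy example: two tiny "frames" with a 1,200-nit specular highlight.
frames = [np.full((4, 4), 80.0), np.array([[80.0] * 3 + [1200.0]] * 4)]
print(analyze_hdr10(frames))   # -> (1200, 360)
```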

Needless to say, embracing HDR is not without challenges. Currently, HDR is only used in the final color process, since there are not many workflows that support HDR throughout the dailies or editorial process, says Fellion. “This can certainly be a challenge to creatives who have spent the past few months staring at images in SDR, only to have a different reaction when they first view them in HDR.” Also, in HDR there may be elements on screen that weren’t previously visible in SDR dailies or offline (such as outside a window or production cables under a table), which creates new VFX requirements to adjust those elements.

“As more options are developed for on-set monitoring — such as Light Iron’s HDR Video Village System — productions are given an opportunity to see HDR earlier in the process and make mental and physical adjustments to help accommodate for the final HDR picture,” Fellion says.

Having an HDR monitor on set can aid in flagging potential issues that might not be seen in SDR. Currently, however, for dailies and editorial, HDR monitoring is not really used, according to Fellion, who hopes to see that change in the future. Conversely, in the finishing world, “an HDR monitor capable of a minimum 1,000-nit display, such as the Sony [BVM] X300, as well as a consumer-grade HDR UHD TV for client reviews, are part of our standard tool set for mastering,” she notes.

In fact, several months ago, Light Iron purchased new high-end HDR mastering monitors from B&H. The studio also sourced AJA Hi5 4K Plus converter boxes from B&H for its HDR workflow.

And, no doubt, there will be additional HDR equipment needs in Light Iron’s future, as delivery of HDR content continues to ramp up. But there’s a hefty cost involved in moving to HDR. Depending on whether a facility’s DI systems already had the capacity to play back 4K 16-bit files — a key requirement for HDR mastering — the cost can range from a few thousand dollars for a consumer-grade monitor to tens of thousands for professional reference monitoring, DI system, storage and network upgrades, as well as licensing and training for the Dolby Vision platform, according to Fellion.

That is one reason why it’s important for suppliers and vendors to form relationships. But there are other reasons, too. “Those leading the charge [in HDR] are innovators and people you want to be associated with,” Suissa explains. “You learn a lot by associating yourself with professionals on the other side of things. We provide technology. We understand it. We learn it. But we also practice it differently than people who create content. The exchange of knowledge is critical, and it enables us to help our customers better understand the technology they are purchasing.”

Main Image: Netflix’s Glow


Karen Maierhofer is a longtime technical writer with more than two decades of experience in segments of the CG and post industries.

Colorist Arianna Shining Star joins Apache

Santa Monica-based color and finishing boutique Apache has added colorist Arianna Shining Star to its roster. She is the studio’s first woman colorist.

Star’s commercial work includes spots and branded shorts for Apple, Nike, Porsche, Budweiser, Tommy Hilfiger, Spotify and Coca-Cola. Her music video credits include the MTV VMA-nominated videos Wild Thoughts for Rihanna and Justin Bieber’s visual album for Purpose. Her longform work includes newly released Netflix feature film Ibiza, a comedy co-produced by Adam McKay and Will Ferrell’s Gary Sanchez Productions.

After studying Cinematic Arts and Psychology at USC, Shining Star cut her teeth at Company 3 as an assistant colorist. She then worked as a Baselight specialist for FilmLight before joining Paramount Pictures, where she remastered feature films in HDR. She was then brought on as colorist at Velem to spearhead the post production department of Milk Studios.

“Arianna worked with us before, and we’ve always had our eye on her,” says managing partner LaRue Anderson. “She’s super-talented and a true go-getter who’s amassed an awesome body of work in a relatively short time.”

With Northern California roots, Arianna’s distinctive middle name (she goes by her first and middle names professionally) comes from her parents, who met at a Grateful Dead concert during a performance of the Jerry Garcia classic song, “Shining Star.” Something of a next-gen Dead Head herself, she admits to having seen the current iteration of the band over 30 times.

Her background and interest in psychology is clear as she explains what attracts her most to color grading: “It has the ability to elevate not only production value and overall aesthetic, but can help guide the viewers’ emotional journey through the piece,” Star says.  “I love the opportunity to put the finishing touches on a piece, too. After countless people have poured their heart and soul into crafting a film, it’s an immense privilege to have the last creative touch.”

On adding the first woman colorist to the Apache roster, Anderson says it’s a testament to Star’s creative skills that she’s flourished in what’s largely a male-dominated category of post production. “There’s a lack of role models for women coming up in the creative ranks of color and visual effects,” she explains. “Women have to work hard to get on the playing field. Arianna is not only on the field, she owns the field. She’s established herself as a specialist who DPs and directors lean on for creative collaboration.”

“I want to be seen for the quality of my work and nothing else,” says Star. “What makes me unique as a colorist is not my gender, but my aesthetic and approach to collaboration — my style runs the gamut from big and bold to soft and subtle.”

She cites her work on Ibiza as an example of this versatility. “Comedies typically play it safe with color, but from day one we sought to do something different and color outside the lines,” she says. “Director Alex Richanbach and cinematographer Danny Modor set me up with an incredibly diverse palette that allowed us to go bold and use color to further enhance the three different worlds seen in the film: New York, Barcelona and Ibiza. Narrative work really allows you to take your viewer on a journey with the color grade.”

At Apache, Star says she’s found a home where she can continue to learn the craft. “They’re true veterans who know the ins and outs of this wild industry and are incredible leaders,” she says of Anderson and her partners, Shane Reed and Steve Rodriguez. “And their three key core tenets drew me. One, we’re a creatively driven company. Two, we’re consistently re-evaluating the playbook and figuring out what works and what we can improve. And three, we truly operate like a family and support one another. We’ve got a crew of talented artists, and it’s a privilege to work alongside them.”

Color for Television Series

By Karen Maierhofer

Several years ago I was lucky enough to see Van Gogh’s original The Starry Night oil on canvas at a museum and was awestruck by how rich and vibrant it really was. I had fallen in love with the painting years before after seeing reproductions/reprints, which paled in comparison to the original’s striking colors and beauty. No matter how well done, the reproductions could never duplicate the colors and richness of the original masterpiece.

Just as in the art world, stories told via television are transformed through the use of color. Color grading and color correction help establish a signature look for a series, though that can, and often does, change from one episode to another — or from one scene to another — based on the mood the DP and director want to portray.

Here we delve into this part of the post process and follow a trio of colorists as they set the tone for three very different television series.

Black-ish
Black-ish is an ABC series about a successful African-American couple raising their five children in an affluent, predominantly white neighborhood. Dre, an advertising executive, is proud of his heritage but fears that culture is lost when it comes to his kids.

There is no struggle, however, when it comes to color grading the show, a job that has fallen to colorist Phil Azenzer from The Foundation in Burbank starting with this past season (Season 4).

The show is shot using an Arri Alexa camera, and the dailies are produced by the show’s in-house editor. The files, including the assembly master, are sent to Azenzer, who grades from the raw camera files using Blackmagic’s Resolve.

Azenzer starts a scene by rolling into the establishing shot and sets the look there because “you can see all light sources and their color temperatures,” he says. “I get a feel for the composition of the shot and the gradation of shadow to light. I see what light each of the actors is standing in or walking through, and then know how to balance the surrounding coverage.”

In his opinion, networks, for the most part, like their half-hour comedies to be well lit, more chromatic, with less shadow and contrast than an average one-hour drama, in order to create a more inviting, light feel (less somber). “And Black-ish is no different, although because of the subject matter, I think of Black-ish as more of a ‘dramedy,’ and there are scenes where we go for a more dramatic feel,” Azenzer explains.

Black-ish’s main characters are African-American, and the actors’ skin tones vary. “Black-ish creator Kenya Barris is very particular about the black skin tones of the actors, which can be challenging because some tones are more absorbent and others more reflective,” says Azenzer. “You have to have a great balance so everyone’s skin tone feels natural and falls where it’s supposed to.”

Phil Azenzer

Azenzer notes that the makeup department does an excellent job, so he doesn’t have to struggle as much with pulling out the bounce coming off the actors’ skin as a result of their chromatic clothes. He also credits DP Rob Sweeney (with whom he has worked on Six Feet Under and Entourage) with “a beautiful job of lighting that makes my life easier in that regard.”

While color grading the series, Azenzer avoids any yellow in skin tones, per Barris’s direction. “He likes the skin tones to look more natural, more like what they actually are,” he says. “So, basically, the directive was to veer away from yellow and keep it neutral to cool.”

While the colorist follows that direction in most scenes, he also considers the time of day the scene takes place when coloring. “So, if the call is for the shot to be warm, I let it go warm, but more so for the environment than the skin tones,” explains Azenzer.

Most of the show is shot on set, with few outdoor sequences. However, the scenes move around the house (kitchen, living room, bedrooms) as well as the ad agency where Dre works. “I have some preferred settings that I can usually use as a starting point because of the [general] consistency of the show’s lighting. So, I might ripple through a scene and then just tighten it up from there,” says Azenzer. “But my preference as a colorist is not to take shortcuts. I don’t like to plug something in from another episode because I don’t know if, in fact, the lighting is exactly the same. Therefore, I always start from scratch to get a feel for what was shot.”

For instance, shots that take place in Dre’s office play out at various points in the day, so that lighting changes more often.

The office setting contains overhead lighting directly above the conference table, like one would find in a typical conference room. It’s a diffused lighting that is more intense directly over the table and diminishes in intensity as it feathers out over the actors, so the actors are often moving in and out of varying intensities of light on that set. “It’s a matter of finding the right balance so they don’t get washed out and they don’t get [too much shadow] when they are sitting back from the table,” explains Azenzer. “That’s probably the most challenging location for me.”

Alas, things changed somewhat during the last few episodes of the season. Dre and his wife, Rainbow, hit a rough patch in their marriage and separate. Dre moves into a sleek, ultra-modern house in the canyon, with two-story ceilings and 20-foot-tall floor-to-ceiling windows — resulting in a new location for Azenzer. “It was filled with natural light, so the image was a little flat in those scenes and awash with light and a cool aura,” he describes. Azenzer adjusted for this by “putting in extra contrast, double saturation nodes, and keying certain colors to create more color separation, which helps create overall separation and depth of field. It was a fun episode.”

In the prior episode, the show toggles back and forth from flashbacks of Bow and Dre from happier times in their marriage to present day. Azenzer describes the flashbacks as saturated with extremely high contrast, “pushing the boundaries of what would be acceptable.” When the scene switched to present day, instead of the typical look, it was shot with the movie Blue Valentine in mind, as the characters discussed separating and possibly divorcing.

“Those scenes were shot and color corrected with a very cool, desaturated look. I would latch onto maybe one thing in the shot and pop color back into that. So, it would be almost grayish blue, and if there was a Granny Smith apple on the counter, I grabbed that and popped it, made it chromatic,” explains Azenzer. “And Dre’s red sweatshirt, which was desaturated and cool along with the rest of the scene, I went back in there and keyed that and popped the red back in. It was one of the more creative episodes we did.”
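
Conceptually, that kind of grade is a global desaturation with saturation restored (or boosted) inside a narrow keyed hue range. The Python sketch below illustrates the idea on single pixels; it is not Resolve’s actual node math, and the hue values are illustrative.

```python
# Conceptual sketch of a "desaturate everything, pop one hue" grade.
# Not Resolve's actual node math -- just the underlying idea.
import colorsys

def desaturate_and_pop(rgb, keep_hue=0.33, width=0.06, desat=0.15):
    """rgb in 0-1; keep_hue ~0.33 is green (think Granny Smith apple)."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    if abs(h - keep_hue) < width:      # inside the keyed hue range:
        s = min(s * 1.5, 1.0)          # boost saturation ("pop" it)
    else:
        s = s * desat                  # cool, desaturated look elsewhere
    return colorsys.hls_to_rgb(h, l, s)

apple = (0.45, 0.75, 0.30)             # greenish pixel
wall = (0.55, 0.55, 0.60)              # neutral blue-gray pixel
print(desaturate_and_pop(apple))       # the green pops
print(desaturate_and_pop(wall))        # falls to near-gray
```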

When Azenzer first took over coloring the show, “everybody was involved,” he says. “I had a relationship with Rob Sweeney, but I was new to Kenya, the post team and co-producer Tom Ragazzo, so it was very collaborative at the beginning to nail the look they were going for, what Kenya wanted. Now we are at the point where, when I finish an episode, I give Rob a heads-up and he’ll come over that day or whenever he can, bring lunch, and I play it back for him.”

It’s not as if the episodes are without change, though Azenzer estimates that 85 percent of the time Sweeney says, “Beautiful job,” and is out the door. When there are changes, they usually involve something nominal on just a shot or two. “We are never off-base to where we need to redo a scene. It’s usually something subjective, where he might ask me to add a Power Window to create a little shadow in a corner or create a light source that isn’t there.”

Azenzer enjoys working on Black-ish, particularly because of the close relationship he has with those working on the show. “They are all awesome, and we get along really well and collaborate well,” he says. Indeed, he has forged bonds with this new family of sorts on both a professional and personal level, and recently began working on Grown-ish, a spin-off of Black-ish that follows the family’s eldest daughter after she moves away to attend college.

The 100
Dan Judy, senior colorist at DigitalFilm Tree (DFT) in Hollywood, has been working on The CW’s The 100 starting with the pilot in 2014, and since then has helped evolve it into a gritty-looking show. “It started off with more of an Eden-type environment and has progressed into a much grittier, less friendly and dangerous place to live,” he says.

The 100 is a post-apocalyptic science-fiction drama that centers on a group of juvenile offenders from aboard a failing space station who are sent to Earth following a nuclear apocalypse there nearly a century earlier. Their mission: to determine whether the devastated planet is habitable. But, soon they encounter clans of humans who have survived the destruction.

“We have geographical locations that have a particular look to them, such as Polis (the capital of the coalition),” says Judy of the environment set atop rolling hills lush with vegetation. “In this past season, we have the Eden environment — where after the planet incurs all this devastation, the group finds an oasis of thriving foliage and animal life. Then, gradually, we started backing off the prettiness of Eden and making it less colorful, a little more contrasty, a little harsher.”

The series is shot in Vancouver by DP Michael Blundell. The dailies are handled by Bling Digital’s Vancouver facility, which applies color with the dailies cut. As an episode is cut, Bling then ships drives containing the camera master media and the edit decision list to DFT, which assembles the show with a clip-based approach, using the full-resolution camera masters as its base source.

“We aren’t doing a transcode of the media. We actually work directly, 100 percent of the time, from the client camera master,” says Judy, noting this approach eliminates the possibility of errors, such as dropouts or digital hits that can result from transcoding. “It also gives me handles on either end of a shot if it was trimmed.”
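
Under the hood, a conform like this walks the EDL event by event and pulls the matching frame ranges, plus handles, from the camera masters. As a rough illustration, here is a minimal Python parse of a CMX3600-style event line; the reel name and timecodes are made up, and real EDLs add comment and effect lines that this ignores.

```python
# Bare-bones parse of a CMX3600-style EDL event line.
# Fields: event#, reel, track, transition, source in/out, record in/out.
import re

EVENT = re.compile(
    r"^(\d+)\s+(\S+)\s+(\S+)\s+(\S+)\s+"
    r"(\d\d:\d\d:\d\d:\d\d)\s+(\d\d:\d\d:\d\d:\d\d)\s+"
    r"(\d\d:\d\d:\d\d:\d\d)\s+(\d\d:\d\d:\d\d:\d\d)"
)

def parse_event(line, handles=12):
    m = EVENT.match(line.strip())
    if not m:
        return None   # comment, effect or note line -- skipped in this sketch
    num, reel, track, trans, src_in, src_out, rec_in, rec_out = m.groups()
    # A clip-based conform would locate `reel` among the camera masters and
    # pull src_in..src_out, padded by `handles` frames on each end.
    return {"event": num, "reel": reel, "src": (src_in, src_out),
            "rec": (rec_in, rec_out), "handles": handles}

line = "001  A004C012  V  C  11:32:10:05 11:32:14:17 01:00:00:00 01:00:04:12"
print(parse_event(line))
```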

Dan Judy

Vancouver-based Blundell sets the palette, but he conveys his ideas and concepts to Tim Scanlan, director and supervising producer on the show, with whom Judy has a longstanding relationship — they worked together years before on Smallville. “Then Tim and I will sit down and spot the show, setting looks for the scenes, and after the spotting session, I will fill in the gaps to give it a consistent look,” says Judy. Although Scanlan is in nearby Santa Monica, due to LA’s traffic, he and Hollywood-based Judy collaborate remotely, to save valuable time.

“I can remote into [Scanlan’s] system and color correct with him in full resolution and in realtime,” explains Judy. “I can play back the reference file with the dailies color on it, and I can split-screen that with him in realtime if he wants to reference the dailies color for that particular scene.”

For coloring the show, Judy uses Blackmagic’s DaVinci Resolve, which is also used to conform the series. Using Resolve’s Project Management tools, the editors and colorists “can all work on the project and contribute to it live, in realtime, simultaneously,” Judy points out. “So, I can be color correcting at the same time the editor is building the show, and getting all of his updates in mere seconds.”

Scanlan uses a remote Resolve system with a monitor that is calibrated to Judy’s, “so what he is seeing on his end is an exact replica of what I’m seeing in my room,” Judy says.

One scene in The 100 that stands out for Judy occurs early in the Season 5 premiere, which finds Clarke Griffin, one of the prisoners, trapped in a wasteland. He explains: “We had several different evolutions of what that look was going to be. I gave them a few designs, and they gave me some notes. Before the show was cut, they gave me little snippets of scenes to look at, and I did test looks. They came back and decided to go with one of those test looks at first, and then as the show progressed, we decided, collaboratively, to redesign the look of the scene and go with more of a sepia tone.”

Much of The 100 is filmed outdoors, and as everyone knows, nature does not always cooperate during shoots. “They deal with a lot of different weather conditions in Vancouver, unlike LA. They’ll get rain in the middle of a scene. Suddenly, clouds appear, and you have shadows that didn’t exist before. So, when that’s the only footage you have, you need to make it all blend together,” explains Judy. “Another challenge is making these amazing-looking sets look more natural by shadowing off the edges of the frame with power windows and darkening parts of the frame so it looks like the natural environment.”

Judy points to the character Becca’s abandoned lab — an elaborate set from last year’s season — as a scene that stands out for him. “It was an amazing set, and in wide shots, we would shape that picture with power windows and use color levels and desaturation to darken it, and then color levels and saturation to brighten up other areas,” he says. “This would make the room look more cavernous than it was, even though it was large to begin with, to give it more scope and vastness. It also made the room look dramatic yet inviting at the same time.”

All in all, Judy describes The 100 as a very edgy, dramatic show. “There’s a lot going on. It’s not your standard television fare. It’s very creative,” he says. “Tim and I did a lot of color design on Smallville, and we’re carrying on that tradition in The 100. It’s more feature-esque, more theatrical, than most television shows. We add grain on the picture to give it texture; it’s almost imperceptible, but it gives a slightly different feel than other shows. It’s nice to be part of something where I’m not just copying color for a standardized, formulaic show. This series gives me the opportunity to be creative, which is awesome.”

Dear White People
Sometimes color grading decisions are fairly standard on television shows. Black and white, so to speak. Not so for the Netflix series Dear White People, a comedy-drama spin-off from the 2014 film of the same name, which follows students of color at a predominantly white Ivy League college as they navigate various forms of discrimination — racial and otherwise.

Helping achieve the desired look for the series fell to senior colorist Scott Gregory from NBCUniversal StudioPost. Starting with Season 1, day one, “the show’s creator, Justin Simien, DP Jeffrey Waldron, executive producer Yvette Lee Bowser and I huddled in my bay and experimented with different ‘overall’ looks for the show,” notes Gregory.

Simien then settled on the “feel” that is present throughout most of the series. Once he had locked a base look, the group then discussed how to use color to facilitate the storytelling. “We created looks for title cards, flashbacks, historical footage, locations and even specific characters,” Gregory says.

Using stills he had saved during those creative meetings as a guide, he then color corrects each show. Once the show is ready for review, the executive producers and DP provide notes — during the same session if schedules permit, or separately, as is often the case. If any of the creatives cannot be present, stills and color review files are uploaded for review via the Internet.

According to Gregory, his workflow starts after he receives a pre-conformed 4:4:4 MXF video assembled master (VAM) and an EDL supplied by online editor Ian Lamb. Gregory then performs a process pass on the VAM using Resolve, whereby he re-renders the VAM, applying grain and two Digital Film Tools (DFT) optical filters. This gives the Red camera footage a more weathered, filmic look. This processing, however, is not applied to the full-frame television show inserts to better separate them from the visual palette created for the show by Simien, Bowser and DPs Waldron and Topher Osborn.
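
Grain emulation of this kind is conceptually simple, even though production filters are far more sophisticated: per-frame noise, typically scaled with image level, is summed into the picture. A toy Python sketch follows; it is illustrative only, not the DFT filters used on the show.

```python
# Toy film-grain pass: per-frame noise scaled by local image level.
# A conceptual illustration only -- not the filters the article mentions.
import numpy as np

def add_grain(frame, strength=0.04, rng=None):
    """frame: float image in 0-1; noise is scaled with local brightness."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, strength, frame.shape)
    return np.clip(frame + noise * np.sqrt(frame), 0.0, 1.0)

frame = np.full((2160, 3840), 0.5)   # flat mid-gray UHD frame
print(add_grain(frame).std())        # roughly strength * sqrt(0.5) ~= 0.028
```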

Scott Gregory

Once the VAM is processed, Gregory creates a timeline using the EDL, the processed VAM, and the temp audio, applies a one-light correction to all of the shots, and gets to work. As the color progresses, he drops in the visual effects, cleaned shots, composited elements, and some titles as they are delivered. Once the show is locked for color and VFX approval, he renders out a 3840×2160 UHD final 4:4:4 MXF color-timed master, which then goes back to the online editor for titling and delivery.

“Blue contaminated and lifted blacks, strong vignettes, film-grain emulation and warm, compressed filmic highlights are characteristics present in most of the show,” says Gregory. “We also created looks for Technicolor two-strip, sepia, black-and-white silent-era damaged print, and even an oversaturated, diffused, psychedelic drug trip scene.”

The looks for the flashback or “historical” sequences, usually somewhere in Act I, were created for the most part in Resolve. Many of these sequences or montages jump through different time periods. “I created a black-and-white damaged film look for the 1800s, Technicolor two-strip for the early 1900s, a faded-emulsion [Kodak] Ektachrome [film] look for the ’70s, and a more straightforward but chromatic look for the ’80s,” says Gregory.

Simien also wanted to use color “themes” for specific characters. This was reflected in not only the scenes that included the featured character for that particular show, but also in the title card at the head of the show. (The title card for each show has a unique color corresponding to the featured character of that episode.)

When coloring the series, Gregory inevitably encounters processing issues. “Using all the filters and VFX plug-ins that I do on this show and being in UHD resolution both eat up a lot of processing power. This slows down the software significantly, no matter what platform or GPUs are being used,” he says. In order to keep things up to speed, he decided to pre-render, or bake in, the grain and some of the filters that were to be used throughout each show.

“I then create a new timeline using the pre-rendered VAM and the EDL, and set a base correction,” Gregory explains. “This workflow frees up the hardware, so I can still get realtime playback, even with multiple color layers, composites and new effects plug-ins.”

Gregory is hardly new to color grading, having a long list of credits, including television series, full-length movies and short films. And while working on Season 1 and the recently released Season 2 of Dear White People, he appreciated the collaborative environment. “Justin is obviously very creative and has a discerning eye. I have really enjoyed the collaborative space in which he, Yvette, Jeffrey and Topher like to work,” he says. “Justin likes to experiment and go big. He wants the artists he works with to be a part of the creative process, and I think he believes that in the end, his final product will benefit from it. It makes for good times in the color bay and a show we are all very proud of.”


Karen Maierhofer is a longtime technical writer with more than two decades of experience in segments of the CG and post industries.

Behind the Title: Deluxe Senior Finishing Editor Samantha Uber

NAME: Samantha Uber (@samanthauber)

COMPANY: Deluxe NY

CAN YOU DESCRIBE YOUR COMPANY?
Deluxe NY is the New York City branch of the classic film lab founded in 1915. Today, we are a huge multimedia international empire for all types of content creation and delivery. My favorite part of working for this company is that we manage to serve our clients in a personalized, boutique environment but with the support of a worldwide network of both technology and ideas.

WHAT’S YOUR JOB TITLE?
Senior Finishing Editor

CAN YOU EXPLAIN WHAT YOU DO?
I am a Flame finishing editor/VFX artist, and I come from an Avid online and offline editorial background. I also use Blackmagic Resolve, Adobe Premiere and Apple FCP for their various abilities for different projects. While I always fully finish (conform/online) episodic and film projects in Flame, I also always use a unique mix of those applications listed above for each project to get me to that point in the highest quality and most efficient way possible. I am very interested in the building of the computer I am working on, the specialized scripts to make data organized, the debayer/color science process and, of course, the actual editing and delivery of the project.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
In my job as a finishing editor, I am surprisingly super-involved in dailies, mainly because I know what will make the job easier on the finishing editor if certain metadata is retained and organized in dailies. Seeing how the metadata coming from the dailies process is actually implemented in finishing allows me to have a unique perspective, and I teach dailies techs about this to give them a better understanding of how their work is being used.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Everyone who knows me, knows my favorite thing is a reconform. I love them. They are challenging, like giant Tetris puzzles — my favorite game growing up was Tetris. I love getting in the zone for hours and hours, moving the pieces of the timeline around, relying on the metadata the Flame gives me to do it more efficiently, and sometimes, not even looking at the actual picture until the end.

WHAT’S YOUR LEAST FAVORITE?
For me, my least favorite thing is working on something that doesn’t challenge me. I like to constantly be thinking about ways to process new camera formats and new workflows, and understanding/being involved in the entire online process from start to finish. I love the “hard” jobs… the tough ones to figure out, even if that means I lose quite a bit of sleep (she laughs). There is always a limit to that, of course, but if I’m not involved in research and development on a project, I’m not happy. For this reason, I love working in episodic television the most because I can R&D a workflow and then use it and perfect it over time, all while building a close relationship with my clients and feeling ownership of my show.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
I’d say mid-afternoon and around 9pm. After the morning fires are put out and everything gets going, the middle of the afternoon gets a lot of work done. I also enjoy working around 9pm because the formal working day has pretty much ended and I can just zero in on a project and work quietly, without distractions.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I really love restoring antiques, whether it’s furniture or the 100-year-old Victorian home I live in. I am always working with my hands — either at work or at home — building, painting, grooming dogs, veggie-gardening, cooking, sculpting, etc. I appreciate the craftsmanship that went into antique pieces. I feel that type of work is lost in today’s disposable world.

What I do for films as a finishing editor is quite like the restoration work I do at home — taking something and realizing it to its full potential and giving it a new life. For these reasons I think I could possibly be an architect/designer, specializing in the mostly period-accurate restoration of antique homes. I still may end up doing this many years from now.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I knew very early on that I wanted to be a film editor of some sort. I was 16 years old when the film Moulin Rouge came out, and my best friend Michelle and I saw it in the theater. We both knew from that point that we wanted to do something technical and creative. She became a computer engineer, and I became a senior finishing editor. I loved the editing and pacing of that film, how it was so much like the music videos I grew up watching, and I wanted to be able to tell a story with VFX and editing. I actually practiced on the Moulin Rouge DVD extras, re-editing the scenes using the camera ISOs they provided.

I was 16 when I applied to NYU’s Tisch School of the Arts. It was my only choice for college. I initially went for a summer between my junior and senior year of high school and continued after high school for three more years until I graduated. I was working as a freelance editor for students, working at MTV as a junior editor, and teaching Avid editing at NYU during that time — always working!

Moulin Rouge is still my favorite film, and my dream is to work with director Baz Luhrmann one day.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I have worked as senior finishing editor on Paterno, High Maintenance, Girls, Vinyl and Boardwalk Empire for HBO, The Knick for Cinemax, Blue Bloods for CBS, The Americans for FX, Jesus Christ Superstar for NBC and Mr. Robot for USA. I worked on the film All These Small Moments for the 2018 Tribeca Film Festival, as well as the films Beasts of No Nation and Moonrise Kingdom in recent years.

YOU HAVE WORKED ON ALL SORTS OF PROJECTS. DO YOU PUT ON A DIFFERENT HAT WHEN CUTTING FOR A SPECIFIC GENRE?
I certainly put on a different workflow hat for the different parts of my job. It actually feels like different jobs sometimes —  painting a visual effect, building a computer, making a finishing workflow, conforming a show, debayering footage, designing a dailies workflow, etc. I think that keeps it interesting; doing something different every day.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
The project I am most proud of is The Knick. I was involved in the process of creating the workflow of the show with Steven Soderbergh’s team for one year before it actually began. I believe it was the first show to use the Red Dragon camera at 5K, finishing at UHD. I worked intensely with the Red team to develop the software, color workflow and computer for debayering the footage.

I also worked closely with colorist Martin Zeichner and Steven’s team to retain the exact on-set color look immediately and efficiently, while also giving them the full latitude of the Red format in the DI. The result was beautiful, and I really enjoyed the show. I felt like the plot of the show — innovation in the surgical field — was being mirrored in the innovation in the actual finishing of the show, which was super awesome!

CAN YOU TALK MORE ABOUT THE TOOLS YOU USE?
For all final finishing, I use Autodesk Flame. I am proficient in nearly all platforms, but to me, nothing is better than the unique timeline in Flame, where layers see each other and tracks do not. This allows you to have many versions of a cut in one timeline, and is ideal for finishing. Also, the VFX capability of the Flame is unparalleled in an editing system, and it allows me to start working on anything in moments at the client’s request. However, Avid will always be my favorite for metadata and database management, and I usually start every project with a peek at the metadata in the Avid, and frequently a full reorganization.

WHAT IS YOUR FAVORITE PLUGIN?
My favorite and most frequently used plugin is Re:Vision’s Twixtor, for the tons and tons of timewarps I do. This plugin helps me paint fewer frames than most. Close runners-up are Autodesk’s Autostabilize, which is actually highly customizable, and Furnace’s F-WireRemoval for all sorts of purposes.

ARE YOU OFTEN ASKED TO DO MORE THAN EDIT? 
Being a finishing editor means you are the last person to touch the project before it airs, so you are the last stop for everything. For that reason, I am often asked to do anything and everything in session — re-mix sound, creatively re-edit, give advice on VFX shots and deliverables, do VFX shots, make masters, QC masters. You name it and I do it in session. I think that’s what the job really entails: being able to give the client what they are looking for at the last possible moment, especially now that they are seeing the final product in high resolution and color corrected.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
I could not live without my iPhone, as it connects me to the outside world as well as my job. It’s like my whole life is on my phone. I could also not live without my Wacom tablet. Finishing an edit is a whole lot easier on a tablet. Also, my super-fast cylinder Mac, outfitted so that every application and high-resolution footage can be processed extremely quickly. I still do wish my Mac was square (she laughs), for more equipment compatibility, but I cannot complain about its high-speed processing ability. Engineering has kindly given me a Mac that I can play on and try new software, often before it is rolled into production throughout the facility. This keeps me in the know on new developments in our industry. This computer is totally separate from my super-powerful Linux Flame system.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Yes, this is a high-stress job! I feel very responsible for all of the people who have put their hard work into a project to make sure it is shown in its best light and everything is as perfect as possible on often-tight deadlines. After a project leaves my hands it goes to QC, and my final work is what they see and what airs.

Because everything I do is on computers, I try to spend as little time on a computer outside of work as possible. As I mentioned before, I live in a 100-year-old house that I am restoring myself. What is nice is that I feel like I’m using the same part of my brain as I do at my job; however, the work is usually outdoors and involves physical labor. That is a great de-stressor from working on a computer in a windowless and darkened room all week.

I live far outside the city by the beach, and when I’m home, I’m really home and work seems a world away. I have two beautiful Afghan Hound sister dogs, Ginny and Trill, and a 1974 VW bus named Buddy. I honestly don’t like to rest. I always like to be working on my projects and pushing forward in my life, and I am just your typical Jersey girl at heart.

Sim and the ASC partner on educational events, more

During Cine Gear recently, Sim announced a 30-year sponsorship with the American Society of Cinematographers (ASC). Sim offers end-to-end solutions for creatives in film and television, and the ASC is a nonprofit focusing on the art of cinematography. As part of the relationship, the ASC Clubhouse courtyard will now be renamed Sim Plaza.

Sim and the ASC have worked together frequently on events that educate industry professionals on current technology and its application to their evolving craft. As part of this sponsorship, Sim will expand its involvement with the ASC Master Classes, SimLabs, and conferences and seminars in Hollywood and beyond.

During an official ceremony, a commemorative plaque was unveiled and embedded into the walkway of what is now Sim Plaza in Hollywood. Sim will also host a celebration of the ASC’s 100th anniversary in 2019 at Sim’s Hollywood location.

What else does this partnership entail?
• The two organizations will work together closely over the next 30 years on educational events for the cinematography community. Sim’s sponsorship will help fund society programs and events to educate industry professionals (both practicing and aspiring) on current technology and its application to the evolving craft.
• The ASC Master Class program, SimLabs and other conferences and seminars will continue over these 30 years, with Sim increasing its involvement. Sim is not telling the ASC what kind of initiatives it should pursue, but is rather lending a helping hand to drive visual storytelling forward. For example, Sim has already hosted ASC Master Class sessions in Toronto and Hollywood, has sponsored the annual ASC BBQ for the last couple of years, and founder Rob Sim himself is an ASC associate member.

How will the partnership increase programming and resources to support the film and television community for the long term?
• It has a large focus on three things: financial resources, programming assistance and facility support.
• It will provide access and training with world-class technology in film and television.
• It will offer training directly from industry leaders in Hollywood and beyond.
• It will develop new programs for people who can’t attend ASC Master Class sessions, such as an online experience, which is something ASC and Sim are working on together.
• It will expand SimLabs beyond Hollywood, with the potential to bring it to Vancouver, Atlanta, New York and Toronto, creating new avenues for people who are associated with the ASC and who know they can call on Sim.
• It will bring volunteers. Sim has many volunteers on ASC committees, including the Motion Imaging Technology Council and its Lens committee.

Main Image: L-R: Sim President/CEO James Haggarty, Sim founder and ASC associate member Rob Sim, ASC events coordinator Patty Armacost and ASC president Kees van Oostrum.

Testing large format camera workflows

By Mike McCarthy

In the last few months, we have seen the release of the Red Monstro, Sony Venice, Arri Alexa LF and Canon C700 FF, all of which have larger or full-frame sensors. Full frame refers to DSLR terminology: a sensor equivalent to the entire 35mm film area, the way film was used horizontally in still cameras. All SLRs used to be full frame with 35mm film, so there was no need for the term until manufacturers started saving money on digital image sensors by making them smaller than 35mm film exposures. Super35mm motion picture cameras, on the other hand, ran the film vertically, resulting in a smaller exposure area per frame, but this was still much larger than most video imagers until the last decade, with 2/3-inch chips being considered premium imagers. The options have grown a lot since then.

L-R: 1st AC Ben Brady, DP Michael Svitak and Mike McCarthy on the monitor.

Most of the top-end cinema cameras released over the last few years have advertised their Super35mm sensors as a huge selling point, as that allows use of any existing S35 lens on the camera. These S35 cameras include the Epic, Helium and Gemini from Red, Sony’s F5 and F55, Panasonic’s Varicam LT, Arri’s Alexa and Canon’s C100-500. On the top end, 65mm cameras like the Alexa65 have sensors twice as wide as Super35 cameras, but very limited lens options to cover a sensor that large. Full frame falls somewhere in between and allows, among other things, use of any 35mm still film lenses. In the world of film, this was referred to as VistaVision, but the first widely used full-frame digital video camera was Canon’s 5D MkII, the first serious HDSLR. That format has suddenly surged in popularity, and thanks to this I recently had the opportunity to be involved in a test shoot with a number of these new cameras.

Keslow Camera was generous enough to give DP Michael Svitak and myself access to pretty much all their full-frame cameras and lenses for the day in order to test the cameras, workflows and lens options for this new format. We also had the assistance of first AC Ben Brady to help us put all that gear to use, and Mike’s daughter Florendia as our model.

First off was the Red Monstro which, while technically not the full 24mm height of true full frame, uses the same size lenses due to the width of its 17×9 sensor. It offers the highest resolution of the group at 8K. It records compressed RAW to R3D files, as well as ProRes and DNxHR up to 4K, all saved to Red mags. Like the rest of the group, smaller portions of the sensor can be used at lower resolutions to pair with smaller lenses. The Red Helium sensor has the same 8K resolution but in a much smaller Super35 size, allowing a wider selection of lenses to be used. But larger pixels allow more light sensitivity, with individual photosites up to 5 microns wide on the Monstro and Dragon, compared to Helium’s 3.65-micron pixels.

Next up was Sony’s new Venice camera with a 6K full-frame sensor, allowing 4K S35 recording as well. It records XAVC to SxS cards or compressed RAW in the X-OCN format with the optional AXS-R7 external recorder, which we used. It is worth noting that both full-frame recording and integrated anamorphic support require additional special licenses from Sony, but Keslow provided us with a camera that had all of that functionality enabled. With a 36×24mm 6K sensor, the pixels are 5.9 microns, and footage shot at 4K in the S35 mode should be similar to shooting with the F55.

We unexpectedly had the opportunity to shoot on Arri’s new Alexa LF (Large Format) camera. At 4.5K, this had the lowest resolution, but that also means the largest sensor pixels at 8.25 microns, which can increase sensitivity. It records ArriRaw or ProRes to Codex XR capture drives with its integrated recorder.

Another new option is the Canon C700 FF with a 5.9K full-frame sensor recording RAW, ProRes or XAVC to CFast cards or Codex drives. That gives it 6-micron pixels, similar to the Sony Venice. But we did not have the opportunity to test that camera this time around; maybe we will in the future.
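
Incidentally, all of the pixel-pitch numbers above are just the sensor’s active width divided by its horizontal photosite count. Here is a quick Python sketch of that arithmetic; the sensor widths are approximate figures I am assuming for illustration, not official specs:

    # Rough pixel-pitch check for the cameras discussed above.
    # Sensor widths in mm are approximate assumptions, not official specs.
    def pixel_pitch_microns(sensor_width_mm, horizontal_pixels):
        return sensor_width_mm * 1000 / horizontal_pixels

    cameras = {
        "Red Monstro (8K FF)": (40.96, 8192),
        "Red Helium (8K S35)": (29.90, 8192),
        "Sony Venice (6K FF)": (36.00, 6048),
        "Arri Alexa LF (4.5K)": (36.70, 4448),
    }
    for name, (width_mm, pixels) in cameras.items():
        print(f"{name}: {pixel_pitch_microns(width_mm, pixels):.2f} microns")

Those work out to roughly 5, 3.65, 5.95 and 8.25 microns, matching the figures quoted above, which is a handy sanity check when comparing spec sheets.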

One more factor in all of this is the rising popularity of anamorphic lenses. All of these cameras support modes that use the part of the sensor covered by anamorphic lenses and can desqueeze the image for live monitoring and preview. In the digital world, anamorphic essentially cuts your overall resolution in half, until the unlikely event that we start seeing anamorphic projectors or cameras with rectangular sensor pixels. But the prevailing attitude appears to be, “We have lots of extra resolution available so it doesn’t really matter if we lose some to anamorphic conversion.”
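
To put rough numbers on that resolution trade-off, here is a toy calculation (my own illustration, using assumed Monstro-like dimensions) of how much of a 17×9 sensor a 2x anamorphic lens actually uses when the deliverable is 2.39:1:

    # Toy calculation: sensor use when shooting 2x anamorphic for a
    # 2.39:1 deliverable. Dimensions are assumed, Monstro-like numbers.
    SENSOR_W, SENSOR_H = 8192, 4320   # photosites
    SQUEEZE = 2.0
    TARGET_ASPECT = 2.39

    # The captured (squeezed) aspect is the target divided by the squeeze.
    captured_aspect = TARGET_ASPECT / SQUEEZE        # ~1.195:1
    used_w = round(SENSOR_H * captured_aspect)       # ~5162 of 8192 columns
    desqueezed_w = round(used_w * SQUEEZE)           # ~10324 apparent width

    print(f"columns used: {used_w} of {SENSOR_W} ({used_w / SENSOR_W:.0%})")
    print(f"desqueezed frame: {desqueezed_w}x{SENSOR_H}")

Only about 63% of the sensor’s columns are used, and each desqueezed output pixel carries half a photosite of horizontal detail, which is where the “half resolution” figure comes from.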

Post Production
So what does this mean for post? In theory, sensor size has no direct effect on the recorded files (beyond their content), but resolution does. We also have a number of new recording formats to deal with, and anamorphic images add another layer of work during finishing.

Ever since I got my hands on one of Dell’s new UP3218K monitors with an 8K screen, I have been collecting 8K assets to display on it. When I first started discussing this shoot with DP Michael Svitak, I was primarily interested in getting some more 8K footage to test new 8K monitors, editing systems and software as they were released. I was anticipating getting Red footage, which I knew I could play back and process using my existing software and hardware.

The other cameras and lens options were added as the plan expanded, and by the time we got to Keslow Camera, they had filled a room with lenses and gear for us to test with. I also had a Dell 8K display connected to my ingest system, and the new 4K DreamColor monitor as well. This allowed me to view the recorded footage in the highest resolution possible.

Most editing programs, including Premiere Pro and Resolve, can handle anamorphic footage without issue, but new camera formats can be a bigger challenge. Any RAW file requires info about the sensor pattern in order to debayer it properly, and new compression formats are even more work. Sony’s new compressed RAW format for Venice, called X-OCN, is supported in the newest 12.1 release of Premiere Pro, so I didn’t expect that to be a problem. Its other recording option is XAVC, which should work as well. The Alexa on the other hand uses ArriRaw files, which have been supported in Premiere for years, but each new camera shoots a slightly different “flavor” of the file based on the unique properties of that sensor. Shooting ProRes instead would virtually guarantee compatibility but at the expense of the RAW properties. (Maybe someday ProResRAW will offer the best of both worlds.) The Alexa also has the challenge of recording to Codex drives that can only be offloaded in OS X or Linux.

Once I had all of the files on my system, after using a MacBook Pro to offload the media cards, I tried to bring them into Premiere. The Red files came in just fine but didn’t play back smoothly over 1/4 resolution. They played smoothly in RedCine-X with my Red Rocket-X enabled, and they export respectably fast in AME (a five-minute 8K anamorphic sequence to UHD H.265 in 10 minutes), but for some reason Premiere Pro isn’t able to get smooth playback when using the Red Rocket-X. Next I tried the X-OCN files from the Venice camera, which imported without issue. They played smoothly on my machine but looked like they were locked to half or quarter res, regardless of what settings I used, even in the exports. I am currently working with Adobe to get to the bottom of that, because they are able to play back my files at full quality, while all my systems have the same issue. Lastly, I tried to import the Arri files from the Alexa LF, but Adobe doesn’t support that new variation of ArriRaw yet. I would anticipate that will happen soon, since it shouldn’t be too difficult to add that new version to the existing support.

I ended up converting the files I needed to DNxHR in DaVinci Resolve so I could edit them in Premiere, and I put together a short video showing off the various lenses we tested with. Eventually, I need to learn how to use Resolve more efficiently, but the type of work I usually do lends itself to the way Premiere is designed — inter-cutting and nesting sequences with many different resolutions and aspect ratios. Here is a short clip demonstrating some of the lenses we tested with:

This is a web video, so even at UHD it is not meant to be an analysis of the RAW image quality, but instead a demonstration of the field of view and overall feel with various lenses and camera settings. The combination of the larger sensors and the anamorphic lenses leads to an extremely wide field of view. The table was only about 10 feet from the camera, and we could usually see all the way around it. We also discovered that when recording anamorphic on the Alexa LF, we were recording a wider image than was displaying on the monitor output. You can see in the frame grab below that the live display visible on the right side of the image isn’t showing the full content that got recorded, which is why we didn’t notice that we were recording with the wrong settings, with so much vignetting from the lens.

We only discovered this after the fact, from this shot, so we didn’t get the opportunity to track down the issue to see if it was the result of a setting in the camera or in the monitor. This is why we test things before a shoot, but we didn’t “test” before our camera test, so these things happen.

We learned a lot from the process, and hopefully some of those lessons are conveyed here. A big thanks to Brad Wilson and the rest of the guys at Keslow Camera for their gear and support of this adventure. Hopefully, it will help people better prepare to shoot and post with this new generation of cameras.

Main Image: DP Michael Svitak


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

ACES adds new companies to Logo Program 

Six new companies have been added to the ACES Logo Program. Membership in the program signals that the companies are committed to implementing ACES into their hardware and software products in conformance with program specifications for consistency and quality. ACES is the global standard for color management, digital image interchange and archiving.

“ACES has given us a solid framework to efficiently solve issues, along with options, to maintain creative control for our productions. It provides much needed transparency, connecting and standardizing the workflows from on-set, dailies, VFX, DI and archival,” says Victoria Alonso, EVP, Physical Production, Marvel Studios. “Standards are important — they help studios protect and monetize our films for years, and they also help create a healthy ecosystem of suppliers and professionals who can reliably work on sophisticated productions because they know the infrastructure beneath them is solid. Standards like ACES give us a common language for applications and pipelines to connect without compromised translations and misunderstandings, and to protect the creative workflow established by filmmakers. We see ACES as an important component in allowing the industry to innovate and work at the highest levels of our craft.”

The new ACES Logo Program partner companies are:
– Color Trix, makers of Color Finale, a color correction add-on to Final Cut Pro X.
– DJI, makers of drones, camera accessories and systems, including the Zenmuse X7, a Super35mm cinema-grade camera.
– In2Core, makers of the QTake video assist system.
– Lasergraphics, makers of film scanning systems: the Director scanner, the Director 10K scanner and the ScanStation.
– Vision Research, makers of the Phantom line of high-speed cameras.
– WowWow Entertainment, makers of the IS Mini LUT Box.

These six companies join the existing manufacturers of cinema cameras, color correctors, monitors, on-set tools, animation and compositing software who are already part of ACES Product Partners.

Tom Dunlap, Gui Borchert lead Hecho Studios, formerly Hecho En 72

Hecho En 72 has expanded its North American operations to allow for continued growth in both Los Angeles and New York. It will now be known as Hecho Studios under the MDC Partners network of companies. Hecho is a content development and production company that originally grew out of 72andSunny but is now a separate entity. While the two still work closely, and share the same spaces in New York and LA, they operate independently.

As part of this growth, Tom Dunlap, who was previously chief production officer at 72andSunny, has assumed the position of managing director of the company. He will be responsible for driving all aspects of growth, operations and production.

Hecho Studios will be led creatively by executive creative director, Gui Borchert, who most recently served as group creative director at 72andSunny for the last four years, working on projects for Starbucks, Sonos and the LA Olympic bid.

“In a time where marketing takes any form, how brands make content is just as important as what they make,” says Dunlap. “We have built and are growing Hecho Studios to help brands amplify creative opportunities and modernize production and content creation to maximize quality, efficiency and impact,” says Borchert.

Hecho’s past work includes the production of Sugar Coated, a short documentary featured in 18 film festivals in partnership with 72U; two Emmy nominations for their work on Google — Year in Search; editorial, print and product design for the award-winning LA Original campaign; and the recent short parody film featuring Will Ferrell and Joel McHale for The Hammer Museum at UCLA’s latest exhibition, Stories of Almost Everyone.

Hecho’s production offerings include building models based on the client’s creative needs. These can range anywhere from a small tabletop product shoot to a large-scale narrative film, and the work spans live-action, photography, stop-motion, time-lapse, narrative, documentary and both short and long form.

Hecho offers full service audio post, specializing in commercial and branded content for broadcast TV, cinema, digital and radio. Their studios are equipped with two Avid S6 mixing consoles and JBL-M2 surround stage (7.1, 5.1) and spatial audio (VR) capabilities. Services include sound design, mixing, field recording, Foley, voice-over/ADR recording, Source-Connect/ISDN and music editorial and licensing.

Hecho’s post offerings include editorial, motion graphics and finishing. Their edit suites integrate all editing platforms, including Adobe Premiere, Avid and Final Cut Pro. Their motion graphics team uses After Effects, Cinema 4D and Maya. Hecho has two Flame suites to address requests for clean-up, conform and VFX. They also use Blackmagic Resolve for color grading needs.

Main Image Caption: (L-R) Gui Borchert and Tom Dunlap.

Review: The PNY PrevailPro mobile workstation

By Mike McCarthy

PNY, a company best known in the media and entertainment industry as the manufacturer of Nvidia’s Quadro line of professional graphics cards, is now offering a powerful mobile workstation. While PNY makes a variety of other products, mostly centered around memory and graphics cards, the PrevailPro is their first move into offering complete systems.

Let’s take a look at what’s inside. The PrevailPro is based on Intel’s 7th generation Core i7-7700HQ quad-core Hyperthreaded CPU, running at 2.8-3.8GHz, with an HM175 chipset and 32GB of dual-channel DDR4 RAM. It is less than ¾-inch thick, weighs 4.8 pounds and has an SD card slot, fingerprint reader, five USB ports, Gigabit Ethernet, Intel 8265 WiFi and audio I/O. It might not be the lightest 15-inch laptop, but it is one of the most powerful. At 107 cubic inches, it has half the volume of my 17-inch Lenovo P71.

The model I am reviewing is their top option, with a 512GB NVMe SSD, as well as a 2TB HDD for storage. The display is a 15.6-inch UHD panel, driven by the headline feature, a Quadro P4000 GPU in Max-Q configuration. With 1792 CUDA cores and 8GB of GDDR5 memory, the GPU retains 80% of the power of the desktop version of the P4000, at 4.4 TFlops. Someone I showed the system to joked that it was a PNY Quadro graphics card with a screen, which isn’t necessarily inaccurate. The Nvidia Pascal-based Quadro P4000 Max-Q GPU is the key unique feature of the product; this is the only system I am aware of in its class — 15-inch workstations — with that much graphics horsepower.

Display Connectivity
This top-end PrevailPro system is ProVR certified by Nvidia and comes with a full complement of ports, offering more display options than any other system its size. It can drive three external 4K displays plus its attached UHD panel, an 8K monitor at 60Hz or anything in between. I originally requested to review this unit when it was announced last fall because I was working on a number of Barco Escape three-screen cinema projects. The system’s set of display outputs would allow me to natively drive the three TVs or projectors required for live editing and playback at a theater, without having to lug my full-sized workstation to the site. This is less of an issue now that the Escape format has been discontinued, but there are many other applications that involve multi-screen content creation, usually related to advertising as opposed to cinema.

I had also been looking for a more portable device to drive my 8K monitor — I wanted to do some on-set tests, reviewing footage from 8K cameras, without dragging my 50-pound workstation around with me — and even my 17-inch P71 didn’t support it. Its DisplayPort connection is limited to version 1.2, due to being attached to the Intel side of the hybrid graphics system. Dell’s Precision mobile workstations can drive their 8K display at 30Hz, but none of the other major manufacturers have implemented DisplayPort 1.3, favoring the power savings of using Intel’s 1.2 port in the chipset. The PrevailPro by comparison has dual mini-DisplayPort 1.3 ports, connected directly to the Nvidia GPU, which can be used together to drive an 8K monitor at 60Hz for the ultimate high-res viewing experience. It also has an HDMI 2.0 port supporting 4Kp60 with HDCP to connect your 4K TV.
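
The dual-port requirement is simple bandwidth math. Here is a back-of-envelope sketch; I am ignoring blanking overhead and assuming 8-bit RGB, so treat the numbers as approximations:

    # Why 8Kp60 takes two DisplayPort 1.3 connections (rough numbers).
    w, h, fps, bits_per_pixel = 7680, 4320, 60, 24   # 8-bit RGB assumed
    required_gbps = w * h * fps * bits_per_pixel / 1e9
    dp13_payload_gbps = 25.92   # approx. payload of one DP 1.3 (HBR3) link

    print(f"8Kp60 needs ~{required_gbps:.1f} Gbit/s")                  # ~47.8
    print(f"one DP 1.3 link carries ~{dp13_payload_gbps} Gbit/s")
    print(f"links required: {required_gbps / dp13_payload_gbps:.2f}")  # ~1.84

Real display timings add blanking on top of that, so a single 1.2 or 1.3 connection cannot carry 8Kp60, while two 1.3 links can.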

It can connect three external displays, or a fourth with MST if you turn off the integrated panel. The one feature that is missing is Thunderbolt, which may be related to the DisplayPort issue. (Thunderbolt 3 was officially limited to DisplayPort 1.2.) This doesn’t affect me personally, and USB 3.1 has much of the same functionality, but it will be an issue for many users in the M&E space, as it limits the system’s flexibility.

User Experience
The integrated display is a UHD LCD panel with a matte finish. It seems middle of the line. There is nothing wrong with it, and it appears to be accurate, but it doesn’t really pop the way some nicer displays do, possibly due to the blacks not being as dark as they could be.

The audio performance is not too impressive either. The speakers located at the top of the keyboard aren’t very loud, even at maximum volume, and they occasionally crackle a bit. This is probably the system’s most serious deficiency, although a decent pair of headphones can improve that experience significantly. The keyboard is well laid out and felt natural to use, and the trackpad worked great for me. Switching between laptops frequently, I sometimes have difficulty adjusting to changes in the function and arrow key positioning, but here everything was where my fingers expected it to be.

Performance-wise, I am not comparing it to other 15-inch laptops, because I don’t have any to test it against, and that is not the point of this article. The users who need this kind of performance have previously been limited to 17-inch systems, and this one might allow them to lighten their load — more portable without sacrificing much performance. I will be comparing it to my 17-inch and 13-inch laptops for context, as well as my 20-core Dell workstation.

Storage Performance
First off, with synthetic benchmarks, the SSD reports 1400MB/s write and 2000MB/s read performance, but the write is throttled to half of that over sustained periods. This is slower than some new SSDs, but probably sufficient because without Thunderbolt there is no way to feed the system data any faster than that. (USB 3.1 tops out around 800MB/s in the real world.)
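
In practical terms, those throughput ceilings translate into offload times like this (a quick sketch; the card size is a hypothetical example):

    # Rough offload-time estimate given the measured throughput ceilings.
    def copy_minutes(size_gb, rate_mb_s):
        return size_gb * 1000 / rate_mb_s / 60

    card_gb = 960            # hypothetical camera mag size
    usb31_mb_s = 800         # real-world USB 3.1 ceiling noted above
    ssd_write_mb_s = 700     # sustained (throttled) internal SSD write

    # The slower of source and destination governs the transfer.
    effective = min(usb31_mb_s, ssd_write_mb_s)
    print(f"{card_gb}GB card: ~{copy_minutes(card_gb, effective):.0f} min")

That works out to roughly 23 minutes for the hypothetical 960GB card, which is why the throttled write speed is an acceptable rather than a crippling limitation.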

The read speed allowed me to play back 6K DPX files in Adobe Premiere, and that is nothing to scoff at. The HDD tops out at 125MB/s, as should be expected for a 2.5-inch SATA drive, so it will perform just like any other system. The spinning disk seems out of place in a device like this, where a second M.2 slot would have allowed the same capacity at higher speeds, with size and power savings.

Here are its Cinebench scores, compared to my other systems:
System                      OpenGL    CPU
PNY PrevailPro (P4000)      109.94    738
Lenovo P71 (P5000)          153.34    859
Dell 7910 Desktop (P6000)   179.98    3060
Aorus X3 Plus (GF870)       47.00     520

The P4000 is a VR-certified solution, so I hooked up my Lenovo Explorer HMD and tried editing some 360 video in Premiere Pro 12.1. Everything works as expected, and I was able to get my GoPro Fusion footage to play back 3Kp60 at full resolution, and 5Kp30 at half resolution. Playing back exported clips in WMR worked in full resolution, even at 5K.

8K Playback
One of the unique features of this system is its support for an 8K display. Now, that makes for an awfully nice UI monitor, but most people buying it to drive an 8K display will probably want to view 8K content on it. To that end, 8K playback was one of the first things I tested. Within Premiere, DNxHR LB files were the only ones I could get to play without dropping frames at full resolution, and even then only when they were scope aspect ratio. The fewer pixels to process due to the letterboxing works in its favor. All of the other options wouldn’t play back at full resolution, which defeats the purpose of an 8K display. The Windows 10 media player did play back 8K HEVC files at full resolution without issue, due to the hardware decoder on the Quadro GPU, which explicitly supports 8K playback. So that is probably the best way to experience 8K media on a system like this.
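
The scope advantage is easy to quantify: letterboxed 2.39:1 content in an 8K 16:9 container has only about three quarters of the pixels to decode. A quick bit of arithmetic:

    # Pixel counts: full 16:9 8K frame vs. letterboxed 2.39:1 content.
    w, h = 7680, 4320
    full_pixels = w * h
    scope_h = round(w / 2.39)              # ~3213 active rows
    scope_pixels = w * scope_h

    print(f"16:9 frame:  {full_pixels / 1e6:.1f} megapixels")
    print(f"2.39:1 area: {scope_pixels / 1e6:.1f} megapixels "
          f"({scope_pixels / full_pixels:.0%} of the frame)")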

Now obviously 8K is pushing our luck with a laptop in the first place. My 6K Red files play back at quarter res, and most of my other 4K and 6K test assets play smoothly. I rendered a complex 5K comp in Adobe After Effects, and at 28 minutes, it was four minutes slower than my larger 17-inch system, and twice as fast as my 13-inch gaming notebook. Encoding a 10-minute file in DCP-O-Matic took 47 minutes in 2K, and 189 minutes in 4K, which is 15% slower than my 17-inch laptop.

Conclusion
The new 15-inch PrevailPro is not as fast as my huge 17-inch P71, as is to be expected, but it is close in most tests, and many users would never notice the difference. It supports 8K monitors and takes up half the space in my bag. It blows my 13-inch gaming notebook out of the water and does many media tasks just as fast as my desktop workstation. It seems like an ideal choice for a power user who needs strong graphics performance but doesn’t want to lug around a 17-inch monster of a system.

The steps to improve it would be the addition of Thunderbolt support, better speakers and an upgrade to Intel’s new 8th-gen CPUs. If I were still working on multi-screen theatrical projects, this would be the perfect system for taking my projects with me — same if I were working in VR more. I believe the configuration I tested has an MSRP of $4,500, but I have found it online for around $4,100. So it is clearly not the cheap option, but it is one of the most powerful 15-inch laptops available, especially if your processing needs are GPU-intense. It is a well-balanced solution for demanding users who need performance but want to limit size and weight.

Update-September 27, 2018
I have had the opportunity to use the PrevailPro as my primary workstation while on the road for the last three months, and I have been very happy with the performance. The Wi-Fi range and battery life are significantly better than my previous system, although I wouldn’t bank on more than two hours of serious media editing work before needing to plug in.

I was able to process 7K R3D test shoot files for my next project in Adobe Media Encoder, and it converts them in full quality at around a quarter of realtime, so four minutes to convert one minute of footage, which is fast enough for my mobile needs. (So it could theoretically export six hours of dailies per day, but I wouldn’t usually recommend using a laptop for that kind of processing.) It renders my edited 5K project assets to H.264 faster than realtime, and the UHD screen has been great for all of my Photoshop work.
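
That dailies estimate is just the transcode ratio inverted; a quick sketch of the arithmetic:

    # Quarter-realtime conversion: 4 minutes of processing
    # per minute of footage.
    speed = 0.25                                  # fraction of realtime
    print(f"1 min of footage: {1 / speed:.0f} min to convert")   # 4
    print(f"footage per 24-hour day: {24 * speed:.0f} hours")    # 6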


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Chimney opens in New York City, hires team of post vets

Chimney, an independent content company specializing in film, television, spots and digital media, has opened a new facility in New York City. For over 20 years, the group has been producing and posting campaigns for brands, such as Ikea, Audi, H&M, Chanel, Nike, HP, UBS and more. Chimney was also the post partner for the feature films Chappaquiddick, Her, Atomic Blonde and Tinker Tailor Soldier Spy.

With this New York opening, Chimney now has 14 offices worldwide. Founded in Stockholm in 1995, the company opened its first US studio in Los Angeles last year. In addition to Stockholm, New York and LA, Chimney also has facilities in Singapore, Copenhagen, Berlin and Sydney, among other cities. For a full location list, click here.

“Launching in New York is a benchmark long in the making, and the ultimate expression of our philosophy of ‘boutique-thinking with global power,’” says Henric Larsson, Chimney founder and COO. “Having a meaningful presence in all of the world’s economic centers with diverse cultural perspectives means we can create and execute at the highest level in partnership with our clients.”

The New York opening supports Chimney’s mission to connect its global talent and resources, effectively operating as a 24-hour, full-service content partner to brand, entertainment and agency clients, no matter where they are in the world.

Chimney has signed on several industry vets to spearhead the New York office. Leading the US presence is CEO North America Marcelo Gandola. His previous roles include COO at Harbor Picture Company; EVP at Hogarth; SVP of creative services at Deluxe Entertainment Services Group; and VP of operations at Company 3.

Colorist and director Lez Rudge serves as Chimney’s head of color North America. He is a former partner and senior colorist at Nice Shoes in New York. He has worked alongside Spike Lee and Darren Aronofsky, and on major brand campaigns for Maybelline, Revlon, NHL, Jeep, Humira, Spectrum and Budweiser.

Managing director Ed Rilli will spearhead the day-to-day logistics of the New York office. As the former head of production at Nice Shoes, his resume includes producing major campaigns for brands such as NFL, Ford, Jagermeister and Chase.

Sam O’Hare, chief creative officer and lead VFX artist, will oversee the VFX team. Bringing experience in live-action directing, VFX supervision, still photography and architecture, O’Hare’s interdisciplinary background makes him well suited for photorealistic CGI production.

In addition, Chimney has brought on cinematographer and colorist Vincent Taylor, who joins from MPC Shanghai, where he worked with brands such as Coca-Cola, Porsche, New Balance, Airbnb, BMW, Nike and L’Oréal.

The 6,000-square-foot office will feature Blackmagic Resolve color rooms, Autodesk Flame suites and a VFX bullpen, as well as multiple edit rooms, a DI theater and a Dolby Atmos mix stage through a joint venture with Gigantic Studios.

Main Image: (L-R) Ed Rilli, Sam O’Hare, Marcelo Gandola and Lez Rudge.

Pace Pictures opens large audio post and finishing studio in Hollywood

Pace Pictures has opened a new sound and picture finishing facility in Hollywood. The 20,000-square-foot site offers editorial finishing, color grading, visual effects, titling, sound editorial and sound mixing services. Key resources include a 20-seat 4K color grading theater, two additional HDR color grading suites and 10 editorial finishing suites. It also features a Dolby Atmos mix stage designed by three-time Academy Award-winning re-recording mixer Michael Minkler, who is a partner in the company’s sound division.

The new independently owned facility is located within IgnitedSpaces, a co-working site whose 45,000 square feet span three floors along Hollywood Boulevard. IgnitedSpaces targets media and entertainment professionals and creatives with executive offices, editorial suites, conference rooms and hospitality-driven office services. Pace Pictures has formed a strategic partnership with IgnitedSpaces to provide film and television productions with service packages encompassing the entire production lifecycle.

“We’re offering a turnkey solution where everything is on-demand,” says Pace Pictures founder Heath Ryan. “A producer can start out at IgnitedSpaces with a single desk and add offices as the production grows. When they move into post production, they can use our facilities to manage their media and finish their projects. When the production is over, their footprint shrinks, overnight.”

Pace Pictures is currently providing sound services for the upcoming Universal Pictures release Mamma Mia! Here We Go Again. It is also handling post work for a VR concert film from this year’s Coachella Valley Music and Arts Festival.

Completed projects include the independent features Silver Lake, Flower and The Resurrection of Gavin Stone, the TV series iZombie, VR concerts for the band Coldplay, Austin City Limits and Lollapalooza, and a Mariah Carey music video related to Sony Pictures’ animated feature The Star.

Technical features of the new facility include three DaVinci Resolve Studio color grading suites with professional color consoles, a Barco 4K HDR digital cinema projector in the finishing theater, and dual Avid Pro Tools S6 consoles in the Dolby Atmos mix stage, which also includes four Pro Tools HDX systems. The site features facilities for sound design, ADR and voiceover recording, title design and insert shooting. Onsite media management includes a robust SAN network, as well as LTO7 archiving and dailies services, and cold storage.

Ryan is an editor who has operated Pace Pictures as an editorial service for more than 15 years. His many credits include the films Woody Woodpecker, Veronica Mars, The Little Rascals, Lawless Range and The Lookalike, as well as numerous concert films, music clips, television specials and virtual reality productions. He has also served as a producer on projects for Hallmark, Mariah Carey, Queen Latifah and others. Originally from Australia, he began his career with the Australian Broadcasting Corporation.

Ryan notes that the goal of the new venture is to break from the traditional facility model and provide producers with flexible solutions tailored to their budgets and creative needs. “Clients do not have to use our talent; they can bring in their own colorists, editors and mixers,” he says. “We can be a small part of the production, or we can be the backbone.”

DigitalGlue’s Creative.Space intros all-Flash 1RU OPMS storage

Creative.Space, a division of DigitalGlue that provides on-premise managed storage (OPMS) as a service for production and post companies as well as broadcast networks, has added the Breathless system to its offerings. The product will make its debut at Cine Gear in LA next month.

The Breathless Next Generation Small Form Factor (NGSFF) media storage system offers 36 front-serviceable NVMe SSD bays in 1RU. It is designed for 4K, 6K and 8K uncompressed workflows using JPEG2000, DPX and multi-channel OpenEXR. 4TB NVMe SSDs are currently available, and a 16TB version will be available later this year, allowing 576TB of Flash storage to fit in 1RU. Breathless delivers 10 million random read IOPS (input/output operations per second) of storage performance (up to 475,000 per drive).
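
Those headline figures are easy to sanity-check. A minimal sketch of the arithmetic, using only the numbers quoted above:

    # Sanity-checking the Breathless capacity and IOPS figures.
    bays = 36
    tb_per_drive = 16              # drive size promised for later this year
    print(f"max capacity: {bays * tb_per_drive}TB in 1RU")         # 576TB

    per_drive_iops = 475_000
    print(f"naive per-drive sum: {bays * per_drive_iops:,} IOPS")  # 17.1M
    # The quoted system figure is 10 million, lower than the naive sum,
    # which is expected once PCIe lanes and CPUs become the bottleneck.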

Each of the 36 NGSFF SSD bays connects to the motherboard directly over PCIe to deliver maximum potential performance. With dual Intel Skylake-SP CPUs and 24 DDR4 DIMMs of memory, this system is suited for I/O-intensive local workloads, not just high-end VFX, but also realtime analytics, database and OTT content delivery servers.

Breathless’ OPMS features 24/7 monitoring, technical support and next-day repairs for an all-inclusive, affordable fixed monthly rate of $2,495.00, based on a three-year contract (16TB of SSD).

Breathless is the second Creative.Space system to launch, joining Auteur, which offers 120TB RAW capacity across 12 drives in a 24-bay 4RU chassis. Every system is custom-built to address each client’s needs. Entry-level systems are designed for small to medium workgroups using compressed 4K, 6K and 8K workflows, and can scale for 4K uncompressed workflows (including 4K OpenEXR) and large multi-user environments.

DigitalGlue, an equipment, integration and software development provider, also designs and implements turnkey solutions for content creation, post and distribution.

 

AlphaDogs’ Terence Curren is on a quest: to prove why pros matter

By Randi Altman

Many of you might already know Terence Curren, owner of Burbank’s AlphaDogs, from his hosting of the monthly Editor’s Lounge, or his podcast The Terence and Philip Show, which he co-hosts with Philip Hodgetts. He’s also taken to producing fun, educational videos that break down the importance of color or ADR, for example.

He has a knack for offering simple explanations for necessary parts of the post workflow while hammering home what post pros bring to the table. You can watch them here:

I reached out to Terry to find out more.

How do you pick the topics you are going to tackle? Is it based on questions you get from clients? Those just starting in the industry?
Good question. It isn’t about clients as they already know most of this stuff. It’s actually a much deeper project surrounding a much deeper subject. As you well know, the media creation tools that used to be so expensive, and acted as a barrier to entry, are now ubiquitous and inexpensive. So the question becomes, “When everyone has editing software, why should someone pay a lot for an editor, colorist, audio mixer, etc.?”

ADR engineer Juan-Lucas Benavidez

Most folks realize there is a value to knowledge accrued from experience. How do you get the viewers to recognize and appreciate the difference in craftsmanship between a polished show or movie and a typical YouTube video? What I realized is there are very few people on the planet who can’t afford a pencil and some paper, and yet how many great writers are there? How many folks make a decent living writing, and why are readers willing to pay for good writing?

The answer I came up with is that almost anyone can recognize the difference between a paper written by a 5th grader and one written by a college graduate. Why? Well, from the time we are very little, adults start reading to us. Then we spend every school day learning more about writing. When you realize the hard work that goes into developing as a good writer, you are more inclined to pay a master at that craft. So how do we get folks to realize the value we bring to our craft?

Our biggest problem comes from the “magician” aspect of what we do. For most of the history of Hollywood, the tricks of the trade were kept hidden to help sell the illusion. Why should we get paid when the average viewer has a 4K camera phone with editing software on it?

That is what has spurred my mission: educating the average viewer about the value we bring to the table. Making them aware of bad sound, poor lighting, a lack of color correction, etc. If they are aware of poorer quality, maybe they will begin to reject it, and we can continue to be gainfully employed exercising our hard-earned skills.

Boom operator Sam Vargas.

How often is your studio brought in to fix a project done by someone with access to the tools, but not the experience?
This actually happens a lot, and it is usually harder to fix something that has been done incorrectly than it is to just do it right from the beginning. However, at least they tried, and that is the point of my quest: to get folks to recognize and want a better product. I would rather see that they tried to make it better and failed than just accepted poor quality as “good enough.”

Your most recent video tackles ADR. So let’s talk about that for a bit. How complicated a task is ADR, specifically matching of new audio to the existing video?
We do a fair amount of ADR recording, which isn’t that hard for the experienced audio mixer. That said, I found out how hard it is being the talent doing ADR. It sounds a lot easier than it actually is when you are trying to match your delivery from the original recording.

What do you use for ADR?
We use Avid Pro Tools as our primary audio tool, but there are some additional tools in Fairlight (now included free in Blackmagic’s Resolve) that make ADR even easier for the mixer and the talent. Our mic is a Sennheiser long shotgun, but we try to match the field mic when possible for ADR.

I suppose Resolve proves your point — professional tools accessible for free to the masses?
Yeah. I can afford to buy a paint brush and some paint. It would take me a lot of years of practice to be a Michelangelo. Maybe Malcolm Gladwell, who posits that it takes 10,000 hours of practice to master something, is not too far off target.

What about for those clients who don’t think you need ADR and instead can use a noise reduction tool to remove the offensive noise?
We showed some noise reduction tools in another video in the series, but they are better at removing consistent sounds like air conditioner hum. We chose the freeway location as the background noise would be much harder to remove. In this case, ADR was the best choice.

It’s also good for replacing fumbled dialogue or something that was rewritten after production was completed. Often you can get away with cheating a new line of dialogue over a cutaway of another actor. To make the new line match perfectly, you would need to rerecord all the dialogue.
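
To illustrate Curren’s earlier point about why steady sounds like air conditioner hum are the easy case: stationary noise has a stable spectrum that can be estimated once and subtracted everywhere, while freeway traffic constantly changes. Here is a bare-bones spectral-subtraction sketch in Python; it is a teaching toy, not what a shipping noise-reduction tool actually does:

    import numpy as np

    def spectral_subtract(x, noise_clip, frame=1024, hop=512):
        """Attenuate stationary noise in x using a noise-only clip."""
        win = np.hanning(frame)

        def stft(sig):
            count = 1 + (len(sig) - frame) // hop
            return np.array([np.fft.rfft(win * sig[i * hop:i * hop + frame])
                             for i in range(count)])

        # Steady noise (hum) has a stable average spectrum; estimate it once.
        noise_mag = np.abs(stft(noise_clip)).mean(axis=0)

        frames = stft(x)
        mag, phase = np.abs(frames), np.angle(frames)
        mag = np.maximum(mag - noise_mag, 0.0)     # subtract the noise floor

        out = np.zeros(len(x))
        for i, spectrum in enumerate(mag * np.exp(1j * phase)):
            out[i * hop:i * hop + frame] += win * np.fft.irfft(spectrum, frame)
        return out

Because traffic noise violates the stable-spectrum assumption, subtracting its average leaves artifacts, which is exactly why ADR was the better choice for the freeway scene.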

What did you shoot the video with? What about editing and color?
We shot with a Blackmagic Cinema Camera in RAW so we could fix more in post. Editing was done in Avid Media Composer with final color in Blackmagic’s Resolve. All the audio was handled in Avid’s Pro Tools.

What other topics have you covered in this series?
So far we’ve covered some audio issues and the need for color correction. We are in the planning stages for more videos, but we’re always looking for suggestions. Hint, hint.

Ok, letting you go, but is there anything I haven’t asked that’s important?
I am hoping that others who are more talented than I am pick up the mantle and continue the quest to educate the viewers. The goal is to prevent us all from becoming “starving artists” in a world of mediocre media content.

VFX vet Andrew Bell Joins Method Advertising

Long-time VFX executive Andrew Bell has joined LA-based Method Studios as senior EP/VP of its Advertising division. He will report to Method Advertising MD/EVP Stuart Robinson.

Bell moves to Method after spending nearly two decades with MPC, first as a producer in London and then as head of production and managing director, spearheading MPC’s initial foray into Los Angeles and its subsequent expansion there. He oversaw all operations, from bidding to building to managing the talent and client rosters, in addition to working with directors producing large-scale VFX projects for Coca-Cola, Nike, DirecTV and other brands. Bell also previously served as managing director for Brickyard VFX in Boston and has consulted on VFX operations for Apple.

In LA, Bell will work alongside Method Commercials VFX senior EP/VP Stephanie Gilgar and Digital Studio head Jeff Werner to drive operations, curate talent and bring on clients on the West Coast. Method is a Deluxe company.

Embracing production in the cloud

By Igor Boshoer

Cloud technology is set to revolutionize film production. That is, if studios can be convinced. But since this century-old industry is reluctant to change, these new technologies and promising innovation trends are being adopted at a slower pace.

Tried-and-true production methods are steadily becoming outdated. By bringing innovation, a cloud platform offers accessibility to both small and large studios. In the not-so-distant future, what may now be merely a competitive edge will become industry-standard practice. But until then, some studios are apprehensive. And the reasons are mostly myth.

The Need for Transition
Core video production applications, computing, storage and other IT services are moving to the cloud at a rapid pace. A variety of industries and businesses — not just film — are being challenged by new customer expectations, which are heavily influenced by consumer applications powered by the cloud.

In visual effects, film and XR, application vendors such as Autodesk, Avere and Aspera are all updating their software to support these cloud workflows. Studios are recognizing that more focus should be placed on creating high-quality content, and far less on in-house software development and infrastructure maintenance. But to grow the top line and stand apart from the competition, it’s imperative for our industry to be proactive and re-imagine the workflow. Cloud providers deliver this innovation at a much faster pace than a studio can internally.

In the grand scheme of things, the industry wants to make studio operations more efficient, cost-effective and quantifiable to better serve its customers. And by taking advantage of cloud-based services, studios can increase agility while decreasing their cost and risk.

Common Misconceptions of the Cloud
Many believe the cloud to be insecure. But there are many very successful and thriving startups in the cloud, even in the finance and healthcare industries. Our industry’s MPAA guidelines are far less stringent than healthcare’s HIPAA requirements. On the contrary, cloud providers offer vastly stronger security than a studio’s own internal security measures.

Some studios are reluctant because the transfer of massive amounts of data into a cloud platform can prove challenging. But there are ways to speed up these transfers, including the use of caching and custom UDP-based transport protocols. While this concern is valid, it’s entirely manageable.
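
As a toy illustration of the UDP idea: number every chunk so the receiver can reorder data and request re-sends, rather than paying TCP’s round-trip penalties on long-haul links. This is a bare sketch under simplifying assumptions (no retries, no congestion control, placeholder host and port), not a production protocol:

    # Toy sketch of a sequence-numbered UDP sender (illustration only).
    import socket
    import struct

    CHUNK = 1200  # payload bytes per datagram, safely under common MTUs

    def send_file(path, host="127.0.0.1", port=9000):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        with open(path, "rb") as f:
            seq = 0
            while True:
                data = f.read(CHUNK)
                if not data:
                    break
                # An 8-byte big-endian sequence number prefixes each chunk,
                # letting the receiver reassemble out-of-order datagrams.
                sock.sendto(struct.pack(">Q", seq) + data, (host, port))
                seq += 1
        sock.close()
        return seq  # datagrams sent

Commercial accelerators such as Aspera’s FASP layer acknowledgments and rate control on top of this same basic idea.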

Studios also assume that cloud technology is expensive. It can be. However, when you truly break down the costs of maintaining infrastructure — adding internal storage, hardware setup, multi-year equipment leases, not to mention the ongoing support team — the in-house approach, in fact, proves more expensive. While the cloud appears costly, it actually saves money and lets you quantify the cost of production. Moreover, studios can scale resources as production demands fluctuate, instead of relying on the typical static, in-house model.

How the Cloud Better Serves Customers
While some are still apprehensive about cloud-based integration, studios that have shifted production pipelines to cloud-based platforms — and embraced them — are finding positive results and success. The cloud can serve customers in a variety of ways. It can deliver a richer, more consistent and personalized experience for a studio’s content creators, as well as offer a collaborative community.

The latest digital technologies are guaranteed to reshape the economics, production and distribution of the entertainment industry. But to stay on their game and remain competitive, studios must adapt to these new Internet and computing technologies.

If our industry is willing to push itself through these myths and preconceived assumptions, cloud technology can indeed revolutionize film production. When that begins to happen, more and more studios will adopt this competitive edge, and it will make for an exciting shift.


Igor Boshoer is a media technologist with feature film VFX credits, including The Revenant and The Wolf of Wall Street. His experience building studio technology inspired his company, Linc, a studio platform as a service. He also hosts the monthly media technology meetup Filmologic in the Bay Area.

Point 360 grows team with senior colorist Charlie Tucker

Senior colorist Charlie Tucker has joined Burbank’s Point 360. He comes to the facility from Technicolor and brings with him over 20 years of color grading experience.

The UK-born Tucker’s credits include TV shows such as The Vampire Diaries and The Originals on CW, Wet Hot American Summer and A Futile & Stupid Gesture on Netflix, as well as Amazon’s Lore. He also just completed YouTube Red’s show Cobra Kai. Tucker, who joined the company just last week, will be working on Blackmagic Resolve.

Now at Point 360, Tucker reteams with Jason Kavner, who took the helm as senior VP of episodic sales in 2017. Tucker also joins fellow senior colorist Aidan Stanford, whose recent credits include the Academy Award-winning feature Get Out and the film Happy Death Day. Stanford’s recent episodic work includes the FX series You’re the Worst and ABC’s Fresh Off the Boat.

When prodded to sum up his feelings regarding joining Point 360, Tucker said, “I am chuffed to bits to now be part of and call Point 360 my home. It is a bloody lovely facility that has a welcoming, collaborative feel, which is refreshing to find within this pressure cooker we call Hollywood. The team I am privileged to join is a brilliant, talented and very experienced group of industry professionals who truly enjoy what they do, and I know my clients will love my new coloring bay and the creative vibe that Point 360 has created.”

Creative editorial and post boutique Hiatus opens in Detroit

Hiatus, a full-service, post production studio with in-house creative editorial, original music composition and motion graphics departments, has opened in Detroit. Their creative content offerings cover categories such as documentary, narrative, conceptual, music videos and advertising media for all video platforms.

Led by founder/senior editor Shane Patrick Ford, the new company includes executive producer/partner Catherine Pink and executive producer Joshua Magee, who joins Hiatus from the animation studio Lunar North. Additional talent includes editor Josh Beebe, composer/editor David Chapdelaine and animator James Naugle.

The roots of Hiatus began with The Factory, a music venue founded by Ford while he was still in college. It provided a place for local Detroit musicians, as well as touring bands, to play. Ford, along with a small group of creatives, then formed The Work – a production company focused on commercial and advertising projects. For Ford, the launch of Hiatus is an opportunity to focus solely on his editorial projects and to expand his creative reach, and that of his team, nationally.

Leading up to the launch of Hiatus, the team has worked on projects for brands such as Sony, Ford Motor Company, Acura and Bush’s, as well as recent music videos for Lord Huron, Parquet Courts and the Wombats.

The Hiatus team is also putting the finishing touches on the company’s first original feature film, Dare to Struggle, Dare to Win. The film uncovers a Detroit Police decoy unit named STRESS and the efforts made to restore civil order in 1970s post-rebellion Detroit. Dare to Struggle, Dare to Win makes its debut at the Indy Film Festival on Sunday, April 29, and Tuesday, May 1, in Indianapolis, before it hits the film festival circuit.

“Launching Hiatus was a natural evolution for me,” says Ford. “It was time to give my creative team even more opportunities, to expand our network and to collaborate with people across the country that I’ve made great connections with. As the post team evolved within The Work, we outgrew the original role it played within a production company. We began to develop our own team, culture, offerings and our own processes. With the launch of Hiatus, we are poised to better serve the visual arts community, to continue to grow and to be recognized for the talented creative team we are.”

“Instead of having a post house stacked with people, we’d prefer to stay small and choose the right personal fit for each project when it comes to color, VFX and heavy finishing,” explains Hiatus EP Catherine Pink. “We have a network of like-minded artists that we can call on, so each project gets the right creative attention and touch it deserves. Also, the lower overhead allows us to remain nimble and work with a variety of budget needs and all kinds of clients.”

NAB 2018: A closer look at Firefly Cinema’s suite of products

By Molly Hill

Firefly Cinema, a French company that produces a full set of post production tools, premiered Version 7 of its products at NAB 2018. I visited with co-founder Philippe Reinaudo and head of business development Morgan Angove at the Flanders Scientific booth. They were knowledgeable and friendly, and they helped me to better understand their software.

Firefly’s suite includes FirePlay, FireDay, FirePost and the brand-new FireVision. All the products share the same database and Éclair color management, making for a smooth and complete workflow. However, Reinaudo says their programs were designed with specific UI/UXs to better support each product’s purpose.

Here is how they break down:
FirePlay: This is an on-set media player that supports most any format or file. The player is free to use, but there’s a paid option to include live color grading.

FireDay: Firefly Cinema’s dailies software includes a render tree for multiple versions and supports parallel processing.

FirePost: This is Firefly Cinema’s proprietary color grading software. One of its features was a set of “digital filters,” which were effects with adjustable parameters (not just pre-set LUTs). I was also excited to see the inclusion of curve controls similar to Adobe Lightroom’s Vibrance setting, which increases the saturation of just the more muted colors.

FireVision: This new product is a cloud-based review platform, with smooth integration into FirePost. Not only do tags and comments automatically move between FirePost and FireVision, but if you make a grading change in the former and hit render, the version in FireVision automatically updates. While other products such as Frame.io have this feature, Firefly Cinema offers all of these in the same package. The process was simple and impressive.

One of the downsides of their software package is its lack of support for HDR, but Reinaudo says that’s a work in progress. I believe this will likely begin with ÉclairColor HDR, as Reinaudo and his co-founder Luc Geunard are both former Éclair employees. It’s also interesting that they have products for every step after shooting except audio and editing, but perhaps, given the popularity of Avid Media Composer, Adobe Premiere and Avid Pro Tools, those are less of a priority for a young company.

Overall, their set of products was professional, comprehensive and smooth to operate, and I look forward to seeing what comes next for Firefly Cinema.


Molly Hill is a motion picture scientist and color nerd, soon-to-be based out of San Francisco. You can follow her on Twitter @mollymh4.

AlterMedia rolling out rebuild of its Studio Suite 12 at NAB

At this year’s NAB, AlterMedia is showing Studio Suite 12, a ground-up rebuild of its studio, production and post management application. The rebuilt codebase and streamlined interface have made the application lighter, faster and more intuitive; it functions as a web application and yet still has the ability to be customized easily to adapt to varying workflows.

“We literally started over with a blank slate with this version,” says AlterMedia founder Joel Stoner. “The goal was really to reconsider everything. We took the opportunity to shed tons of old code and tired interface paradigms. That said, we maintained the basic structure and flow so existing users would feel comfortable jumping right in. Although there are countless new features, the biggest is that every user can now access Studio Suite 12 through a browser from anywhere.”

Studio Suite 12 now provides better integration within the Internet ecosystem by connecting with Slack and Twilio (for messaging), as well as Google Calendar, Exchange Calendar, Apple Calendar, IMDb, Google Maps, eBay, QuickBooks and Xero accounting software and more.

Editor Dylan Tichenor to headline SuperMeet at NAB 2018

For those of you heading out to Las Vegas for NAB 2018, the 17th annual SuperMeet will take place on Tuesday, April 10, at the Rio Hotel. Speaking this year will be Oscar-nominated film editor Dylan Tichenor (There Will Be Blood, Zero Dark Thirty). Additionally, there will be presentations from Blackmagic, Adobe, Frame.io, HP/Nvidia, Atomos and filmmaker Bradley Olsen, who will walk the audience through his workflow on Off the Tracks, a documentary about Final Cut Pro X.

Blackmagic Resolve designers Paul Saccone, Mary Plummer, Peter Chamberlain and Rohit Gupta will answer questions on all things DaVinci Resolve, Fusion and Fairlight Audio.

Adobe Premiere Pro product manager Patrick Palmer will reveal new features in Adobe video solutions for their editing, color, graphics and audio editing workflows.

Frame.io CEO Emery Wells will preview the next generation of its collaboration and workflow tool, which will be released this summer.

Atomos’ Jeromy Young will talk about some of their new partners. He says, “It involves software and camera makers alike.”

As always, the evening will round out with the SuperMeet’s “World Famous Raffle,” where the total value of prizes has now reached over $101,000. Part of that total includes a Blackmagic Advanced Control Panel, worth $29,995.

Doors will open at 4:30pm with the SuperMeet Vendor Showcase, which features 23 software and hardware developers. Those attending can enjoy a few cocktails and mingle with industry peers.

To purchase tickets, and for complete daily updates on the SuperMeet, including agenda updates, directions, transportation options and a current list of raffle prizes, visit the SuperMeet website.

B&H expands its NAB footprint to target multiple workflows

By Randi Altman

In a short time, many in our industry will be making the pilgrimage to Las Vegas for NAB. They will come (if they are smart) with their comfy shoes, Chapstick and the NAB Show app and plot a course for the most efficient way to see all they need to see.

NAB is a big show that spans a large footprint, and typically companies showing their wares need to pick a hall — Central, South Lower, South Upper or North. This year, however, The Studio-B&H made some pros’ lives a bit easier by adding a booth in South Lower in addition to their usual presence in Central Hall.

B&H’s business and services have grown, so it made perfect sense to Michel Suissa, managing director at The Studio-B&H, to grow their NAB presence to include many of the digital workflows the company has been servicing.

We reached out to Suissa to find out more.

This year B&H and its Studio division are in the South Lower. Why was it important for you guys to have a presence in both the Central and South Halls this year?
The Central Hall has been our home for a long time and it remains our home with our largest footprint, but we felt we needed to have a presence in South Hall as well.

Production and post workflows merge and converge constantly, and we need to be knowledgeable in both. The simple fact is that we serve all segments of our industry, not just image acquisition and camera equipment. Our presence in image- and data-centric workflows has grown by leaps and bounds.

This world is a familiar one for you personally.
That’s true. The post and VFX worlds are very dear to me. I was an editor, Flame artist and colorist for 25 years. This background certainly plays a role in expanding our reach and services to these communities. The Studio-B&H team is part of a company-wide effort to grow our presence in these markets. From a business standpoint, the South Hall attendees are also our customers, and we needed to show we are here to assist and support them.

What kind of workflows should people expect to see at both your NAB locations?
At the South Hall, we will show a whole range of solutions to show the breadth and diversity of what we have to offer. That includes VR post workflow, color grading, animation and VFX, editing and high-performance Flash storage.

In addition to the new booth in South Hall, we have two in Central. One is for B&H’s main product offerings, including our camera shootout, which is a pillar of our NAB presence.

This Studio-B&H booth features a digital cinema and broadcast acquisition technology showcase, including hybrid SDI/IP switching, 4K studio cameras, a gyro-stabilized camera car, the most recent full-frame cinema cameras, and our lightweight cable cam, the DynamiCam.

Our other Central Hall location is where our corporate team can discuss all business opportunities with new and existing B2B customers.

How has The Studio-B&H changed along with the industry over the past year or two?
We have changed quite a bit. With our services and tools, we have re-invented ourselves, moving from equipment provider to solution provider.

Our services now range from system design to installation and deployment. One of the more notable recent examples is our collaboration with HBO Sports on World Championship Boxing. The Studio-B&H team was instrumental in deploying our DynamiCam system to cover several live fights in different venues and integrating with NEP’s mobile production team. This is part of an entirely new type of service — something the company had never offered its customers before. It is a true game-changer for our presence in the media and entertainment industry.

What do you expect the “big thing” to be at NAB this year?
That’s hard to say. Markets are in transition with a number of new technology advancements: machine learning and AI, cloud-based environments, momentum for the IP transition, AR/VR, etc.

On the acquisition side, full frame/large sensor cameras have captured a lot of attention. And, of course, HDR will be everywhere. It’s almost not a novelty anymore. If you’re not taking advantage of HDR, you are living in the past.

Netflix’s Altered Carbon: the look, the feel, the post

By Randi Altman

Netflix’s Altered Carbon is a new sci-fi series set in a dystopian future where people are immortal thanks to something called “stacks,” which contain their entire essence — their personalities, their memories, everything. The one setback is that unless you are a Meth (one of the rich and powerful), you need to buy a “sleeve” (a body) for your stack, and it might not have any resemblance to your former self. It could be a different color, a different sex, a different age, a different everything. You have to take what you can get.

Based on a 2002 novel by Richard K. Morgan, it stars Swedish actor Joel Kinnaman.

Jill Bogdanowicz

We reached out to the show’s colorist, Jill Bogdanowicz, as well as post producer Allen Marshall Palmer to find out more about the show’s varied and distinctive looks.

The look has a very Blade Runner-type feel. Was that in homage to the films?
Bogdanowicz: The creators wanted a film noir look. Blade Runner is the same genre, but the show isn’t specifically an homage to Blade Runner.

Palmer: I’ll leave that for fans to dissect.

Jill, can you talk about your process? What tools did you use?
Bogdanowicz: I designed a LUT to create that film noir look before shooting. I actually provided a few options, and they chose my favorite one and used it throughout. After they shot everything and I had all 10 episodes in my bay, I got familiar with the content, wrapped my head around the story and came up with ideas to tell that story with color.

The show covers many different times and places so scenes needed to be treated visually to show audiences where the story is and what’s happened. I colored both HDR (Dolby Vision) and SDR passes using DaVinci Resolve.

I worked very closely with both DPs — Martin Ahlgren and Neville Kidd — in pre-timing the show, and they gave me a nice idea of what they were looking for, so I had a great starting point. They were very close-knit. The entire team on this project was an absolute pleasure, and it was a great creative collaboration, which comes through in the final product of the show.

The show is shot and posted like a feature and has a feature feel. Was that part of your marching orders?
Bogdanowicz: I’m primarily a features colorist, so I’m very familiar with the film noir look and heavy VFX, and that’s one reason I was included on this project. It was right up my alley.

Palmer: We approached Altered Carbon as a 10-part feature rather than a television series. I coined the term “feature episodic entertainment,” which describes what we were aspiring to — destination viewing instead of something merely disposable. In a world with so many viewing options, we wanted to command the viewer’s full attention, and fans are rewarded for that attention.

We were very concerned about how images, especially VFX, were going to look in HDR so we had weekly VFX approval sessions with Jill, our mastering colorist, in her color timing bay.

Executive producers and the studio, along with the VFX and post teams, were able to sit together, adjusting color corrections if needed before giving final approval on shots. This gave us really good technical and creative quality control. Despite our initial concerns about VFX shots in HDR, we found that with vendors like Double Negative and Milk, with their robust 16-bit EXR pipelines, we weren’t “breaking” VFX shots when color correcting for HDR.
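Why does a 16-bit EXR pipeline hold up under an HDR grade? A minimal numpy sketch, using hypothetical pixel values rather than anything from the show’s actual pipeline, shows the idea: half-float files preserve highlight detail above 1.0, while an 8-bit intermediate clips it away.

```python
import numpy as np

# Hypothetical scene-linear pixel values from a VFX render,
# including "overbright" highlight detail above 1.0.
linear = np.array([0.05, 0.5, 1.0, 4.0, 16.0], dtype=np.float32)

# A 16-bit half-float EXR keeps the highlights...
as_half = linear.astype(np.float16).astype(np.float32)

# ...while an 8-bit intermediate clips everything above 1.0.
as_8bit = np.round(np.clip(linear, 0.0, 1.0) * 255) / 255

# Pull exposure down two stops, as an HDR trim pass might.
print(as_half * 0.25)  # highlight detail comes back: ..., 1.0, 4.0
print(as_8bit * 0.25)  # highlights stay crushed at 0.25
```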

How did the VFX affect the workflow?
Bogdanowicz: Because I was brought on so early, the LUT I created was shared with the VFX vendors, so they had a good estimation of the show’s contrast. That really helped them visualize the look of the show, and the shots were pretty darn close by the time I got them in my bay.
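As a rough illustration of how a shared LUT keeps vendors aligned, here is a toy sketch in Python. It assumes a simple 1D smoothstep contrast curve standing in for the real show LUT, which would typically be a 3D .cube file:

```python
import numpy as np

# Toy stand-in for a show LUT: a 1D contrast curve sampled at 33 points.
positions = np.linspace(0.0, 1.0, 33)
s_curve = positions**2 * (3 - 2 * positions)  # smoothstep: a gentle S-contrast

def apply_lut(image, lut_in, lut_out):
    """Apply a 1D LUT to every channel by linear interpolation."""
    return np.interp(image, lut_in, lut_out)

# Every vendor applying the same curve sees the same contrast, so shots
# arrive in the colorist's bay already close to the final look.
frame = np.random.rand(1080, 1920, 3)  # stand-in for a video frame
graded = apply_lut(frame, positions, s_curve)
```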

Was there a favorite scene or scenes?
Bogdanowicz: There are so many spectacular moments, but the emotional core for me is in episode 104 when we see the beginning of the Kovacs and Quell love story in the past and how that love gives Kovacs the strength to survive in the present day.

Palmer: That’s a tough question! There are so many, it’s hard to choose. I think the episode that really jumps out is the one in which Joel Kinnaman’s character is being tortured and the content skips back and forth in time, changes and alternates between VR and reality. It was fun to create a different visual language for each space.

Can you talk about challenges in the process and how you overcame them?
Bogdanowicz: The show features a lot of VFX, and they all need to look as real as possible, so I had to make sure they felt like part of those worlds. Fortunately, VFX supervisor Everett Burrell and his team are amazing, and the VFX are top-notch. Coming up with different ideas and collaborating with producers James Middleton and Laeta Kalogridis on those ideas was a really fun creative challenge. I used the Sapphire VFX plugin for Resolve to heavily treat and texture the VR looks in different ways.

Palmer: In addition to the data management challenges on the picture side, we were dealing with mixing in Dolby Atmos. It was very easy to get distracted by how great the Atmos mix sounds — the downmixes generally translated very well, but monitoring in 5.1 and 2.0 did reveal some small details that we wanted to adjust. Generally, we’re very happy with how both the picture and the sound are translating into viewers’ homes.

Dolby Vision HDR is great at taking what’s in the color bay into the home viewing environment, but there are still so many variables in viewing set-ups that you can still end up chasing your own tail. It was great to see, behind the scenes, Netflix’s dedication to providing the best picture and sound quality through the service.

The look of the AI hotel was so warm. I wanted to live there. Can you talk about that look?
Bogdanowicz: The AI hotel look was mostly done in design and lighting. I saw the warm practical lights and rich details in the architecture and throughout the hotel and ran with it. I just aimed to keep the look filmic and inviting.

What about the look of where the wealthy people lived?
Bogdanowicz: The Meth houses are above the clouds, so we kept the look very clean and cool with a lot of true whites and elegant color separation.

Seems like there were a few different looks within the show?
Bogdanowicz: The same LUT for the film noir look is used throughout the show, but the VR looks are very different. I used Sapphire to come up with different concepts and textures for the different VR looks, from the rich quality of high-end VR to the cheap VR found underneath a noodle bar.

Allen, can you walk us through the workflow from production to post?
Palmer: With the exception of specialty shots, the show was photographed on Alexa 65 — mostly in 5K mode, but occasionally in 6.5K and 4K for certain lenses. The camera is beautiful and a large part of the show’s cinematic look, but it generates a lot of data (about 1.9TB/hour for 5K), so this was the first challenge. The camera dictates using the Codex Vault system, and Encore Vancouver was up to the task of handling this material. We wanted to get the amount of data down for post, so we generated 4096×2304 ProRes 4444XQ “mezzanine” files, which we used for almost all of the show assembly and VFX pulls.

During production and post, all of our 4K files were kept online at Efilm using their Portal system. This allowed us fast, automated access to the material so we could quickly do VFX pulls, manage color, generate 16-bit EXR frames and send those off to VFX vendors. We knew that time saved there was going to give us more time on the back end to work creatively on the shots, so the Portal was a very valuable tool.
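To put those numbers in perspective, here is a back-of-the-envelope storage calculation. The 1.9TB/hour figure comes from Palmer’s answer above; the four-hour shooting day and the ProRes 4444XQ bitrate are assumptions for illustration only:

```python
# Back-of-the-envelope data math for a hypothetical four-hour shoot day.
RAW_TB_PER_HOUR = 1.9   # Alexa 65 5K figure quoted above
MEZZ_MBPS = 1400        # assumed ballpark bitrate for ProRes 4444XQ at 4K/24p
HOURS_SHOT = 4          # assumption

raw_tb = RAW_TB_PER_HOUR * HOURS_SHOT              # 7.6 TB of camera originals
mezz_tb = MEZZ_MBPS / 8 * 3600 * HOURS_SHOT / 1e6  # ~2.5 TB of mezzanine files

print(f"Camera originals: {raw_tb:.1f} TB/day")
print(f"ProRes mezzanine: {mezz_tb:.1f} TB/day")
```

Even under these rough assumptions, the mezzanine files cut the working data set to roughly a third of the camera originals, which is why they carried almost all of the assembly and pull work.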

How many VFX shots did you average per episode? Seems like a ton, especially with the AI characters. Who provided those and what were those turnarounds like?
Palmer: There were around 2,300 visual effects shots during this season — probably fewer than most people would think, because we built a large Bay City street inside a former newspaper printing facility outside of Vancouver. The shot turnaround varied depending on the complexity and where we were in the schedule. We were lucky that something like episode 1’s “limo ride” sequence was started very early on, because it gave us a lot of time to refine our first grand views of Bay City. Our VFX supervisor Everett Burrell and VFX producer Tony Meagher got us out in front of a lot of challenges, like the amount of 3D work in the last two episodes, by starting that work early; we knew from the script and prep phase that we would need those shots.

Review: HP’s lower-cost DreamColor Z24x display

By Dariush Derakhshani

So, we all know how important a color-accurate monitor is in making professional-level graphics, right? Right?!? Even at the most basic level, when you’re scouring online stores for the perfect band for the smartwatch you got as a holiday present, you want the orange band you see in the online ad to be what you get when it arrives a few days later. Even if your wife thinks orange doesn’t suit you and makes you look like “you’re trying too hard.”

Especially as a content developer, you want to know that what you’re looking at is an accurate representation of the image. Ever walk into a Best Buy and see multiple screens showing the same content, but with wildly varying color? You can’t have that discrepancy working as a pro, especially in collaboration; you need color accuracy. In my own experience, that position has been filled by HP’s 10-bit DreamColor displays for many years now, but not everyone is awash in bitcoins, and a price tag of over $1,200 is sometimes hard to justify, even for a studio professional.

Enter HP’s DreamColor Z24x display at half the price, coming in around $550 online. Yes, DreamColor for half the cost. That’s pretty significant. For the record, I haven’t used a 24-inch monitor since the dark ages, when Lost was the hot TV show. I’ve been fortunate enough to be running at 27 inches and higher, so there was a little shock when I started using the Z24x HP sent me for review. But this is something I quickly got used to.

With my regular 32-inch 4K display still my primary — so I can fit loads of windows all over the place — I used this DreamColor screen as my secondary display, primarily to check output for my Adobe After Effects comps and Adobe Premiere Pro edits, and to hold my render view window as I develop shaders and lighting in Autodesk Maya. I felt comfortable knowing the images I shared with my colleagues across town would be seen as I intended them, leveling the playing field when working collaboratively (as long as everyone is on the same LUT and color space). Speaking of color spaces, the Z24x hits 100% of sRGB, 99% of AdobeRGB and 96% of DCI-P3, which is just slightly under HP’s Z27x DreamColor. It is, however, slightly faster, with a 6ms response rate.

The Z24x has a 24-inch IPS panel from LG that exhibits color in 10-bit, like its bigger 27-inch Z27x sibling. This gives you over a billion colors, which I have personally verified by counting them all — that was one long weekend, I can tell you. Unlike the highest-end DreamColor screens, though, the Z24x dithers up from 8-bit to 10-bit (a technique called 8-bit+FRC). This means it’s better than an 8-bit color display, for sure, but not quite up to real 10-bit, making it color accurate but not color critical. HP’s implementation of dithering is quite good when subjectively compared to my full 10-bit main display. Frankly, a lot of screens that claim 10-bit may actually be 8-bit+FRC anyway!
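For the curious, the idea behind FRC (frame rate control) fits in a few lines of Python: the panel flickers between two adjacent 8-bit codes so that, averaged over a few frames, the eye sees something close to the intended 10-bit value. This is a simplified illustration, not HP’s actual algorithm:

```python
import numpy as np

def frc_frames(code10, n_frames=4):
    """Approximate a 10-bit code (0-1023) on an 8-bit panel by flickering
    between two adjacent 8-bit codes; the eye averages the result."""
    base, frac = divmod(code10, 4)  # four 10-bit steps per 8-bit step
    frames = np.full(n_frames, base)
    frames[:frac] += 1              # show the higher code frac/4 of the time
    return np.minimum(frames, 255)  # clamp at the panel's top code

print(frc_frames(514))             # [129 129 128 128]
print(frc_frames(514).mean() * 4)  # 514.0, the intended 10-bit value
```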

While the Z27x gives you 2560×1440, as you’d expect of most 27-inch displays, if not full-on 4K, the Z24x is at a comfortable 1920×1200, just enough for a full 1080p image and a little room for a slider or info bar. Being the res snob that I am, I had wondered if that was just too low, but at 24 inches I don’t think you would want a higher resolution, even if you’re sitting only 14 inches away from it. And this is a sentiment echoed by the folks at HP, who consulted many of their professional clients to build this display. That gives a pixel density of about 94PPI, a bit lower than the 109PPI of the Z27x. This density is about the same as a 1080p HD display at 24 inches, so it’s still crisp and clean.
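The pixel density math is simple enough to check yourself:

```python
from math import hypot

def ppi(width_px, height_px, diagonal_inches):
    """Pixel density: diagonal resolution over diagonal size."""
    return hypot(width_px, height_px) / diagonal_inches

print(round(ppi(1920, 1200, 24)))  # 94, the Z24x
print(round(ppi(2560, 1440, 27)))  # 109, the Z27x
print(round(ppi(1920, 1080, 24)))  # 92, a 24-inch 1080p display for comparison
```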

Viewing angles are good at about 178 degrees, and the screen is matte, with an anti-glare coating, making it easier to stare at without blinking for 10 hours at a clip, as digital artists usually do. This HP’s coating is more matte than my primary display’s, yet it still gave me a richer black, which I liked to see.

Connection options are fairly standard, with two DisplayPorts, one HDMI and one dual-link DVI for anyone still living in the past. You also get four USB ports and an analog 3.5mm audio jack if you want to drive some speakers, since you can’t from your phone anymore (Apple, I’m looking at you).

Summing Up
So while 24 inches is a bit small for my taste in a display, I am seriously impressed by the street price of the Z24x, which lets a lot more pros and semi-pros get the DreamColor accuracy HP offers at half the price. While I wouldn’t recommend color grading a show on the Z24x, this DreamColor does a nice job of bringing a higher level of color confidence at an attractive price. As a secondary display, the Z24x is a nice addition for an artist working with a budget in mind — or one with a mean, orange-watch-band-hating spouse.


Dariush Derakhshani is a VFX supervisor and educator in Southern California. You can follow his random tweets at @koosh3d.

HPA Tech Retreat: The production budget vs. project struggle

“Executive producers often don’t speak tech language,” said Aaron Semmel, CEO and head of BoomBoomBooya, in addressing the HPA Tech Retreat audience in Palm Springs in late February. “When people come to us with requests and spout all sorts of tech mumbo jumbo, it’s very easy for us to say no,” he continued. “Trust me, you need to speak to us in our language.”

Semmel was part of a four-person HPA panel that included Cirina Catania, The Catania Group; Larry O’Connor, OWC Digital; and Jeff Stansfield, Advantage Video Systems. Moderated by Andy Marken of Marken Communications, the panel explored solutions that can bring the executive and line producers and the production/post teams closer together to implement the right solutions for every project and satisfy everyone, including accounting.

An executive and co-producer on more than a dozen film and TV series projects, Semmel said his job is to bring together the money and then work with the best creative people possible. He added that the team’s job was to make certain the below-the-line items — actual production and post production elements — stay on or below budget.

Semmel noted that most executive producers work off the top sheet of the budget, which is essentially an overview. They may go through the full budget and play with numbers here and there, but they leave the actual handling of it to the line producer and supervising producer, who can then “back into” a number set by the executive producer.

“I understand the technologies at a higher level and could probably take a highlighter and mark budget areas where we could reduce our costs, but I also know I have very experienced people on the team who know the technologies better than I do to make effective cuts.

L-R: Jeff Stansfield, Aaron Semmel, Cirina Catania

“For example, in talking with many of you in the audience here at the Retreat, I learned that there’s no such thing as an SSD hard drive,” he said. “I now know there are SSDs and there are hard drives and they’re totally different.”

Leaning into her mic, Catania got a laugh when she said, “One of the first things we all have to do is bring our production workflows into the 21st century. But seriously, the production and post teams are occasionally not consulted during the lengthy budgeting process. Our keys can make some valuable contributions if they have a seat at the table during the initial stages. In terms of technology, we have some exciting new tools we’d like to put to work on the project that could save you valuable time, help you organize your media and metadata, and have a direct and immediate positive impact on the budget. What if I told you that you could save endless hours in post if you had software that helped your team enter metadata and prep for post during the early phase — and hardware that worked much faster, more securely and more reliably?”

With wide agreement from the audience, Catania emphasized that it is imperative for all departments involved in prep/production/post and distribution to be involved in the budget process from the outset.

“We know the biggest part of your budget might be above-the-line costs,” she continued. “But production, post and distribution are where much of the critical work also gets done. And if we’re involved at the outset, including people like Jeff (Stansfield), who can help us come up with creative workflow and financing options that save you and the investors money, we will surely turn a profit.”

Semmel said the production/post team could probably be of assistance in the early budget stages, pinpointing where work could be done more efficiently to actually improve overall quality and to ensure EPs do what they need to do for their reputation… deliver the best and come in under budget.

The Hatfields and the McCoys via History Channel

“But for some items, there seem to be real constraints,” he emphasized. “For example, we were shooting America’s Feud: Hatfields & McCoys, a historical documentary, in Romania — yes, Romania,” he grinned, “and we were behind schedule. We shot the farmhouse attack on day one, shot the burning of the house on day two, and on day three we received our dailies to review for day one’s work. We were certain we had everything we needed, so we took a calculated risk and burned the building,” he recalled. “But no one exhaled until we had a chance to go through the dailies.”

“What if I told you there’s a solution that will transfer your data at 2800MB/s and enable you to turn around your dailies in a couple of hours instead of a couple of days?” O’Connor asked.

Semmel replied, “I don’t understand the 2800MB/s stuff, but you clearly got my attention by saying dailies in a couple of hours instead of days. If there had been anything wrong with the content we had shot, we would have been faced with the huge added expense of rebuilding and reshooting everything,” he added. “Even accounting can understand the savings in hours vs. days.”
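The hours-versus-days claim is easy to sanity-check. Here is a quick sketch, assuming a hypothetical 2TB shoot day and an assumed 150MB/s for a single spinning disk as the baseline (neither number came from the panel):

```python
# Offload time for a hypothetical 2TB day at two transfer speeds.
SHOOT_MB = 2_000_000

for label, mb_per_s in [("RAID at 2800MB/s", 2800),
                        ("single disk at 150MB/s", 150)]:
    print(f"{label}: {SHOOT_MB / mb_per_s / 3600:.1f} hours to offload")

# RAID at 2800MB/s: 0.2 hours to offload
# single disk at 150MB/s: 3.7 hours to offload
```

And the offload is only one step in the dailies chain, so the gap compounds across copies, transcodes and QC passes.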

Semmel pointed out that because films and TV shows start and end as digital files, there’s always a concern about frames and segments being lost when you’re on location, a long distance from the safety net of your production facilities.

“No one likes that risk, including production/post leaders, integrators or manufacturers,” said O’Connor. “In fact, a lot of crews go to extraordinary lengths to ensure nothing is lost; and frankly, I don’t blame them.”

He recalled a film crew headed to Haiti to shoot a documentary who were told by the airline that they were over their baggage limit for the trip.

“They put their clothes in an airport locker and put their three RAID storage systems in their backpacks. They wanted to make certain they could store, back up and back up their work again, to ensure they had all of the content they needed when they got back to their production/post facility.”

Stansfield and Catania said they had seen and heard of similar gut-level decisions made by executive and line producers. They encouraged the production/post audience not to simply accept the line-item budgets they are given, but to get involved at the beginning of the project, helping explore and define the entire below-the-line budget to minimize risk and provide alternative plans in case unexpected challenges arise.

“An EP and line producer’s mantra for TV and film projects is you only get two out of three things: time, money and quality,” Semmel said. “If you can deliver all three, then we’ll listen, but you have to approach it from our perspective.

“Our budgets aren’t open purses,” he continued. “You have to make recommendations and deliver products and solutions that enable us to stay under budget; no matter how neat or how gee-whiz technical they are, they aren’t going to be accepted otherwise. We have two very fickle masters — finance and viewer — so you have to give us the tools and solutions that satisfy both of them. Don’t give us bits, bytes and specs; just focus on meeting our needs in words we can understand.

“When you do that, we all win; and we can all work on the next project together,” Semmel concluded. “We only surround ourselves with people who will help us through the project. People who deliver.”

Xytech Dash: Cloud-based management for small studios

Xytech, makers of facility management software, is targeting smaller facilities with its newly launched cloud-based software, Dash. The subscription-based app takes just three days to install and uses security offered by the Microsoft Azure Managed Cloud.

With Dash, Xytech can now manage the end-to-end business cycle for small- and medium-sized studios. These customers range from boutique post facilities to large universities with sophisticated media departments to corporate communication departments.

The monthly subscription model for Dash offers access to all dashboards, graphs and charts, plus customers can manage resources and handle scheduling, cost forecasting, invoicing and reporting, all in one system. Dash also offers the option of a built-in library management program, as well as a bidding module that enables users to bid on a project and have it accepted on the spot.

The new web interface allows users easy access to the Dash applications from any supported web browser. “We listened to our clients and adapted our software into a series of directed workflows allowing users to schedule, raise a bid and generate an invoice,” says Xytech COO Greg Dolan. “Additionally, we’ve made installation support fast and seamless on Dash, so our team can easily teach our clients and get them up and running in just a few days.”

The software has a low per-user price and is available on a monthly subscription basis. The company is offering early adopters of Dash an early-bird discount, which will be announced shortly.

The challenges of creating a shared storage ‘spec’

By James McKenna

The specification — used in a bid, tender, RFQ or simply to provide vendors with a starting point — has been the source of frustration for many a sales engineer. Not because we wish that we could provide all the features that are listed, but because we can’t help but wonder what the author of those specs was thinking.

Creating a spec should be like designing your ideal product on paper and asking a vendor to come as close as they can to that ideal. Unlike most other forms of shopping, you avoid the sales process until the salesperson knows exactly what you want. This is good in some ways, but very limiting in others.

I dislike analogies with the auto industry because cars are personal and subjective, but in this case the analogy shows the difference between writing a spec and doing evaluation and research. Imagine writing down all the things you want in a car and showing up at the dealership looking for a match. You want power, beauty, technology, sports-car handling and room for five?

Your chances of finding the exact car you want are slim, unless you’re willing to compromise or adjust your budget. The same goes for facility shared storage. Many customers get hung up on the details and refuse to prioritize important aspects, like usability and sustainability, and as a result end up looking at quotes that are two to three times their cost expectations for systems that don’t perform the day-to-day work any better (and often perform worse).

There are three ways to design a specification:

Based On Your Workflow
By far, this is the best method and will result in the easiest path to getting what you want. Go ahead and plan for years down the road and challenge the vendors to keep up with your trajectory. Keep it grounded in what you believe is important to your business. This should include data security, usable administration and efficient management. Lay out your needs for backup strategy and how you’d like that to be automated, and be sure to prioritize these requests so the vendor can focus on what’s most important to you.

Be sure to clearly state the applications you’ll be using, what they will require from the storage and how you expect them to work with it. The highest priority, and the true test of a successful shared storage deployment, is this: Can you work reliably and consistently to generate revenue? These are my favorite types of specs.
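One way to keep those priorities explicit is to write the spec down in a structured form before it becomes prose. Here is a hypothetical, simplified sketch; the field names and numbers are illustrative, not a real facility’s requirements:

```python
# A hypothetical workflow-based spec: priorities first, applications and
# workload spelled out, so vendors respond to what actually matters.
storage_spec = {
    "priorities": [
        "data security: snapshots plus verified, automated backup",
        "administration usable by non-IT staff",
        "sustained editorial bandwidth, not burst benchmark numbers",
    ],
    "applications": ["Avid Media Composer", "Adobe Premiere Pro",
                     "DaVinci Resolve"],
    "workload": {
        "concurrent_clients": 12,
        "streams_per_client": 2,
        "codec": "ProRes 422 HQ at UHD",
    },
    "growth": "client count and capacity expected to double in 3 years",
}
```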

Based On Committee
Some facilities are the victim of their own size or budget. When there’s an active presence from the IT department, or the dollar amounts get too high, it’s not just up to the creative folks to select the right product. The committee can include consultants, system administrators, finance and production management, and everyone wants to justify their existence at the table. People with experience in enterprise storage and “big iron” systems will lean on their past knowledge and add terms like “Five-9s uptime,” “No SPOF,” “single namespace,” “multi-path” and “magic quadrant.”

In the enterprise storage world these would be important, but they don’t force vendors to take responsibility for prioritizing the interactions between the creative applications and the storage, and the usability and sustainability of a solution in the long term. The performance necessary to smoothly deliver a 4K program master, on time and on budget, might not even be considered. I see these types of specifications and I know that there will be a rude awakening when the quotes are distributed, usually leading to some modifications of the spec.

Based On A Product
The most limiting way to design a spec is by copying the feature list of a single product to create your requirements. I should mention that I have helped our customers do this on some occasions, so I’m guilty here. When a customer really knows the market and wants to avoid being bid an inferior product, this can be justified. However, you had better complete your research beforehand, because there may be something out there that could change your opinion, and you don’t want to find out about it after you’re locked into the status quo. If you choose to do this but want to stay on the lookout for another option, simply prioritize the features list by what’s most important to you.

If you really like something about your storage, prioritize that and see if another vendor has something similar. When I respond to these bid specs, I always provide details on our solution and how we can achieve better results than the one that is obviously being requested. Sometimes it works, sometimes not, but at least now they’re educated.

The primary frustration with specifications that miss the mark is the waste of money and time. Enterprise storage features come with enterprise storage complexity and enterprise storage price tags. This requires training, or reliance upon the IT staff to manage, or in some cases completely control, the network for you. Cost savings in the infrastructure can be repurposed to revenue-generating workstations, and artists can be employed instead of full-time techs. There’s a reason that scrappy, grassroots facilities produce faster growth while larger facilities tend to stagnate: they focus on generating content, invest only where needed and scale the storage as the bigger jobs and larger formats arrive.

Stick with a company that makes the process easy and ensures that you’ll never be without a support person who knows your daily grind.


James McKenna is VP of marketing and sales at shared storage company Facilis.

DigitalFilm Tree’s Ramy Katrib talks trends and keynoting BMD conference

By Randi Altman

Blackmagic, which makes tools for all parts of the production and post workflow, is holding its very first Blackmagic Design Conference and Expo, produced with FMC and NAB Show. This three-day event takes place on February 11-13 in Los Angeles. The event includes a paid conference featuring over 35 sessions, as well as a free expo on February 12, which includes special guests, speakers and production and post companies.

Ramy Katrib, founder and CEO of Hollywood-based post house and software development company DigitalFilm Tree, is the keynote speaker for the conference. FotoKem DI colorist Walter Volpatto and color scientist Joseph Slomka will be keynoting the free expo on the 12th.

We reached out to Katrib to find out what he’ll be focusing on in his keynote, as well as pick his brains about technology and trends.

Can you talk about the theme of your keynote?
Resolve has grown mightily over the past few years and is the foundation of DigitalFilm Tree’s post finishing efforts. I’ll discuss how Resolve is becoming an essential post tool. And with Resolve 14, folks who are coloring, editing, conforming and doing VFX and audio work are now collaborating on the same timeline, and that is a huge development for TV, film and every media industry creative and technician.

Why was it important for you to keynote this event?
DaVinci was part of my life when I was a colorist 25 years ago, and today BMD is relevant to me while I run my own post company, DigitalFilm Tree. On a personal note, I’ve known Grant Petty since 1999 and work with many folks at BMD who develop Resolve and the hardware products we use, like I/O cards and Teranex converters. This relationship involves us sharing our post production pain points and workflow suggestions, while BMD has provided very relevant software and hardware solutions.

Can you give us a sample of something you might talk about?
I’m looking forward to providing an overview of how Resolve is now part of our color, VFX, editorial, conform and deliverables effort, while having artists provide micro demos on stage.

You alluded to the addition of collaboration in Resolve. How important is this for users?
Resolve 14’s new collaboration tools are a huge development for the post industry, specifically in this golden age of TV, where binge delivery of multiple episodes at the same time is commonplace. As the complexity of production and post increases, greater collaboration across multiple disciplines is a refreshing turn — it allows multiple artists and technicians to work in one timeline instead of 10 timelines and round-tripping across multiple applications.

Blackmagic has ramped up their NLE offerings with Resolve 14. Do you see more and more editors embracing this tool for editing?
Absolutely. It always takes a little time to ramp up in professional communities. It reminds me of when the editors on Scrubs used Final Cut Pro for the first time and that ushered FCP into the TV arena. We’re already working with scripted TV editors who are in the process of transitioning to Resolve. Also, DigitalFilm Tree’s editors are now using Resolve for creative editing.

What about the Fairlight audio offerings within? Will you guys take advantage of that in any way? Do you see others embracing it?
For simple audio work, like mapping audio tracks and creating multiple mixes for 5.1 and 7.1 delivery, we are taking advantage of Fairlight and the audio functionality within Resolve. We’re not an audio house, yet it’s great to have a tool like this for convenience and workflow efficiency.
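For readers unfamiliar with fold-downs, a stereo downmix from 5.1 is just a weighted sum of channels. Here is a minimal sketch using common ITU-style coefficients (an illustration, not DigitalFilm Tree’s actual Fairlight settings):

```python
def downmix_51_to_stereo(L, R, C, LFE, Ls, Rs):
    """Fold 5.1 to stereo: center and surrounds at -3dB; the LFE
    channel is commonly dropped from the stereo fold-down."""
    g = 10 ** (-3 / 20)  # -3 dB, roughly 0.707
    return (L + g * C + g * Ls,
            R + g * C + g * Rs)

# One sample per channel, just to show the arithmetic.
print(downmix_51_to_stereo(0.5, 0.4, 0.6, 0.2, 0.1, 0.1))
```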

What trends did you see in 2017 and where do you think things will land in 2018?
Last year was about the acceptance of cloud-based production and post process. This year is about the wider use of cloud-based production and post process. In short, what used to be file-based workflows will give way to cloud-based solutions and products.

postPerspective readers can get $50 off registration for the Blackmagic Design Conference & Expo by using the code POST18. Click here to register.

Made in NY’s free post training program continues in 2018

New York City’s post production industry continues to grow thanks to the creation of New York State’s Post Production Film Tax Credit, which was established in 2010. Since then, over 1,000 productions have applied for the credit, creating almost a million new jobs.

“While this creates more pathways for New York City residents to get into the industry, there is evidence that this growth is not equally distributed among women and people of color. In response, the NYC Mayor’s Office of Media and Entertainment decided to create the Made in New York Post Production Training Program, building on the success of the Made in New York PA Training Program, which for the last 11 years has trained over 700 production assistants for work on TV and film sets,” explains Ryan Penny, program director of the Made In NY Post Production Training Program.

The Post Production Training Program seeks to diversify New York’s post industry by training low-income and unemployed New Yorkers in the basics of editing, animation and visual effects. Created in partnership with the Blue Collar Post Collective, BRIC Media Arts and Borough of Manhattan Community College, the course is free to participants and consists of a five-week, full-time skills training and job placement program administered by workforce development non-profit Brooklyn Workforce Innovations.

Trainees take part in classroom training covering the history and theory of post production, as well as technical training in Avid Media Composer, Adobe’s Premiere, After Effects and Photoshop, as well as Foundry’s Nuke. “Upon successful completion of the training, our staff will work with graduates to identify job opportunities for a period of two years,” says Penny.

Ryan Penny, far left, with the most recent graduating class.

Launched in June 2017, the Made in New York Post Production Training Program graduated its second cycle of trainees in January 2018 and is now busy establishing partnerships with New York City post houses and productions who are interested in hiring graduates of the program as post PAs, receptionists, client service representatives, media management technicians and more.

“Employers can expect entry-level employees who are passionate about post and hungry to continue learning on the job,” reports Penny. “As an added incentive, the city has created a work-based learning program specifically for MiNY Post graduates, which allows qualified employers to be reimbursed for up to 80% of the first 280 hours of a trainee’s wages. This results in a win-win for employers and employees alike.”
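The math on that incentive is straightforward; here is a quick sketch with a hypothetical wage:

```python
# Hypothetical $17/hour post PA; the 80% and 280-hour figures are from
# the program description above.
hours, rate = 280, 17.00
print(f"Reimbursable: ${hours * rate * 0.80:,.2f} of ${hours * rate:,.2f}")
# Reimbursable: $3,808.00 of $4,760.00
```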

The Made in New York Post Production Training Program will be conducting further cycles throughout the year, beginning with Cycle 3 planned for spring 2018. More information on the program and how to hire program graduates can be found here.

Sim Post LA beefs up with Greg Ciaccio and Paul Chapman

It’s always nice when good things happen to good people. Recently, long-time industry post pros Greg Ciaccio and Paul Chapman joined Sim Post LA — Greg as VP of post and Paul as VP of engineering and technology.

postPerspective has known both Greg and Paul for years and often calls on them to pick their brains about technology, so having them end up working together warms our hearts.

Sim Post is a division of Sim, which provides end-to-end solutions for TV and feature film production and post production in LA, Vancouver, Toronto, New York and Atlanta.

“I’ll be working with the operations, sales, technology and finance teams to ensure tight integration between departments — always in the service of our clients,” reports Ciaccio. “Our ability to offer end-to-end services is a great advantage in the industry. I’ve admired the work produced by the talented group at Sim Post LA (formerly Chainsaw and Bling), and now I’m pleased to be a part of the team.”

Ciaccio’s resume includes executive operations management positions for creative services divisions at Ascent, Technicolor and Deluxe, where he also led product development teams. He also serves as chair of the ASC Motion Imaging Technology Council’s Workflow Committee, currently focused on ACES education and enlightenment, and is a member of the UHD/HDR Committee and the Joint ASC/ICG/VES/PGA VR Committee.

Chapman, a Fellow of SMPTE, has held executive technology and engineering positions over the last 30 years, including his long-time role at FotoKem, as well as stints at Unitel Video and others. His skillset includes expertise in storage and networking infrastructure, facility engineering and operations.

“Sim has a lot of potential, and when the opportunity was presented to lead their engineering and technology departments, it really intrigued me,” says Chapman. “The LA facility itself is well constructed from the ground up. I’m looking forward to working with the creative and technical teams across the organization to enhance our technical operations, foster innovation and elevate performance for our clients.”

Greg and Paul are based at Sim’s operations in Hollywood.

Main Caption: (L-R) Greg Ciaccio and Paul Chapman

Industry mainstay Click3X purchased by Industrial Color Studios

Established New York City post house Click3X has been bought by Industrial Color Studios. Click3X is a 25-year-old facility that specializes in new media formats such as VR, AR, CGI and live streaming. Industrial Color Studios is a visual content production company. Founded in 1992, Industrial Color offers services ranging from full image capture and e-commerce photography to production support and post, including creative editorial, color grading and CG.

With offices in New York and LA, Industrial Color has developed its own proprietary systems to support online digital asset management for video editing and high-speed file transfers for its clients working in broadcast and print media. The company is an end-to-end visual content production provider, partnering with top brands, agencies and creative professionals to accelerate multi-channel creative content.

Click3X was founded in 1993 by Peter Corbett, co-founder of numerous companies specializing in both traditional and emerging forms of media. These include Media Circus (a digital production and web design company), IllusionFusion, Full Blue, ClickFire Media, Reason2Be, Sound Lounge and Heard City. A long-time member of the DGA as a commercial film director, Corbett emigrated from Australia to the US to pursue a career as a commercial director and, shortly thereafter, segued into integrated and mixed media, becoming one of the first established film directors to do so.

Projects produced at Click3X have been honored with the industry’s top awards, including Cannes Lions, Clios, Andy Awards and others. Click3X also received the Crystal Apple Award from the New York City Mayor’s Office of Media and Entertainment in recognition of its contributions to the city’s media landscape.

Corbett will remain in place at Click3X and eventually the companies will share the ICS space on 6th Avenue in NYC.

“We’ve seen a growing need for video production capabilities and have been in the market for a partner that would not only enhance our video offering, but one that provided a truly integrated and complementary suite of services,” says Steve Kalalian, CEO of Industrial Color Studios. “And Click3X was the ideal fit. While the industry continues to evolve at lightning speed, I’ve long admired Click3X as a company that’s consistently been on the cutting edge of technology as it pertains to creative film, digital video and new media solutions. Our respective companies share a passion for creativity and innovation, and I’m incredibly excited to share this unique new offering with our clients.”

“When Steve and I first entered into talks to align on the state of our clients’ future, we were immediately on the same page,” says Corbett, president of Click3X. “We share a vision for creating compelling content in all formats. As complementary production providers, we will now have the exciting opportunity not only to collaborate on a robust and highly regarded client roster, but also to expand the company’s creative and new media capabilities, using over 200,000 square feet of state-of-the-art facilities in New York, Los Angeles and Philadelphia.”

The added capabilities Click3X gives Industrial Color in video production and new media mirror its growth in the field of e-commerce photography and image capture. The company recently opened a new 30,000-square-foot studio in downtown Los Angeles designed to produce high-volume, high-quality product photography for advertisers. That studio complements the company’s existing e-commerce photography hub in Philadelphia.

Main Image: (L-R) Peter Corbett and Steve Kalalian