
Category Archives: Digging Deeper

Director Sasha Levinson talks about her Las Vegas tourism spots

After seeing some of her previous work, agency R&R Partners and the Las Vegas Convention and Visitors Authority approached Humble director Sasha Levinson with a vision of creating four short films of personal transformation, each set over one weekend in Las Vegas, with the city as the catalyst in each narrative. The spots play more like film trailers than commercials.

Initial scripts were already written when Levinson won the project. “Right out of the gate I started to develop ideas about how I would bring these stories to life in a way that created a human mystique around Las Vegas while showcasing how the city could have an authentically positive impact on each character’s life,” she says.

We reached out to Levinson to find out more about her process and the project.

Who did you work with the most from the agency?
R&R Partners’ Arnie DiGeorge (ECD), Scott Murray (CD) and Gerri Angelo (producer) were the primary people I interacted with. Once I was officially on board, the collaboration stretched into a place I hadn’t seen in my commercial work thus far. The team stressed their desire to have a filmmaker who could truly bring the films to life, and I think I did just that. They trusted my process wholeheartedly.

Now and Then

Can you walk us through the production process?
While working on readying the scripts for production, I flew out to Las Vegas and spent several days location scouting in and around the city. It was very inspiring to start to feel the environment and begin to envision exactly how our scenes might play out.

We scouted many incredible spaces, both interior and exterior. My process was to put myself into the mindset of each of the characters and decide where they would want to spend their weekend.

The next step was casting, which was done in Las Vegas and Los Angeles. We all took a collective breath when we found our actors, because they became the characters that had, until then, lived only on the page.

It took a dedicated team and the full support of Humble, our production company, to pull this off. Line producer Trevor Allen, AD Scott Murray, AD Todd Martin, costume designer Karmen Dann and location scout/manager Kim Houser-Amaral were a huge help during this two-week shoot.

Two weeks?
Yes, we shot for eight days over the course of two weeks, filming mostly at night in restaurants, nightclubs and suites both on and off The Strip. For interiors alone, we shot at more than nine different hotel properties, and then filmed various exteriors, driving shots and desert scenes.

Can you talk about each of the four spots?
For the Now and Then spot we used a handheld and Steadicam to stay intimate with the characters. The film lives in a dreamy, indie space, making the audience feel like they are inside the story, and as if the flashbacks are their own memories.

Party of One

The Anniversary was the flirtiest, most luxurious of the films. We were initially planning to film in a different bar, but I changed the schedule when I saw the personality of the final location. It was perfect.

To me, Party of One was a quirky romantic comedy, but the real romance is the one the character has with herself. This film had a cleaner look, with colors that really popped, a playful wardrobe and fun music.

The Meetup has the most obvious references to the iconic Ocean’s and Bond films, and I wanted to do this genre justice.

Victor, the actor in The Meetup, has a comedy background, so he was able to give us something different and great each take. We used cleaner lenses and slower-paced, more precise movements using a dolly.

What did you shoot on, and why did you (and, we assume, the DP) feel this was the best camera?
We shot on the ARRI Alexa Mini for its stunning image and smaller body. The choice was driven by all of the handheld and Steadicam work we knew we would be doing. I really love the Alexa, and of course we played with a lot of filtration and elements in front of the lens.

Where did you find the inspiration for this campaign?
I was really inspired by the film Paris, je t’aime, which became a reference we really stayed true to with this campaign. We kept saying, “If these four films feel like a love letter to Las Vegas, then we’ve done our job.”

After we finished casting in LA, I spent a weekend in Vegas visiting Valley of Fire State Park, seeing Elton John in concert, riding on the High Roller, eating at some amazing restaurants and overall experiencing the city in a new and inspiring way. I’ve always loved the desert, so there is a built-in romance for me about Vegas being a desert oasis that you just feel, looking out the windows wherever you are. I wanted to capture that essence in this campaign.

Sasha on set

What was the biggest obstacle in directing these four spots?
Las Vegas is all about tourism, so we had to keep the visitor experience in mind when planning our schedules and shooting. It was a constantly changing puzzle of a schedule, but each location had a filming liaison that worked with us. They helped us film across iconic Las Vegas locations like the High Roller at the LinQ Hotel & Casino and the Bellagio Fountains, where we even had an engineer help us control the fountains, starting them at just the right moment in the shot.

Where was this project edited, and how did you work with the editors?
I worked extremely closely with the editors Erin Nordstrom and Nick Pezillo at Spot Welders in Venice, California. They cut on Adobe Premiere, and we worked in side-by-side rooms; during the first cuts, I would hop between rooms as the edits evolved.

Music was a huge point of discussion. Josh Baron was our music supervisor, and Human created some original pieces. Early in the edit process there was a lot of conversation about tone and feel of the soundtracks. Getting the music to encourage the cinematic feeling was very important to all of us.

Can you talk about what you wanted from the color grade?
Las Vegas has so much light and spectacle, and most of the films took place at night, so I wanted to capture that essence and make sure we didn’t go too far. The idea was that the character of Las Vegas should be cohesive across all four films, but each of the storylines should feel visually dedicated to their respective characters.

Dave Hussey at Company 3 took the reins on color and hit the perfect balance between fantasy and reality.

We’ve heard a lot about making the ad and production industries more inclusive. How do you see the industry changing? Do you think organizations like Free the Bid are doing their jobs to help female directors get the work they deserve?
I think that filmmaking and content creation is about telling stories that accurately reflect reality, but this isn’t possible without diverse creators showcasing their own unique realities. I’ve spent a lot of time on scouts and shoots where I am the only woman, but this is changing, and there are more and more times where the van is all women, and it’s amazing!

However, just when I think diversity is becoming the norm, I’ll be on set with a crew member or client who says they’ve never worked with a female director. Free the Bid has been an incredible initiative and because it’s so action oriented, you can feel the rumblings of change in realtime. That’s exciting.

Do you have any tips for aspiring female directors?
Being a filmmaker in film or advertising is an incredible career path. My best advice is to develop your authentic voice and find projects that resonate with it. Don’t get lost in what you think you should be doing, and don’t just follow the trends. Be authentic in your work and speak from the heart. And when it comes to starting a reel, donate your time. Find brands or projects that cover the costs and give them your creative skills in exchange for footage to build a reel.

There has been a trend toward branded content and spots that look and feel like films, like this campaign. Do you think this trend will continue?
I think this trend will continue and extend deeper into newer mediums like immersive storytelling and interactivity. Brands are constantly searching for ways to connect with consumers, and I believe the art of storytelling is an age-old unifier and connector. So it makes sense. Personally, I love the three-act structure and any opportunity to work with this, whether it be 90 seconds or 90 minutes, inspires me.

Are you working on any upcoming projects that we should be on the lookout for?
I’m currently taking my recent film Welcome to Grandville through the festival circuit. It made its premiere at the Cleveland International Film Festival, Perspectives Exhibition, and more recently in New York City at the Soho International Film Festival.

I’ve also just completed a commercial project for Whirlpool, Google and Amazon’s Alexa. Currently, I’m writing a film called The Discomfort of Skin, about the human discomfort with nudity and sexuality.

How being a special needs dad helps me be a better editor at Conan

By Robert James Ashe

I have been working in late night television for Conan O’Brien for nearly 10 years, currently as the lead editor for Conan on the TBS network. Late night television has an extraordinarily demanding pace. An old colleague of mine used to refer to it as the “speed chess” of editing. It demands that your first instincts when editing are the best ones. The pace also puts extraordinary pressure on your writers and producers. I like to think of editors as pilots hired to land a plane that may have already lost an engine, so it’s important that you maintain balance and focus.

I am the father to three amazing kiddos with special needs. My first daughter was born with the amyoplasia form of arthrogryposis multiplex congenita. She is also nonverbal. My youngest daughter was born with amniotic banding syndrome. For her, it means she only has a few fully developed fingers and a prosthesis on one of her legs. We’ve addressed her physical challenges through surgery and she has lots of fun sprinting around with her “robot leg,” which is what we call her prosthesis. We are in the middle of adopting our son and hope to bring him home in the fall. He has similar orthopedic challenges to our second daughter.

I take my jobs as editor and as a father very seriously, but it is also important to note that I am happy. Here are some things that I have learned over the years. I have made mistakes in every one of these rules, but I try every day to be better.

1. You will reach a new normal
I like to think of an editor’s job as a client’s spirit guide of sorts. A guardian of the story you are helping to tell. Once you get all of the footage, and you have a good idea of what you are dealing with, your job is to advocate for the story your client is trying to tell while handling various tech issues so you can remain creative. It took me a long time to make this adjustment. Now I try every day to make it my new normal.

Once we got through the first few weeks of my first daughter’s life and received a diagnosis, we decided to not live our lives with a cloud over our heads and to instead look for the sunshine. We refused to consider our lives to be a tragedy. My job is to advocate for my children while making sure they can remain kids throughout the doctor’s appointments and surgeries. I want them to feel happy about their lives.

2. Know Your Role
It’s important to know that the story you are being hired to tell for your client is not yours. I am very trusted at my job to work on pieces with little supervision. I have earned this trust because the writers (my client) know that I will put together segments based on their sensibilities. I am there to help tell their story and to solve any tech problems that may arise in doing so. I am not reinterpreting the story to fit my own sensibilities (plus, I’m not very funny so it works out).

I am a player in my children’s life story. I deal with insurance. My wife takes them to appointments on workdays. But, we are not the ones receiving the therapy or medical services, so our story is different than our children’s. You must know how to separate the two. I am there to guide them. I am there to protect them but it is their story.

Rob (center) with his co-editors Chris Heller and Matt Shaw.

3. Attitude affects everything
I have to be mindful of my attitude. I am a large, intimidating-looking man. The slightest expression of negativity reads as much larger because of my size. Your attitude can affect an entire workspace. People will recommend a decent editor who is nice over a grumpy “professional” any day of the week. I’ve made this mistake many times. I would start on a new project so passionate and personally invested in the story I was hired to tell that I would be arrogantly offended if I felt anyone I was working with didn’t give their absolute best. The truth is that most people try to do their best with the circumstances they have been given, and the more I’d complain, the more I’d become the real problem. Give people more credit. You don’t know the kinds of things they have had to deal with.

Dealing with the medical industry can be daunting. It’s easy to feel frustrated on calls with insurance or scheduling appointments. I try to have empathy for the other person I am dealing with, as they have to deal with frustrated and frightened people all day. You don’t know the kinds of things they have to deal with. I also have to be very mindful of my attitude around my kids. My wife figured out quickly that if our lives were going to revolve around going to the Children’s Hospital, we were going to make it fun. Our kids actually love going. They have a playground and so many things for the kids to enjoy. If we acted depressed around our children, it would affect them. Before my youngest daughter’s prosthesis, we would talk about all the things she would be able to do and all the fun she’d be able to have once she got her robot leg.

4. The world isn’t fair
Not everyone is going to recognize what you contribute, even when you are at your absolute best. You must try not to take it personally. I try to remind myself that often we are working for people who have their own issues to worry about and don’t always understand the technical challenges of what we do. I have seen all sorts of people passed over for promotions they deserve or recognition they have earned. As someone who has been in charge of other editors, I have also received credit for work that is theirs. That is why, at the end of every project, I insist on sending a private postmortem to my clients so people can understand everyone’s contribution.

I get way more credit than I deserve for being a father to my children, and it’s not fair. One time my wife and I brought the kids to a party. My oldest daughter doesn’t have the muscle strength to feed herself, so I spent time feeding her while my wife talked with her friends. After leaving the party, my wife remarked how impressed they were that I fed my child. My wife is an amazing mom. I married Mary Poppins. Our family does deal with a fair amount of challenges, but I have met many single mothers over the years who are worthy of so much more admiration for what they take on than anything we’ve ever accomplished.

5. Take care of yourself
You will never be the best editor you can be unless you take care of yourself. Eating correctly, sleeping enough and moderating drinking or drug use is just the tip of the iceberg. The most high-profile jobs will demand that you be at your best 100% of the time.

My oldest daughter cannot walk without the use of braces, so we need to remain strong enough to lift her upstairs or into the shower. I am getting older, so I’m really starting to make a concentrated effort to eat better, exercise and drink less. The most challenging times we have faced have demanded that we be at our absolute best mentally and physically as long nights during surgeries can be draining.

6. A job is a job; family is everything
I like to park my car on the far side of the studio that I work at. It gives me a 20-minute walk to my trailer that allows me to look at all the other shoots happening that day and reflect on how I used to dream as a kid to one day work in Hollywood. It also gives me a chance to get some exercise.

Hollywood has been very kind to me, but my job doesn’t define my happiness. It’s not who I am. One of the best things that has ever happened to me in Hollywood was to figure out that once you take all the glitz and glamor away, it is a job like any other. A job I enjoy that allows me to provide for my family.

When I’m gone from this world, my most meaningful accomplishments will have nothing to do with my job and everything to do with my family and friends. The greatest thing I have done with my life is adopting my (soon to be) two children. My job demands long hours, so I have to miss some things, but I take comfort in knowing that it is to provide for their future.

7. You are capable of much more than you know
When I became an editor, I really didn’t know what my career would have in store. I just found it fun and decided that I could make money doing it. When I started in late night television almost 10 years ago, delivering a 42-minute show in 90 minutes used to make my hands shake. Now, it is one of the easiest points of my day. I went from freelancing on side projects for little money to helping plan international media transfers and deliveries for network primetime specials supported by an amazing and capable team. I’m proud of what I’ve been able to accomplish.

When my first child was born, I didn’t know what life was going to have in store. We just decided to go all in and be the best we could be at it, and now we are parents to (soon to be) three wonderful kiddos with an amazing orthopedic medical team. Our children are part of case studies that will advance medical science. They’ve been filmed and photographed for others to learn how to properly treat joint contractures and prosthesis adaptations. Their presence is going to help future kids get the treatment they need. When something like this happens in your life, you find out what you are really made of.

8. Finally, please remember to have fun. It’s fun.
I wish you nothing but the best.


Robert James Ashe is the four-time Emmy-nominated lead editor of Conan on TBS. You can follow him on Twitter at @robertjamesashe and read more pieces from him on The Mighty.


AlphaDogs’ Terence Curren is on a quest: to prove why pros matter

By Randi Altman

Many of you might already know Terence Curren, owner of Burbank’s AlphaDogs, from his hosting of the monthly Editor’s Lounge, or his podcast The Terence and Philip Show, which he co-hosts with Philip Hodgetts. He’s also taken to producing fun, educational videos that break down the importance of color or ADR, for example.

He has a knack for offering simple explanations for necessary parts of the post workflow while hammering home what post pros bring to the table.

I reached out to Terry to find out more.

How do you pick the topics you are going to tackle? Is it based on questions you get from clients? Those just starting in the industry?
Good question. It isn’t about clients as they already know most of this stuff. It’s actually a much deeper project surrounding a much deeper subject. As you well know, the media creation tools that used to be so expensive, and acted as a barrier to entry, are now ubiquitous and inexpensive. So the question becomes, “When everyone has editing software, why should someone pay a lot for an editor, colorist, audio mixer, etc.?”

ADR engineer Juan-Lucas Benavidez

Most folks realize there is a value to knowledge accrued from experience. How do you get the viewers to recognize and appreciate the difference in craftsmanship between a polished show or movie and a typical YouTube video? What I realized is there are very few people on the planet who can’t afford a pencil and some paper, and yet how many great writers are there? How many folks make a decent living writing, and why are readers willing to pay for good writing?

The answer I came up with is that almost anyone can recognize the difference between a paper written by a 5th grader and one written by a college graduate. Why? Well, from the time we are very little, adults start reading to us. Then we spend every school day learning more about writing. When you realize the hard work that goes into developing as a good writer, you are more inclined to pay a master at that craft. So how do we get folks to realize the value we bring to our craft?

Our biggest problem comes from the “magician” aspect of what we do. For most of the history of Hollywood, the tricks of the trade were kept hidden to help sell the illusion. Why should we get paid when the average viewer has a 4K camera phone with editing software on it?

That is what has spurred my mission. Educating the average viewer to the value we bring to the table. Making them aware of bad sound, poor lighting, a lack of color correction, etc. If they are aware of poorer quality, maybe they will begin to reject it, and we can continue to be gainfully employed exercising our hard-earned skills.

Boom operator Sam Vargas.

How often is your studio brought in to fix a project done by someone with access to the tools, but not the experience?
This actually happens a lot, and it is usually harder to fix something that has been done incorrectly than it is to just do it right from the beginning. However, at least they tried, and that is the point of my quest: to get folks to recognize and want a better product. I would rather see that they tried to make it better and failed than just accepted poor quality as “good enough.”

Your most recent video tackles ADR. So let’s talk about that for a bit. How complicated a task is ADR, specifically matching of new audio to the existing video?
We do a fair amount of ADR recording, which isn’t that hard for the experienced audio mixer. That said, I found out how hard it is being the talent doing ADR. It sounds a lot easier than it actually is when you are trying to match your delivery from the original recording.

What do you use for ADR?
We use Avid Pro Tools as our primary audio tool, but there are some additional tools in Fairlight (now included free in Blackmagic’s Resolve) that make ADR even easier for the mixer and the talent. Our mic is a Sennheiser long shotgun, but for ADR we try to match the field mic when possible.

I suppose Resolve proves your point — professional tools accessible for free to the masses?
Yeah. I can afford to buy a paint brush and some paint. It would take me a lot of years of practice to be a Michelangelo. Maybe Malcolm Gladwell, who posits that it takes 10,000 hours of practice to master something, is not too far off target.

What about for those clients who don’t think you need ADR and instead can use a noise reduction tool to remove the offensive noise?
We showed some noise reduction tools in another video in the series, but they are better at removing consistent sounds like air conditioner hum. We chose the freeway location as the background noise would be much harder to remove. In this case, ADR was the best choice.

It’s also good for replacing fumbled dialogue or something that was rewritten after production was completed. Often you can get away with cheating a new line of dialogue over a cutaway of another actor. To make the new line match perfectly, you would rerecord all the dialogue.

What did you shoot the video with? What about editing and color?
We shot with a Blackmagic Cinema Camera in RAW so we could fix more in post. Editing was done in Avid Media Composer with final color in Blackmagic’s Resolve. All the audio was handled in Avid’s Pro Tools.

What other topics have you covered in this series?
So far we’ve covered some audio issues and the need for color correction. We are in the planning stages for more videos, but we’re always looking for suggestions. Hint, hint.

Ok, letting you go, but is there anything I haven’t asked that’s important?
I am hoping that others who are more talented than I am pick up the mantle and continue the quest to educate viewers. The goal is to prevent us all from becoming “starving artists” in a world of mediocre media content.


Cinesite VFX supervisor Stephane Paris: 860 shots for The Commuter

By Randi Altman

The Commuter once again shows how badass Liam Neeson can be under very stressful circumstances. This time, Neeson plays a mild-mannered commuter named Michael who gets pushed too far by a seemingly benign but not-very-nice Vera Farmiga.

For this Jaume Collet-Serra-directed Lionsgate film, Cinesite’s London and Montreal locations combined to provide over 800 visual effects shots. The studio’s VFX supervisor, Stephane Paris, worked hand in hand with The Commuter’s overall VFX supervisor Steve Begg.

Stephane Paris

The visual effects shots vary, from CG commuters to Neeson’s outfits changing during his daily commute to fog and smog to the climactic train crash sequence. Cinesite’s work on the film included a little bit of everything. For more, we reached out to Paris…

How early did Cinesite get involved in The Commuter?
We were involved before principal photography began. I was then on set at Pinewood Studios, just outside London, for about six weeks alongside Steve. They had set up two stages. The first was a single train carriage adapted and dressed to look like multiple carriages — this was used to film all the main action onboard the train. The carriage was surrounded by bluescreen and shot on a hydraulic system to give realistic shake and movement. In one notable shot, the camera pulls back through the entire length of the train, through the carriage walls. A camera rig was set up on the roof and programmed to repeat the same pullback move through each iteration of the carriage — this was subsequently stitched together by the VFX team.

How did you work with the film’s VFX supervisor, Steve Begg?
Cinesite had worked with Steve previously on productions such as Spectre, Skyfall and Inkheart. Because we had created effects with him for the Bond films, he was confident that Cinesite could create the required high-quality invisible effects for the action-heavy sequences. We interacted mainly with Steve. The client’s approach was to concentrate on the action, performances and story during production, so we lit and filmed the bluescreens carefully, ensuring reflections were minimized and the bluescreens were secure, in order to allow Jaume creative freedom during filming. We were confident that this approach would give us what we needed for the visual effects at a later stage.

You guys were the main house on the film, providing a whopping 860 visual effects shots. What was your turnaround like? How did you work for the review and approval process?
Yes, Cinesite was the lead vendor, and in total we worked on The Commuter for about a year, beginning with principal photography in August 2016 and delivering in August 2017. Both our London and Montreal studios worked together on the film. We have worked together previously, notably on San Andreas and more recently on Independence Day: Resurgence, so I had experience of working across both locations. Most of the full CG heavy shots were completed in London, while the environments, some of the full CG shots and 2D backgrounds were completed in Montreal, which also completed the train station sequence that appears early in the film.

My time was split fairly evenly between both locations, so I would spend two to three weeks in London followed by the same amount of time in Montreal. Steve never needed to visit the Montreal studio, but he was very hands-on and involved throughout. He visited our London studio at least twice a week, where we used the RV system to review both the London and Montreal work.

Can you describe the types of shots you guys provided?
We delivered over 860 shots, from train carriage composites right through to entirely CG shots for the spectacular climactic train crash sequence. The crash required the construction of a two-kilometer-long environment asset complete with station, forest, tracks and industrial detritus. Effects were key, with flying gravel, breaking and deforming tracks, exploding sleepers, fog, dust, smoke and fire, in addition to the damaged train carriages. Other shots required a realistic Neeson digi-double to perform stunts.

The teams also created shots near the film’s opening that demonstrate the repetition of Michael’s daily commute. In a poignant shot at Grand Central Station multiple iterations of Michael’s journey are shown simultaneously, with the crowds gradually accelerating around him while his pace remains measured. His outfit changes, and the mood lighting changes to show the passing of the seasons around him.

The shot was achieved with a combination of multiple motion control passes, creation of the iconic station environment using photogrammetry and, ultimately, by creating the crowd of fellow commuters in CG for the latter part of the shot (a seamless transition was required between the live-action passes and the CG people).

Did you do previs? If so, what tools did you use?
No. London’s Nvizible handled all the initial previs for the train crash. Steve Begg blocked everything out and then sent it to Jaume for feedback initially, but the final train crash layout was done by our team with Jaume at Cinesite.

What did you use tool-wise for the VFX?
We mainly used Houdini’s RBD, particle and fluid simulation processes, with some Autodesk Maya for falling pieces of train. The simulated destruction of the train was also created in Houdini, with some in-house setup.

What was the most challenging scene or scenes you worked on? 
The challenge was, strangely enough, more about finding proper references that would fit our action movie requirements. Footage of derailing trains is difficult to find, and when you do find it, you quickly notice that train carriages are not designed to tear and break the way you would like them to in an action movie. Naturally, they are built to be safe, with lots of energy-absorbing compartments and auto-triggering safety mechanisms.

Putting reality aside, we devised a visually exciting and dangerous movie train crash for Jaume, complete with lots of metal crumbling, shattering windows and multiple large-scale impact explosions.

As a result, the crew had to ensure they were maintaining the destruction continuity across the sequence of shots as the train progressively derails and crashes. A high number of re-simulations were applied to the train and environment destruction whenever there was a change to one of these in a shot earlier in the sequence. Devising efficient workflows using in-house tools to streamline this where possible was key in order to deliver a large number of effects-heavy destruction shots whilst maintaining accurate continuity and remaining responsive to the clients’ notes during the show.


VFX supervisor Lesley Robson-Foster on Amazon’s Mrs. Maisel

By Randi Altman

If you are one of the many who tend to binge-watch streaming shows, you’ve likely already enjoyed Amazon’s The Marvelous Mrs. Maisel. This new comedy focuses on a young wife and mother living in New York City in 1958, when men worked and women tended to, well, not work.

After her husband leaves her, Mrs. Maisel chooses stand-up comedy over therapy — or you could say stand-up comedy chooses her. The show takes place in a few New York neighborhoods, including the tony Upper West Side, the Garment District and the Village. The storyline brings real-life characters into this fictional world — Midge Maisel studies by listening to Redd Foxx comedy albums, and she also befriends comic Lenny Bruce, who appears in a number of episodes.

Lesley Robson-Foster on set.

The show, created by Amy Sherman-Palladino and Dan Palladino, is colorful and bright and features a significant amount of visual effects — approximately 80 per episode.

We reached out to the show’s VFX supervisor, Lesley Robson-Foster, to find out more.

How early did you get involved in Mrs. Maisel?
The producer Dhana Gilbert brought my producer Parker Chehak and me in early to discuss feasibility issues, as this is a period piece, and to see if Amy and Dan liked us! We've been on since the pilot.

What did the creators/showrunners say they needed?
They needed 1958 New York City, weather changes and some very fancy single-shot blending. Also, some fantasy and magic realism.

As you mentioned, this is a period piece, so I’m assuming a lot of your work is based on that.
The big period shots in Season 1 are the Garment District reconstruction. We shot on 19th Street between 5th and 6th — the brilliant production designer Bill Groom did 1/3 of the street practically and VFX took care of the rest, such as crowd duplication and CG cars and crowds. Then we shot on Park Avenue and had to remove the Met Life building down near Grand Central, and knock out anything post-1958.

We also did a major gag with the driving footage. We shot driving plates around the Upper West Side and had a flotilla of period-correct cars with us, but could not get rid of all the parked cars. My genius design partner on the show Douglas Purver created a wall of parked period CG cars and put them over the modern ones. Phosphene then did the compositing.

What other types of effects did you provide?
Amy and Dan — the creators and showrunners — haven’t done many VFX shows, but they are very, very experienced. They write and ask for amazing things that allow me to have great fun. For example, I was asked to make a shot where our heroine is standing inside a subway car, and then the camera comes hurtling backwards through the end of the carriage and then sees the train going away down the tunnel. All we had was a third of a carriage with two and a half walls on set. Douglas Purver made a matte painting of the tunnel, created a CG train and put it all together.

Can you talk about the importance of being on set?
For me being on set is everything. I talk directors out of VFX shots and fixes all day long. If you can get it practically you should get it practically. It’s the best advice you’ll ever give as a VFX supervisor. A trust is built that you will give your best advice, and if you really need to shoot plates and interrupt the flow of the day, then they know it’s important for the finished shot.

Having a good relationship with every department is crucial.

Can you give an example of how being on set might have saved a shot or made a shot stronger?
This is a character-driven show. The directors really like Steadicam and long, long shots following the action. Even though a lot of the effects we want to do really demand motion control, I know I just can’t have it. It would kill the performances and take up too much time and room.

I run around with string and tennis balls to line things up. I watch the monitors carefully and use QTake to make sure things line up within acceptable parameters.

In my experience you have to have the production’s best interests at heart. Dhana Gilbert knows that a VFX supervisor on the crew and as part of the team smooths out the season. They really don’t want a supervisor who is intermittent and doesn’t have the whole picture. I’ve done several shows with Dhana; she knows my idea of how to service a show with an in-house team.

You shot b-roll for this? What camera did you use, and why?
We used a Blackmagic Ursa Mini Pro. We rented one on The OA for Netflix last year and found it to be really easy to use. We liked that it's self-contained and that we can use the Canon glass from our DSLR kits. It's got a built-in monitor and it can shoot 4.6K RAW. It cut in just fine with the Alexa Mini for establishing shots and plates. It fits into a single backpack so we could get a shot at a moment's notice. The user interface on the camera is so intuitive that anyone on the VFX team could pick it up and learn how to get the shot in 30 minutes.

What VFX houses did you employ, and how do you like to work with them?
We keep as much as we can in New York City, of course. Phosphene is our main vendor, and we like Shade and Alkemy X. I like RVX in Iceland, El Ranchito in Spain and Rodeo in Montreal. I also have a host of secret weapon individuals dotted around the world. For Parker and me, it's always horses for courses. Whom we send the work to depends on the shot.

For each show we build a small in-house team — we do the temps and figure out the design, and shoot plates and elements before shots leave us to go to the vendor.

You’ve worked on many critically acclaimed television series. Television is famous for quick turnarounds. How do you and your team prepare for those tight deadlines?
Television schedules can be relentless. Prep, shoot and post all at the same time. I like it very much as it keeps the wheels of the machine oiled. We work on features in between the series and enjoy that slower process too. It’s all the same skill set and workflow — just different paces.

If you have to offer a production a tip or two about how to make the process go more smoothly, what would it be?
I would say be involved with EVERYTHING. Keep your nose close to the ground. Really familiarize yourself with the scripts — head trouble off at the pass by discussing upcoming events with the relevant person. Be fluid and flexible and engaged!


Working with Anthropologie to build AR design app

By Randi Altman

Buying furniture isn’t cheap; it’s an investment. So imagine having an AR app that allows you to see what your dream couch looks like in paisley, or colored dots! Well imagine no more. Anthropologie — which sells women’s clothing, shoes and accessories, as well as furniture, home décor, beauty and gifts — just launched its own AR app, which gives users the ability to design and customize their own pieces and then view them in real-life environments.

They called on production and post house CVLT to help design the app. The bi-coastal studio created over 96,000 assets, allowing users to combine products in very realistic and different ways. The app also accounts for environmental lighting and shadows in realtime.

We reached out to CVLT president Alberto Ruiz to find out more about how the studio worked with Anthropologie to create this app.

How early did CVLT get involved in the project?
Our involvement began in the spring of 2017. We collaborated early in the planning phases when Anthropologie was concepting how to best execute the collection. Due to our background in photography, video production and CGI, we discussed the positives and pitfalls of each avenue, ultimately helping them select CGI as the path forward.

We’re often approached by a brand with a challenge and asked to consult on the best way to create the assets needed for the campaign. With specialists in each category, we look at all available ways of executing a particular project and provide a recommendation as to the best way to build a campaign with longevity in mind.

How did CVLT work with Anthropologie? How much input did you have?
We worked in close collaboration with Anthropologie every step of the way. We helped design style guides and partnered with their development team to test and optimize assets for every platform.

Our creatives worked closely with Anthropologie to elevate the assets to a high quality reflective of the product's integrity. We presented CGI as a way to engage customers now and in the future through AR/VR platforms. Because of this partnership, we understood the vision for future executions and built our assets with those executions in mind. They were receptive to our suggestions and engaged in product feedback. All in all, it was a true partnership between companies.

Has CVLT worked on assets or materials for an app before? How much of your work is for apps or the web?
The majority of the work that we produce is for digital platforms, whether for the web, mobile or experiential platforms. In addition to film and photography projects, we produce highly complex CGI products for luxury jewelers, fragrance and retail companies.

More and more clients are looking to either supplement or run full campaigns digitally. We believe that investing in emerging technologies, such as augmented and virtual reality, is paramount in the age of digital and mobile content. Our commitment to emerging technologies connects our clients with the resources to explore new ways of communicating with their audience.

What were the challenges of creating so many assets? What did you learn that could be applicable moving forward?
The biggest challenge was unpacking all the variables within this giant puzzle. There are 138 unique pieces of furniture in 11 different fabrics, with 152 colorways, eight leg finishes and a variety of hardware options. Stylistically, colors of a similar family were to live on complementary backgrounds, adding yet another variable to the project. It was basically a Rubik's Cube on steroids. Luckily, we really enjoy puzzles.
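The asset count grows multiplicatively with each variable, which is why the numbers balloon so quickly. The article doesn't say exactly how the listed dimensions combine into the roughly 96,000 final assets (not every fabric, colorway and leg combination exists for every piece), but a quick sketch with assumed dimensions shows the arithmetic at work:

```python
# Illustrative only: how option dimensions multiply into an asset count.
# The 87 fabric+colorway pairs below are an assumed figure, not from the
# article, chosen to show how a catalog of this shape reaches ~96,000.
from math import prod

options = {
    "furniture_pieces": 138,   # from the article
    "leg_finishes": 8,         # from the article
    "fabric_colorways": 87,    # assumed count of valid fabric+colorway pairs
}

total_assets = prod(options.values())
print(total_assets)  # 138 * 8 * 87 = 96,048 -- in line with "over 96,000"
```

Each additional variable (hardware, backgrounds) multiplies this total again, which is why a strong pipeline, rather than hand-built renders, was the only way to hit that scale.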

We always believed in having a strong production team and pipeline. It was the only way to achieve the scale and quality of this project. This was further reinforced as we raced toward the finish line. We’re now engaged in future seasons and are focused on refining the pipe and workflow tools therein.

Any interesting stories from working on the project?
One of the most interesting things about working on the project was how much we learned about furniture. The level of planning and detail that goes into each piece is amazing. We talk a lot about the variables in colors, fabrics and styles because they are the big factors. What remains hidden are the small details that have large impacts. We were given a crash course in stitching details, seam placements, tufting styles and more. Those design details are what set an Anthropologie piece apart.

Another interesting part of the project was working with such an iconic brand with a strong heritage. The rich history of design at Anthropologie permeates every aspect of their work. The same level of detail poured into product design is also visible in the way they communicate with and understand their customer.

What tools were used throughout the project?
Every time we approach a new project we assess the tools that we have in our arsenal and the custom tools that we can develop to make the process smoother for our clients. This project was no different in that sense. We combined digital project management tools with proprietary software to create a seamless experience for our client and staff.

We built a bi-coastal team for this project between our New York and Los Angeles offices. Between that and our Philadelphia-based client, we relied heavily on collaborative digital tools to manage reviews. It’s a workflow we’re accustomed to as many of our clients have a global presence, which was further refined to meet the scale of this project.

What was the most difficult part of the project?
The timeframe was really the biggest challenge in this project. The sheer volume of assets — 96,000 created in under five months — was definitely a monumental task, and one we're very proud of.


Digging Deeper: The Mill Chicago’s head of color Luke Morrison

A native Londoner, Morrison started his career at The Mill, where he worked on music videos and commercials. In 2013, he moved to the Midwest to head up The Mill Chicago's color department.

Since then, Morrison has worked on campaigns for Beats, Prada, Jeep, Miller, Porsche, State Farm, Wrigley’s Extra Gum and a VR film for Jack Daniel’s.

Let’s find out more about Morrison.

How early on did you know color would be your path?
I started off, like so many at The Mill, as a runner. I initially thought I wanted to get into 3D, and after a month of modeling a photoreal screwdriver I realized that wasn’t the path for me. Luckily, I poked my nose into the color suites and saw them working with neg and lacing up the Spirit telecine. I was immediately drawn to it. It resonated with me and with my love of photography.

You are also a photographer?
Yes, I actually take pictures all the time. I always carry some sort of camera with me. I’m fortunate to have a father who is a keen photographer and he had a darkroom in our house when I was young. I was always fascinated with what he was doing up there, in the “red room.”

Photography for me is all about looking at your surroundings and capturing or documenting life and sharing it with other people. I started a photography club at The Mill, S35, because I wanted to share that part of my passion with people. I find as a ‘creative’ you need to have other outlets to feed into other parts of you. S35 is about inspiring people — friends, colleagues, clients — to go back to the classic, irreplaceable practice of using 35mm film and start to consider photography in a different way than the current trends.

State Farm

In 2013, you moved from London to Chicago. Are the markets different and did anything change?
Yes and no. I personally haven’t changed my style to suit or accommodate the different market. I think it’s one of the things that appeals to my clients. Chicago, however, has quite a different market than in the UK. Here, post production is more agency led and directors aren’t always involved in the process. In that kind of environment, there is a bigger role for the colorist to play in carrying the director’s vision through or setting the tone of the “look.”

I still strive to keep that collaboration with the director and DP in the color session whether it’s a phone call to discuss ahead of the session, doing some grade tests or looping them in with a remote grade session. There is definitely a difference in the suite dynamics, too. I found very quickly I had to communicate and translate the client’s and my creative intent differently here.

What sort of content do you work on?
We work on commercials, music promos, episodics and features, but always have an eye on new ways to tell narratives. That's where the pioneering work in the emerging technology field comes into play. We're no longer limited and are constantly looking for creative ways to remain at the forefront of creation for VR, AR, MR and experiential installations. It's really exciting to watch it develop and to be a part of it. When Jack Daniel's and DFCB Chicago approached us to create a VR experience taking the viewer to the Jack Daniel's distillery in Tennessee, we leapt at the chance.

Do you like a variety of projects?
Who doesn’t? It’s always nice to be working on a variety, keeping things fresh and pushing yourself creatively. We’ve moved into grading more feature projects and episodic work recently, which has been an exciting way to be creatively and technically challenged. Most recently, I’ve had a lot of fun grading some comedy specials, one for Jerrod Carmichael and one for Hasan Minhaj. This job is ever-changing, be it thanks to evolving technology, new clients or challenging projects. That’s one of the many things I love about it.

Toronto Maple Leafs

You recently won two AICE awards for best color for your grade on the Toronto Maple Leafs’ spot Wise Man. Can you talk about that?
It was such a special project to collaborate on. I’ve been working with Ian Pons Jewell, who directed it, for many years now. We met way back in the day in London, when I was a color assistant. He would trade me deli meats and cheeses from his travels to do grades for him! That shared history made the AICE awards all the more special. It’s incredible to have continued to build that relationship and see how each of us have grown in our careers. Those kinds of partnerships are what I strive to do with every single client and job that comes through my suite.

When it comes to color grading commercials, what are the main principles?
For me, it’s always important to understand the idea, the creative intent and the tone of the spot. Once you understand that, it influences your decisions, dictates how you’ll approach the grade and what options you’ll offer the client. Then, it’s about crafting the grade appropriately and building on that.

You use FilmLight Baselight, what do your clients like most about what you can provide with that system?
Clients are always impressed with the speed at which I’m able to address their comments and react to things almost before they’ve said them. The tracker always gets a few “ooooooh’s” or “ahhhh’s.” It’s like they’re watching fireworks or something!

How do you keep current with emerging technologies?
That’s the amazing thing about working at The Mill: we’re makers and creators for all media. Our Emerging Technologies team is constantly looking for new ways to tell stories and collaborate with our clients, whether it’s branded content or passion projects, using all technologies at our disposal: anything is at our fingertips, even a Pop Llama.

Name three pieces of technology you can’t live without.
Well, I’ve got to have my Contax T2, an alarm clock, otherwise I’d never be anywhere on time, and my bicycle.

Would you say you are a “technical” colorist or would you rather prioritize instincts?
It’s all about instincts! I’m into the technical side, but I’m mostly driven by my instincts. It’s all about feeling and that comes from creating the correct environment in the suite, having a good kick off chat with clients, banging on the tunes and spinning the balls.

Where do you find inspiration?
I find a lot of inspiration from just being outside. It might sound like a cliché but travel is massive for me, and that goes hand in hand with my photography. I think it’s important to change your surroundings, be it traveling to Japan or just taking a different route to the studio. The change keeps me engaged in my surroundings, asking questions and stimulating my imagination.

What do you do to de-stress from it all?
Riding my bike is my main thing. I usually do a 30-mile ride a few mornings a week and then 50 to 100 miles at the weekend. Riding keeps you constantly focused on that one thing, so it’s a great way to de-stress and clear your mind.

What’s next for you?
I’ve got some great projects coming up that I’m excited about. But outside of the suite, I’ll be riding in this year’s 10th Annual Fireflies West ride. For the past 10 years, Fireflies West participants have embarked on a journey from San Francisco to Los Angeles in support of City of Hope. This year’s ride has the added challenge of an extra day tacked onto it, making the ride 650 miles in total over seven days, so…I best get training! (See postPerspective’s recent coverage on the ride.)


A conversation with editor Hughes Winborne, ACE

This Oscar-winning editor talks about his path, his process, Fences and Guardians of the Galaxy.

By Chris Visser

In the world of feature film editing, Hughes Winborne, ACE, has done it all. From cutting indie features (1996’s Sling Blade) to CG-heavy action blockbusters (2014’s Guardians of the Galaxy) to winning an Oscar (2005’s Crash), Winborne has run the proverbial gamut of impactful storytelling through editing.

His most recent film, the multiple-Oscar-nominated Fences, was an adaptation of the seminal August Wilson play. Denzel Washington, who starred alongside Viola Davis (who won an Oscar for her role), directed the film.

Winborne and I chatted recently about his work on Fences, his career and his brief foray into house painting before he caught the filmmaking bug. He edits on Avid Media Composer. Let’s find out more.

What led you to the path you are on now?
I grew up in Raleigh, North Carolina, and I went to college at the University of North Carolina at Chapel Hill. I graduated with a degree in history without a clue as to what I was going to do. I come from a family of attorneys, so because of an extreme lack of imagination, I thought I should do that. I became a paralegal and worked at North Carolina Legal Services for a bit. It didn’t take me long to realize that that wasn’t what I was meant to do, and I became a house painter.

A house painter?
I had my own house painting business for about three years with a couple of friends. The preamble to that is, I had always been a big movie fan. I went to the movies all the time in high school, but after college I started seeing between five and 10 a week. I didn’t even imagine working in the film business, because in Raleigh, that wasn’t really something that crossed my radar.

Then I saw an ad in the New York Times magazine for a six-week summer workshop at NYU. I took the course, moved to New York and set out to become a film editor. In the beginning, I did a lot of PA work for commercials and documentaries. Then I got an assistant editor job on a film called Girl From India.

What came next?
My father told me about a guy on the coast of North Carolina, A.B. Cooper, Jr., who wanted to make his own slasher film. I made him an offer: “If I get you an editor, can I be the assistant?” He said yes! About one-third of the way through the film, he fired the editor, and I took over that role. It was only my second film credit. I was never an assistant again, which is to the benefit of every editor that ever worked — I was terrible at it!

Were you able to make a living editing at that point?
Not as a picture editor, but I really started getting paid full-time for my editing when I started cutting industrials at AT&T. From there, I worked my way to 48 Hours. While I was there, they were kind enough to let me take on independent film projects for very little money, and they would hire me back after I did the job.

After a while, I moved to LA and started doing whatever I could get my hands on. I started with TV movies and gradually indie films, which really started for me with Sling Blade. Then, I worked my way into the studios after Crash. I’ve been kind of going back and forth ever since.

You mention your love of movies. What are the stories that inspire you? The ones that you get really excited to tell?
The movie that made me want to work in the film business was Barry Lyndon. Though it was not, by far, the film that got me started. I grew up on Truffaut. All his movies were just, for me, wonderful. It was a bit of a religion for me in those days; it gave me sustenance. I grew up on The Graduate. I grew up on Midnight Cowboy and Blow-Up.

I didn’t have a specific story I was interested in telling. I just knew that editing would be good for me. I like solitary jobs. I could never work on the set. It’s too crazy and social for me. I like being able to fiddle in the editing room and try things. The bottom line is, it’s fun. It can be a grind, and there can be a bit of pressure, but the best experiences I’ve had have been when everybody on the show was having fun and working together. Films are made better when that collaboration is exploited to the limit.

Speaking of collaboration, how did that work on a film like Fences? What about working with actor/director Denzel Washington?
I’d worked with Denzel before [on The Great Debaters], so I kind of knew what he liked. They shot in Pittsburgh, but I didn’t go on location. There was no real collaboration the first six weeks, but because I had worked with him before I had a sense of what he wanted.

I didn’t have to talk to him in order to put the film together because I could watch dailies — I could watch and listen to direction on camera and see how he liked to play the scenes. I put together the first cut on my own, which is typical, but in this case it was without almost any input. And my cut was really close. When Denzel came back, we concentrated in a few places on getting the performances the way he really wanted them, but I was probably 85 percent there. That’s not because I’m so great either, by the way, it’s because the actors were so great. Their performances were amazing, so I had a lot to choose from.

Can you talk about editing a film that was adapted from a play?
It was a Pulitzer Prize-winning play, so I wasn’t going to be taking anything out of it or moving anything around. All I had to do was concentrate on putting it together with strong performances — that’s a lot harder than it sounds. I’m working within these constraints where I can’t do anything, really. Not that I really wanted to. Have you seen the movie?

Yes, I loved it. It’s a movie I’ve been coming back to every day since I’ve seen it. I’ve been thinking about it a lot.
Then you’ll remember that the first 45 minutes to an hour is like a machine gun. That’s intentional. That’s me, intentionally, not slowing it down. I could have, but the idea is — and this is what was tricky — the film is about rhythm. Editing is about rhythm anyway, but this film is like rhythm to the 50th degree.

There’s very little music in the film, and we didn’t temp with much music either. I remember when Marc Evans [president, Motion Picture Group, Paramount Pictures] saw this film, he said, “The language is the music.” That’s exactly right.

To me, the dialogue feels like a score. There’s a musicality to it, a certain beat and timbre where it’s leading the audience through the scene, pulling them into the emotion without even hearing what they’re saying. Like when Denzel’s talking machine gun fast and it’s all jovial, then Lyons comes in and everything slows down and becomes very tense, then the scene busts back open and it’s all happy and fun again.
Yeah. You can just quote yourself on that one. [Laughs] That’s a perfect summation of it.

Partially, that’s going to come from set, that’s the acting and the direction, but on some level you’re going to have to construct that. How conscious of that were you the entire time?
I was very conscious of it. Where it becomes a little bit dicey at times is, unlike a play, you can cut. In a play, you’re sitting in the audience and watching everybody on stage at the same time. In a film, you’re not. When you start cutting, now you’ve got a new rhythm that’s different from the stage. In so doing, you’ve got to maintain that rhythm. You can’t just be on Denzel the entire time or Viola. You need to move around, and you need to move around in a way that rhythmically stays in time with the language. That was hard. That’s what we worked on most of the time after Denzel came back. We spent a lot of time just trying to make the rhythms right.

I think that’s one of the most difficult jobs an editor has, is choosing when to show someone saying something and when to show someone’s reaction to the thing being said. One example is when Troy is telling the story of his father, and you stay on him the entire time.
Right.

The other side of that coin is when Troy reveals his secret to Rose and the reveal is on her. You see that emotion hit her and wash over her. When I was watching the movie, I thought, “That is the moment Viola Davis won an Oscar.”
Yeah, yeah, yeah. I agree.

I think that’s one of the most difficult jobs as an editor, knowing when to do what. Can you speak to that?
When I put this film together initially, I over-cut it, and then I tried to figure out where I wanted to be. It gets over-cut because I’m trying the best I can to find out what the core of the scene is. But I’m also trying to do that with what I consider to be the best performances. My process is, I start with that, and then I start weeding through it, getting it down and focusing; trying to make it as interesting as I can, and not predictable.

In the scenes that you’re talking about, it was all about Viola’s reaction anyway. Her reaction was going to be almost more interesting than whatever he says. I watched it a few times with audiences, and I know from talking to Denzel that when he did it on stage, there’s like a gasp.

When I saw it, everybody in the theatre was like, “What?” It was great.
I know, I know. It was so great. On the stage, people would talk to him, yell at him [Denzel]. “Shame on you, Denzel!” [laughs]. Then, she went into the backyard and did the scene, and that was the end of it. I’d never seen anything like it before. Honestly. It blew me away.

I was cutting that scene at my little home office. My wife was working behind me on her own stuff, and I was crying all the time. Finally, she turned around and asked, “What is wrong with you?” I showed it to her, and she had the same response. It took eight takes to get there, but when she got it, it was amazing. I don’t think too many actresses can do what Viola did. She’s so exposed. It’s just remarkable to watch.

There were three editors on Guardians of the Galaxy — you, Fred Raskin and Craig Wood. How did that work?
Marvel films are, generally speaking, 12 months from shoot to finish. I was on the film for eight months. Craig came in and took over for me. Having said that, it’s hard with two editors or just multiple editors in general. You have to divvy up scenes. Stuff would come in and we would decide together who was going to do it. I got the job because of Fred. I’d known Fred for 25 years. Fred was my intern on Drunks.

Fred had a prior relationship with James Gunn [director of Guardians]. In most cases, I deferred to Fred’s judgment as to how he wanted to divvy up the scenes, because I didn’t have much of a relationship with James when we started. I’d never done a big CG film. For me, it was a revelation. It was fun, trying to cut a dialogue scene between two sticks. One was tall, and one was short — the green marking was going to be Groot, and the other one was going to be Rocket Raccoon.

Can you talk about the importance of the assistant editor in the editorial process? How many assistants did you have on Fences?
On Fences, I had a first and a second. I started out cutting on film, and the assistant editor was a physical job. Touch it, slice it, catalog it, etc. What they have to do now is so complicated and technical that I don’t even know how to do it. Over my career, I’ve pretty much worked with a couple of assistants the whole time. John Breinholt and Heather Mullen worked with me on Fences. I’ve known Heather for 30 years.

What do you look for in an assistant?
Somebody who is going to be able to organize my life when I’m editing; I’m terrible at that. I need them to make sure that things are getting done. I don’t want to think about everything that’s going on behind the scenes, especially when I’m cutting, because it takes a lot of concentration for me just to sit there for 10 hours a day, or even longer, and concentrate on trying to put the movie together.

I like to have somebody who can look at my stuff and tell me what’s working and what isn’t. You get a different perspective from different assistants, and it’s really important to have that relationship.

You talked about working on Guardians for eight months, and I read that you cut Fences in six. What do you do to decompress and take care of your own mental health during those time periods?
Good question. It’s hard. When I was working on Fences, I was on the Paramount lot. They have a gym there, so I tried to go to the gym every day. It made my day longer, because I’d get there really early, but I’d go to the gym and get on the treadmill or something for 45 minutes, and that always helped.

Finally, for those who are young or aspiring editors, do you have any words of wisdom?
I think the one piece of advice is to keep going. It helps if you know what you want to do. So many people in this business don’t survive. There can be a lot of lean years, and there certainly were for me in the beginning — I had at least 10. You just have to stay in the game. Even if you’re not working at what you want to do, it’s important to keep working. If you want to be an editor, or a director, you have to practice.

Also, have fun. It’s a movie. Try and have a good time when you’re doing it. You’ll do your best work when you’re relaxed.


Chris Visser is a Wisconsin kid who works and lives in LA. He is currently an assistant editor working in scripted TV. You can find him on Facebook and Twitter.


Digging Deep: Helping launch the OnePlus 3T phone

By Jonathan Notaro

It’s always a big deal when a company drops a new smartphone. The years of planning and development culminate in a single moment, and the consumers are left to judge whether or not the new device is worthy of praise and — more importantly — worthy of purchase.

For bigger companies like Google and Apple, a misstep with a new phone release can often amount to nothing more than a hiccup in their operations. But for newer upstarts like OnePlus, it’s a make or break event. When we got the call at Brand New School to develop a launch spot for the company’s 3T smartphone, along with the agency Carrot Creative, we didn’t hesitate to dive in.

The Idea
OnePlus has built a solid foundation of loyal fans with their past releases, but with the 3T they saw the chance to build their fanbase out to more everyday consumers who may not be as tech-obsessed as their existing fans. It is an entirely new offering and, as creatives, the chance to present such a technologically advanced device to a new, wider audience was an opportunity we couldn’t pass up.

Carrot wanted to create something for OnePlus that gave viewers a unique sense of what the phone was capable of — to capture the energy, momentum and human element of the OnePlus 3T. The 3T is meant to be an extension of its owner, so this spot was designed to explore the parallels between man and machine. Doing this can run the risk of being cliché, so we opted for futuristic, abstract imagery that gets the point across effectively without being too heavy handed. We focused on representing the phone’s features that set it apart from other devices in this market, such as its powerful processor and its memory and storage capabilities.

How We Did It
Inspired by the brooding, alluring mood reflected in the design for the title sequence of The Girl With the Dragon Tattoo, we set out to meld lavish shots of the OnePlus 3T with robotically-infused human anatomy, drawing up initial designs in Autodesk Maya and Maxon Cinema 4D.

When the project moved into the animation phase, we stuck with Maya and used Nuke for compositing. Type designs were done in Adobe Illustrator and animated in Adobe After Effects.

Collaboration is always a concern when there are this many different scenes and moving parts, but this was a particular challenge. With a CG-heavy production like this, there’s no room for error, so we had to make sure that all of the different artists were on the same page every step along the way.

Our CG supervisor Russ Wootton and technical director Dan Bradham led the way and compiled a crack team to make this thing happen. I may be biased, but they continue to amaze me with what they can accomplish.

The Final Product
The project was a two-month production process. Along the way, we found that working with Carrot and the brand was a breath of fresh air, as they were very knowledgeable and amenable to what we had in mind. They afforded us the creative space to take a few risks and explore some more abstract, avant-garde imagery that I felt represented what they were looking to achieve with this project.

In the end, we created something that I hope cuts through the crowded landscape of product videos and appeals to both the brand’s diehard-tech-savvy following and consumers who may not be as deep into that world. (Check it out here.)

Fueled by the goal of conveying the underlying message of “raw power” while balancing the scales of artificial and human elements, we created something I believe is beautiful, compelling and completely unique. Ultimately though, the biggest highlight was seeing the positive reaction the piece received when it was released. Normally, reaction from consumers would be centered solely on the product, but to have the video receive praise from a very discerning audience was truly satisfying.


Jonathan Notaro is a director at Brand New School, a bicoastal studio that provides VFX, animation and branding. 

25 Million Reasons to Smile: When a short film is more than a short

By Randi Altman

For UK-based father and son Paul and Josh Butterworth, working together on the short film 25 Million Reasons to Smile was a chance for both of them to show off their respective talents — Paul as an actor/producer and Josh as an aspiring filmmaker.

The film features two old friends, and literal partners in crime, who get together to enjoy the spoils of their labors after serving time in prison. After so many years apart, they are now able to explore a different and more intimate side of their relationship.

In addition to writing the piece, Josh served as DP and director, calling on his Canon 700D for the shoot. “I bought him that camera when he started film school in Manchester,” says Paul.

Josh and Paul Butterworth

The film stars Paul Butterworth (The Full Monty) and actor/dialect/voice coach Jon Sperry as the thieves who are filled with regret and hope. 25 Million Reasons to Smile was shot in Southern California, over the course of one day.

We reached out to the filmmakers to find out why they shot the short film, what they learned and how it was received.

With tools becoming more affordable these days, making a short is now an attainable goal. What are the benefits of creating something like 25 Million Reasons to Smile?
Josh: It’s wonderful. Young and old aspiring filmmakers alike are so lucky to have the ability to make short films. This can lead to issues, however, because people can lose sight of what is important: character and story. What was so good about making 25 Million was the simplicity. One room, two brilliant actors, a cracking story and a camera is all you really need.

What about the edit?
Paul: We had one hour and six minutes (a full day’s filming) to edit down to about six minutes, which we were told was a day’s work. An experienced editor starts at £500 a day, which would have been half our total budget in one bite! I budgeted £200 for the edit, £100 for the color grade and £100 for workflow.

At £200 a day, you’re looking at editors with very little experience, usually no professional broadcast work, often no show reel… so I took a risk and went for somebody who had a couple of shorts in good festivals, named Harry Baker. Josh provided a lot of notes on the story, and Harry went from there. And crucial cuts, like staying off the painting as long as possible and cutting to the outside of the cabin for the final lines — those ideas came from our executive producer Ivana Massetti, who was brilliant.

How did you work with the colorist on the look of the film?
Josh: I had a certain image in my head of getting as much light as possible into the room to show the beautiful painting in all its glory. When the colorist, Abhishek Hans, took the film, I gave him the freedom to do what he thought was best, and I was extremely happy with the results. He used Adobe Premiere Pro for the grade.

Paul: Josh was DP and director, so on the day he just shot the best shots he could using natural light — we didn’t have lights or a crew, not even a reflector. He just moved the actors round in the available light. Luckily, we had a brilliant white wall just a few feet away from the window and a great big Venice Beach sun, which flooded the room with light. The white walls bounced light everywhere.

The colorist gave Josh a page of notes on how he envisioned the color grade — different palettes for each character, how he’d go for the dominant character when it was a two shot and change the color mood from beginning to end as the character arc/resolution changed and it went from heist to relationship movie.

What about the audio?
Paul: I insisted Josh hire out a professional Røde microphone and a TASCAM sound box from his university. This actually saved the shoot, as we didn’t have a sound person on the boom and consequently the sound box wasn’t turned up… and we also swiveled the microphone rather than moving it between actors, so one voice had a reverb on it while the other didn’t.

The sound was unusable (too low), but since the gear was so good, sound designer Matt Snowden was able to boost it in post to broadcast standard without distortion. Sadly, he couldn’t do anything about the reverb.
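The rescue Paul describes — raising a clean but under-recorded track back to a usable level — is essentially makeup gain applied while watching peak headroom. A rough sketch of the arithmetic, with hypothetical levels rather than the actual session settings:

```python
import math

def boost_db(samples, gain_db):
    """Apply makeup gain in dB to normalized float samples (-1.0 to 1.0)."""
    gain = 10 ** (gain_db / 20.0)  # convert dB to a linear amplitude factor
    return [s * gain for s in samples]

def headroom_db(samples):
    """How many dB of gain fit before the loudest peak hits 0 dBFS (clipping)."""
    peak = max(abs(s) for s in samples)
    return -20.0 * math.log10(peak)

# A clean but quiet recording peaking around -26 dBFS (0.05 linear)
quiet = [0.0, 0.05, -0.03, 0.02, -0.05]

print(round(headroom_db(quiet), 1))           # 26.0 dB of headroom available
louder = boost_db(quiet, 20.0)                # +20 dB still leaves ~6 dB of margin
print(round(max(abs(s) for s in louder), 2))  # 0.5 — well clear of clipping
```

Because the recording was clean (good gear, low noise floor), gain like this lifts the level without distortion; the reverb, being baked into the signal itself, can’t be fixed the same way.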

Can you comment on the score?
Paul: A BAFTA mate of mine, composer David Poore, offered to do the music for free. It was wonderful, and he was so professional. Dave already had a really good hold on the project as we’d had long chats, but he took Josh’s notes and we ended up with a truly beautiful score.

Was the script followed to the letter? Any improvisations?
Josh: No, not quite. Paul and Jon were great, and certainly added a lot to the dialogue through conversations before and during the shoot. Jon, especially, was very helpful in Americanizing the dialogue for his character, Jackson.

Paul: Josh spent a long time on the script and worked on every word. We had script meetings at various LA cafes and table reads with me and Jon. On the shoot day, it was as written.

Josh ended up cutting one of my lines in the edit as it wasn’t entirely necessary, and the reverb was bad. It tightened it up. And our original ending had our hands touching on the bottle, but it didn’t look right so Josh went with the executive producer’s idea of going to the cabin.

What are the benefits of creating something like 25 Million Reasons to Smile?
Paul: Wow! The benefits are amazing… as an actor I never realized the full scope of the process. The filming is actually a tiny proportion of the entire process. It gave me the whole picture (I’m now in awe of how hard producers work, and that’s only after playing at it!) and how much of a team effort it is — how the direction, edit, sound design and color grade can rewrite the film. I can now appreciate how the actor doesn’t see the bigger picture and has no control over any of these elements. They are (rightly) fully immersed in their character, which is exactly what the actor’s role is: to turn up and do the lines.

I got a beautiful paid short film out of it, current footage for my show reel and a fantastic TV job — I was cast by Charles Sturridge in the new J.K. Rowling BBC1/HBO series Cormoran Strike as the dad of the female lead Robin (Holliday Grainger). I’d had a few years out bringing Josh up and getting him into film school. I relaunched when he went to university, but my agent said I needed a current credit as the career gap was causing casting directors problems. So I decided to take control and make my own footage — but it had to stand up on my show reel against clips like The Full Monty. If it wasn’t going to be broadcast-standard technically, then it had to have something in the script, and my acting (and my fellow actor had to be good) had to show that I could still do the job.

Josh met a producer in LA who’s given him runner work over here in England, and a senior producer with an international film company saw this and has given him an introduction to their people in Manchester. He also got a chance to write and direct a non-student short using industry professionals, which in the “real” world he might not get for years. And it came with real money and real consequences.

Josh, what did you learn from this experience from a filmmaker’s point of view?
More hands on deck is never a bad thing! It’s great having a tight-knit cast and crew, but the shoot would definitely have benefited from more people to help with lighting and sound, and the whole process would have run more smoothly.

Any surprises pop up? Any challenges?
Josh: The shoot actually ran very smoothly. The one challenge we had to face was time. Every shot took longer than expected, and we nearly ran out of time but got everything we needed in the end. It helped having such professional and patient actors.

Paul: I was surprised how well Josh (at 20 years old and at the start of film school) directed two professional middle-aged actors. Especially as one was his dad… and I was surprised by how filmic his script was.

Any tips for those looking to do something similar?
Josh: Once you have a story, find some good actors and just do it. As I said before, keep it simple and try to use character not plot to create drama.

Paul: Yes, my big tip would be to get the script right. Spend time and money on that and don’t film it till it’s ready. Get professional help/mentoring if you can. Secondly, use professional actors — just ask! You’d be surprised how many actors will take a project if the script and director are good. Of course, you need to pay them (not the full rate, but something).

Finally, don’t worry too much about the capture — as a producer said to me, “If I like a project I can buy in talent behind the camera. In a short I’m looking for a director’s voice and talent.”

Mozart in the Jungle

The colorful dimensions of Amazon’s Mozart in the Jungle

By Randi Altman

How do you describe Amazon’s Mozart in the Jungle? Well, in its most basic form it’s a comedy about the changing of the guard — or maestro — at the New York Philharmonic, and the musicians that make up that orchestra. When you dig deeper you get a behind-the-scenes look at the back-biting and crazy that goes on in the lives and heads of these gifted artists.

Timothy Vincent


Based on the novel Mozart in the Jungle: Sex, Drugs, and Classical Music by oboist Blair Tindall, the series — which won the Golden Globe last year and was nominated this year — has shot in a number of locations over its three seasons, including Mexico and Italy.

Since its inception, Mozart in the Jungle has been finishing in 4K and streaming in both SDR and HDR. We recently reached out to Technicolor senior color timer Timothy Vincent, who has been on the show since the pilot, to find out more about the show’s color workflow.

Did Technicolor have to gear up infrastructure-wise for the show’s HDR workflow?
We were doing UHD 4K already and were just getting our HDR workflows worked out.

What is the workflow from offline to online to color?
The dailies are done in New York based on the Alexa K1S1 709 LUT. (Technicolor On-Location Services handled dailies out of Italy, and Technicolor PostWorks in New York.) After the offline and online, I get the offline reference made with the dailies so I can refer to it if I have a question about what was intended.

If someone was unsure about watching in HDR versus SDR, what would you tell them?
The emotional feel of both the SDR and the HDR is the same. That is always the goal in the HDR pass for Mozart. One of the experiences that is enhanced in the HDR is the depth of field and the three-dimensional quality you gain in the image. This really plays nicely with the feel in the landscapes of Italy, the stage performances where you feel more like you are in the audience, and the long streets of New York just to name a few.

When I’m grading the HDR version, I’m able to retain more highlight detail than I was in the SDR pass. For someone who has not yet been able to experience HDR, I would actually recommend that they watch an episode of the show in SDR first and then in HDR so they can see the difference between them. At that point they can choose what kind of viewing experience they want. I think that Mozart looks fantastic in both versions.

What about the “look” of the show? What kind of direction were you given?
We established the look of the show based on conversations and collaboration in my bay. It has always been a filmic look with soft blacks and yellow warm tones as the main palette for the show. Then we added in a fearlessness to take the story in and out of strong shadows. We shape the look of the show to guide the viewers to exactly the story that is being told and the emotions that we want them to feel. Color has always been used as one of the storytelling tools on the show. There is a realistic beauty to the show.

What was your creative partnership like with the show’s cinematographer, Tobias Datum?
I look forward to each episode and discovering what Tobias has given me as a palette and mood for each scene. For Season 3, we picked up where we left off at the end of Season 2. We had established the look and feel of the show and only had to account for a large portion of Season 3 being shot in Italy, making sure to capture the different quality of light and the warmth and beauty of that country. We did this by playing with natural warm skin tones and the contrast of light and shadow he was creating for the different moods and locations. The same can be said for the two episodes in Mexico in Season 2. I now know what Tobias likes and can make decisions I’m confident he will like.

From a director’s and cinematographer’s point of view, what kind of choices does HDR open up creatively?
It depends on if they want to maintain the same feel of the SDR or if they want to create a new feel. If they choose to go in a different direction, they can accentuate the contrast and color more with HDR. You can keep more low-light detail while being dark, and you can really create a separate feel to different parts of the show… like a dream sequence or something like that.

Any workflow tricks/tips/trouble spots within the workflow or is it a well-oiled machine at this point?
I have actually changed the way I grade my shows based on the evolution of this show. My end results are the same, but I’ve learned how to build grades that translate to HDR more easily and consistently.

Do you have a color assistant?
I have a couple of assistants that I work with who help me with prepping the show, getting proxies generated, color tracing and some color support.

What tools do you use — monitor, software, computer, scope, etc.?
I am working on Autodesk Lustre 2017 on an HP Z840, while monitoring on both a Panasonic CZ950 and a Sony X300. I work on Omnitek scopes off the downconverter to 2K. The show is shot on both Alexa XT and Alexa Mini, framing for 16×9. All finishing is done in 4K UHD for both SDR and HDR.

Anything you would like to add?
I would only say that everyone should be open to experiencing both SDR and HDR and giving themselves that opportunity to choose which they want to watch and when.

Digging Deeper: Fraunhofer’s Dr. Siegfried Foessel

By Randi Altman

If you’ve been to NAB, IBC, AES or regional conferences involving media and entertainment technology, you have likely seen Fraunhofer exhibiting or heard one of their representatives speaking on a panel.

Fraunhofer first showed up on my radar years ago at an AES show in New York City when they were touting the new MP3 format, which they created. From that moment on, I’ve made it a point to keep up on what Fraunhofer has been doing in other areas of the industry, but for some, what Fraunhofer is and does is a mystery.

We decided to help dispel that mystery by throwing some questions at Dr. Siegfried Foessel of the Fraunhofer IIS Department of Moving Picture Technologies.

Can you describe Fraunhofer?
Fraunhofer-Gesellschaft is an organization for applied research that has 67 institutes and research units at locations throughout Germany. At present, there are around 24,000 people. The majority are qualified scientists and engineers who work with an annual research budget of more than 2.1 billion euros.

More than 70 percent of the Fraunhofer-Gesellschaft’s research revenue is derived from contracts with industry and from publicly financed research projects. Almost 30 percent is contributed by the German federal and Länder governments in the form of base funding. This enables the institutes to work ahead on solutions to problems that will become relevant to industry and society within the next five to ten years.

How did it all begin? Is it a think tank of sorts? Tell us about Fraunhofer’s business model.
The Fraunhofer-Gesellschaft was founded in 1949 and is a recognized non-profit organization that takes its name from Joseph von Fraunhofer (1787–1826), the illustrious Munich researcher, inventor and entrepreneur. Its focus was clearly defined to do application-oriented research and to develop future-relevant key technologies. Through their research and development work, the Fraunhofer Institutes help to reinforce the competitive strength of the economy. They do so by promoting innovation, strengthening the technological base, improving the acceptance of new technologies and helping to train the urgently needed future generation of scientists and engineers.

What is Fraunhofer IIS?
The Fraunhofer Institute for Integrated Circuits IIS is an application-oriented research institution for microelectronic and IT system solutions and services. With the creation of MP3 and the co-development of AAC, Fraunhofer IIS has reached worldwide recognition. In close cooperation with partners and clients, the institute provides research and development services in the following areas: audio and multimedia, imaging systems, energy management, IC design and design automation, communication systems, positioning, medical technology, sensor systems, safety and security technology, supply chain management and non-destructive testing. About 880 employees conduct contract research for industry, the service sector and public authorities.

Fraunhofer IIS partners with companies as well as public institutions?
We develop, implement and optimize processes, products and equipment until they are ready for use in the market. Flexible interlinking of expertise and capacities enables us to meet extremely broad project requirements and complex system solutions. We do contracted research for companies of all sizes. We license our technologies and developments. We work together with partners in publicly funded research projects or carry out commercial and technical feasibility studies.

IMF transcoding.

What is the focus of Fraunhofer IIS’ Department of Moving Picture Technologies?
For more than 15 years, our Department Moving Picture Technologies has driven developments for digital cinema and broadcast solutions focused on imaging systems, post production tools, formats and workflow solutions. The Department Moving Picture Technologies was chosen by the Digital Cinema Initiatives (DCI) to develop and implement the first certification test plan for digital cinema as the main reference for all systems in this area. As a leader in the ISO standardization committee for digital cinema within JPEG, my team and I are driving standardization for JPEG 2000 and formats such as DCP and the Interoperable Master Format (IMF).

We also are working together with SMPTE and other standardization bodies worldwide. Renowned developments for the department that are highly respected are the Arri D20/D21 camera, the easyDCP post production suite for DCP and IMF creation and playback, as well as the latest developments and results of multi-camera/light-field technology.

What are some of the things you are working on and how does that work find its way to post houses and post pros?
The engineers and scientists of the Department Moving Picture Technologies are working on tools and workflow solutions for new media file formats like IMF to enable smooth integration and use in existing workflows and to optimize performance and quality. As an example, we always enhance and augment the features available through the post production easyDCP suite. The team discusses and collaborates with customers, industry partners and professionals in the post production and digital cinema industries to identify the “most wanted and needed” requirements.

easyDCP

We preview new technologies and present developments that meet these requirements or facilitate process steps. Examples of this include the acceleration process of IMF or DCP creation by using an approach based on a hybrid JPEG 2000 functionality or introducing a media asset management tool for DCP/IMF or dailies. We present our ideas, developments and results at exhibitions such as NAB, the HPA Tech Retreat and IBC, as well as SMPTE conferences and plugfests all around the world.

Together with distribution partners who sell products like easyDCP, Fraunhofer IIS licenses these developments and puts them on the market. The team always looks for customer feedback on its developments, supported by a very active community.

Who are some of your current customers and partners?
We have more than 1,500 post houses as customers, managed by our licensing partner easyDCP GmbH. Nearly all of the Hollywood studios and post houses on all continents are our customers. We also work together with integration partners like Blackmagic and Quantel. Most of the names of our partners in the contract research area are confidential, but to name some partners from the past and present: Arri, DCI, IHSE GmbH.

Which technologies are available for license now?
• Tools for creation and playback of DCPs and IMPs, as standalone tools and for integration into third party tools
• Tools for quality control of DCPs and IMPs
• Tools for media asset management of DCPs and IMPs
• Plug-ins for light-field-processing and depth map generation
• Codecs for mezzanine compression of images

Lightfield tech

What are you working on now that people should know about?
We are developing new tools and plug-ins for bringing lightfield technology to the movie industry to enhance creative opportunities. This includes system aspects in combination with existing post tools. We are chairing and actively participating in ad hoc groups for lightfield-related standardization efforts in the JPEG/MPEG Joint Adhoc Group for digital representations of light/sound fields for immersive media applications (see https://jpeg.org/items/20160603_pleno_report.html).

We are also working together with DIN on a proposal to standardize digital long-term archive formats for movies. Basic work is done with German archives and service providers at DIN NVBF3 and together with CST from France at SMPTE with IMF App#4. Furthermore, we are developing mezzanine image compression formats for the transmission of video over IP in professional broadcast environments and GPU accelerated tools for creation and playback of JPEG 2000 code streams.

How do you pick what you will work on?
The employees at Fraunhofer IIS are very creative people. Through observation of the market, research in joint projects and cooperation with universities, ideas are created and evaluated. Employees and our student scientists discuss with industry partners what might be possible in the near future and which ideas have the greatest potential. Selected ideas are then evaluated with respect to business opportunities and transformed into internal projects or proposed as research projects. Our employees are tasked with working much like our eponym Joseph von Fraunhofer — as researchers, inventors and entrepreneurs, all at the same time.

What other “hats” do you wear in the industry?
As mentioned earlier, Fraunhofer is involved in standardization bodies and industry associations. For example, I chair the Systems Group within ISO SC29WG1 (JPEG) and the post production group within ISO TC36 (Cinematography). I am also a SMPTE governor (EMEA and Central and South America region) and a SMPTE fellow, along with supporting SMPTE conferences as a program committee member.

Currently, I am president of the German Society Fernseh- und Kinotechnische Gesellschaft (FKTG) and am involved in associations like EDCF and ISDCF. Additionally, I’m a speaker for the German VDE/ITG society in the area of media technology. Last, but not least, I chair the German standardization body at DIN for NVBF3 and consult the German federal film board in questions related to new technical challenges in the film industry.

Digging Deep: Sony intros the PXW-FS7 II camera

By Daniel Rodriguez

At a press event in New York City a couple of weeks ago, Sony unveiled the long-rumored follow-up to its extremely successful PXW-FS7 — the Sony PXW-FS7 II. With the new FS7 II, Sony dives deeper into the mid-level cinematographer/videographer market that it firmly established with the FS100, FS700, FS7 and the more recent Sony FS5.

Knowing it is competing with similarly priced cameras from other brands, Sony has built upon a line that fulfills most technical and ergonomic needs. Sony prides itself on listening to videographers and cinematographers who make requests and suggestions from first-hand field experience, and it’s clear that the company has continued to listen.

New Features
The Sony FS7 II might be the first camera where you can feel Sony’s deep care and consideration for those who have used the FS7 extensively. Although the body and overall design might seem nearly identical to the original FS7, the FS7 II makes subtle but important ergonomic improvements to the camera’s design.

Improving on its E-mount design, Sony has introduced a lever-locking mechanism much like the one on a PL mount. Unlike a PL mount, the new lever lock rotates counter-clockwise, but it provides a massive amount of support, especially since a secondary latch prevents you from accidentally turning the lever back. The mount has been tested to support the same weight as traditional PL mounts, and larger cinema zooms can be easily mounted without the need for a lens support. Due to its short flange distance, Sony’s E-mount has become very popular with users for adapting almost all stills and cinema lenses to Sony cameras, and with this added support there is reduced risk and concern when adding lens adapters.

The camera body’s corners and edges have all been rounded out, allowing users to have a much more comfortable control of the camera. This is especially helpful for handheld use when the camera might be pressed up against someone’s body or under their arm. Considering things like operating below the underarm and at the waist, Sony has redesigned the arm grip, and most of the body, to be tool-less. The arm grip no longer requires tools to be adjusted and now uses two knobs to allow easy adjustments. This saves much needed time and maximizes comfort.

The viewfinder can now be extended further in either direction on a longer rod, which benefits left-eye-dominant operators. The microphone holder is no longer permanently attached to the other side of the rod, so it can be moved to the left side of the camera to allow viewing the monitor on the right, or removed altogether. Sony has also made the viewfinder collapsible for those who’d rather just view the monitor. The viewfinder rod is now square-shaped so the viewfinder stays horizontally aligned with the camera’s balance. This change stemmed from operators believing their framing was crooked because of how the viewfinder was aligned, even when the camera was perfectly balanced.

Sony really kept the smaller suggestions in mind by making the memory card slots protrude more than on the original FS7. This allows loaders to access the cards more easily when wearing something that inhibits their grip, like gloves. The camera is also compatible with the newer G-series XQD cards, which boast an impressive 440MB/s write and 400MB/s read speed, allowing FS7 II users to quickly offload their footage in the field without the worry of running out of useable memory cards.

Straight out of the box, the FS7 II can record internal 4K DCI (4096×2160) without the need for upgrades or HDMI output. This 4K can be captured in nearly every codec, whether XAVC, ProRes 422 HQ or RAW, with the option of HyperGammas, S-Log3 or basic Rec. 709. RAW output will be available on the camera but, like its siblings, it will still require an external recorder. The FS7 II will also be capable of recording Sony’s version of compressed RAW, X-OCN, which allows 16-bit 3:1 recording to an external recorder. Custom 3D LUTs can still be uploaded into the camera, allowing more of a cinematographer’s touch than factory presets.

Electronic Internal Variable ND
The most exciting feature of the Sony FS7 II — and the one that really separates this camera from the FS7 — is the introduction of an Electronic Internal Variable ND. Originally introduced in the FS5, the Electronic Variable ND gains new options in the FS7 II that make this a very promising camera and a clear improvement over its older sibling.

Oftentimes, similarly priced cameras either lack internal NDs entirely or offer only a few fixed levels, which can be too much or not enough for precise exposure control. The term “variable ND” is also approached with caution by videographers/cinematographers, who worry about color shifts and infrared pollution, but Sony has addressed these concerns by placing an IR cut filter over the sensor. This way, no level of ND will introduce color shifts or infrared pollution. It’s also easy to break the bank buying IR NDs to prevent infrared pollution, and constantly swapping ND filters costs precious time, which can push you to open or close your F-stop to compensate instead.

Compromising your F-stop is often an unfortunate reality when shooting — indoors or outdoors — and it’s extremely exciting to have a feature that lets you adjust exposure seamlessly without worrying about having the right ND level or changing your F-stop to compensate. It’s also exciting to know that you can adjust the ND without watching a literal filter rotate in front of your image. The Electronic Variable ND can be adjusted from the grip as well, so you can essentially ride exposure without touching your F-stop and risking an inconsistent depth of field.

As with most modern lenses that lack manual exposure control, riding the iris is simply out of the question due to mechanical “clicked” apertures and the very obvious exposure steps when changing the F-stop on one of these lenses. Letting the Variable ND do all the work eliminates this problem and leaves your F-stop untouched. In manual mode, the Electronic Variable ND transitions smoothly from 0.6ND to 2.1ND in one-third increments.
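For reference, ND optical density relates to light loss by powers of ten: a filter of density d passes 10^-d of the light, which works out to d / log10(2) stops. The sketch below applies that standard photographic math to the FS7 II’s quoted 0.6ND–2.1ND range; the stop figures come from the formula, not from Sony’s documentation:

```python
import math

def nd_stops(density: float) -> float:
    """Convert ND optical density to stops of light reduction."""
    return density / math.log10(2)

def nd_transmission(density: float) -> float:
    """Fraction of light an ND filter of the given density passes."""
    return 10 ** -density

# The FS7 II's quoted manual range: 0.6ND (about 2 stops)
# up to 2.1ND (about 7 stops).
for d in (0.6, 2.1):
    print(f"ND {d}: ~{nd_stops(d):.1f} stops, "
          f"passes {nd_transmission(d) * 100:.2f}% of light")
```

So the electronic ND alone spans roughly five stops of adjustment on top of its minimum density.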

Recording in BT.2020
Another exciting addition to the FS7 II is the ability to record in BT.2020 (more commonly known as Rec. 2020) internally in UHD. While this might seem excessive to some, considering this camera is still a step below its siblings, the F55 and F65, for productions where HDR deliverables are required, the option to shoot Rec. 2020 futureproofs this camera for years to come, especially as Rec. 2020 monitoring and projection become the norm. Companies like Netflix usually request an HDR deliverable for their original programs, so despite the FS7 II not being on the same level as the F55/F65, it shows it can deliver the same level of quality.

While the camera can’t boast a global shutter like its bigger sibling, the F55, the FS7 II shows a very capable rolling shutter with little to no skewing. For a camera that leans slightly toward the commercial/videography end of cinematography, retaining the native ISO of 2000 and the full 14 stops is preferable to a global shutter, which would cost much-needed dynamic range.

Forgoing a global shutter keeps the FS7 II’s native ISO at 2000, the same as the previous FS7. That puts the FS7 II above many similarly priced video cameras, whose native ISOs usually sit at 800. While the FS7 II may not be a low-light beast like the Sony a7S/a7S II, the ability to shoot internal 4K DCI and higher frame rates and to record 10-bit 422HQ (and even RAW) greatly outweighs this loss in exposure.

The SELP18110G 18-110 F4.0 Servo Zoom
Alongside the FS7 II, Sony has announced a new zoom lens. Building on what it introduced with the Sony FE PZ 28-135mm F4 G, the 18-110mm F4 is a very powerful lens optically and the perfect companion to the FS7 II. The lens is sharp to the edges; doesn’t drop focus while zooming; has no breathing whatsoever; offers quiet internal zoom, iris and focus control; includes internal stabilization; and can perform a 90-second zoom crawl from end to end. The lens covers Super 35mm and APS-C-sized sensors and retains a constant F4 throughout the focal range.

Its multi-coating allows for high contrast and low flaring, with circular bokeh for truly cinematic images. Despite its size, the lens weighs only 2.4 pounds, a weight easily supported by the FS7 II’s lever-locking E-mount. Though it isn’t an extremely fast lens, paired with a camera like the FS7 II, with its native ISO of 2000, the 18-110mm F4 should prove very usable in the field as well as in narrative work.

Final Impressions
This camera is very specifically designed for camerapersons who either have a very small camera team or shoot as individuals. Many of the new features, big and small, are great additions for making any project go down smoothly and nearly effortlessly. While its bigger siblings the F55 and F65 will still dominate major motion picture production and commercial work, this camera has all its corners covered to fill the freelance videographer/cinematographer’s needs.

Indie films, short films, and smaller commercial and videography work will no doubt find this camera hugely beneficial and about as headache-free as it gets. Speed and efficiency are often the biggest advantages on smaller productions, and this camera easily handles and facilitates the most overlooked aspects of video production.

The specs are hard to pass up when discussing the Sony FS7 II. A camera that shoots internal 4K DCI with high-frame-rate options at 10-bit 422HQ, with 14 stops of dynamic range and the choice of Slog3 or one of the many HyperGammas for faster deliverables, should immediately excite any videographer/cinematographer. Many cinematographers making feature or short films have grown accustomed to shooting RAW, and unless they rent or buy the external recorder, they will be unable to do so with this camera. But given the high bit rates of the internal codecs, it’s difficult to complain: despite a few features being lost, the internal video retains a massive amount of information.

This camera truly delivers on nearly every ergonomic and technical need, and by anticipating future display formats with Rec. 2020, Sony shows it is very conscious of future-proofing. The physical improvements show that Sony is open and eager to hear suggestions and first-hand experiences from FS7 users, and no doubt suggestions on the FS7 II will be taken into account as well.

The Electronic Variable ND is easily the best feature of the camera: so much time in the field will be saved by not swapping NDs, and the ability to shift through increments between the standard ND levels will be hugely beneficial for nailing exposure. Being able to adjust exposure mid-shot without filters coming between you and the image will be a great feature for those shooting outdoors or working events with uneven lighting. Speed cannot be emphasized enough, and such a massively advantageous feature keeps cutting time from whatever production you’re working on.

Pairing the camera with the new 18-110mm F4 makes a great package for location shooting, since you’ll be covered for nearly every focal length with a sharp lens offering servo zoom, internal stabilization and low flaring. The lens might be off-putting to some narrative filmmakers, since it only opens to F4 and isn’t fast by other lens standards, but given its quality and attention to optical performance, it deserves serious consideration alongside other lenses that aren’t quite cinema glass but have been used heavily in the narrative world. With a native ISO of 2000, one should be able to shoot comfortably wide open or stopped down with proper lighting, and for films shot mostly in natural light this lens is an especially strong candidate.

Oftentimes when choosing a camera, the biggest question isn’t what the camera has but what it will cost. Since Sony isn’t discontinuing the original FS7, the FS7 II will be more expensive, and once you factor in BP-U60 batteries and XQD cards, the price only climbs. Despite this, one must always consider the price of storage and power when upgrading a camera system. More powerful cameras will require faster cards and bigger power supplies, so these costs must be seen as investments.

While XQD cards might seem pricey to some, especially those more familiar with buying and using SD cards, I consider jumping into the XQD world a necessary step in developing your video capabilities. Faster formats like XQD and CFast are becoming the norm in higher-end digital cinema, especially when a camera like the FS7 II is being seriously considered.

Compromise is expected at any level of production, be it technical, logistical or artistic. After getting an impression of what the FS7 II can provide and facilitate in any production scenario, I feel this is one of the few cameras that takes the feeling of compromise out of what you, as a user, can deliver.

The FS7 II will be available in January 2017 for an estimated street price of $10,000 (body only) and $13,000 for the camcorder with 18-110mm power zoom lens kit.


Daniel Rodriguez is a cinematographer and photographer living in New York City. Check out his work here. He took many of the pictures featured in this article.

Capturing the Olympic spirit for Coke

By Randi Altman

There is nothing like the feeling you get from a great achievement, or from spending time with people who are special to you. This is the premise behind Coke’s Gold Feelings commercial out of agency David. The spot, which aired on broadcast television and via social media and exists in 60-, 30- and 15-second iterations, features Olympic athletes at the moment of winning. Along with the celebratory footage, there are graphics featuring quotes about winning and an update of the iconic Coke ribbon.

The agency brought in Lost Planet, Black Hole’s parent company, for graphics, editing and final finishing: Lost Planet provided the editing, while Black Hole handled graphics and finishing.

Tim Vierling

Still feeling the Olympic spirit, we reached out to Black Hole producer Tim Vierling to find out more.

How early did you get involved in the project?
Black Hole became involved early on, during the offline edit, when initially conceptualizing how to integrate graphics. We worked with the agency creatives to lay out the supers and helped determine which approach would be best.

How far along was it in terms of the graphics at that point?
Whereas the agency had established the print portion of the creative beforehand, much of the animation was uncharted territory. For the end tag, Black Hole animated various iterations of the Coke ribbon wiping onto screen and carefully considered how this would interact with each subject in the end shots.

We then had to update the existing disc animation to complement the new and improved/iconic Coke ribbon. The titles/supers that appear throughout the spot were under constant scrutiny — from tracking to kerning to font type. We held to a rule that type could never cross over an athlete’s face, which led to some clever thinking. Black Hole’s job was to locate the strongest moments to highlight and rotoscope various body parts of the athletes, having them move over and behind the titles throughout the spot.

What was the most challenging part of the project?
Olympics projects tend to have a lot of moving parts, and there were some challenges caused by licensing issues, forcing us to adapt to an unusually high number of editorial changes. This, in turn, resulted in constant rotoscoping. Often a new shot didn’t work well with the previous supers, so they were changing as frequently as the edit. This forced us to push the schedule, but in the end we delivered something we’re really proud of.

What tools did you use?
Adobe After Effects and Photoshop, Imagineer Mocha and Autodesk Flame were all used for finishing and graphics.

A question for Lost Planet’s assistant editor Steven san Miguel: What direction were you given on the edit?
The spots were originally boarded with supers on solid backgrounds, but Lost Planet editors Kimmy Dube and Max Koepke knew this wouldn’t really work for a 60-second. It was just too much to read and not enough footage. Max was the first one to suggest a level of interactivity between the footage and the type, so from the very beginning we were working with Black Hole to lay out the type and roto the footage. This started before the agency even sat down with us. And since the copy and the footage were constantly changing there had to be really close communication between Lost Planet and Black Hole.

Early on the agency provided YouTube links for footage they used in their pitch video. We scoured the YouTube Olympic channel for more footage, and as the spot got closer to being final, we would send the clips to the IOC (International Olympic Committee) and they would provide us with the high-res material.

Check out the spot!

Digging Deeper: Dolby Vision at NAB 2016

By Jonathan Abrams

Dolby, founded over 50 years ago as an audio company, is elevating the experience of watching movies and TV content through new technologies in audio and video, the latter of which is a relatively new area for their offerings. This is being done with Dolby AC-4 and Dolby Atmos for audio, and Dolby Vision for video. You can read about Dolby AC-4 and Dolby Atmos here. In this post, the focus will be on Dolby Vision.

First, let’s consider quantization. All digital video signals are encoded as bits. When digitizing analog video, the analog-to-digital conversion process uses a quantizer, which determines which bits are active, or on (value = 1), and which are inactive, or off (value = 0). As the bit depth used to represent a finite range increases, so does the precision of each possible value, which directly reduces quantization error. The number of possible values is 2^X, where X is the number of bits available, so a 10-bit signal has four times as many possible encoded values as an 8-bit signal. This difference in bit depth does not equate to dynamic range: it is the same range of values, with a degree of quantization accuracy that increases with the number of bits used.
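The arithmetic above is easy to verify: the count of code values doubles with each added bit, so going from 8 to 10 bits quadruples the number of levels spread across the same signal range. A minimal illustration:

```python
def code_values(bits: int) -> int:
    """Number of distinct levels a signal of the given bit depth can encode."""
    return 2 ** bits

def step_size(bits: int, signal_range: float = 1.0) -> float:
    """Quantization step over a fixed range; a smaller step means less error."""
    return signal_range / (code_values(bits) - 1)

print(code_values(8))                      # 256 levels
print(code_values(10))                     # 1024 levels
print(code_values(10) // code_values(8))   # 4x the levels over the same range
```

Same range either way; only the fineness of the steps changes.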

Now, why is quantization relevant to Dolby Vision? In 2008, Dolby began work on a system specifically for this application, since standardized as SMPTE ST 2084, which defines an electro-optical transfer function (EOTF) based on a perceptual quantizer (PQ). It builds on work from the early 1990s by Peter G. J. Barten for medical imaging applications. The resulting PQ process allows video to be encoded and displayed across a 10,000-nit range of brightness using 12 bits instead of 14. This is possible because Dolby Vision exploits a characteristic of human vision: our eyes are less sensitive to changes in highlights than to changes in shadows.
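The ST 2084 curve itself is published; a sketch of its inverse EOTF (absolute luminance in nits mapped to a normalized signal value) using the constants from the standard looks like this:

```python
# Sketch of the SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance
# in nits -> normalized 0..1 signal, using the standard's constants.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(nits: float) -> float:
    """Map 0..10,000 nits to a 0..1 PQ signal value."""
    y = (nits / 10000) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

# Perceptual spacing: far more code values are spent on shadows than
# on highlights, which is why 12 bits can span the full 10,000 nits.
for nits in (0.1, 1, 100, 1000, 10000):
    print(f"{nits:>7} nits -> PQ signal {pq_encode(nits):.3f}")
```

Note how roughly half the signal range is used up by the time luminance reaches 100 nits, the old SDR ceiling; that is the shadow-weighted allocation described above.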

Previous display systems, referred to as SDR, or standard dynamic range, are usually 8-bit. Even at 10 bits, SD and HD video is specified for display at a maximum output of 100 nits using a gamma curve. Dolby Vision’s nit range is 100 times greater than what we have typically seen from a video display.

This brings us to the issue of backwards compatibility. What will be seen by those with SDR displays when they receive a Dolby Vision signal? Dolby is working on a system that will allow broadcasters to derive an SDR signal in their plant prior to transmission. At my NAB demo, there was a Grass Valley camera whose output image was shown on three displays. One display was PQ (Dolby Vision), the second display was SDR, and the third display was software-derived SDR from PQ. There was a perceptible improvement for the software-derived SDR image when compared to the SDR image. As for the HDR, I could definitely see details in the darker regions on their HDR display that were just dark areas on the SDR display. This software for deriving an SDR signal from PQ will eventually also make its way into some set-top boxes (STBs).

This backwards-compatible system works on the concept of layers. The base layer is SDR (based on Rec. 709), and the enhancement layer is HDR (Dolby Vision). This layered approach uses incrementally more bandwidth when compared to a signal that contains only SDR video.  For on-demand services, this dual-layer concept reduces the amount of storage required on cloud servers. Dolby Vision also offers a non-backwards compatible profile using a single-layer approach. In-band signaling over the HDMI connection between a display and the video source will be used to identify whether or not the TV you are using is capable of SDR, HDR10 or Dolby Vision.

Broadcasting live events in Dolby Vision is currently a challenge for reasons beyond HDTV’s inability to carry the different signal: there are still issues with adapting the Dolby Vision process for live broadcasting. Dolby is working on these issues, but it is not proposing an entirely new system for Dolby Vision at live events. Some signal paths will be replaced, though the infrastructure, or physical layer, will remain the same.

At my NAB demo, I saw a Dolby Vision clip of Mad Max: Fury Road on a Vizio R65 series display. The red and orange colors were unlike anything I have seen on an SDR display.

Nearly a decade of R&D at Dolby has been put into Dolby Vision. While Dolby Vision has some competition in the HDR war from Technicolor and Philips (Prime) and BBC and NHK (Hybrid Log Gamma or HLG), it does have an advantage in that there have been several TV models available from both LG and Vizio that are Dolby Vision compatible. If their continued investment in R&D for solving the issues related to live broadcast results in a solution that broadcasters can successfully implement, it may become the de-facto standard for HDR video production.

Jonathan S. Abrams is the Chief Technical Engineer at Nutmeg, a creative marketing, production and post resource.

Quick Chat: Cut + Run’s Jay Nelson on editing ‘The Bronze’

Who doesn’t like the story of someone overcoming a physical injury in sport and succeeding? (Think Curt Schilling’s bloody ankle during the 2004 World Series.) It’s how legends are made, but what happens after the applause has stopped and the reporters stop requesting interviews? Well, this is the premise of the new comedy The Bronze, directed by Bryan Buckley.

The film shines a light on gymnast Hope Ann Greggory (Melissa Rauch), whose performance on a ruptured Achilles at the Olympics clinched a bronze medal for the US team — but things went downhill from there. In the years since capturing the medal, she’s still living in her father’s basement, still wearing her Team USA gym suit and sporting some crazy bangs, a ponytail and a scrunchie. She spends most days at the mall enjoying her minor celebrity while being unpleasant and rude. All of that changes when she is asked to coach her hometown’s newest gymnastics prodigy.

Jay Nelson


Director Buckley called on Cut + Run’s Jay Nelson to edit The Bronze, from Sony Pictures Classics. We reached out to LA-based Nelson, who used Avid Media Composer on the film, to find out more about the workflow and how he collaborated with the director.

How did you get involved in the film?
I had been working with Bryan for a couple of years, and he had been developing the idea with Melissa and Winston Rauch for about six months when he asked me if I’d want to be involved. He gave me the script, but I didn’t really need to read it — if Bryan asks if you want to do a film with him, you do it. Then I read the script and I thought it was hilarious and bold.

What are some things you enjoy about working with Buckley?
He is always available for you, no matter how busy he is. Also, he covers exactly what I need to make an edit great, which makes my job a heck of a lot easier. We have a really amazing shorthand with each other. We have the same taste in comedy. But my favorite part about working with Bryan is that I am constantly learning from him, and not just about filmmaking… about life. And we laugh a hell of a lot.

Can you talk about any challenges during the editing process?
The approval process was very long. We had to answer to a lot of masters. I showed an edit a week after they finished shooting, then we spent six months revising that cut. The hardest part about the revisions was shaving the last four minutes out of the film. It was a very painful process getting it to 90 minutes.

How was it to premiere at Sundance?
Exhilarating. I’ve submitted four films to Sundance over the years, and none of them ever made the cut for one reason or another. It’s always a roll of the dice; there are so many factors that contribute to a film’s success in the review process. To finally be there after all these years and experience a first screening of the film with a massive crowd was truly incredible. And to see lines of people just trying to get on the waiting list was total vindication for all the work we put into it.

What’s the biggest lesson you learned?
The lessons I learned on this film weren’t so much about the process of making a film, but rather the process of bringing a film to market. Just making a great movie doesn’t mean a film is going to have success. It was almost 16 months from the time we premiered at Sundance to the final release of The Bronze, and a lot of stuff happened during that time. Relativity went out of business, then Sony Classics rescued the film, and then there were several delays pertaining to the release date.

I say it on every film I do — there are no guarantees. If you’re going to do a film, you gotta be willing to do it for the love of making a picture. Success is not imminent. In the end, I’m really proud of The Bronze, and proud we were able to share it with a wide audience. I think it’s going to have a great long life down the road. I think that sex scene alone will be kept in a hall of fame of some sort (laughs). That is the great thing about making movies: you have the opportunity to create something that can stay around after you’re gone.

If you could compete in the Olympics, your sport would be?
I always dreamed of winning a gold in hockey. It certainly wouldn’t be gymnastics. After sitting in an editing chair for as long as I have been, maybe I’d be better off pursuing curling or something like that.

——–
Check out The Bronze’s trailer.

Digging Deeper: NASA TV UHD executive producer Joel Marsden

It’s hard to deny the beauty of images of Earth captured from outer space. And NASA and partner Harmonic agree, boldly going where no one has gone before — creating NASA TV UHD, the first non-commercial consumer UHD channel in North America. Leveraging the resolution of ultra high definition, the channel gives viewers a front row seat to some gorgeous views captured from the International Space Station (ISS), other current NASA missions and remastered historical footage.

We recently reached out to Joel Marsden, executive producer of NASA TV UHD, to find out how this exciting new endeavor reached “liftoff.”

Joel Marsden


This was obviously a huge undertaking. How did you get started and how is the channel set up?
The new channel was launched with programming created from raw video footage and imagery supplied by NASA. Since that time, Harmonic has also shot and contributed 4K footage, including video of recent rocket launches. They provide the end-to-end UHD video delivery system and post production services while managing operations. It’s all hosted at a NASA facility managed by Encompass Digital Media in Atlanta, which is home to the agency’s satellite and NASA TV hubs.

Like the current NASA TV channels, and on the same transponder, NASA TV UHD is transmitted via the SES AMC-18C satellite, in the clear, with a North American footprint. The channel is delivered at 13.5Mbps, as compared with many of the UHD demo channels in the industry, which have required between 50 and 100 Mbps. NASA’s ability to minimize bandwidth use is based on a combination of encoding technology from Harmonic in conjunction with the next-generation H.265 HEVC compression algorithm.

Can you talk about how the footage was captured and how it got to you for post?
When the National Aeronautics and Space Act of 1958 was created, one of the legal requirements of NASA was to keep the public apprised of its work in the most efficient means possible and with the ultimate goal of bringing everyone on Earth as close as possible to being in space. Over the years, NASA has used imagery as the primary means of demonstration. The group in charge of these efforts, the NASA Imagery Experts Program, provides the public with a wide array of digital television, web video and still images based on the agency’s activities. Today, NASA’s broadcast offerings via NASA TV include an HD consumer channel, an HD media channel and an SD education channel.

In 2015, the agency introduced NASA TV UHD. Naturally, NASA archives provide remastered footage from historical missions and shots from NASA’s development and training processes, all of which are used for production of broadcast programming. In fact, before the agency launched NASA TV, it had already begun production of its own documentary series, based on footage collected during missions.

Just five or six years ago, NASA also began documenting major events in 4K resolution or higher. The agency has been using 6K Red Dragon digital cinema cameras for some time. NASA TV UHD video content is sourced from high-resolution images and video generated on the ISS, Hubble Space Telescope and other current NASA missions. The raw content files are then sent to Harmonic for post.

Can you walk us through the workflow?
Raw video files are mailed on physical discs or sent via FTP from a variety of NASA facilities to Harmonic’s post studio in San Jose and stored on the Harmonic MediaGrid system, which supports an edit-in-place workflow with Final Cut Pro and other third-party editing tools.

During the content processing phase, Harmonic uses Adobe After Effects to paint out dead pixels that result from the impact of cosmic radiation on camera sensors. They have built bad-pixel maps that they use in post production to remove the distracting white dots from the picture. The detail of UHD means that the footage also shows scratches on the windows of the ISS through which the camera is shooting, but these are left in for authenticity.
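Conceptually, a bad-pixel map is just a record of known-dead sensor locations whose values get replaced from their neighbors. The toy sketch below illustrates the idea only; it is not Harmonic's actual After Effects workflow, and the frame data is made up:

```python
# Toy illustration of a bad-pixel map: known-dead pixel locations are
# filled in from the median of their immediate neighbors. A sketch of
# the concept, not Harmonic's actual After Effects pipeline.
from statistics import median

def repair_frame(frame, bad_pixels):
    """frame: 2D list of pixel values; bad_pixels: set of (row, col)."""
    h, w = len(frame), len(frame[0])
    fixed = [row[:] for row in frame]
    for r, c in bad_pixels:
        neighbors = [
            frame[r + dr][c + dc]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
            and 0 <= r + dr < h and 0 <= c + dc < w
            and (r + dr, c + dc) not in bad_pixels
        ]
        fixed[r][c] = median(neighbors)
    return fixed

# A stuck white dot (255) in an otherwise dark patch gets filled in.
frame = [[10, 10, 10], [10, 255, 10], [10, 10, 10]]
print(repair_frame(frame, {(1, 1)}))
```

Because the map records fixed sensor locations, the same repair can be applied to every frame of a clip automatically.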

 

Blackmagic’s DaVinci Resolve is used to color grade the footage, and Maxon Cinema 4D Studio is used to create animations from images. Final Cut Pro X and Adobe Creative Suite are used to set the video to music and add text and graphics, along with the programming name, logo and branding.

Final programs are then transferred in HD back to the NASA teams for review, and in UHD to the Harmonic team in Atlanta to be loaded onto the Spectrum X for playout.

————

You can check out NASA TV’s offerings here.

‘Late Night with Seth Meyers’ associate director of post Dan Dome

This long-time editor talks about his path to late night television

By Randi Altman

You could say that editing runs through Dan Dome’s veins. Dome, associate director of post at Late Night with Seth Meyers, started in the business in 1994 when he took a job as a tape operator at National Video Industries (NVI) in New York.

Dome grew up around post — his dad, Art, was a linear videotape editor at NVI, working on Shop Rite spots and programming for a variety of other clients. Art had previously edited commercials for such artists as Kiss and was awarded a gold record for Kiss Alive 2. Dome loved to go in and watch his dad work. “I saw that there were a lot of machines and I knew he put videos together, but I was completely clueless to what the real process was.”

Dome’s first job at NVI was working in the centralized machine room as a tape operator. “I learned how to read a waveform monitor and a vectorscope, how to patch up Betacam SP, 1-inch and D2 machines to linear edit rooms, insert stages, graphics and audio suites. I also learned how to change the timings of the switcher through a proc amp — the nuts and bolts.”

This process proved to be invaluable. “Being able to have an understanding of signal flow on the technical side helped a ton in my career,” he explains. “A lot of post jobs are super technical. You’ve got to know the software and you’ve got to know the computers and machines; those were the fundamentals I learned in the machine room. I had to learn it all.”

While at NVI, nonlinear editing via the Avid Media Composer came on the scene. Dome took every advantage to learn this new way of working. After his 4pm-to-midnight shift as tape op, he would stay in the Avid rooms learning all he could about the software. He also befriended an editor who rented space at NVI. Christian Giornelli allowed Dome to shadow him on the Media Composer and, later, assist on different projects.

After a time, he became comfortable on the Avid. “I was working as a tape operator and editing at night at NVI, cutting promos and show reels. The first professional editing job I had was cutting the Neil Peart Test For Echo instructional drum video. Shortly after editing that video, I left NVI to pursue freelance work.”

A year into freelancing, Dome began doing work for the NBC promo department working on a little bit of everything, including cutting promos for various shows and sales tapes. During this time, freelance work also brought him to Broadway Video, MSNBC, MTV and VH1 — he was steadily building up a nice resume.

Let’s find out more about his path and how he landed at Late Night with Seth Meyers...

When you and I were first in contact, you were out in LA working on the Conan O’Brien show on TBS.
Yes. My early work with NBC’s promo department led me to NBC’s post team, and they started booking me on gigs for Dateline, The Today Show and, every now and then, as an editor on Late Night With Conan O’Brien. I developed a great relationship with the writers and the other editors on Conan. When the transition to HD happened, they chose to use a nonlinear system, Avid DS, which I learned. That helped me work on the first HD season of Saturday Night Live and through the 2008 season.

Late Night With Conan O’Brien had two primary show editors and another editor cutting remote packages. I started falling into that group more and more, and close to the time Conan was taking over for Jay Leno on The Tonight Show, two of the editors retired. Another editor and I ended up seeing Conan’s Late Night off the air.

During that time, I developed a great relationship with the associate director and mentioned that I wouldn’t mind moving out to California if they needed an editor. They did. I started working with Conan on The Tonight Show and continued on when he went over to do Conan on TBS. All in all, I was out in LA for almost five years.

How did you end up back in New York and working on Seth Meyers?
While I did enjoy California, I got a little homesick. I heard that Seth was taking over for Late Night from Jimmy Fallon and threw my hat in the ring for that show.

Let’s talk about Seth’s show. Did you help to set up the post workflow?
Late Night with Seth Meyers was the third show I’d launched as a lead editor — there was Conan’s Tonight Show, then Conan’s show on TBS and then Late Night with Seth. It was great to be at Late Night from the ground up, and now I hold the title of associate director/lead editor.

I worked with our engineers at NBC on things like folder structure on the SAN, what our workflow was going to be, what NLE we were going to use and what plug-ins we needed — we worked very closely on workflow and how we were going to deliver the show to air and the web.

A lot of those systems had already been in place, but there are always new technologies to consider. We went through the whole workflow and got it as streamlined as possible.

You’re using Adobe Premiere. Can you tell us why that was the right tool for this show?
Well, Final Cut 7 wasn’t going to grow any more, and I wasn’t convinced about Final Cut X. If we went with Avid, we’d need all the Avid-approved gear. We already knew we were going to be on Macs, and we would use AJA Kona cards with Premiere. We based this show’s post model off some of the other shows already using Premiere.

Do you use other parts of the Adobe suite?
The entire post team is using Creative Cloud. I edit, and I have an editor, Devon Schwab, and an assistant, Tony Dolezal. We’re primarily working in Premiere, Audition and Media Encoder. Our graphics artists are in Illustrator, Photoshop and After Effects. Every now and then we editors will dip into After Effects if we need to rotoscope something out, or we’ll use Mocha Pro for motion tracking when something in the show has to be censored or if we are making mattes for color grading.

You guys are live-to-tape — could you walk us through that?
We shoot the show live-to-tape between 6:30pm and 7:30pm. During the first act I’m watching the show as well as listening to the director, the production AD and the control room from my edit suite. If there are camera ISO fixes that need to be addressed, I’m hearing that from the director. If there are any issues with standards, like a word has to be bleeped or content has to be removed, I’m getting those notes from the producers and from the lawyers.

Tony, Dan and Devon.

As soon as the first act is done, my assistant stops ingest and then starts it back up again, so now I have act one: seven ISO cameras and one program record. The program record file has the show as it’s cut for the audience, so all the graphics are already baked into it, and it’s a 5.1 mix coming from our audio rooms. I bring those eight QuickTime files into Premiere through an app called EasyFind and start laying the show out.
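As a rough illustration of what lands on the SAN per act — seven ISO camera records plus the program record — here is a hypothetical Python sketch. The folder layout and file naming are invented for illustration; the interview doesn’t describe Dome’s actual setup.

```python
# Hypothetical sketch: collect one act's media, seven camera ISOs plus
# the program record. Paths and naming are invented for illustration.
from pathlib import Path

def act_clips(san_root: str, act: int) -> list:
    """Return the eight QuickTimes for one act: cameras 1-7 plus program."""
    root = Path(san_root)
    clips = [root / f"act{act:02d}_cam{cam}.mov" for cam in range(1, 8)]
    clips.append(root / f"act{act:02d}_program.mov")
    return clips

clips = act_clips("/san/latenight/2016-01-15", act=1)
print([c.name for c in clips])
```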

I try to finish all that needs to be done in the first act by the time the second act of the show is done being ingested. Once all six acts are done, we’ll have a good idea if the show’s over or under in time. If it’s over, we figure out what we are going to cut. If it’s a little bit under, let’s say 20 or 30 seconds, then we may decide to run credits that night.

So taping is done by 7:30?
Yes. At that point the director, the show producers, segment producers and writers come down. We start editing the entire show together for air. At that point I’ve already built the main project for the show to be edited. I then save a version of my project for my editor and my assistant editor and assign acts for them to edit.

How many do you cut personally?
I’ll usually end up doing three out of the six acts. My editor will do two interview acts, and my assistant will do one, usually the musical act. As the show is being put together for air, I keep track of the show time on an Excel spreadsheet. There’s a lot of communication among us during this time.

Once I do have the show close to time, I start sending individual acts to the Broadcast Operations Center at NBC, so they can start their QC process. That’s between 8:00pm and 8:15pm. As they are getting the six acts and they’ve begun to QC them, I release my timing sheet so they can confirm the show is on time. The show runs 41 minutes and 20 seconds, and they get it ready to go to what they call a “composite” after QC. They composite the show between 10:30pm and 11:30pm with all the commercials put in. I’m completely done for the night when the show hits the air at 12:35am… if there have been no emergencies.
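The timing math Dome tracks is simple enough to sketch: sum the six act durations and compare against the 41:20 target. The snippet below is an illustrative Python stand-in for the Excel sheet he describes; the act lengths are hypothetical.

```python
# Illustrative stand-in for the show-timing spreadsheet: sum six act
# durations and compare to the 41:20 target. Act lengths are hypothetical.

TARGET_SECONDS = 41 * 60 + 20  # the 41-minute, 20-second air target

def to_seconds(mmss: str) -> int:
    """Convert an 'MM:SS' duration string to whole seconds."""
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

acts = ["9:45", "7:10", "6:30", "6:05", "6:40", "4:40"]  # hypothetical
total = sum(to_seconds(a) for a in acts)
delta = total - TARGET_SECONDS

print(f"Total: {total // 60}:{total % 60:02d}")
if delta > 0:
    print(f"Over by {delta} seconds -- find something to cut")
elif delta < 0:
    print(f"Under by {-delta} seconds -- maybe run credits tonight")
```

With these made-up act lengths the show comes in about half a minute under, which is the scenario where, as Dome notes, they might decide to run credits.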

Taking a step back, how do you cut the pre-packaged bits?
Usually those go to my editor Devon. He will be editing, mixing audio and I will be doing the color grade — all within Premiere. If it’s a two- or three-camera shoot, I’ll get a look established for the A, B and the C cameras and have the segment director give notes on the color grade. Once the grade is approved, Devon can then just apply the color to the finished piece. Sometimes we are finessing pre-tapes right up until show record time at 6:30pm.

One recent piece that I graded and Devon edited was a pre-taped bit called Reasonable Max, about Seth’s deleted scenes from the film Mad Max: Fury Road.

Anything you want to add before we wrap up?
I feel very lucky to have had all these experiences in the TV business. I want to thank my dad for introducing me to it and all the people who helped me get where I am today. The most talented people in the business staff all the shows that I have been lucky enough to work on. Watch Late Night With Seth Meyers, weeknights at 12:35 on NBC!

Digging Deeper: Endcrawl co-founder John ‘Pliny’ Eremic

By Randi Altman

Many of you might know John “Pliny” Eremic, a fixture in New York post. When I first met Pliny he was CTO and director of post production at Offhollywood. His post division was later spun off and sold to Light Iron, which was in turn acquired by Panavision.

After Offhollywood, Pliny moved to HBO as a workflow specialist, but he is also the co-founder — with long-time collaborator Alan Grow — of Endcrawl.com, a cloud-based tool for creating end titles for film and television.

Endcrawl has grown significantly over the last year and counts both modest indies and some pretty high-end titles as customers. I figured it was a good time to dig a bit deeper.

How did Endcrawl come about?
End titles were always a huge thorn in my side when I was running the post boutique. The endless, manual revision process is so time intensive that a number of major post houses flat-out refuse to offer this service any more. So, I started hacking on Endcrawl to scratch my own itch.

Both you and your co-founder Alan are working media professionals. Can you talk about how this affected the tool and its evolution?
Most filmmakers aren’t hackers; most coders never made a movie. As a result, many of this industry’s tools are built by folks who are incredibly smart but may lack first-hand post and filmmaking experience. I’ve felt that pain a lot.

Endcrawl is built by filmmakers for filmmakers. We have deep, first-hand experience with file-based specs and formats (DCI, IMF, AS-02), so our renders are targeted at these industry-standard delivery specifications. Occasionally we’re even able to steer customers away from a bad workflow decision.

How is this different than other end credit tools in the world?
For starters we offer unlimited renders.

Why unlimited renders?
This was a mantra from day one. There’s always “one last fix.” A typical indie feature with Endcrawl will keep making revisions six to 12 months after calling it final. That’s where a flat rate with unlimited do-overs comes in very handy. I’ve seen productions start with a $2-3k quote from a designer, and end up with a $6-10k bill. That’s just for the end credits. We’re not interested in dinging you for overages. It’s a flat rate, so render away.

What else differentiates Endcrawl?
Endcrawl is a cloud tool that’s designed to manage the end titles process only — that is its reason for being. So speed, affordability and removing workflow hassles are our goals.

How do people traditionally do end titles?
Typically there are three options. One is using a title designer. This option costs a lot and they might want to charge you overages after your 89th revision.

There are also do-it-yourself options using products from Adobe or Autodesk, and while these are great tools, the process is extremely time consuming for this use — I’d estimate 40-plus hours of human labor.

Finally, there are affordable plug-ins, but they deliver, in my opinion, cheap-looking results.

Do you need to be a designer to use Endcrawl?
No. We’ve made it so our typography is good-looking right out of the box. After hundreds of projects, we’ve spent a lot of time thinking about what does and does not work typographically.

Do you have tips for these non-designers regarding typography?
I could write a book. In fact, we are about to publish a series of articles on this topic, but I’ll give you a few:

• Don’t rely on “classic” typefaces like Helvetica and Futura. Nice on large posters, but lousy on screen in small point sizes.

• Lean toward typefaces with an upright stress — meaning more condensed fonts — which will allow you to make better use of horizontal space. This in turn preserves vertical space, resulting in a smoother scroll.

• Avoid “light” and “ultralight” fonts, or typefaces with a high stroke contrast. Those tend to shimmer quite a bit when in motion. Pick a typeface that has a large variety of designed weights and stick to medium, semibold and bold.

• Make sure your font has strong glyph support for those grips named Bjørn Sæther Løvås and Hansína Þórðardóttir.

Do people have to download the product?
Endcrawl runs right in your web browser. There is nothing to download or install.

What about compatibility?
Our render engine outputs uncompressed DPX, all the standard QuickTime formats, H.264 and PDFs. By far the most common final deliverable is 10-bit DPX, which we typically turn around inside of one hour. The preview renders come in minutes. And the render engine is on-demand, 24/7.

How has the product evolved since you first came to market?
Our “lean startup” was a script attached to a Google Doc. We did our first 20 to 30 projects that way. We saw a lot of validation, especially around the speed and ease of the service.

Year one, we had a customer with four films at Sundance. He completed all of his end titles in three days, with many revisions and renders in between. He’s finished over 20 projects with us now.

Since then, Alan has architected a highly optimized cloud render engine. Endcrawl still integrates with Google Docs for collaboration, but that is now connected to a powerful Web UI controlling layout and realtime preview.

How do people pay for Endcrawl?
On the free tier, we provide free and unlimited 1K preview renders in H.264. For $499, a project can upgrade to unlimited, uncompressed DPX renders. We are currently targeting feature films, but we will be deploying more pricing tiers for other types of projects — think episodic and shorts — in 2016.

What films have used the tool?
Some recent titles include Spike Lee’s Chi-Raq and Oliver Stone’s Snowden. Our customers run the gamut from $50K Kickstarter movies to $100 million studio franchises. (I can’t name most of those studio features because several title houses run all of their end credits through us as a white-label service.)

Some 2016 Sundance movies include Spa Night, Swiss Army Man, Tallulah and The Bad Kids. Some of my personal favorites are Beasts of No Nation, A Most Violent Year, The Family Fang, Meadowland, The Adderall Diaries and Video Game High School.

What haven’t I asked that is important?
We’re about to roll out 4K. We’ve “unofficially” supported 4K on a few pilot projects like Beasts of No Nation and War Room, but it’s about to be available to everyone.

Also, we have a pretty cool Twitter account @Endcrawl, which you should definitely follow.