Category Archives: Digging Deeper

Working with Anthropologie to build AR design app

By Randi Altman

Buying furniture isn’t cheap; it’s an investment. So imagine having an AR app that allows you to see what your dream couch looks like in paisley, or colored dots! Well imagine no more. Anthropologie — which sells women’s clothing, shoes and accessories, as well as furniture, home décor, beauty and gifts — just launched its own AR app, which gives users the ability to design and customize their own pieces and then view them in real-life environments.

They called on production and post house CVLT to help design the app. The bi-coastal studio created over 96,000 assets, allowing users to combine products in realistic and varied ways. The app also accounts for environmental lighting and shadows in realtime.

We reached out to CVLT president Alberto Ruiz to find out more about how the studio worked with Anthropologie to create this app.

How early did CVLT get involved in the project?
Our involvement began in the spring of 2017. We collaborated early in the planning phases when Anthropologie was concepting how to best execute the collection. Due to our background in photography, video production and CGI, we discussed the positives and pitfalls of each avenue, ultimately helping them select CGI as the path forward.

We’re often approached by a brand with a challenge and asked to consult on the best way to create the assets needed for the campaign. With specialists in each category, we look at all available ways of executing a particular project and provide a recommendation as to the best way to build a campaign with longevity in mind.

How did CVLT work with Anthropologie? How much input did you have?
We worked in close collaboration with Anthropologie every step of the way. We helped design style guides and partnered with their development team to test and optimize assets for every platform.

Our creatives worked closely with Anthropologie to elevate the assets to a high quality reflective of the product’s integrity. We presented CGI as a way to engage customers now and in the future through AR/VR platforms. Because of this partnership, we understood the vision for future executions and built our assets with those executions in mind. They were receptive to our suggestions and engaged in product feedback. All in all, it was a true partnership between companies.

Has CVLT worked on assets or materials for an app before? How much of your work is for apps or the web?
The majority of the work that we produce is for digital platforms, whether for the web, mobile or experiential platforms. In addition to film and photography projects, we produce highly complex CGI products for luxury jewelers, fragrance and retail companies.

More and more clients are looking to either supplement or run full campaigns digitally. We believe that investing in emerging technologies, such as augmented and virtual reality, is paramount in the age of digital and mobile content. Our commitment to emerging technologies connects our clients with the resources to explore new ways of communicating with their audience.

What were the challenges of creating so many assets? What did you learn that could be applicable moving forward?
The biggest challenge was unpacking all the variables within this giant puzzle. There are 138 unique pieces of furniture in 11 different fabrics, with 152 colorways, eight leg finishes and a variety of hardware options. Stylistically, colors of a similar family were to live on complementary backgrounds, adding yet another variable to the project. It was basically a Rubik’s Cube on steroids. Luckily, we really enjoy puzzles.
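To give a sense of how those variables compound, here is a naive cross-product of the figures quoted. It is a rough upper-bound sketch only; the real mapping (which colorways exist in which fabrics, which legs fit which pieces) isn't public, which is presumably why the delivered count landed at roughly 96,000 rather than the full product.

```python
# Variant axes quoted above. A full cross-product is only a rough
# upper bound, since not every combination exists in the collection.
pieces = 138
colorways = 152      # spread across 11 fabric families
leg_finishes = 8

upper_bound = pieces * colorways * leg_finishes
print(f"{upper_bound:,}")  # 167,808 possible renders, before hardware options
```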

We always believed in having a strong production team and pipeline. It was the only way to achieve the scale and quality of this project. This was further reinforced as we raced toward the finish line. We’re now engaged in future seasons and are focused on refining the pipe and workflow tools therein.

Any interesting stories from working on the project?
One of the most interesting things about working on the project was how much we learned about furniture. The level of planning and detail that goes into each piece is amazing. We talk a lot about the variables in colors, fabrics and styles because they are the big factors. What remains hidden are the small details that have large impacts. We were given a crash course in stitching details, seam placements, tufting styles and more. Those design details are what set an Anthropologie piece apart.

Another interesting part of the project was working with such an iconic brand with a strong heritage. The rich history of design at Anthropologie permeates every aspect of their work. The same level of detail poured into product design is also visible in the way they communicate with and understand their customer.

What tools were used throughout the project?
Every time we approach a new project we assess the tools that we have in our arsenal and the custom tools that we can develop to make the process smoother for our clients. This project was no different in that sense. We combined digital project management tools with proprietary software to create a seamless experience for our client and staff.

We built a bi-coastal team for this project between our New York and Los Angeles offices. Between that and our Philadelphia-based client, we relied heavily on collaborative digital tools to manage reviews. It’s a workflow we’re accustomed to as many of our clients have a global presence, which was further refined to meet the scale of this project.

What was the most difficult part of the project?
The timeframe was really the biggest challenge in this project. The sheer volume of assets (96,000 created in under five months) was definitely a monumental task, and one we’re very proud of.

Digging Deeper: The Mill Chicago’s head of color Luke Morrison

A native Londoner, Morrison started his career at The Mill, where he worked on music videos and commercials. In 2013, he moved across to the Midwest to head up The Mill Chicago’s color department.

Since then, Morrison has worked on campaigns for Beats, Prada, Jeep, Miller, Porsche, State Farm, Wrigley’s Extra Gum and a VR film for Jack Daniel’s.

Let’s find out more about Morrison.

How early on did you know color would be your path?
I started off, like so many at The Mill, as a runner. I initially thought I wanted to get into 3D, and after a month of modeling a photoreal screwdriver I realized that wasn’t the path for me. Luckily, I poked my nose into the color suites and saw them working with neg and lacing up the Spirit telecine. I was immediately drawn to it. It resonated with me and with my love of photography.

You are also a photographer?
Yes, I actually take pictures all the time. I always carry some sort of camera with me. I’m fortunate to have a father who is a keen photographer and he had a darkroom in our house when I was young. I was always fascinated with what he was doing up there, in the “red room.”

Photography for me is all about looking at your surroundings and capturing or documenting life and sharing it with other people. I started a photography club at The Mill, S35, because I wanted to share that part of my passion with people. I find as a ‘creative’ you need to have other outlets to feed into other parts of you. S35 is about inspiring people — friends, colleagues, clients — to go back to the classic, irreplaceable practice of using 35mm film and start to consider photography in a different way than the current trends.

State Farm

In 2013, you moved from London to Chicago. Are the markets different and did anything change?
Yes and no. I personally haven’t changed my style to suit or accommodate the different market. I think it’s one of the things that appeals to my clients. Chicago, however, is quite a different market than the UK. Here, post production is more agency led and directors aren’t always involved in the process. In that kind of environment, there is a bigger role for the colorist to play in carrying the director’s vision through or setting the tone of the “look.”

I still strive to keep that collaboration with the director and DP in the color session whether it’s a phone call to discuss ahead of the session, doing some grade tests or looping them in with a remote grade session. There is definitely a difference in the suite dynamics, too. I found very quickly I had to communicate and translate the client’s and my creative intent differently here.

What sort of content do you work on?
We work on commercials, music promos, episodics and features, but always have an eye on new ways to tell narratives. That’s where the pioneering work in the emerging technology field comes into play. We’re no longer limited and are constantly looking for creative ways to remain at the forefront of creation for VR, AR, MR and experiential installations. It’s really exciting to watch it develop and to be a part of it. When Jack Daniel’s and DFCB Chicago approached us to create a VR experience taking the viewer to the Jack Daniel’s distillery in Tennessee, we leapt at the chance.

Do you like a variety of projects?
Who doesn’t? It’s always nice to be working on a variety, keeping things fresh and pushing yourself creatively. We’ve moved into grading more feature projects and episodic work recently, which has been an exciting way to be creatively and technically challenged. Most recently, I’ve had a lot of fun grading some comedy specials, one for Jerrod Carmichael and one for Hasan Minhaj. This job is ever-changing, be it thanks to evolving technology, new clients or challenging projects. That’s one of the many things I love about it.

Toronto Maple Leafs

You recently won two AICE awards for best color for your grade on the Toronto Maple Leafs’ spot Wise Man. Can you talk about that?
It was such a special project to collaborate on. I’ve been working with Ian Pons Jewell, who directed it, for many years now. We met way back in the day in London, when I was a color assistant. He would trade me deli meats and cheeses from his travels to do grades for him! That shared history made the AICE awards all the more special. It’s incredible to have continued to build that relationship and see how each of us have grown in our careers. Those kinds of partnerships are what I strive to do with every single client and job that comes through my suite.

When it comes to color grading commercials, what are the main principles?
For me, it’s always important to understand the idea, the creative intent and the tone of the spot. Once you understand that, it influences your decisions, dictates how you’ll approach the grade and what options you’ll offer the client. Then, it’s about crafting the grade appropriately and building on that.

You use FilmLight Baselight, what do your clients like most about what you can provide with that system?
Clients are always impressed with the speed at which I’m able to address their comments and react to things almost before they’ve said them. The tracker always gets a few “ooooooh’s” or “ahhhh’s.” It’s like they’re watching fireworks or something!

How do you keep current with emerging technologies?
That’s the amazing thing about working at The Mill: we’re makers and creators for all media. Our Emerging Technologies team is constantly looking for new ways to tell stories and collaborate with our clients, whether it’s branded content or passion projects, using all technologies at our disposal: anything is at our fingertips, even a Pop Llama.

Name three pieces of technology you can’t live without.
Well, I’ve got to have my Contax T2, an alarm clock, otherwise I’d never be anywhere on time, and my bicycle.

Would you say you are a “technical” colorist or would you rather prioritize instincts?
It’s all about instincts! I’m into the technical side, but I’m mostly driven by my instincts. It’s all about feeling and that comes from creating the correct environment in the suite, having a good kick off chat with clients, banging on the tunes and spinning the balls.

Where do you find inspiration?
I find a lot of inspiration from just being outside. It might sound like a cliché but travel is massive for me, and that goes hand in hand with my photography. I think it’s important to change your surroundings, be it traveling to Japan or just taking a different route to the studio. The change keeps me engaged in my surroundings, asking questions and stimulating my imagination.

What do you do to de-stress from it all?
Riding my bike is my main thing. I usually do a 30-mile ride a few mornings a week and then 50 to 100 miles at the weekend. Riding keeps you constantly focused on that one thing, so it’s a great way to de-stress and clear your mind.

What’s next for you?
I’ve got some great projects coming up that I’m excited about. But outside of the suite, I’ll be riding in this year’s 10th Annual Fireflies West ride. For the past 10 years, Fireflies West participants have embarked on a journey from San Francisco to Los Angeles in support of City of Hope. This year’s ride has the added challenge of an extra day tacked onto it, making the ride 650 miles in total over seven days, so…I best get training! (See postPerspective’s recent coverage on the ride.)


A conversation with editor Hughes Winborne, ACE

This Oscar-winning editor talks about his path, his process, Fences and Guardians of the Galaxy.

By Chris Visser

In the world of feature film editing, Hughes Winborne, ACE, has done it all. From cutting indie features (1996’s Sling Blade) to CG-heavy action blockbusters (2014’s Guardians of the Galaxy) to winning an Oscar (2005’s Crash), Winborne has run the proverbial gamut of impactful storytelling through editing.

His most recent film, the multiple-Oscar-nominated Fences, was an adaptation of the seminal August Wilson play. Denzel Washington, who starred alongside Viola Davis (who won an Oscar for her role), directed the film.

Winborne and I chatted recently about his work on Fences, his career and his brief foray into house painting before he caught the filmmaking bug. He edits on Avid Media Composer. Let’s find out more.

What led you to the path you are on now?
I grew up in Raleigh, North Carolina, and I went to college at the University of North Carolina at Chapel Hill. I graduated with a degree in history without a clue as to what I was going to do. I come from a family of attorneys, so because of an extreme lack of imagination, I thought I should do that. I became a paralegal and worked at North Carolina Legal Services for a bit. It didn’t take me long to realize that that wasn’t what I was meant to do, and I became a house painter.

A house painter?
I had my own house painting business for about three years with a couple of friends. The preamble to that is, I had always been a big movie fan. I went to the movies all the time in high school, but after college I started seeing between five and 10 a week. I didn’t even imagine working in the film business, because in Raleigh, that wasn’t really something that crossed my radar.

Then I saw an ad in the New York Times magazine for a six-week summer workshop at NYU. I took the course, moved to New York and set out to become a film editor. In the beginning, I did a lot of PA work for commercials and documentaries. Then I got an assistant editor job on a film called Girl From India.

What came next?
My father told me about a guy on the coast of North Carolina, A.B. Cooper, Jr., who wanted to make his own slasher film. I made him an offer: “If I get you an editor, can I be the assistant?” He said yes! About one-third of the way through the film, he fired the editor, and I took over that role. It was only my second film credit. I was never an assistant again, which is to the benefit of every editor that ever worked — I was terrible at it!

Were you able to make a living editing at that point?
Not as a picture editor, but I really started getting paid full-time for my editing when I started cutting industrials at AT&T. From there, I worked my way to 48 Hours. While I was there, they were kind enough to let me take on independent film projects for very little money, and they would hire me back after I did the job.

After a while, I moved to LA and started doing whatever I could get my hands on. I started with TV movies and gradually indie films, which really started for me with Sling Blade. Then, I worked my way into the studios after Crash. I’ve been kind of going back and forth ever since.

You mention your love of movies. What are the stories that inspire you? The ones that you get really excited to tell?
The movie that made me want to work in the film business was Barry Lyndon. Though it was not, by far, the film that got me started. I grew up on Truffaut. All his movies were just, for me, wonderful. It was a bit of a religion for me in those days; it gave me sustenance. I grew up on The Graduate. I grew up on Midnight Cowboy and Blow-Up.

I didn’t have a specific story I was interested in telling. I just knew that editing would be good for me. I like solitary jobs. I could never work on the set. It’s too crazy and social for me. I like being able to fiddle in the editing room and try things. The bottom line is, it’s fun. It can be a grind, and there can be a bit of pressure, but the best experiences I’ve had have been when everybody on the show was having fun and working together. Films are made better when that collaboration is exploited to the limit.

Speaking of collaboration, how did that work on a film like Fences? What about working with actor/director Denzel Washington?
I’d worked with Denzel before [on The Great Debaters], so I kind of knew what he liked. They shot in Pittsburgh, but I didn’t go on location. There was no real collaboration for the first six weeks, but because I had worked with him before, I had a sense of what he wanted.

I didn’t have to talk to him in order to put the film together because I could watch dailies — I could watch and listen to direction on camera and see how he liked to play the scenes. I put together the first cut on my own, which is typical, but in this case it was without almost any input. And my cut was really close. When Denzel came back, we concentrated in a few places on getting the performances the way he really wanted them, but I was probably 85 percent there. That’s not because I’m so great either, by the way, it’s because the actors were so great. Their performances were amazing, so I had a lot to choose from.

Can you talk about editing a film that was adapted from a play?
It was a Pulitzer Prize-winning play, so I wasn’t going to be taking anything out of it or moving anything around. All I had to do was concentrate on putting it together with strong performances — that’s a lot harder than it sounds. I’m working within these constraints where I can’t do anything, really. Not that I really wanted to. Have you seen the movie?

Yes, I loved it. It’s a movie I’ve been coming back to every day since I’ve seen it. I’ve been thinking about it a lot.
Then you’ll remember that the first 45 minutes to an hour is like a machine gun. That’s intentional. That’s me, intentionally, not slowing it down. I could have, but the idea is — and this is what was tricky — the film is about rhythm. Editing is about rhythm anyway, but this film is like rhythm to the 50th degree.

There’s very little music in the film, and we didn’t temp with much music either. I remember when Marc Evans [president, Motion Picture Group, Paramount Pictures] saw this film, he said, “The language is the music.” That’s exactly right.

To me, the dialogue feels like a score. There’s a musicality to it, a certain beat and timbre where it’s leading the audience through the scene, pulling them into the emotion without even hearing what they’re saying. Like when Denzel’s talking machine gun fast and it’s all jovial, then Lyons comes in and everything slows down and becomes very tense, then the scene busts back open and it’s all happy and fun again.
Yeah. You can just quote yourself on that one. [Laughs] That’s a perfect summation of it.

Partially, that’s going to come from set, that’s the acting and the direction, but on some level you’re going to have to construct that. How conscious of that were you the entire time?
I was very conscious of it. Where it becomes a little bit dicey at times is, unlike a play, you can cut. In a play, you’re sitting in the audience and watching everybody on stage at the same time. In a film, you’re not. When you start cutting, now you’ve got a new rhythm that’s different from the stage. In so doing, you’ve got to maintain that rhythm. You can’t just be on Denzel the entire time or Viola. You need to move around, and you need to move around in a way that rhythmically stays in time with the language. That was hard. That’s what we worked on most of the time after Denzel came back. We spent a lot of time just trying to make the rhythms right.

I think that’s one of the most difficult jobs an editor has, is choosing when to show someone saying something and when to show someone’s reaction to the thing being said. One example is when Troy is telling the story of his father, and you stay on him the entire time.
Right.

The other side of that coin is when Troy reveals his secret to Rose and the reveal is on her. You see that emotion hit her and wash over her. When I was watching the movie, I thought, “That is the moment Viola Davis won an Oscar.”
Yeah, yeah, yeah. I agree.

I think that’s one of the most difficult jobs as an editor, knowing when to do what. Can you speak to that?
When I put this film together initially, I over-cut it, and then I tried to figure out where I wanted to be. It gets over-cut because I’m trying the best I can to find out what the core of the scene is. But I’m also trying to do that with what I consider to be the best performances. My process is, I start with that, and then I start weeding through it, getting it down and focusing; trying to make it as interesting as I can, and not predictable.

In the scenes that you’re talking about, it was all about Viola’s reaction anyway. Her reaction was going to be almost more interesting than whatever he says. I watched it a few times with audiences, and I know from talking to Denzel that when he did it on stage, there’s like a gasp.

When I saw it, everybody in the theatre was like, “What?” It was great.
I know, I know. It was so great. On the stage, people would talk to him, yell at him [Denzel]. “Shame on you, Denzel!” [laughs]. Then, she went into the backyard and did the scene, and that was the end of it. I’d never seen anything like it before. Honestly. It blew me away.

I was cutting that scene at my little home office. My wife was working behind me on her own stuff, and I was crying all the time. Finally, she turned around and asked, “What is wrong with you?” I showed it to her, and she had the same response. It took eight takes to get there, but when she got it, it was amazing. I don’t think too many actresses can do what Viola did. She’s so exposed. It’s just remarkable to watch.

There were three editors on Guardians of the Galaxy — you, Fred Raskin and Craig Wood. How did that work?
Marvel films are, generally speaking, 12 months from shoot to finish. I was on the film for eight months. Craig came in and took over for me. Having said that, it’s hard with two editors or just multiple editors in general. You have to divvy up scenes. Stuff would come in and we would decide together who was going to do it. I got the job because of Fred. I’d known Fred for 25 years. Fred was my intern on Drunks.

Fred had a prior relationship with James Gunn [director of Guardians]. In most cases, I deferred to Fred’s judgment as to how he wanted to divvy up the scenes, because I didn’t have much of a relationship with James when we started. I’d never done a big CG film. For me, it was a revelation. It was fun, trying to cut a dialogue scene between two sticks. One was tall, and one was short — the green marking was going to be Groot, and the other one was going to be Rocket Raccoon.

Can you talk about the importance of the assistant editor in the editorial process? How many assistants did you have on Fences?
On Fences, I had a first and a second. I started out cutting on film, when being an assistant editor was a physical job. Touch it, splice it, catalog it, etc. What they have to do now is so complicated and technical that I don’t even know how to do it. Over my career, I’ve pretty much worked with a couple of assistants the whole time. John Breinholt and Heather Mullen worked with me on Fences. I’ve known Heather for 30 years.

What do you look for in an assistant?
Somebody who is going to be able to organize my life when I’m editing; I’m terrible at that. I need them to make sure that things are getting done. I don’t want to think about everything that’s going on behind the scenes, especially when I’m cutting, because it takes a lot of concentration for me just to sit there for 10 hours a day, or even longer, and concentrate on trying to put the movie together.

I like to have somebody that can look at my stuff and tell me what’s working and what isn’t. You get a different perspective from different assistants, and it’s really important to have that relationship.

You talked about working on Guardians for eight months, and I read that you cut Fences in six. What do you do to decompress and take care of your own mental health during those time periods?
Good question. It’s hard. When I was working on Fences, I was on the Paramount lot. They have a gym there, so I tried to go to the gym every day. It made my day longer, because I’d get there really early, but I’d go to the gym and get on the treadmill or something for 45 minutes, and that always helped.

Finally, for those who are young or aspiring editors, do you have any words of wisdom?
I think the one piece of advice is to keep going. It helps if you know what you want to do. So many people in this business don’t survive. There can be a lot of lean years, and there certainly were for me in the beginning — I had at least 10. You just have to stay in the game. Even if you’re not working at what you want to do, it’s important to keep working. If you want to be an editor, or a director, you have to practice.

Also, have fun. It’s a movie. Try and have a good time when you’re doing it. You’ll do your best work when you’re relaxed.


Chris Visser is a Wisconsin kid who works and lives in LA. He is currently an assistant editor working in scripted TV. You can find him on Facebook and Twitter.


Digging Deep: Helping launch the OnePlus 3T phone

By Jonathan Notaro

It’s always a big deal when a company drops a new smartphone. The years of planning and development culminate in a single moment, and the consumers are left to judge whether or not the new device is worthy of praise and — more importantly — worthy of purchase.

For bigger companies like Google and Apple, a misstep with a new phone release can often amount to nothing more than a hiccup in their operations. But for upstarts like OnePlus, it’s a make-or-break event. When we got the call at Brand New School to develop a launch spot for the company’s 3T smartphone, along with the agency Carrot Creative, we didn’t hesitate to dive in.

The Idea
OnePlus has built a solid foundation of loyal fans with their past releases, but with the 3T they saw the chance to build their fanbase out to more everyday consumers who may not be as tech-obsessed as their existing fans. It is an entirely new offering and, as creatives, the chance to present such a technologically advanced device to a new, wider audience was an opportunity we couldn’t pass up.

Carrot wanted to create something for OnePlus that gave viewers a unique sense of what the phone was capable of — to capture the energy, momentum and human element of the OnePlus 3T. The 3T is meant to be an extension of its owner, so this spot was designed to explore the parallels between man and machine. Doing this can run the risk of being cliché, so we opted for futuristic, abstract imagery that gets the point across effectively without being too heavy handed. We focused on representing the phone’s features that set it apart from other devices in this market, such as its powerful processor and its memory and storage capabilities.

How We Did It
Inspired by the brooding, alluring mood reflected in the design for the title sequence of The Girl With the Dragon Tattoo, we set out to meld lavish shots of the OnePlus 3T with robotically-infused human anatomy, drawing up initial designs in Autodesk Maya and Maxon Cinema 4D.

When the project moved into the animation phase, we stuck with Maya and used Nuke for compositing. Type designs were done in Adobe Illustrator and animated in Adobe After Effects.

Collaboration is always a concern when there are this many different scenes and moving parts, but this was a particular challenge. With a CG-heavy production like this, there’s no room for error, so we had to make sure that all of the different artists were on the same page every step along the way.

Our CG supervisor Russ Wootton and technical director Dan Bradham led the way and compiled a crack team to make this thing happen. I may be biased, but they continue to amaze me with what they can accomplish.

The Final Product
The project was a two-month production process. Along the way, we found that working with Carrot and the brand was a breath of fresh air, as they were very knowledgeable and amenable to what we had in mind. They afforded us the creative space to take a few risks and explore some more abstract, avant-garde imagery that I felt represented what they were looking to achieve with this project.

In the end, we created something that I hope cuts through the crowded landscape of product videos and appeals to both the brand’s diehard-tech-savvy following and consumers who may not be as deep into that world.

Fueled by the goal of conveying the underlying message of “raw power” while balancing the scales of artificial and human elements, we created something I believe is beautiful, compelling and completely unique. Ultimately though, the biggest highlight was seeing the positive reaction the piece received when it was released. Normally, reaction from consumers would be centered solely on the product, but to have the video receive praise from a very discerning audience was truly satisfying.


Jonathan Notaro is a director at Brand New School, a bicoastal studio that provides VFX, animation and branding. 


25 Million Reasons to Smile: When a short film is more than a short

By Randi Altman

For UK-based father and son Paul and Josh Butterworth, working together on the short film 25 Million Reasons to Smile was a chance for both of them to show off their respective talents — Paul as an actor/producer and Josh as an aspiring filmmaker.

The film features two old friends, and literal partners in crime, who get together to enjoy the spoils of their labors after serving time in prison. After so many years apart, they are now able to explore a different and more intimate side of their relationship.

In addition to writing the piece, Josh served as DP and director, calling on his Canon 700D for the shoot. “I bought him that camera when he started film school in Manchester,” says Paul.

Josh and Paul Butterworth

The film stars Paul Butterworth (The Full Monty) and actor/dialect/voice coach Jon Sperry as the thieves who are filled with regret and hope. 25 Million Reasons to Smile was shot in Southern California, over the course of one day.

We reached out to the filmmakers to find out why they shot the short film, what they learned and how it was received.

With tools becoming more affordable these days, making a short is now an attainable goal. What are the benefits of creating something like 25 Million Reasons to Smile?
Josh: It’s wonderful. Young and old aspiring filmmakers alike are so lucky to have the ability to make short films. This can lead to issues, however, because people can lose sight of what is important: character and story. What was so good about making 25 Million was the simplicity. One room, two brilliant actors, a cracking story and a camera is all you really need.

What about the edit?
Paul: We had one hour and six minutes (a full day’s filming) to edit down to about six minutes, which we were told was a day’s work. An experienced editor starts at £500 a day, which would have been half our total budget in one bite! I budgeted £200 for edit, £100 for color grade and £100 for workflow.

At £200 a day, you’re looking at editors with very little experience, usually no professional broadcast work, often no show reel… so I took a risk and went for Harry Baker, who had a couple of shorts in good festivals. Josh provided a lot of notes on the story, and Harry went from there. And crucial cuts, like staying off the painting as long as possible and cutting to the outside of the cabin for the final lines — those ideas came from our executive producer Ivana Massetti, who was brilliant.

How did you work with the colorist on the look of the film?
Josh: I had a certain image in my head of getting as much light as possible into the room to show the beautiful painting in all its glory. When the colorist, Abhishek Hans, took the film, I gave him the freedom to do what he thought was best, and I was extremely happy with the results. He used Adobe Premiere Pro for the grade.

Paul: Josh was DP and director, so on the day he just shot the best shots he could using natural light — we didn’t have lights or a crew, not even a reflector. He just moved the actors round in the available light. Luckily, we had a brilliant white wall just a few feet away from the window and a great big Venice Beach sun, which flooded the room with light. The white walls bounced light everywhere.

The colorist gave Josh a page of notes on how he envisioned the color grade — different palettes for each character, how he’d go for the dominant character when it was a two shot and change the color mood from beginning to end as the character arc/resolution changed and it went from heist to relationship movie.

What about the audio?
Paul: I insisted Josh hire a professional Røde microphone and a Tascam recorder from his university. This actually saved the shoot, as we didn’t have a sound person on the boom, and consequently the recorder wasn’t turned up… and we also swiveled the microphone rather than moving it between actors, so one voice had reverb while the other didn’t.

The sound was unusable (too low), but since the gear was so good, sound designer Matt Snowden was able to boost it in post to broadcast standard without distortion. Sadly, he couldn’t do anything about the reverb.

Can you comment on the score?
Paul: A BAFTA mate of mine, composer David Poore, offered to do the music for free. It was wonderful, and he was so professional. Dave already had a really good hold on the project, as we’d had long chats, but he took Josh’s notes and we ended up with a truly beautiful score.

Was the script followed to the letter? Any improvisations?
Josh: No, not quite. Paul and Jon were great, and certainly added a lot to the dialogue through conversations before and during the shoot. Jon, especially, was very helpful in Americanizing his character, Jackson’s, dialogue.

Paul: Josh spent a long time on the script and worked on every word. We had script meetings at various LA cafes and table reads with me and Jon. On the shoot day, it was as written.

Josh ended up cutting one of my lines in the edit as it wasn’t entirely necessary, and the reverb was bad. It tightened it up. And our original ending had our hands touching on the bottle, but it didn’t look right so Josh went with the executive producer’s idea of going to the cabin.

What are the benefits of creating something like 25 Million Reasons to Smile?
Paul: Wow! The benefits are amazing… as an actor, I never realized the full process. The filming is actually a tiny proportion of the entire process. It gave me the whole picture (I’m now in awe of how hard producers work, and that’s only after playing at it!) and showed how much of a team effort it is — how the direction, edit, sound design and color grade can rewrite the film. I can now appreciate how the actor doesn’t see the bigger picture and has no control over any of those elements. They are (rightly) fully immersed in their character, which is exactly what the actor’s role is: to turn up and do the lines.

I got a beautiful paid short film out of it, current footage for my show reel and a fantastic TV job — I was cast by Charles Sturridge in the new J.K. Rowling BBC1/HBO series Cormoran Strike as the dad of the female lead Robin (Holliday Grainger). I’d had a few years out bringing Josh up and getting him into film school. I relaunched when he went to university, but my agent said I needed a current credit as the career gap was causing casting directors problems. So I decided to take control and make my own footage — but it had to stand up on my show reel against clips like The Full Monty. If it wasn’t going to be broadcast-standard technically, then it had to have something in the script, and my acting (and my fellow actor) had to show that I could still do the job.

Josh met a producer in LA who’s given him runner work over here in England, and a senior producer with an international film company saw this and has given him an introduction to their people in Manchester. He also got a chance to write and direct a non-student short using industry professionals, which in the “real” world he might not get for years. And it came with real money and real consequences.

Josh, what did you learn from this experience from a filmmaker’s point of view?
More hands on deck is never a bad thing! It’s great having a tight-knit cast and crew, but the shoot would definitely have benefited from more people to help with lighting and sound, and the whole process would have run more smoothly.

Any surprises pop up? Any challenges?
Josh: The shoot actually ran very smoothly. The one challenge we had to face was time. Every shot took longer than expected, and we nearly ran out of time but got everything we needed in the end. It helped having such professional and patient actors.

Paul: I was surprised how well Josh (at 20 years old and at the start of film school) directed two professional middle-aged actors. Especially as one was his dad… and I was surprised by how filmic his script was.

Any tips for those looking to do something similar?
Josh: Once you have a story, find some good actors and just do it. As I said before, keep it simple and try to use character not plot to create drama.

Paul: Yes, my big tip would be to get the script right. Spend time and money on that and don’t film it till it’s ready. Get professional help/mentoring if you can. Secondly, use professional actors — just ask! You’d be surprised how many actors will take a project if the script and director are good. Of course, you need to pay them (not the full rate, but something).

Finally, don’t worry too much about the capture — as a producer said to me, “If I like a project I can buy in talent behind the camera. In a short I’m looking for a director’s voice and talent.”


Mozart in the Jungle

The colorful dimensions of Amazon’s Mozart in the Jungle

By Randi Altman

How do you describe Amazon’s Mozart in the Jungle? Well, in its most basic form it’s a comedy about the changing of the guard — or maestro — at the New York Philharmonic, and the musicians who make up that orchestra. When you dig deeper, you get a behind-the-scenes look at the back-biting and crazy that goes on in the lives and heads of these gifted artists.

Timothy Vincent

Based on the novel Mozart in the Jungle: Sex, Drugs, and Classical Music by oboist Blair Tindall, the series — which won the Golden Globe last year and was nominated this year — has shot in a number of locations over its three seasons, including Mexico and Italy.

Since its inception, Mozart in the Jungle has been finishing in 4K and streaming in both SDR and HDR. We recently reached out to Technicolor’s senior color timer, Timothy Vincent, who has been on the show since the pilot to find out more about the show’s color workflow.

Did Technicolor have to gear up infrastructure-wise for the show’s HDR workflow?
We were doing UHD 4K already and were just getting our HDR workflows worked out.

What is the workflow from offline to online to color?
The dailies are done in New York based on the Alexa K1S1 709 LUT. (Technicolor On-Location Services handled dailies out of Italy, and Technicolor PostWorks handled New York.) After the offline and online, I get the offline reference made with the dailies so I can refer to it if I have a question about what was intended.

If someone was unsure about watching in HDR versus SDR, what would you tell them?
The emotional feel of both the SDR and the HDR is the same. That is always the goal in the HDR pass for Mozart. One of the experiences that is enhanced in the HDR is the depth of field and the three-dimensional quality you gain in the image. This really plays nicely with the feel in the landscapes of Italy, the stage performances where you feel more like you are in the audience, and the long streets of New York just to name a few.

When I’m grading the HDR version, I’m able to retain more highlight detail than I was in the SDR pass. For someone who has not yet been able to experience HDR, I would actually recommend that they watch an episode of the show in SDR first and then in HDR so they can see the difference between them. At that point they can choose what kind of viewing experience they want. I think that Mozart looks fantastic in both versions.
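For readers curious why an HDR grade can hold more highlight detail, the numbers behind the PQ transfer function (SMPTE ST 2084), used for most HDR streaming deliveries, make the point: SDR reference white occupies only about half the signal range, leaving the rest for highlights. This sketch is illustrative only and is not part of Technicolor's pipeline:

```python
def pq_encode(nits: float) -> float:
    """SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance -> signal in [0, 1]."""
    m1, m2 = 1305 / 8192, 2523 / 32          # exponents from the ST 2084 spec
    c1, c2, c3 = 107 / 128, 2413 / 128, 2392 / 128
    y = max(nits, 0.0) / 10000.0             # normalize to the 10,000-nit peak
    yp = y ** m1
    return ((c1 + c2 * yp) / (1 + c3 * yp)) ** m2

# 100 nits (typical SDR reference white) lands at roughly 0.51 on the PQ
# signal scale, so nearly half the code values are reserved for highlights
# an SDR gamma curve would clip.
```

The constants are the published ST 2084 values; the function maps display luminance in nits to the normalized PQ signal.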

What about the “look” of the show? What kind of direction were you given?
We established the look of the show based on conversations and collaboration in my bay. It has always been a filmic look with soft blacks and yellow warm tones as the main palette for the show. Then we added in a fearlessness to take the story in and out of strong shadows. We shape the look of the show to guide the viewers to exactly the story that is being told and the emotions that we want them to feel. Color has always been used as one of the storytelling tools on the show. There is a realistic beauty to the show.

What was your creative partnership like with the show’s cinematographer, Tobias Datum?
I look forward to each episode and discovering what Tobias has given me as a palette and mood for each scene. For Season 3, we picked up where we left off at the end of Season 2. We had established the look and feel of the show and only had to account for a large portion of Season 3 being shot in Italy, making sure to convey the different quality of light and the warmth and beauty of Italy. We did this by playing with natural warm skin tones and the contrast of light and shadow he was creating for the different moods and locations. The same can be said for the two episodes in Mexico in Season 2. I know now what Tobias likes and can make decisions I’m confident he will like.

From a director and cinematographer’s point of view, what kind of choices does HDR open up creatively?
It depends on if they want to maintain the same feel of the SDR or if they want to create a new feel. If they choose to go in a different direction, they can accentuate the contrast and color more with HDR. You can keep more low-light detail while being dark, and you can really create a separate feel to different parts of the show… like a dream sequence or something like that.

Any workflow tricks/tips/trouble spots within the workflow or is it a well-oiled machine at this point?
I have actually changed the way I grade my shows based on the evolution of this show. My end results are the same, but I learned how to build grades that translate to HDR much more easily and consistently.

Do you have a color assistant?
I have a couple of assistants that I work with who help me with prepping the show, getting proxies generated, color tracing and some color support.

What tools do you use — monitor, software, computer, scope, etc.?
I am working on Autodesk Lustre 2017 on an HP Z840, while monitoring on both a Panasonic CZ950 and a Sony X300. I work on Omnitek scopes off the downconverter to 2K. The show is shot on both Alexa XT and Alexa Mini, framing for 16×9. All finishing is done in 4K UHD for both SDR and HDR.

Anything you would like to add?
I would only say that everyone should be open to experiencing both SDR and HDR and giving themselves that opportunity to choose which they want to watch and when.


Digging Deeper: Fraunhofer’s Dr. Siegfried Foessel

By Randi Altman

If you’ve been to NAB, IBC, AES or regional conferences involving media and entertainment technology, you have likely seen Fraunhofer exhibiting or heard one of their representatives speaking on a panel.

Fraunhofer first showed up on my radar years ago at an AES show in New York City when they were touting the new MP3 format, which they created. From that moment on, I’ve made it a point to keep up on what Fraunhofer has been doing in other areas of the industry, but for some, what Fraunhofer is and does is a mystery.

We decided to help solve that mystery by throwing some questions at Dr. Siegfried Foessel of Fraunhofer IIS’ Department of Moving Picture Technologies.

Can you describe Fraunhofer?
Fraunhofer-Gesellschaft is an organization for applied research that has 67 institutes and research units at locations throughout Germany. At present, it employs around 24,000 people. The majority are qualified scientists and engineers who work with an annual research budget of more than 2.1 billion euros.

More than 70 percent of the Fraunhofer-Gesellschaft’s research revenue is derived from contracts with industry and from publicly financed research projects. Almost 30 percent is contributed by the German federal and Länder governments in the form of base funding. This enables the institutes to work ahead on solutions to problems that will become relevant to industry and society within the next five to ten years.

How did it all begin? Is it a think tank of sorts? Tell us about Fraunhofer’s business model.
The Fraunhofer-Gesellschaft was founded in 1949 and is a recognized non-profit organization that takes its name from Joseph von Fraunhofer (1787–1826), the illustrious Munich researcher, inventor and entrepreneur. Its focus was clearly defined to do application-oriented research and to develop future-relevant key technologies. Through their research and development work, the Fraunhofer Institutes help to reinforce the competitive strength of the economy. They do so by promoting innovation, strengthening the technological base, improving the acceptance of new technologies and helping to train the urgently needed future generation of scientists and engineers.

What is Fraunhofer IIS?
The Fraunhofer Institute for Integrated Circuits IIS is an application-oriented research institution for microelectronic and IT system solutions and services. With the creation of MP3 and the co-development of AAC, Fraunhofer IIS has achieved worldwide recognition. In close cooperation with partners and clients, the institute provides research and development services in the following areas: audio and multimedia, imaging systems, energy management, IC design and design automation, communication systems, positioning, medical technology, sensor systems, safety and security technology, supply chain management and non-destructive testing. About 880 employees conduct contract research for industry, the service sector and public authorities.

Fraunhofer IIS partners with companies as well as public institutions?
We develop, implement and optimize processes, products and equipment until they are ready for use in the market. Flexible interlinking of expertise and capacities enables us to meet extremely broad project requirements and complex system solutions. We do contracted research for companies of all sizes. We license our technologies and developments. We work together with partners in publicly funded research projects or carry out commercial and technical feasibility studies.

IMF transcoding.

What is the focus of Fraunhofer IIS’ Department of Moving Picture Technologies?
For more than 15 years, our Department Moving Picture Technologies has driven developments for digital cinema and broadcast solutions focused on imaging systems, post production tools, formats and workflow solutions. The Department Moving Picture Technologies was chosen by the Digital Cinema Initiatives (DCI) to develop and implement the first certification test plan for digital cinema as the main reference for all systems in this area. As a leader in the ISO standardization committee for digital cinema within JPEG, my team and I are driving standardization for JPEG 2000 and formats such as DCP and the Interoperable Master Format (IMF).

We also work together with SMPTE and other standardization bodies worldwide. Among the department’s most respected developments are the Arri D20/D21 camera and the easyDCP post production suite for DCP and IMF creation and playback, as well as its latest work in multi-camera/light-field technology.

What are some of the things you are working on and how does that work find its way to post houses and post pros?
The engineers and scientists of the Department Moving Picture Technologies are working on tools and workflow solutions for new media file formats like IMF to enable smooth integration and use in existing workflows and to optimize performance and quality. As an example, we always enhance and augment the features available through the post production easyDCP suite. The team discusses and collaborates with customers, industry partners and professionals in the post production and digital cinema industries to identify the “most wanted and needed” requirements.

easyDCP

We preview new technologies and present developments that meet these requirements or facilitate process steps. Examples include accelerating IMF or DCP creation using a hybrid JPEG 2000 approach and introducing a media asset management tool for DCP/IMF or dailies. We present our ideas, developments and results at exhibitions such as NAB, the HPA Tech Retreat and IBC, as well as SMPTE conferences and plugfests all around the world.
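As context for why DCP creation speed matters: the DCI specification caps JPEG 2000 picture essence at 250 Mbit/s, so an encoder must fit every frame into a fixed byte budget. A rough sketch of that arithmetic (illustrative only, not easyDCP's actual API):

```python
def max_frame_bytes(bitrate_bps: int = 250_000_000, fps: int = 24) -> int:
    """Per-frame byte budget for DCP JPEG 2000 picture essence.

    250 Mbit/s is the DCI ceiling; fps is the package frame rate.
    """
    return bitrate_bps // 8 // fps

# At 24 fps, each 4K frame must compress to roughly 1.3 MB,
# regardless of how complex the image is.
budget = max_frame_bytes()
```

Doubling the frame rate halves the per-frame budget, which is one reason high-frame-rate DCPs put extra pressure on the encoder.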

Together with distribution partners who sell products like easyDCP, Fraunhofer IIS licenses those developments and puts them into the market. The team always looks for customer feedback on its developments and is supported by a very active community.

Who are some of your current customers and partners?
We have more than 1,500 post houses as customers, managed by our licensing partner easyDCP GmbH. Nearly all of the Hollywood studios and post houses on all continents are our customers. We also work together with integration partners like Blackmagic and Quantel. Most of the names of our partners in the contract research area are confidential, but to name some partners from the past and present: Arri, DCI, IHSE GmbH.

Which technologies are available for license now?
• Tools for creation and playback of DCPs and IMPs, as standalone tools and for integration into third party tools
• Tools for quality control of DCPs and IMPs
• Tools for media asset management of DCPs and IMPs
• Plug-ins for light-field-processing and depth map generation
• Codecs for mezzanine compression of images

Lightfield tech

What are you working on now that people should know about?
We are developing new tools and plug-ins to bring lightfield technology to the movie industry and enhance creative opportunities. This includes system aspects in combination with existing post tools. We are chairing and actively participating in ad hoc groups for lightfield-related standardization efforts in the JPEG/MPEG Joint Adhoc Group for digital representations of light/sound fields for immersive media applications (see https://jpeg.org/items/20160603_pleno_report.html).

We are also working together with DIN on a proposal to standardize digital long-term archive formats for movies. Basic work is done with German archives and service providers at DIN NVBF3 and together with CST from France at SMPTE with IMF App#4. Furthermore, we are developing mezzanine image compression formats for the transmission of video over IP in professional broadcast environments and GPU accelerated tools for creation and playback of JPEG 2000 code streams.

How do you pick what you will work on?
The employees at Fraunhofer IIS are very creative people. Through observation of the market, research in joint projects and cooperation with universities, ideas are created and evaluated. Employees and our student scientists discuss with industry partners what might be possible in the near future and which ideas have the greatest potential. Selected ideas are then evaluated with respect to business opportunities and transformed into internal projects or proposed as research projects. Our employees are tasked with working much like our eponym Joseph von Fraunhofer, as researchers, inventors and entrepreneurs — all at the same time.

What other “hats” do you wear in the industry?
As mentioned earlier, Fraunhofer is involved in standardization bodies and industry associations. For example, I chair the Systems Group within ISO SC29WG1 (JPEG) and the post production group within ISO TC36 (Cinematography). I am also a SMPTE governor (EMEA and Central and South America region) and a SMPTE fellow, along with supporting SMPTE conferences as a program committee member.

Currently, I am president of the German Society Fernseh- und Kinotechnische Gesellschaft (FKTG) and am involved in associations like EDCF and ISDCF. Additionally, I’m a speaker for the German VDE/ITG society in the area of media technology. Last, but not least, I chair the German standardization body at DIN for NVBF3 and consult the German federal film board in questions related to new technical challenges in the film industry.


Digging Deep: Sony intros the PXW-FS7 II camera

By Daniel Rodriguez

At a press event in New York City a couple of weeks ago, Sony unveiled the long-rumored follow-up to its extremely successful PXW-FS7 — the Sony PXW-FS7 II. With the new FS7 II, Sony dives deeper into the mid-level cinematographer/videographer market that it firmly established with the FS100, FS700, FS7 and the more recent Sony FS5.

Knowing they are competing with cameras of other similarly priced brands, Sony has built upon a line that fulfills most technical and ergonomic needs. Sony prides itself on listening to videographers and cinematographers who make requests and suggestions from first-hand field experience, and it’s clear that they’ve continued to listen.

New Features
The Sony FS7 II might be the first camera where you can feel Sony’s deep care and consideration for those who have used the FS7 extensively. Although the body and overall design might seem nearly identical to the original FS7, the FS7 II makes subtle but important ergonomic improvements to the camera’s design.

Improving on its E-mount design, Sony has introduced a lever locking mechanism, much like how a PL mount functions. Unlike a PL mount, the new lever lock rotates counter-clockwise, but it provides a massive amount of support, especially since a secondary latch prevents you from accidentally turning the lever back. The mount has been tested to support the same weight as traditional PL mounts, and larger cinema zooms can be easily mounted without the need for a lens support. Due to its short flange distance, Sony’s E-mount has become very popular for adapting almost all stills and cinema lenses to Sony cameras, and with this added support there is reduced risk and concern when using lens adapters.

The camera body’s corners and edges have all been rounded out, allowing much more comfortable control of the camera. This is especially helpful for handheld use, when the camera might be pressed up against someone’s body or under their arm. With operating below the underarm and at the waist in mind, Sony has redesigned the arm grip, and most of the body, to be tool-less. The arm grip no longer requires tools to be adjusted and now uses two knobs for easy adjustments. This saves much-needed time and maximizes comfort.

The viewfinder can now be extended further in either direction on a longer rod, which benefits left-eye-dominant operators. The microphone holder is no longer permanently attached to the rod, so it can be moved to the left side of the camera, allowing a monitor to be viewed on the right, or removed altogether. Sony has also made the viewfinder collapsible for those who’d rather just view the monitor. The viewfinder rod is now square-shaped so the eyepiece stays horizontally aligned with the camera’s balance; this addresses operators believing their framing was crooked because of how the viewfinder was aligned, even when the camera was perfectly balanced.

Sony really kept the smaller suggestions in mind by making the memory card slots protrude more than on the original FS7. This allows loaders to more easily access the memory cards when wearing something that inhibits their grip, like gloves. The camera is also compatible with the newer G-series XQD cards, which boast an impressive 440MB/s write and 400MB/s read speed, allowing FS7 II users to quickly offload their footage in the field without worrying about running out of usable memory cards.
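To put those card speeds in perspective, a back-of-the-envelope estimate of offload time (the 128GB capacity below is a hypothetical example, not a quoted spec, and real-world throughput will be somewhat lower):

```python
def offload_minutes(card_gb: float, read_mb_s: float = 400.0) -> float:
    """Estimated minutes to offload a full card at a sustained read speed.

    Uses decimal GB (1 GB = 1000 MB), matching how card capacity is marketed.
    """
    return card_gb * 1000 / read_mb_s / 60

# A hypothetical 128GB G-series card at the quoted 400MB/s read speed
# empties in a little over five minutes, ignoring filesystem overhead.
minutes = offload_minutes(128)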

Straight out of the box, the FS7 II can record 4K DCI (4096×2160) internally without the need for upgrades. This 4K can be captured in XAVC or ProRes 422HQ, with the option of HyperGammas, S-Log3 or basic Rec. 709. RAW output is also available, but as with its siblings, an external recorder is required. The FS7 II will also be capable of recording Sony’s compressed RAW format, XOCN, which allows 16-bit 3:1 recording to an external recorder. Custom 3D LUTs can still be uploaded to the camera, allowing more of a cinematographer’s touch than factory presets.

Electronic Internal Variable ND
The most exciting feature of the Sony FS7 II — and the one that really separates this camera from the FS7 — is the introduction of an Electronic Internal Variable ND. Originally introduced in the FS5, the options the FS7 II adds on top of the FS5’s Electronic Variable ND make this a very promising camera and an improvement over its older sibling.

Oftentimes, similarly priced cameras either lack internal NDs or offer a limited amount of internal ND control, which ends up being either too much or not enough for exposure control. The term “variable ND” is also approached with caution by videographers and cinematographers concerned about color shifts and infrared pollution, but Sony has addressed these concerns by placing an IR cut filter over the sensor. This way, no level of ND will introduce color shifts or infrared pollution. It’s also easy to break the bank buying IR NDs to prevent infrared pollution, and the constant swapping of ND filters costs valuable time, which can push you to open or close your F-stop to compensate.

Compromising your F-stop is often an unfortunate reality when shooting — indoors or outdoors — and it’s extremely exciting to have a feature that allows you to adjust your exposure flawlessly without worrying about having the right ND level or adjusting your F-stop to compensate. It’s also exciting to know that you can adjust the ND filter without having to see a literal filter rotate in front of your image. The Electronic Variable ND can be adjusted from the grip as well, so you can essentially ride the iris without having to touch your F-stop and risk your depth of field being inconsistent.

As with most modern-day lenses that lack manual exposure, riding the iris is simply out of the question due to mechanical “clicked” irises and the very obvious exposure shift when changing the F-stop on one of these lenses. This is eliminated by letting the Variable ND do all the work, allowing you to leave your F-stop untouched. In manual mode, the Electronic Variable ND allows you to smoothly transition from 0.6ND to 2.1ND in one-third increments.
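For reference, an ND filter's optical density converts to stops of light loss by dividing by log10(2). A quick sketch of what the FS7 II's quoted range works out to (illustrative arithmetic, not Sony documentation):

```python
import math

def nd_stops(density: float) -> float:
    """Light loss in stops for an ND filter of the given optical density."""
    return density / math.log10(2)    # ND0.3 is roughly 1 stop

def nd_transmission(density: float) -> float:
    """Fraction of light the filter lets through."""
    return 10 ** -density

# The FS7 II's quoted manual range: ND0.6 is about 2 stops of loss,
# and ND2.1 is about 7 stops, so the camera can cut light to well
# under 1% without touching the F-stop.
```

The one-third increments between those densities are what let you ride exposure smoothly instead of jumping whole stops.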

Recording in BT.2020
Another exciting addition to the FS7 II is the ability to record BT.2020 (more commonly known as Rec. 2020) internally in UHD. While this might seem excessive to some, considering this camera is still a step below its siblings, the F55 and F65, for productions where HDR deliverables are required, the option to shoot Rec. 2020 futureproofs this camera for years to come, especially as Rec. 2020 monitoring and projection become the norm. Companies like Netflix usually request an HDR deliverable for their original programs, so despite the FS7 II not being on the same level as the F55/F65, it shows it can deliver the same level of quality.

While the camera can’t boast a global shutter like its bigger sibling, the F55, the FS7 II does show a very capable rolling shutter with little to no skewing. In the FS7 II’s case, retaining a rolling shutter is the right trade: for a camera that leans slightly toward the commercial/videography end of cinematography, keeping the native ISO of 2000 and the full 14 stops matters more than a global shutter, which would cost much-needed dynamic range.

This exclusion of a global shutter keeps the FS7 II’s native ISO at 2000, the same as the previous FS7. Retaining this native ISO puts the FS7 II above many similarly priced video cameras, whose native ISOs usually sit at 800. While the FS7 II may not be a low-light beast like the Sony a7S/a7S II, the ability to record internal 4K DCI, higher frame rates and 10-bit 422HQ (and even RAW externally) greatly outweighs this loss in sensitivity.

The SELP18110G 18-110 F4.0 Servo Zoom
Sony has also announced a new zoom lens to be released alongside the camera. Building on what it introduced with the Sony FE PZ 28-135mm F4 G, the 18-110 F4 is a very powerful lens optically and the perfect companion to the FS7 II. The lens is sharp to the edges; doesn’t drop focus while zooming in and out; has no breathing whatsoever; offers quiet internal zoom, iris and focus control; includes internal stabilization; and has a 90-second zoom crawl from end to end. The lens covers Super 35mm and APS-C-sized sensors and retains a constant F4 throughout the focal range.

Its multi-coating allows for high contrast and low flaring, with circular bokeh for truly cinematic images. Despite its size, the lens weighs only 2.4 pounds, a weight easily supported by the FS7 II’s lever-locking E-mount. Though it isn’t an extremely fast lens, paired with a camera like the FS7 II, which has a native ISO of 2000, the 18-110 F4 should prove a very usable lens in the field as well as in narrative work.

Final Impressions
This camera is very specifically designed for camerapersons who either have a very small camera team or shoot as individuals. Many of the new features, big and small, are great additions for making any project go down smoothly and nearly effortlessly. While its bigger siblings the F55 and F65 will still dominate major motion picture production and commercial work, this camera has all its corners covered to fill the freelance videographer/cinematographer’s needs.

Indie films, short films, smaller commercial work and videography will no doubt find this camera hugely beneficial, with as few headaches as possible. Speed and efficiency are often the biggest advantages on smaller productions, and this camera easily handles and facilitates the most overlooked aspects of video production.

The specs are hard to pass up when discussing the Sony FS7 II. A camera that records internal 4K DCI, with the option of high frame rates at 10-bit 422HQ, 14 stops of dynamic range and the choice of S-Log3 or one of the many HyperGammas for faster deliverables, should immediately excite any videographer or cinematographer. Many cinematographers making feature or short films have grown accustomed to shooting RAW, and unless they rent or buy the external recorder they will be unable to do so with this camera. But given the high write speeds of the internal codecs, it's hard to argue that the internal video won't retain a massive amount of information, despite a few minor features being lost.

This camera truly delivers on nearly every ergonomic and technical need, and by anticipating future display formats with Rec. 2020, Sony shows it is very conscious of future-proofing the camera. The physical improvements show that Sony is open and eager to hear suggestions and first-hand experiences from FS7 users, and no doubt any suggestions on the FS7 II will be taken into account as well.

The Electronic Variable ND is easily the best feature of the camera: so much time in the field will be saved by not having to swap NDs, and the ability to shift through increments between the standard ND levels will be hugely beneficial for nailing exposure. Being able to adjust exposure mid-shot, without physically moving filters in front of the image, will be a great feature for those shooting outdoors or working events where the lighting is uneven. Speed cannot be emphasized enough, and such a massively advantageous feature cuts more and more time from whatever production you're working on.

Pairing the camera with the new 18-110mm F4 makes a great package for location shooting, since you will be covered for nearly every focal length with a sharp lens that offers servo zooming, internal stabilization and low flaring. The lens might be off-putting to some narrative filmmakers, since it only opens to f/4.0 and isn't fast by other lens standards, but given its quality and attention to optical performance it deserves serious consideration alongside other lenses that aren't quite cinema lenses but have seen heavy use in the narrative world. With the native ISO of 2000, one should be able to shoot comfortably wide open or stopped down with proper lighting, and for films shot mostly in natural light this lens should be highly considered.

Oftentimes when choosing a camera, the biggest question isn't what the camera offers but what it will cost. Since Sony isn't discontinuing the original FS7, the FS7 II will be more expensive, and once you factor in BP-U60 batteries and XQD cards the price will only climb. Despite this, one must always consider the price of storage and power when upgrading a camera system. More powerful cameras will no doubt require faster cards and bigger power supplies, so these costs should be seen as investments.

While XQD cards might seem pricey to some, especially those more familiar with buying and using SD cards, I consider jumping into the XQD world a necessary step in developing your video capabilities. Faster media such as XQD and CFast cards are becoming the norm in higher-end digital cinema, especially at the level where the FS7 II is being considered.

Compromise is expected at any level of production, be it technical, logistical or artistic. After getting an impression of what the FS7 II can provide and facilitate in any production scenario, I feel this is one of the few cameras that will remove the feeling of compromise from your work.

The FS7 II will be available in January 2017 for an estimated street price of $10,000 (body only) and $13,000 for the camcorder with 18-110mm power zoom lens kit.


Daniel Rodriguez is a cinematographer and photographer living in New York City. Check out his work here. Dan took many of the pictures featured in this article.


Capturing the Olympic spirit for Coke

By Randi Altman

There is nothing like the feeling you get from a great achievement, or spending time with people who are special to you. This is the premise behind Coke's Gold Feelings commercial out of agency David. The spot, which aired on broadcast television and via social media and exists in 60-, 30- and 15-second iterations, features Olympic athletes at the moment of winning. Along with the celebratory footage, there are graphics featuring quotes about winning and an update of the iconic Coke ribbon.

The agency brought in Lost Planet, Black Hole's parent company, for graphics, editing and final finishing. Lost Planet provided the editing, while Black Hole handled graphics and finishing.

Tim Vierling

Still feeling the Olympic spirit, we reached out to Black Hole producer Tim Vierling to find out more.

How early did you get involved in the project?
Black Hole became involved early on in the offline edit, when the team was initially conceptualizing how to integrate graphics. We worked with the agency creatives to lay out the supers and helped determine which approach would be best.

How far along was it in terms of the graphics at that point?
Whereas the agency established the print portion of the creative beforehand, much of the animation was undiscovered territory. For the end tag, Black Hole animated various iterations of the Coke ribbon wiping onto screen and carefully considered how this would interact with each subject in the end shots.

We then had to update the existing disc animation to complement the new and improved/iconic Coke ribbon. The titles/supers that appear throughout the spot were under constant scrutiny — from tracking to kerning to font type. We held to a rule that type could never cross over an athlete’s face, which led to some clever thinking. Black Hole’s job was to locate the strongest moments to highlight and rotoscope various body parts of the athletes, having them move over and behind the titles throughout the spot.

What was the most challenging part of the project?
Olympics projects tend to have a lot of moving parts, and there were some challenges caused by licensing issues, forcing us to adapt to an unusually high number of editorial changes. This, in turn, resulted in constant rotoscoping. Often a new shot didn't work well with the previous supers, so they were changing as frequently as the edit. This forced us to push the schedule, but in the end we delivered something we're really proud of.

What tools did you use?
Adobe After Effects and Photoshop, Imagineer Mocha and Autodesk Flame were all used for finishing and graphics.

A question for Lost Planet’s assistant editor Steven san Miguel: What direction were you given on the edit?
The spots were originally boarded with supers on solid backgrounds, but Lost Planet editors Kimmy Dube and Max Koepke knew this wouldn't really work for a 60-second spot. It was just too much to read and not enough footage. Max was the first to suggest a level of interactivity between the footage and the type, so from the very beginning we were working with Black Hole to lay out the type and roto the footage. This started before the agency even sat down with us. And since the copy and the footage were constantly changing, there had to be really close communication between Lost Planet and Black Hole.

Early on the agency provided YouTube links for footage they used in their pitch video. We scoured the YouTube Olympic channel for more footage, and as the spot got closer to being final, we would send the clips to the IOC (International Olympic Committee) and they would provide us with the high-res material.

Check out the spot!

Digging Deeper: Dolby Vision at NAB 2016

By Jonathan Abrams

Dolby, founded over 50 years ago as an audio company, is elevating the experience of watching movies and TV content through new technologies in audio and video, the latter of which is a relatively new area for their offerings. This is being done with Dolby AC-4 and Dolby Atmos for audio, and Dolby Vision for video. You can read about Dolby AC-4 and Dolby Atmos here. In this post, the focus will be on Dolby Vision.

First, let's consider quantization. All digital video signals are encoded as bits. When digitizing analog video, the analog-to-digital conversion process uses a quantizer, which maps each sample to a binary code value. As the bit depth used to represent a finite range increases, the steps between adjacent values get smaller, which directly reduces quantization error. The number of possible values is 2^X, where X is the number of bits available, so a 10-bit signal has four times as many possible encoded values as an 8-bit signal. This difference in bit depth does not equate to more dynamic range: it is the same range of values, represented with a quantization accuracy that increases with the number of bits used.
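The relationship between bit depth, code values and step size can be checked with a few lines of Python (a quick arithmetic sketch, not production code):

```python
# Number of code values and relative step size for different bit depths,
# all quantizing the same normalized 0..1 signal range.
for bits in (8, 10, 12):
    levels = 2 ** bits            # possible encoded values: 2^X
    step = 1.0 / (levels - 1)     # spacing between adjacent code values
    print(f"{bits}-bit: {levels} levels, step {step:.6f}")

# A 10-bit signal has four times as many code values as an 8-bit signal:
assert 2 ** 10 == 4 * 2 ** 8
```

The range being quantized is identical in every case; only the fineness of the steps changes, which is exactly why bit depth alone doesn't buy dynamic range.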

Now, why is quantization relevant to Dolby Vision? In 2008, Dolby began work on a system designed specifically for this application, since standardized as SMPTE ST-2084, which defines an electro-optical transfer function (EOTF) built on a perceptual quantizer (PQ). This builds on work from the early 1990s by Peter G. J. Barten for medical imaging applications. The resulting PQ process allows video to be encoded and displayed across a 10,000-nit range of brightness using 12 bits instead of 14. This is possible because Dolby Vision exploits a characteristic of human vision: our eyes are less sensitive to changes in highlights than they are to changes in shadows.
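For reference, the ST-2084 PQ curve can be sketched in a few lines of Python. The constants are the rational values published in the standard; the function names are my own:

```python
# SMPTE ST-2084 (PQ) constants, expressed as the standard's rational values.
M1 = 2610 / 16384        # ~0.1593
M2 = 2523 / 4096 * 128   # ~78.84
C1 = 3424 / 4096         # ~0.8359
C2 = 2413 / 4096 * 32    # ~18.85
C3 = 2392 / 4096 * 32    # ~18.69

def pq_encode(nits: float) -> float:
    """Map absolute luminance (0 to 10,000 nits) to a PQ code value in [0, 1]."""
    y = max(nits, 0.0) / 10000.0
    return ((C1 + C2 * y ** M1) / (1 + C3 * y ** M1)) ** M2

def pq_decode(code: float) -> float:
    """Inverse: map a PQ code value in [0, 1] back to luminance in nits."""
    e = code ** (1 / M2)
    y = max(e - C1, 0.0) / (C2 - C3 * e)
    return 10000.0 * y ** (1 / M1)
```

Because the curve spends proportionally more code values on the shadows than on the highlights, 12 bits are enough to cover the full 10,000-nit range, which is the perceptual trade-off described above.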

Previous display systems, referred to as SDR or standard dynamic range, are usually 8 bits. Even at 10 bits, SD and HD video is specified to be displayed at a maximum output of 100 nits using a gamma curve. Dolby Vision's nit range is 100 times greater than what we have typically been seeing from a video display.
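As a point of comparison, a conventional SDR display's response can be approximated with a simple power law and a 100-nit peak (a rough sketch; the 2.4 gamma is a common BT.1886-style assumption, not a value from this article):

```python
PEAK_SDR_NITS = 100.0   # maximum output specified for SD/HD display
GAMMA = 2.4             # assumed display gamma (BT.1886-style)

def sdr_display_luminance(code: float) -> float:
    """Approximate luminance in nits for a normalized SDR code value (0..1)."""
    return PEAK_SDR_NITS * max(code, 0.0) ** GAMMA

# Full-scale SDR white lands at 100 nits, while a PQ signal can describe
# luminance up to 10,000 nits -- the 100x range difference noted above.
```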

This brings us to the issue of backwards compatibility. What will be seen by those with SDR displays when they receive a Dolby Vision signal? Dolby is working on a system that will allow broadcasters to derive an SDR signal in their plant prior to transmission. At my NAB demo, there was a Grass Valley camera whose output image was shown on three displays. One display was PQ (Dolby Vision), the second display was SDR, and the third display was software-derived SDR from PQ. There was a perceptible improvement for the software-derived SDR image when compared to the SDR image. As for the HDR, I could definitely see details in the darker regions on their HDR display that were just dark areas on the SDR display. This software for deriving an SDR signal from PQ will eventually also make its way into some set-top boxes (STBs).

This backwards-compatible system works on the concept of layers. The base layer is SDR (based on Rec. 709), and the enhancement layer is HDR (Dolby Vision). This layered approach uses incrementally more bandwidth when compared to a signal that contains only SDR video. For on-demand services, this dual-layer concept reduces the amount of storage required on cloud servers. Dolby Vision also offers a non-backwards-compatible profile using a single-layer approach. In-band signaling over the HDMI connection between a display and the video source will be used to identify whether the TV you are using is capable of SDR, HDR10 or Dolby Vision.

Broadcasting live events using Dolby Vision is currently a challenge, and not only because HDTV cannot carry the different signal; there are still issues with adapting the Dolby Vision process itself for live broadcasting. Dolby is working on these issues, but it is not proposing an entirely new system for Dolby Vision at live events. Some signal paths will be replaced, though the infrastructure, or physical layer, will remain the same.

At my NAB demo, I saw a Dolby Vision clip of Mad Max: Fury Road on a Vizio R65 series display. The red and orange colors were unlike anything I have seen on an SDR display.

Nearly a decade of R&D at Dolby has gone into Dolby Vision. While Dolby Vision has competition in the HDR war from Technicolor and Philips (Prime) and from the BBC and NHK (Hybrid Log-Gamma, or HLG), it has an advantage: several TV models from both LG and Vizio are already Dolby Vision compatible. If Dolby's continued R&D investment in solving the issues around live broadcast yields a solution that broadcasters can successfully implement, Dolby Vision may become the de facto standard for HDR video production.

Jonathan S. Abrams is the Chief Technical Engineer at Nutmeg, a creative marketing, production and post resource.