VFX supervisor Lesley Robson-Foster on Amazon’s Mrs. Maisel

By Randi Altman

If you are one of the many who tend to binge-watch streaming shows, you’ve likely already enjoyed Amazon’s The Marvelous Mrs. Maisel. This new comedy focuses on a young wife and mother living in New York City in 1958, when men worked and women tended to, well, not work.

After her husband leaves her, Mrs. Maisel chooses stand-up comedy over therapy — or you could say stand-up comedy chooses her. The show takes place in a few New York neighborhoods, including the tony Upper West Side, the Garment District and the Village. The storyline brings real-life characters into this fictional world — Midge Maisel studies by listening to Redd Foxx comedy albums, and she also befriends comic Lenny Bruce, who appears in a number of episodes.

Lesley Robson-Foster on set.

The show, created by Amy Sherman-Palladino and Dan Palladino, is colorful and bright and features a significant amount of visual effects — approximately 80 per episode.

We reached out to the show’s VFX supervisor, Lesley Robson-Foster, to find out more.

How early did you get involved in Mrs. Maisel?
Producer Dhana Gilbert brought my producer Parker Chehak and me in early to discuss feasibility issues, as this is a period piece, and to see if Amy and Dan liked us! We’ve been on since the pilot.

What did the creators/showrunners say they needed?
They needed 1958 New York City, weather changes and some very fancy single-shot blending. Also, some fantasy and magic realism.

As you mentioned, this is a period piece, so I’m assuming a lot of your work is based on that.
The big period shots in Season 1 are the Garment District reconstruction. We shot on 19th Street between 5th and 6th — the brilliant production designer Bill Groom did a third of the street practically, and VFX took care of the rest with crowd duplication and CG cars. Then we shot on Park Avenue and had to remove the MetLife building down near Grand Central, and knock out anything post-1958.

We also did a major gag with the driving footage. We shot driving plates around the Upper West Side and had a flotilla of period-correct cars with us, but could not get rid of all the parked cars. My genius design partner on the show, Douglas Purver, created a wall of parked period CG cars and put them over the modern ones. Phosphene then did the compositing.

What other types of effects did you provide?
Amy and Dan — the creators and showrunners — haven’t done many VFX shows, but they are very, very experienced. They write and ask for amazing things that allow me to have great fun. For example, I was asked to make a shot where our heroine is standing inside a subway car, and then the camera comes hurtling backwards through the end of the carriage and then sees the train going away down the tunnel. All we had was a third of a carriage with two and a half walls on set. Douglas Purver made a matte painting of the tunnel, created a CG train and put it all together.

Can you talk about the importance of being on set?
For me being on set is everything. I talk directors out of VFX shots and fixes all day long. If you can get it practically you should get it practically. It’s the best advice you’ll ever give as a VFX supervisor. A trust is built that you will give your best advice, and if you really need to shoot plates and interrupt the flow of the day, then they know it’s important for the finished shot.

Having a good relationship with every department is crucial.

Can you give an example of how being on set might have saved a shot or made a shot stronger?
This is a character-driven show. The directors really like Steadicam and long, long shots following the action. Even though a lot of the effects we want to do really demand motion control, I know I just can’t have it. It would kill the performances and take up too much time and room.

I run around with string and tennis balls to line things up. I watch the monitors carefully and use QTake to make sure things line up within acceptable parameters.

In my experience you have to have the production’s best interests at heart. Dhana Gilbert knows that a VFX supervisor on the crew and as part of the team smooths out the season. They really don’t want a supervisor who is intermittent and doesn’t have the whole picture. I’ve done several shows with Dhana; she knows my idea of how to service a show with an in-house team.

You shot b-roll for this? What camera did you use, and why?
We used a Blackmagic Ursa Mini Pro. We rented one on The OA for Netflix last year and found it to be really easy to use. We liked that it’s self-contained and that we can use the Canon glass from our DSLR kits. It’s got a built-in monitor and it can shoot RAW 4.6K. It cut in just fine with the Alexa Mini for establishing shots and plates. It fits into a single backpack so we could get a shot at a moment’s notice. The user interface on the camera is so intuitive that anyone on the VFX team could pick it up and learn how to get the shot in 30 minutes.

What VFX houses did you employ, and how do you like to work with them?
We keep as much as we can in New York City, of course. Phosphene is our main vendor, and we like Shade and Alkemy X. I like RVX in Iceland, El Ranchito in Spain and Rodeo in Montreal. I also have a host of secret-weapon individuals dotted around the world. For Parker and me, it’s always horses for courses. Whom we send the work to depends on the shot.

For each show we build a small in-house team — we do the temps and figure out the design, and shoot plates and elements before shots leave us to go to the vendor.

You’ve worked on many critically acclaimed television series. Television is famous for quick turnarounds. How do you and your team prepare for those tight deadlines?
Television schedules can be relentless. Prep, shoot and post all at the same time. I like it very much as it keeps the wheels of the machine oiled. We work on features in between the series and enjoy that slower process too. It’s all the same skill set and workflow — just different paces.

If you have to offer a production a tip or two about how to make the process go more smoothly, what would it be?
I would say be involved with EVERYTHING. Keep your nose close to the ground. Really familiarize yourself with the scripts — head trouble off at the pass by discussing upcoming events with the relevant person. Be fluid and flexible and engaged!

A Closer Look: Why 8K?

By Mike McCarthy

As we enter 2018, we find a variety of products arriving to market that support 8K imagery. The 2020 Olympics are slated to be broadcast in 8K, and while clearly we have a way to go, innovations are constantly being released that get us closer to making that a reality.

The first question that comes up when examining 8K video gear is, “Why 8K?” Obviously, it provides more resolution, but that is more of an answer to the how question than the why question. Many people will be using 8K imagery to create projects that are finished at 4K, giving them the benefits of oversampling or re-framing options. Others will use the full 8K resolution on high DPI displays. There is also the separate application of using 8K images in 360 video for viewing in VR headsets.

Red Monstro 8K

Similar technology may allow reduced-resolution extraction on-the-fly to track an object or person in a dedicated 1080p window from an 8K master shot, whether that is a race car or a basketball player. The benefit compared to tracking them with the camera is that these extractions can be generated for multiple objects simultaneously, allowing viewers to select their preferred perspective on the fly. So there are lots of uses for 8K imagery. Shooting 8K for finishing in 4K is, from a workflow perspective, not much different from shooting 5K or 6K, so we will focus on workflows and tools that actually result in an 8K finished product.

8K Production
The first thing you need for 8K video production is an 8K camera. There are a couple of options, the most popular ones being from Red. The Weapon 8K came out in 2016, followed by the smaller-sensor Helium 8K, and the recently announced Monstro 8K. Panavision has the DXL, which by my understanding is really a derivation of the Red Dragon 8K sensor. Canon has been demoing an 8K camera for two years now, with no released product that I am aware of. Sony announced the 8K 3-chip camera UHC-8300 at IBC 2017, but that is probably out of most people’s price range. Those are the only major options I am currently aware of, and the Helium 8K is the only one I have been able to shoot with and edit footage from.

Sony UHC-8300 8K

Moving 8K content around in realtime is a challenge. DisplayPort 1.3 supports 8K at 30p, with dual cables being used for 60p. HDMI 2.1 will eventually allow devices to support 8K video on a single cable as well. (The HDMI 2.1 specification was just released at the end of November, so it will be a while before we see it implemented in products on the market. DisplayPort 1.4 exists today — in GPUs and Dell’s 8K monitor — while HDMI 2.1 only exists on paper and in CES technology demos.) Another approach is to use multiple parallel channels of 12G SDI, similar to how quad 3G SDI can be used to transmit 4K data. It is more likely that by the time most facilities are pushing around lots of realtime 8K content, they will have moved to video over IP, using compression to move 8K streams on 10GbE networks, or moving uncompressed 8K content on 40Gb or 100Gb networks.
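For context on those interface and network numbers, here is a minimal back-of-the-envelope sketch in Python (my own figures, assuming an 8K UHD raster of 7680×4320 with 10-bit 4:2:2 sampling, not any specific camera raster) estimating what an uncompressed 8K stream demands:

```python
# Rough uncompressed bandwidth estimate for an 8K video stream.
# Assumptions (illustrative only): 8K UHD raster, 10-bit 4:2:2 sampling.
width, height = 7680, 4320
bits_per_sample = 10
samples_per_pixel = 2  # 4:2:2 = 1 luma sample + 0.5 Cb + 0.5 Cr per pixel

for fps in (30, 60):
    bits_per_frame = width * height * samples_per_pixel * bits_per_sample
    gbps = bits_per_frame * fps / 1e9
    print(f"{fps}p: ~{gbps:.1f} Gb/s uncompressed")

# Roughly 20 Gb/s at 30p and 40 Gb/s at 60p -- more than a 10GbE link can
# carry without compression, which is why 40Gb/100Gb links (or compressed
# streams) come up for realtime 8K over IP.
```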

Software
The next step is the software part, which is in pretty good shape. Most high-end applications are already set for 8K, because high resolutions are already used for backplates and other unique purposes, and because software is the easiest part of supporting higher resolutions. I have edited 8K files in Adobe Premiere Pro in a variety of flavors without issue. Both Avid Media Composer and Blackmagic Resolve claim to support 8K content. Codec-wise, there are already lots of options for storing 8K, including DNxHR, Cineform, JPEG2000 and HEVC/H.265, among many others.

Blackmagic DeckLink 8K Pro

The hardware to process those files in realtime is a much greater challenge, but we are just seeing the release of Intel’s next generation of high-end computing chips. The existing gear is just at the edge of functional at 8K, so I expect the new systems to make 8K editing and playback a reality at the upper end. Blackmagic has announced the DeckLink 8K Pro, a PCIe card with quad 12G SDI ports. I suspect that AJA’s new Io 4K Plus may support 8K at some point in the future, with quad bidirectional 12G SDI ports. Thunderbolt 3 is the main bandwidth limitation there, but it should do 4:2:2 at 24p or 30p. I am unaware of any display that can take that yet, but I am sure they are coming.

In regard to displays, the only 8K monitor commercially available is Dell’s UP3218K, running on dual DisplayPort 1.4 cables. It looks amazing, but you won’t be able to hook it up to your 8K camera for live preview very easily. An adapter is a theoretical possibility, but I haven’t heard of any being developed. Most 8K assets are being recorded to be used in 4K projects, so output and display at 8K aren’t as big a deal. Most people will have their needs met with existing 4K options, with the 8K content giving them the option to reframe their shot without losing resolution.

Dell UP3218K

Displaying 8K content at 4K is a much simpler proposition with current technology. Many codecs allow for half-res decode, which makes the playback requirements similar to 4K at full resolution. While my dual-processor desktop workstation can play back most any intermediate codec at half resolution for 4K preview, my laptop seems like a better test-bed to evaluate the fractional-resolution playback efficiency of various codecs at 8K, so that will be one of my next investigations.
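As a quick sanity check on that half-resolution argument, a tiny sketch (my own comparison, using 8K UHD and 4K UHD rasters; exact behavior depends on the codec) shows why a half-res decode of 8K behaves like full-res 4K:

```python
# Pixel counts for full-resolution vs. half-resolution (1/2 per axis) decode.
rasters = {"8K UHD": (7680, 4320), "4K UHD": (3840, 2160)}
for name, (w, h) in rasters.items():
    full_mp = w * h / 1e6
    half_mp = (w // 2) * (h // 2) / 1e6
    print(f"{name}: full {full_mp:.1f} MP, half-res decode {half_mp:.1f} MP")

# A half-res decode of an 8K UHD frame is exactly a 4K UHD frame (8.3 MP),
# so the decode and playback load is comparable to full-resolution 4K.
```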

Assuming you want to show your content at the full 8K, how do you deliver it to your viewers? H.264 files are hard-limited to 4K, but HEVC (or H.265) allows 8K files to be encoded and decoded at reasonable file sizes, and is hardware-accelerated on the newest GPU cards. So 8K HEVC playback should be possible on shipping mid- and high-end computers, provided that you have a display to see it on. 8K options will continue to grow as TV makers push to set apart their top-of-the-line models, and that will motivate development of the rest of the ecosystem to support them.
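As an illustration of that delivery path (not the author’s workflow; the file names, preset and quality settings are placeholder choices), here is one way an 8K HEVC file could be encoded with the open source FFmpeg tool and its software libx265 encoder, driven from Python — hardware encoders would also work where available:

```python
import subprocess

# Hypothetical example: encode an 8K master to HEVC/H.265 with FFmpeg's libx265.
cmd = [
    "ffmpeg",
    "-i", "master_8k.mov",        # placeholder 8K mezzanine file
    "-c:v", "libx265",            # software HEVC encoder
    "-preset", "slow",            # speed vs. efficiency trade-off
    "-crf", "20",                 # constant-quality target
    "-tag:v", "hvc1",             # tag so QuickTime-based players recognize HEVC
    "-c:a", "aac", "-b:a", "320k",
    "delivery_8k_hevc.mp4",
]
subprocess.run(cmd, check=True)
```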


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

NBCUni 7.26

On Hold: Making an indie web series

By John Parenteau

On Hold is an eight-episode web series, created and co-written by myself and Craig Kuehne, about a couple of guys working at a satellite company for an India-based technology firm. They have little going for themselves except each other, and that’s not saying much. Season 1 is available now, and we are in prepro on Season 2.

While I personally identify as a filmmaker, I’ve worn a wide range of hats in the entertainment industry since graduating from USC School of Cinematic Arts in the late ‘80s. As a visual effects supervisor, I’ve been involved in projects as diverse as Star Trek: Voyager and The Hunger Games. I have also filled management roles at companies such as Amblin Entertainment, Ascent Media, Pixomondo and Shade VFX.

That’s me in the chair, conferring on setup.

It was with my filmmaker hat on that I recently partnered with Craig, a long-time veteran of visual effects, whose credits include Westworld and Game of Thrones. We thought it might be interesting to share our experiences as we ventured into live-action production.

It’s not unique that Craig and I want to be filmmakers. I think most industry professionals, who are not already working as directors or producers, strive to eventually reach that goal. It’s usually the reason people like us get into the business in the first place, and what many of us continue to pursue. Often we’ve become successful in another aspect of entertainment and found it difficult to break out of those “golden handcuffs.” I know Craig and I have both felt that way for years, despite having led fairly successful lives as visual effects pros.

But regardless of our successes in other roles, we still identify ourselves as filmmakers, and at some point, you just have to make the big push or let the dream go. I decided to live by my own mantra that “filmmakers make film.” Thus, On Hold was born.

Why the web series format, you might ask? With so many streaming and online platforms focused on episodic material, doing a series would show we are comfortable with the format, even if ours was a micro-version of a full series. We had, for years, talked about doing a feature film, but that type of project takes so many resources and so much coordination. It just seemed daunting in a no-budget scenario. The web series concept allows us to produce something that resembles a marketable project, essentially on little or no budget. In addition, the format is easily recreated for an equally low budget, so we knew we could do a second season of the show once we had done the first.

This is Craig, pondering a shot.

The Story
We have been friends for years, and the idea for the series came from both our friendship and our own lives. Who hasn’t felt, as they were getting older, that maybe some of the life choices they made might not have been the best? That can be a serious topic, but we took a comedic angle, looking for the extremes. Our main characters, Jeff (Jimmy Blakeney) and Larry (Paul Vaillancourt), are subtle reflections of us (Craig is Jeff, the somewhat over-thinking, obsessive nerd, and I’m Larry, a bit of a curmudgeon, who can take himself way too seriously), but they quickly took on lives of their own, as did the rest of the cast. We added in Katy (Brittney Bertier), their over-energetic intern; Connie (Kelly Keaton), Jeff’s bigger-than-life sister; and Brandon (Scott Rognlien), the creepy and not-very-bright boss. The chemistry just clicked. They say casting is key, and we certainly discovered that on this project. We were very lucky to find the actors we did, and they played off of each other perfectly.

So what does it take to do a web series? First off, writing was key. We spent a few months working out the overall storyline of the first season and then homed in on the basic outlines of each episode. We actually worked out a rough overall arc of the show itself, deciding on a four-season project, which gave us a target to aim for. It was just some basic imagery for an ultimate ending of the show, but it helped keep us focused and helped drive the structure of the early episodes. We split up writing duties, each working on alternate episodes and then sharing scripts with each other. We tried to be brutally honest; it was important that the show reflect both of our views. We spent many nights arguing over certain moments in each episode, both very passionate about the storyline.

In the end we could see we had something good, we just needed to add our talented actors to make it great.

On Hold

The Production
We shot on a Blackmagic Cinema Camera, which was fairly new at that point. I wanted the flexibility of different lenses but a high-resolution, high-quality picture. I had never been thrilled with standard DSLR cameras, so I thought the Blackmagic camera would be a good option. To top it off, I could get one for free — always a deciding factor at our budget level. We ended up shooting with a single Canon zoom lens that Craig had, and for the most part it worked fine. I can’t tell you how important the “glass” you shoot with can be. If we’d had the budget, I would have rented some nice Zeiss lenses or something equally professional, and the quality of the image reflects that lack of budget. But the beauty of the Blackmagic Cinema Camera is that it shoots such a nice image already, and at such a high resolution, that we knew we would have some flexibility in post. We recorded in Apple ProRes.

As a DP, I have shot everything from PBS documentaries to music videos, commercials and EPKs (a.k.a. behind-the-scenes projects), and have had the luxury of working with a load of gear and, at other times, just a single light. At USC Film School, my alma mater, you learn to work with what you have, so I learned early to adapt my style to the gear on hand. I ended up using a single lighting kit (a Lowel DP three-head kit), which worked fine. Shooting comedy is always more about static angles and higher-key lighting, and my limited kit made that easily accessible. I would usually lift the ambience in the room by bouncing a light off a wall or ceiling area off camera, then use bounce cards on C-stands to give some source light from the top/side, complementing but not competing with the existing fluorescents in the office. The bigger challenges were when we shot toward the windows. The bright sunlight outside, even with the blinds closed, was a challenge, but we creatively scheduled those shots for early or late in the day.

Low-budget projects are always an exercise in inventiveness and flexibility, mostly by the crew. We had a few people helping off and on, but ultimately it came down to the two of us wearing most of the hats and our associate producer, Maggie Jones, filling in the gaps. She handled the SAG paperwork, some AD tasks, ordered lunch and even operated the boom microphone. That left me shooting all but one episode, while we alternated directing episodes. We shot an episode a day, using a friend’s office on the weekends for free. We made sure we created shot lists ahead of time, so I could see what Craig had in mind when I shot his episodes, but also so he could act as a backup check on my list when I was directing.

The Blackmagic camera at work.

One thing about SAG — we decided to go with the guild’s new media contract for our actors. Most of them were already SAG, and while they most likely would have been fine shooting such a small project non-union, we wanted them to be comfortable with the work. We also wanted to respect the guild. Many people complain that working under SAG, especially at this level, is a hassle, but we found it to be exactly the opposite. The key is keeping up with the paperwork each day you shoot. Unless you are working incredibly long hours, or plan to abuse your talent (not a good idea regardless), it’s fairly easy to remain compliant. Maggie managed the daily paperwork and ensured we broke for lunch as per the requirements. Other than that, it was a non-issue.

The Post
Much like our writing and directing, Craig and I split editorial tasks. We both cut on Apple Final Cut Pro X (he with pleasure, me begrudgingly), and shared edits with each other. It was interesting to note differences in style. I tended to cut long, letting scenes breathe. Craig, a much better editor than I, had snappier cuts that moved quicker. This isn’t to say my way didn’t work at times, but it was a nice balance as we made comments on each other’s work. You can tell my episodes are a bit longer than his, but I learned from the experience and managed to shorten my episodes significantly.

I did learn another lesson, one called “killing your darlings.” In one episode, we had a scene where Jeff enjoyed a box of donuts, fishing through them to find the fruit-filled one he craved. The process of him licking each one and putting them back, or biting into a few and spitting out pieces, was hilarious on set, but in editorial I soon learned that too much of a good thing can be bad. Craig persuaded me to trim the scene, and I realized quickly that having one strong beat is just as good as several.

We had a variety of issues with other areas of post, but with no budget we could do little about them. Our “mix” consisted of adjusting levels in our timeline. Our DI amounted to a little color correction. While we were happy with the end result, we realized quickly that we want to make season two even better.

On Hold

The Lessons
A few things pop out as areas needing improvement. First of all, shooting a comedy series with a great group of improv comedians mandates at least two cameras. Both Craig and I, as directors, would do improv takes with the actors after getting the “scripted version,” but some of it was not usable, since cutting between different improv takes from a single-camera shoot is nearly impossible. We also realized the importance of a real sound mixer on set. Our single-mic, mono tracks, run by our unprofessional hands, definitely needed some serious fixing in post. Simply having more experienced hands would have made our day more efficient as well.

For post, I certainly wanted to use newer tools, and we called in some favors for finishing. A confident color correction really makes the image cohesive, and even a rudimentary audio mix can remove many sound issues.

All in all, we are very proud of our first season of On Hold. Despite the technical issues and challenges, what really came together was the performances, and, ultimately, that is what people are watching. We’ve already started development on Season 2, which we will start shooting in January 2018, and we couldn’t be more excited.

The ultimate lesson we’ve learned is that producing a project like On Hold is not as hard as you might think. Sure it has its challenges, but what part of entertainment isn’t a challenge? As Tom Hanks says in A League of Their Own, “It’s supposed to be hard. If it wasn’t hard everyone would do it.” Well, this time, the hard work was worth it, and has inspired us to continue on. Ultimately, isn’t that the point of it all? Whether making films for millions of dollars, or no-budget web series, the point is making stuff. That’s what makes us filmmakers.


SMPTE ST 2110 enables IP workflows

By Tom Coughlin

At IBC 2017 and this year’s SMPTE Conference there were significant demonstrations of IP-based workflows, in both interoperability showcases and conference sessions. Clearly, proprietary media networking will be supplanted by IP-based workflows. This will enable new equipment economies and open up new opportunities for using and repurposing media. IP workflows will also impact the way we store and use digital content, and thus the storage systems where it lives.

SMPTE has just ratified the ST 2110 standards for IP transport in media workflows. The standard puts video, audio and ancillary data into separate routable streams, as shown in the figure below. PCM audio streams are covered by SMPTE ST 2110-30, uncompressed video streams by ST 2110-20 and ancillary data by ST 2110-40. Other parts of the standard cover traffic shaping of uncompressed video (ST 2110-21) and AES3 transparent transport (ST 2110-31), while ST 2110-50 allows integration with the older ST 2022-6 specification, which covers legacy SDI over IP.

The separate streams carry timestamps, provided per ST 2059, that allow them to be properly aligned when they are recombined. Each stream contains metadata that tells the receiver how to interpret what is inside it. The uncompressed video stream supports images up to 32K x 32K, HDR and all common color systems and formats.
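As a rough illustration of what one of those uncompressed video essence streams asks of an IP network, here is a small back-of-the-envelope calculation (my own example figures, assuming 10-bit 4:2:2 video and ignoring RTP/UDP/IP packet overhead):

```python
# Approximate payload bandwidth of an uncompressed, ST 2110-20 style video stream.
def video_gbps(width, height, fps, bits=10, samples_per_pixel=2):
    """samples_per_pixel=2 corresponds to 4:2:2 sampling."""
    return width * height * samples_per_pixel * bits * fps / 1e9

print(f"1080p50: ~{video_gbps(1920, 1080, 50):.2f} Gb/s")   # ~2.1 Gb/s
print(f"2160p50: ~{video_gbps(3840, 2160, 50):.2f} Gb/s")   # ~8.3 Gb/s

# Several uncompressed HD streams fit on a 10GbE link, while a single UHD
# stream nearly fills it -- one reason compressed streams or faster links
# come into play as resolutions climb.
```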

The important thing about these IP standards is that they allow the use of conventional Ethernet cabling rather than special proprietary cables. This saves a lot of money on hardware. In addition, having an IP-based workflow allows easy ingest into a core IP network and distribution of content using IP-based broadcast, telco, cable and broadband technologies, as well as satellite channels. As most consumers have IP content access, these IP networks connect directly to consumer equipment. The image below, from an Avid presentation by Shailendra Mathur at SMPTE 2017, illustrates this workflow.

At IBC and the SMPTE 2017 Conference there were interoperability demonstrations. Although the IBC interop demo had many more participants, the SMPTE demo was pretty extensive. The photo below shows the SMPTE interoperability demonstration setup.

As many modern network storage systems, whether file- or object-based, use Ethernet connectivity, having the rest of the workflow on an IP network makes moving data to and from digital storage easier. Since access to cloud-based assets is also through IP-based networks, and these can feed CDNs and other distribution networks, on-premises and cloud storage interact through IP networks and can be used to support working storage and archives, as well as content distribution libraries.

IP workflows combined with IP-based digital storage provide end-to-end processing and storage of data. This brings hardware economies and access to a lot of software built to manage and monitor IP flows, helping optimize a media production and distribution system. By avoiding the overhead of converting from one type of network to another, overall system complexity is reduced and efficiency improved, resulting in faster projects and easier troubleshooting when problems arise.


Tom Coughlin is president of Coughlin Associates. He is the founder and organizer of the annual Storage Visions Conference as well as the Creative Storage Conference. He has also been the general chairman of the annual Flash Memory Summit.


Working with Anthropologie to build AR design app

By Randi Altman

Buying furniture isn’t cheap; it’s an investment. So imagine having an AR app that allows you to see what your dream couch looks like in paisley, or colored dots! Well imagine no more. Anthropologie — which sells women’s clothing, shoes and accessories, as well as furniture, home décor, beauty and gifts — just launched its own AR app, which gives users the ability to design and customize their own pieces and then view them in real-life environments.

They called on production and post house CVLT to help design the app. The bi-coastal studio created over 96,000 assets, allowing users to combine products in very realistic and different ways. The app also accounts for environmental lighting and shadows in realtime.

We reached out to CVLT president Alberto Ruiz to find out more about how the studio worked with Anthropologie to create this app.

How early did CVLT get involved in the project?
Our involvement began in the spring of 2017. We collaborated early in the planning phases when Anthropologie was concepting how to best execute the collection. Due to our background in photography, video production and CGI, we discussed the positives and pitfalls of each avenue, ultimately helping them select CGI as the path forward.

We’re often approached by a brand with a challenge and asked to consult on the best way to create the assets needed for the campaign. With specialists in each category, we look at all available ways of executing a particular project and provide a recommendation as to the best way to build a campaign with longevity in mind.

How did CVLT work with Anthropologie? How much input did you have?
We worked in close collaboration with Anthropologie every step of the way. We helped design style guides and partnered with their development team to test and optimize assets for every platform.

Our creatives worked closely with Anthropologie to elevate the assets to a high quality reflective of the product’s integrity. We presented CGI as a way to engage customers now and in the future through AR/VR platforms. Because of this partnership, we understood the vision for future executions and built our assets with those executions in mind. They were receptive to our suggestions and engaged in product feedback. All in all, it was a true partnership between companies.

Has CVLT worked on assets or materials for an app before? How much of your work is for apps or the web?
The majority of the work that we produce is for digital platforms, whether for the web, mobile or experiential platforms. In addition to film and photography projects, we produce highly complex CGI products for luxury jewelers, fragrance and retail companies.

More and more clients are looking to either supplement or run full campaigns digitally. We believe that investing in emerging technologies, such as augmented and virtual reality, is paramount in the age of digital and mobile content. Our commitment to emerging technologies connects our clients with the resources to explore new ways of communicating with their audience.

What were the challenges of creating so many assets? What did you learn that could be applicable moving forward?
The biggest challenge was unpacking all the variables within this giant puzzle. There are 138 unique pieces of furniture in 11 different fabrics, with 152 colorways, eight leg finishes and a variety of hardware options. Stylistically, colors of a similar family were to live on complementary backgrounds, adding yet another variable to the project. It was basically a Rubik’s Cube on steroids. Luckily, we really enjoy puzzles.
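To give a sense of how quickly those variables multiply, here is a purely illustrative sketch; the real combination rules (which fabrics, colorways and leg finishes apply to which pieces) aren’t described here, so the lists below are hypothetical stand-ins:

```python
from itertools import product

# Toy stand-ins for the variables described above (real counts: 138 pieces,
# 11 fabrics, 152 colorways, 8 leg finishes).
pieces = ["sofa", "chair", "ottoman"]
fabrics = ["linen", "velvet", "performance weave"]
colorways = ["indigo", "blush", "moss", "dove"]
leg_finishes = ["oak", "walnut"]

variants = list(product(pieces, fabrics, colorways, leg_finishes))
print(len(variants), "renderable combinations from this tiny subset")  # 72

# Even a toy subset yields 72 combinations; scaled toward the real counts
# (with the appropriate pairing rules), a library quickly passes 96,000 assets.
```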

We always believed in having a strong production team and pipeline. It was the only way to achieve the scale and quality of this project. This was further reinforced as we raced toward the finish line. We’re now engaged in future seasons and are focused on refining the pipe and workflow tools therein.

Any interesting stories from working on the project?
One of the most interesting things about working on the project was how much we learned about furniture. The level of planning and detail that goes into each piece is amazing. We talk a lot about the variables in colors, fabrics and styles because they are the big factors. What remains hidden are the small details that have large impacts. We were given a crash course in stitching details, seam placements, tufting styles and more. Those design details are what set an Anthropologie piece apart.

Another interesting part of the project was working with such an iconic brand with a strong heritage. The rich history of design at Anthropologie permeates every aspect of their work. The same level of detail poured into product design is also visible in the way they communicate with and understand their customer.

What tools were used throughout the project?
Every time we approach a new project we assess the tools that we have in our arsenal and the custom tools that we can develop to make the process smoother for our clients. This project was no different in that sense. We combined digital project management tools with proprietary software to create a seamless experience for our client and staff.

We built a bi-coastal team for this project between our New York and Los Angeles offices. Between that and our Philadelphia-based client, we relied heavily on collaborative digital tools to manage reviews. It’s a workflow we’re accustomed to as many of our clients have a global presence, which was further refined to meet the scale of this project.

What was the most difficult part of the project?
The timeframe was really the biggest challenge in this project. The sheer volume of assets we created, 96,000 in under five months, was definitely a monumental task, and one we’re very proud of.


Making 6 Below for Barco Escape

By Mike McCarthy

There is a new movie coming out this week that is fairly unique. Telling the true story of Eric LeMarque surviving eight days lost in a blizzard, 6 Below: Miracle on the Mountain is the first film shot and edited in its entirety for the new Barco Escape theatrical format. If you don’t know what Barco Escape is, you are about to find out.

This article is meant to answer just about every question you might have about the format and how we made the film, on which I was post supervisor, production engineer and finishing editor.

What is Barco Escape?
Barco Escape is a wraparound visual experience — it consists of three projection screens filling the width of the viewer’s vision with a total aspect ratio of 7.16:1. The exact field of view will vary depending on where you are sitting in the auditorium, but it is usually 120-180 degrees. Similar to IMAX, it is not about filling the entire screen with your main object but leaving that in front of the audience and letting the rest of the image surround them and fill their peripheral vision for a more immersive experience. Three separate 2K scope theatrical images play at once, resulting in 6144×858 pixels of imagery to fill the room.
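The arithmetic behind that canvas is straightforward; a quick check (assuming the standard 2K scope raster of 2048×858 per screen) shows where the 6144×858 figure and the 7.16:1 ratio come from:

```python
# Three 2K scope screens (2048 x 858 each) side by side.
screen_w, screen_h = 2048, 858
total_w = 3 * screen_w

print(f"canvas: {total_w} x {screen_h}")             # 6144 x 858
print(f"aspect ratio: ~{total_w / screen_h:.2f}:1")  # ~7.16:1
```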

Is this the first Barco Escape movie?
Technically, four other films have screened in Barco Escape theaters, the most popular one being last year’s release of Star Trek Beyond. But none of these films used the entire canvas offered by Escape throughout the movie. They had up to 20 minutes of content on the side screens, but the rest of the film was limited to the center screen that viewers are used to. Every shot in 6 Below was framed with the surround format in mind, and every pixel of the incredibly wide canvas is filled with imagery.

How are movies created for viewing in Escape?
There are two approaches that can be used to fill the screen with content. One is to place different shots on each screen in the process of telling the story. The other is to shoot a wide enough field of view and high enough resolution to stretch a single image across the screens. For 6 Below, director Scott Waugh wanted to shoot everything at 6K, with the intention of filling all the screens with main image. “I wanted to immerse the viewer in Eric’s predicament, alone on the mountain.”

Cinematographer Michael Svitak shot with the Red Epic Dragon. He says, “After testing both spherical and anamorphic lens options, I chose to shoot Panavision Primo 70 prime lenses because of their pristine quality of the entire imaging frame.” He recorded in 6K-WS (2.37:1 aspect ratio at 6144×2592), framing with both 7:1 Barco Escape and a 2.76:1 4K extraction in mind. Red does have an 8:1 option and a 4:1 option that could work if Escape was your only deliverable. But since there are very few Escape theaters at the moment, you would literally be painting yourself into a corner. Having more vertical resolution available in the source footage opens up all sorts of workflow possibilities.

This still left a few challenges in post: to adjust the framing for the most comfortable viewing and to create alternate framing options for other deliverables that couldn’t use the extreme 7:1 aspect ratio. Other projects have usually treated the three screens separately throughout the conform process, but we treated the entire canvas as a single unit until the very last step, breaking out three 2K streams for the DCP encode.

What extra challenges did Barco Escape delivery pose for 6 Below’s post workflow?
Vashi Nedomansky edited the original 6K R3D files in Adobe Premiere Pro, without making proxies, on some maxed-out Dell workstations. We did the initial edit with curved ultra-wide monitors and 4K TVs. “Once Mike McCarthy optimized the Dell systems, I was free to edit the source 6K Red RAW files and not worry about transcodes or proxies,” he explains. “With such a quick turnaround every day, and so much footage coming in, it was critical that I could jump on the footage, cut my scenes, see if they were playing well and report back to the director that same day if we needed additional shots. This would not have been possible time-wise if we were transcoding and waiting for footage to cut. I kept pushing the hardware and software, but it never broke or let me down. My first cut was 2 hours and 49 minutes long, and we played it back on one Premiere Pro timeline in realtime. It was crazy!”

All of the visual effects were done at the full shooting resolution of 6144×2592, as was the color grade. Once Vashi had the basic cut in place, there was no real online conform, just some cleanup work to do before sending it to color as an 8TB stack of 6K frames. At that point, we started examining it from the three-screen perspective, with three TVs to preview it in realtime, courtesy of the Mosaic functionality built into Nvidia’s Quadro GPU cards. Shots were realigned to avoid having important imagery in the seams, and some areas were stretched to compensate for the angle of the side screens from the audience’s perspective.

DP Michael Svitak and director Scott Waugh

Once we had the final color grade completed (via Mike Sowa at Technicolor using Autodesk Lustre), we spent a day in an Escape theater analyzing the effect of reflections between the screens and its effect on the contrast. We made a lot of adjustments to keep the luminance of the side screens from washing out the darks on the center screen, which you can’t simulate on TVs in the edit bay. “It was great to be able to make the final adjustments to the film in realtime in that environment. We could see the results immediately on all three screens and how they impacted the room,” says Waugh.

Once we added the 7.1 mix, we were ready to export assets for our delivery in many different formats and aspect ratios. Making the three streams for Escape playback was as simple as using the crop tool in Adobe Media Encoder to trim the sides in 2K increments.
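As an analogous sketch of that last step (the film itself used Adobe Media Encoder’s crop tool; the FFmpeg commands and file names below are illustrative, not the production pipeline), the three screens could be sliced from a 6144×858 master like this:

```python
import subprocess

# Illustrative only: slice a 6144x858 master into left/center/right 2K streams.
slices = {"left": 0, "center": 2048, "right": 4096}  # x offsets in pixels

for name, x_offset in slices.items():
    subprocess.run([
        "ffmpeg", "-i", "escape_master_6144x858.mov",  # placeholder file name
        "-vf", f"crop=2048:858:{x_offset}:0",           # crop=w:h:x:y
        "-c:v", "prores_ks",                            # an intermediate codec choice
        f"escape_{name}_2048x858.mov",
    ], check=True)
```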

How can you see movies in the Barco Escape format?
Barco maintains a list of theaters that have Escape screens installed, which can be found at ready2escape.com. But for readers in the LA area, the only opportunity to see a film in Barco Escape in the foreseeable future is to attend one of the Thursday night screenings of 6 Below at the Regal LA Live Stadium or the Cinemark XD at Howard Hughes Center. There are other locations available to see the film in standard theatrical format, but as a new technology, Barco Escape is only available in a limited number of locations. Hopefully, we will see more Escape films, and locations to watch them, in the future.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.


A closer look at some London-based audio post studios

By Mel Lambert

While in the UK recently for a holiday/business trip, I had the opportunity to visit several of London’s leading audio post facilities and catch up with developments among the Soho community.

‘Baby Driver’

I also met up with Julian Slater, a highly experienced supervising sound editor, sound designer and re-recording mixer who relocated to the US a couple of years ago, working first at Formosa Group and then at the Technicolor at Paramount facility in Hollywood. Slater was in London working on writer/director Edgar Wright’s action-drama Baby Driver, starring Lily James, Jon Hamm, Jon Bernthal and Jamie Foxx. The film follows the progress of a young getaway driver who, after being coerced into working for a crime boss, finds himself taking part in a heist that’s doomed to fail.

Goldcrest Films
Slater handled sound effects pre-dubs at Goldcrest Films on Dean Street in the heart of Soho’s film district, while co-mixer Tim Cavagin worked on dialog and Foley pre-mixes at Twickenham TWI Studios in Richmond, a London suburb west of the capital. Finals started just before Christmas at Goldcrest, with Slater handling music and SFX, while Cavagin oversaw dialog and Foley. “We are using Goldcrest’s new Dolby Atmos-capable Theater 1, which opened last May,” explains Slater. “The post crew includes sound effects editors Arthur Graley, Jeremy Price and Martin Cantwell, plus dialog/ADR supervisor Dan Morgan and Foley editor Peter Hanson.

“I cannot reveal too much about my sound design for Baby Driver,” admits Slater, “but because the lead character [actor Ansel Elgort] has a hearing anomaly, I am working with pitch changes to interweave various elements of the film’s soundtrack.”

Baby Driver is scheduled for UK and US release in August, and will be previewed in mid-March at the SXSW Film Festival in Austin. Composer Steven Price’s score for the film was recorded at Abbey Road Studios in North London. Price wrote the music for writer/director Alfonso Cuarón’s Gravity (2013), which won him the Academy Award for Best Original Score.

British-born Wright is probably best known for comedies, such as Shaun of the Dead (2004), Hot Fuzz (2007) and The World’s End (2013), several of which featured Slater’s talents as supervising sound editor, sound designer and/or re-recording mixer.

Slater is a multiple BAFTA and Emmy Award nominee. After graduating from the School of Audio Engineering (now the SAE Institute) in London, at the age of 22 he co-founded the Hackenbacker post company and designed sound for his first feature film, director Mike Figgis’ Leaving Las Vegas (1995). Subsequent films include In Bruges (2008), Dark Shadows (2012), Scott Pilgrim Vs. the World (2010) and Attack the Block (2011).

Goldcrest Films, which has a NYC-based studio as well, provides post services for film and broadcast projects, including Carol (2015), The Danish Girl (2015) and Les Misérables (2012). The facility features three Dolby dubbing theaters with DCI-compliant projection, plus ADR and Foley recording stages, sound design and editing suites, offline editorial and grading suites. “Last May we opened Theatre 1,” reports studio manager Rob Weatherall, “a fully sound-isolated mixing theater that is Dolby Atmos Premier-certified.”

Goldcrest Films Theater 1 (L-R): Alex Green, Rowan Watson, Julian Slater, Rob Weatherall and Robbie Scott.

First used to re-record writer/director Paul Greengrass’ Jason Bourne (2016), the new room houses a hybrid Avid 32-fader S6 M40 Pro Tools control surface section within a 72-fader dual-engine AMS Neve DFC3D Gemini frame. By building interchangeable AMS and S6 “buckets” in a single console frame, the facility can mix and match formats according to the re-recording engineers’ requirements — either “in the box” using the S6 surface, or a conventional workflow using the DFC sections.

“I like working in the box,” says Slater, “since it lets me retain all my sound ideas right through print mastering. For Baby Driver we premixed to a 9.1-channel bed with Atmos objects and brought this submix here to Goldcrest where we could open everything seamlessly on the S6 console and refine all my dialog, music and effects submixes for the final Atmos immersive mix. Because I have so much sound design for the music being heard by our lead character, including sound cues for the earbuds and car radios, it’s the only way to work! We also had a lot of music playback on the set.”

The supervising sound editor needed to carefully prepare myriad sound cues. “Having worked on all of his films, I have come to recognize that Edgar [Wright] is an extremely sound-conscious director,” Slater reports. “The soundtrack for Baby Driver needed to work seamlessly and sound holistic — not forced in any way. In other words, while sound is important in this film — for obvious reasons — it is critical that we don’t distract the audience from the dramatic storyline.”

Theater 1’s 55-loudspeaker Atmos array includes a mixture of Crown-powered JBL 5732 ScreenArray cabinets in the front with Meyer cabinets for the surrounds. Accommodated formats include 5.1, 7.1 and DTS:X. Five Pro Tools playback systems are available with Waves Platinum plug-in packages, plus a 192-channel Pro Tools HDX 3 recorder. Each Pro Tools rig features a DAD DX32 audio interface, with both Audinate Dante- and MADI-format digital outputs. The latter can be routed to the DFC console for conventional mixing or to a sixth rig with a DAD AX32 converter system for in-the-box mixing on the S6 control surface. Video projection is via a Barco DP2K-10SX and an Integrated Media Server for DCP playback, plus Pro Tools Native with an AJA video card. Outboards include a pair of Lexicon 960 reverbs, two TC 6000 reverbs and four dbx subharmonic synthesizers.

Hackenbacker Audio Post
Around the corner from Goldcrest, Slater’s former company, Hackenbacker Audio Post, is a multi-room post facility that was purchased in July 2015 by Molinare from e-Post Media, owners of Halo Post. Hackenbacker handled sound for the TV series Downton Abbey, Cold Feet and Thunderbirds Are Go, plus director Richard Ayoade’s film The Double (2013). Owner/founder Nigel Heath remains a director of the group management team for the facility’s three dubbing studios, five edit suites and a large Foley stage located a short distance away.

Hackenbacker’s Studio 2

Hackenbacker Studio 1 has been Heath’s home base for more than a decade. It houses a large-format 48-fader AMS Neve MMC console with three Avid HD3 Pro Tools systems, two iZ Technologies RADAR 24-track recorder/players and a Dynaudio M3F 5.1 monitoring system; the room was used to re-record Hot Fuzz, In Bruges, Shaun of the Dead and many other projects.

Studio 2 features Dynaudio monitoring along with an Avid Icon 16-fader D-Control surface linked to a Pro Tools HDX system. It is used for 5.1 TV mixing and ADR and includes a large booth suitable for both ADR and voice-over. Also designed for TV mixing and ADR, Studio 3 features Quested monitoring and an Avid ICON 32-fader D-control surface linked to a Pro Tools HDX system. Edit 1 and 2 handle a wide cross section of sound effects editorial assignments, with access to a large sound library and other creative tools. Edit 3 and 4 are equipped for dialog and ADR editing. Edit 5 features a transfer bay and QC facility in which all sound material is verified and checked.

Twickenham TWI Studios
According to technology development manager/re-recording mixer Craig Irving, Twickenham TWI Studios recently completed mixing of the soundtrack for writer/director Stanley Tucci’s Final Portrait, the story of Swiss painter and sculptor Alberto Giacometti, starring Armie Hammer and Geoffrey Rush. The film was re-recorded by Tim Cavagin and Irving, with sound editorial by Tim Hands on dialog and Jack Gillies on effects.

The lounge at Twickenham-TWI.

“Dialog tracks for Baby Driver were pre-mixed by Tim in our Atmos-capable Theatre 1,” explains Irving. “Paul Massey will be returning soon to complete the mix in Theatre 1 for director Ridley Scott’s Alien: Covenant, which reunites the same sound team that worked on The Martian — with Oliver Tarney supervising, Rachel Tate on dialog, and Mark Taylor and our very own Dafydd Archard on effects.” Massey also mixed Scott’s Exodus: Gods and Kings (2014) at Twickenham TWI, and he worked on director Rufus Norris’ London Road (2015) and director Tim Miller’s Deadpool (2016). He recently completed work on the upcoming Pirates of the Caribbean: Dead Men Tell No Tales. While normally based at Fox Post Production Services in West Los Angeles, Massey also spends time in his native England overseeing a number of film projects.

The facility’s stages have also been busy with production of Netflix’s Black Mirror series, which consists of six original films looking at the darker side of modern life. Episode 1 was directed by Jodie Foster. “To service an increase in production, we are investing in new infrastructure that will feature a TV mixing stage,” explains Irving. “The new room will be based around an Avid S6 control surface and used as a bespoke area to mix original TV programming, as well as creating TV mixes of our theatrical titles. Our Picture Post area is also being expanded with a second FilmLight Baselight Two color grading system with full 4K projection for both theatrical and broadcast projects.”

Twickenham TWI’s rooftop bar and restaurant opened its doors to clients and staff last year. “It has proved extremely popular and is open to membership from within the industry,” Irving says. The facility’s remodeled front office and reception area was designed by Barbarella Design. “We have chosen a ’60s retro, Mad Men theme in greys and red,” says the studio’s COO Maria Walker. In addition to its two main re-recording theaters, TWI offers 40 cutting rooms, an ADR/Foley stage and three shooting stages.

Warner Bros. De Lane Lea
Just up the street from Goldcrest Films is Warner Bros. De Lane Lea, which started as a multi-room studio. It also has a rather unusual ancestry. In the 1940s, Major De Lane Lea was looking to improve the way dialog for film and later TV could be recorded and replaced in order to streamline dubbing between French and English. This resulted in his setting up a company called De Lane Lea Processes and a laboratory in Soho. The company also developed a number of other products aimed at post, and over the next 30 years opened a variety of studios in London for voice recording, film, TV and jingle mixing, music recording and orchestral-score recording.

De Lane Lea’s Stage 1.

Around 1970, the operation moved into its current building on Dean Street and shifted its focus toward film and TV sound. The facility, which was purchased by Warner Bros. in 2012, currently includes four re-recording stages, two ADR stages for recording dialog, voiceovers and commentaries, plus 50 cutting rooms, a preview theater, transfer bay and a café/bar. Three of the Dolby-certified mixing stages are equipped with AMS Neve DFC Gemini consoles or Avid S6 control surfaces and Meyer monitoring. A TV mixing stage boasts an Avid Pro Tools control surface and JBL monitoring.

Stage 1 features an AMS Neve 80-fader DFC Gemini digital two-mixer console with an Avid control surface, linked to a Meyer Sound EXP system providing Dolby Atmos monitoring. Six Pro Tools playback systems are available — three 64-channel HDX and three 128-channel HDX2 rigs — together with a 128-channel HDX2 Pro Tools recorder. Film projection is from a Kinoton FP38ECII 35mm unit, with a Barco DP2K-23B digital cinema projector offering resolutions up to 2K. Video playback within Pro Tools is via a VCubeHD nonlinear player or a Blackmagic card. Outboards include a Lexicon 960 and a TC 6000 reverb, plus two dbx Subharmonic Synthesizers. Stage 2 is centered around an Avid S6 M40 24-fader console linked to three Pro Tools playback systems — a pair of 64-channel HDX2 and a single 128-channel HDX2 rig — plus a 64-channel HDX recorder. Monitoring is via a 7.1-channel Meyer Sound EXP system.

Warner Bros. Studios Leavesden
Located 20 miles northwest of Central London and serving as Warner Bros.’ UK shooting lot, Warner Bros. Studios Leavesden offers a number of well-equipped stages for large-scale productions, in addition to a large tank for aquatic scenes. The facility’s history dates back almost 70 years, to when the site was acquired by the UK Ministry of Defence in 1939 as a WWII production base for building aircraft, including the iconic Mosquito fighter and Halifax bomber. When hostilities ceased, the site was purchased by Rolls-Royce and continued as a base for aircraft manufacture, later progressing to large engines. It eventually closed in 1992.

Warner Bros. Leavesden’s studio layout.

In 1994, Leavesden began a new life as a film studio and over the following decades was home to a number of high-profile productions, including the James Bond film GoldenEye (1995), Mortal Kombat: Annihilation (1997), Star Wars Episode One: The Phantom Menace (1999), An Ideal Husband (1999) and director Tim Burton’s Sleepy Hollow (1999).

By 2000, Heyday Films had acquired use of the site on behalf of Warner Bros. for what would be the first in a series of Harry Potter films — Harry Potter and the Philosopher’s Stone (2001) — with each subsequent film in the franchise during the following decade being shot at Leavesden. While other productions, almost exclusively Warner Bros. productions, made partial use of the complex, the site was mostly occupied by permanent standing sets for the Harry Potter films.

In 2010, as the eighth and final Harry Potter film was nearing completion, Warner Bros. announced its intention to purchase the studio as a permanent European base, the first studio to do so since MGM in the 1940s. By November of that year, the studio had completed purchase of Leavesden Studios and announced plans to invest more than £100 million (close to $200 million at the time) on the site they had occupied, converting Stages A through H into sound stages. As part of the redevelopment, Warner Bros. created two entirely new soundstages to house a permanent public exhibition called Warner Bros. Studio Tour London — The Making of Harry Potter, creating 300 new jobs. It opened to the public in early 2012.

With over 100 acres, WBSL features one of the most extensive backlots in Europe, with level, graded areas, including a former aircraft runway, a variety of open fields, woodlands, hills and clear horizons. It also offers bespoke art departments, dry-hire edit suites and VFX rooms, as well as two of the largest water tanks in Europe: a 60-by-60-foot filtered and heated indoor tank and a 250-by-250-foot exterior tank.

Main Image: Goldcrest London’s Theater 1.


Mel Lambert is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.


Utopic editor talks post for David Lynch tribute Psychogenic Fugue

Director Sandro Miller called on Utopic partner and editor Craig Lewandowski to collaborate on Psychogenic Fugue, a 20-minute film starring John Malkovich in which the actor plays seven characters in scenes recreated from some of filmmaker David Lynch’s films and TV shows. These characters include The Log Lady, Special Agent Dale Cooper, and even Lynch himself as narrator of the film.

It is part of a charity project called Playing Lynch that will benefit the David Lynch Foundation, which seeks to introduce at-risk populations affected by trauma to transcendental meditation.

Chicago-based Utopic handled all the post, including editing, graphics, VFX and sound design. The film is part of a multimedia fundraiser hosted by Squarespace and executed by Austin-based agency Preacher. The seven vignettes were released one at a time on playinglynch.com.

To find out more about Utopic’s work on the film, we reached out to Lewandowski with some questions.

How early were you brought in on the film?
We were brought in before the project was even finalized. There were a couple other ideas that were kicked around before this one rose to the top.

We cut together a timing board using all the pieces we would later be recreating. We also pulled some hallway scenes from an old PlayStation commercial that Lynch directed, and we then scratched in all the “Lynch” lines for timing.

You were on set. Can you talk about why and what the benefits were for the director and you as an editor?
My job on the set was to have our reference movie at the ready and make sure we were matching timing, framing, lighting, etc. Sandro would often check the reference to make sure we were on track.

For scenes like the particles in Eraserhead, I had the DP shoot it at various frame rates and at the highest possible resolution, so we could shoot it vertical and use the particles falling. I also worked with the Steadicam operator to get a variety of shots in the hallway since I knew we’d need to create some jarring cutaways.

How big of a challenge was it dealing with all those different iconic characters, especially in a 20-minute film?
Sandro was adamant that we not try to “improve” on anything that David Lynch originally shot. Having had a lot of experience with homages, Sandro knew that we couldn’t take liberties. So the sets and action were designed to be as close as possible to the original characters.

In shots where it was only one character originally (The Lady in the Radiator, Special Agent Dale Cooper, Elephant Man) it was easier, but in scenes where there were originally more characters and now it was just Malkovich, we had to be a little more creative (Frank Booth, Mystery Man). Ultimately, with the recreations, my job was to line up as closely as possible with what was originally done, and then with the audio do my best to stay true to the original.

Can you talk about your process and how you went about matching the original scenes? Did you feel much pressure?
Sandro and I have worked together before, so I didn’t feel a lot of pressure from him, but I think I probably put a fair amount on myself because I knew how important this project was for so many people. And, as is the case with anything I edit, I don’t take it lightly that all of that effort that went into preproduction and production now sits on my shoulders.

Again, with the recreations it was actually fairly straightforward. It was the corridor shots where Malkovich plays Lynch and recites lines taken from various interviews that offered the biggest opportunity, and challenge. Because there was no visual reference for this, I could have some more fun with it. Most of the recreations are fairly slow and ominous, so I really wanted these corridor shots to offset the vignettes, kind of jar you out of the trance you were just put in, make you uneasy and perhaps squirm a bit, before being thrust into the next recreation.

What about the VFX? Can you talk about how they fit in and how you worked with them?
Many of the VFX were either in-camera or achieved through editorial, but there were spots — like where he’s in the corridor and snaps from the front to the back — that I needed something more than I could accomplish on my own, so I used our team at Utopic. However, when cutting the trailer, I relied heavily on our motion graphics team for support.

Psychogenic Fugue is such an odd title, so the writer/creative director, Stephen Sayadin, came up with the idea of using the dictionary definition. We took it a step further, beginning the piece with the phonetic spelling and then seamlessly transitioning the whole thing. They then tried different options for titling the characters. I knew I wanted to use the hallway shot, close-ups of the characters and ending on Lynch/Malkovich in the chair. They gave me several great options.

What was the film shot on, and what editing system did you use?
The film was shot on Red at 6K. I worked in Adobe Premiere, using the native Red files. All of our edit machines at Utopic are custom-built, high-performance PCs assembled by the editors themselves.

What about tools for the visual effects?
Our compositor/creative finisher used an Autodesk Flame, and our motion graphics team used Adobe After Effects.

Can you talk about the sound design?
I absolutely love working on sound design and music, so this was a dream come true for me. With both the film and the trailer, our composer Eric Alexandrakis provided me with long, odd, disturbing tracks, complete with stems. So I spent a lot of time just taking his music and sound effects and manipulating them. I then had our sound designer, Brian Lietner, jump in and go crazy.

Is there a scene that you are most proud of, or that was most challenging, or both?
I really like the snap into the flame/cigarette at the very beginning. I spent a long time just playing with that shot, compositing a bunch of shots together, manipulating them, adjusting timing, coming back in the next morning and changing it all up again. I guess that and Eraserhead. We had so many passes of particles and layered so many throughout the piece. That shot was originally done with him speaking to camera, but we had this pass of him just looking around, and realized it was way more powerful to have the lines delivered as though they were internal monologue. It also allowed us to play with the timings in a way that we wouldn’t be able to with a one-take shot.

As far as what I’m most proud of, it’s the trailer. We worked really hard to get the recreations and full film done. Then I was able to take some time away from it all and come back fresh. I knew that there was a ton of great footage to work with and we had to do something that wasn’t just a cutdown. It was important to me that the trailer feel every bit as demented as the film itself, if not more. I think we accomplished that.

Check out the trailer here:


The creative process behind The Human Rights Zoetrope

By Sophia Kyriacou

As an artist working in the broadcast industry for almost 20 years, I’ve designed everything from opening title sequences to program brands to content graphics. About three years into my career, I was asked to redesign a program entirely in 3D. The rest, as they say, is history.

Over two years ago I was working full-time at the BBC doing the same work I do now as a broadcast designer and 3D artist, but decided it was time to cut my time in half and allow myself to focus on my own creative ventures. I wanted to work with external and varied clients, both here in the UK and internationally. I also wanted to use my spare time for development work. In an industry where technology is constantly evolving, it’s essential to keep ahead of the game.

One of those creative ventures came from Noon Visual Creatives — a London-based production and post company that serves several Arabic broadcasters in the United Kingdom and worldwide — which commissioned me to create a television branding package for a program called Human Rights.

I had previously worked with Noon on a documentary about the ill-fated 1999 EgyptAir plane crash (which is still awaiting broadcast), so when I was approached again I was more than happy to create their Human Rights brand.

My Inspiration
I was very lucky in that my client essentially gave me free rein, which I find is a rarity these days. I have always been excited and inspired by the works of the creative illusionist M.C. Escher. His work has always made me think and explore how you can hook your viewer by giving them something to unravel and interact with. His 1960 lithograph, Ascending and Descending, was my initial starting point. There was something about the figures going round and round but getting nowhere.

While Escher’s work kickstarted my creative process, I also wanted to create something that was illusion-based, so I revisited Mark Gertler’s Merry-Go-Round. As a young art student I had his poster on my wall. Sometimes I would find myself staring at it for hours, looking at the people’s expressions and the movement Gertler had expressed in the figures with his onion-skin-style strokes. There was so much movement within the painting that it jumped out at me. I loved the contrasting colors of orange and blue; the composition was incredibly strong and animated.

I have always been fascinated by the mechanics of old hand-cranked metal toys, including zoetropes, and I have always loved how inanimate objects could come alive to tell you a story. It is very powerful. You have the control to be given the narrative or you can walk away from it — it’s about making a choice and being in control.

Once I had established I was going to build a 3D zoetrope, I explored the mechanics of building one. It was the perfect object to address the issue of human rights because without the trigger it would remain lifeless. I then started digging into the Declaration of Human Rights to put forward a proposal of what I thought would work within their program. I shortlisted 10 rights and culled that down to the final eight. Everything had to be considered. The positioning of the final eight had its own hierarchy, and each had its place.

At the base of the zoetrope are water pumps, signifying the right to clean water and sanitation. This is the most important element of the entire zoetrope, grounding the entire structure, as without water there simply is no life, no existence. Above, a prisoner gestures for attention to the outside world, his environment completely contradictory, given hope by an energetic burst of comforting orange. The gavel references the right to justice and is subliminally inspired by the hammers walking defiantly within the Pink Floyd video Another Brick in the Wall. The gavel within the zoetrope becomes that monumental object of power, helped along by the dynamic camera, with repetitions of itself staggered over time like echoes on a loop. Surrounding the gavel of justice is a dove flying free from a metal birdcage in the shape of the world. This was my reference to the wonderful book I Know Why the Caged Bird Sings, by Maya Angelou.

My client wanted to highlight the crisis of the Syrian refugees, so I decided to depict an exhausted child wearing a life jacket, suggesting he had travelled across the Mediterranean Sea, while a young girl at his side, oblivious, happily plays with a spinning top. I wanted to show the negativity being cancelled out by optimism.

To hammer home the feeling of isolation and emptiness that the lack of human rights brings forth, I placed the zoetrope into a cold and almost brutal environment: an empty warehouse. My theme of positivity cancelling out negativity is echoed once again as the sunlight penetrates through, hitting the cold floor to signify hope and a reconnection with the outside world.

Every level of detail was broken up into sections. I created very simple one-second loops of animation that were subtle, but enough to tell the story. Once I had animated each section, it was a case of painstakingly pulling apart each object into a stop-frame animated existence, so that once they were placed in position and spun, they would animate back into life again.

My Workflow
For ease and budget, I used Poser Pro, character-based animation software, to animate all the figures in isolation first. Using both the PoserFusion plug-in and the Alembic export, I was able to import each looping character into Maxon Cinema 4D, where I froze and separated each 3D object one by one. Any looping objects that were not figure-based were modelled and animated within Cinema 4D. Once the individual components were animated and positioned, I imported everything into a master 3D scene where I was able to focus on the lighting and camera shots.

For the zoetrope centrepiece, I built a simple lighting rig made up of the GSG Light Kit Pro (two soft boxes that I had adapted and placed within a Null) and an area Omni light above. This allowed me to rotate the rig around according to my camera shot. Having a default position and brightness setup was great; it helped get me out of trouble if I got a little too carried away with the settings, and the lighting didn’t change too dramatically on each camera shot. I also added a couple of Visible Area Spotlights outside the warehouse pointing inwards to give the environment a foggy, distant feel.

I deliberately chose not to render using volumetric lighting because I didn’t want that specific look and did not want any light bursts hitting my zoetrope. The zoetrope was the star of the show and nothing else. Another lighting feature I tend to use within my work is the combination of the Physical Sky and the Sun. Both give a natural warm feel and I wanted sunlight to burst through the window; it was conceptually important and it added balance to the composition.

The most challenging part of the entire project was getting the lighting to work seamlessly throughout, as well as the composition within some of the camera shots. Some shots were very tight in frame, so I could not rely on the default rig and needed additional lighting to catch objects where the three-point lights didn’t work so well. I had decided very early on that, rather than work from a single master file, I would keep a default “get me out of trouble” master (as with the lighting) and save each shot with its own independent settings as I went along to keep my workflow clean. Each scene file was around a gigabyte in size, as none of the objects within the zoetrope were parametric anymore once they had been split, separated out and converted to polygons.

My working machine was a 3.2GHz 8-core Mac Pro with 24GB of RAM; rendering was done on a custom-built 3X3 PC with a water-cooled Intel Core i7 5960X (clockable to 4.5GHz) and 32GB of RAM.

Since completion, The Human Rights Zoetrope titles have won several awards, including a Gold at the Muse Creative Awards in the Best Motion Graphics category, a Platinum Best of Show in the Art Direction category, and a Gold in the Best Graphic Design category at the Aurora Awards.

The Human Rights Zoetrope is also a Finalist at the New York Festivals 2017 in the Animation: Promotion/Open & IDs category. The winners will be announced at the NAB Show.

 

Sophia Kyriacou is a London-based broadcast designer and 3D artist.

GoPro intros Karma foldable drone, Hero5 with voice-controlled recording

By Brady Betzel

“Hey, GoPro, start recording!” That’s right, voice-controlled recording is here. Does this mean pros can finally start all their GoPros at the same time? More on this in a bit…

I’m one of the lucky few journalists/reviewers who have been brought out to Squaw Valley, California, to hear about GoPro’s latest products first hand — oh, and I got to play with them as well.

So, the long-awaited GoPro Karma drone is finally here, but it’s not your ordinary drone. It is small and foldable so it can fit in a backpack, and the three-axis camera stabilizer can be attached to the included Karma grip so you can grab the drone before it lands and carry it or mount it. This is huge! If worked out correctly, you can now fake a gigantic jib swing with a GoPro, or even create some ultra-long shots. One of the best parts is that the controller is a videogame-style remote that doesn’t require you to use your phone or tablet! Thank you, GoPro! No, really, thank you.

The Karma is priced at $799, the Karma plus Session is $999, and the Karma plus Hero5 Black is $1,099. And it’s available one day before my birthday next month — hint, hint, nudge, nudge — October 23.

To the Cloud! GoPro Plus and Quik Apps
So you might have been wondering how GoPro intends to build a constant revenue stream. Well, it seems like they are banking on the new GoPro Plus cloud-based subscription service. While your new Hero5 is charging, it can auto-upload photos and videos via a computer or phone. In addition, you will be able to access, edit and share all from GoPro Plus. For us editing nerds, this is the hot topic because we want to edit everything from anywhere.

My question is this: If everyone gets on the GoPro Plus train, are they prepared for the storage and bandwidth requirements? Time will tell. In addition to being able to upload to the cloud with your GoPro Plus subscription, you will have a large music library at your disposal, 20 percent off accessories from GoPro.com, exclusive GoPro Apparel and Premium Support.

The GoPro Plus subscription breaks down to $4.99 and is available in the US on October 2 — it will come to more markets in January 2017.

Quik App is GoPro’s ambitious attempt at creating an autonomous editing platform. I am really excited about this (even though it basically eliminates the need for an editor — more on this later). While many of you may be hearing about Quik for the first time, it actually has been around for a bit. If you haven’t tried it yet, now is the time. One of the most difficult parts of a GoPro’s end-to-end workflow is the importing, editing and exporting. Now, with GoPro Plus and Quik you will be automatically uploading your Hero5 footage while charging so you can be editing quickly (or Quik-ly. Ha! Sorry, I had to.)

Hero5 Black and Hero5 Session
It’s funny that the Hero5 Black and Session are last on my list. I guess I am kind of putting what got GoPro to the dance last, but last doesn’t in any way mean least!

Hero5 Black

Available on October 2, the Hero5 Black is $399, and includes the following:
● Two-inch touch display with simplified controls.
● Up to 4K video at 30fps
● Auto-upload to GoPro Plus while charging
● Voice Control with support for seven languages, with more to come
● Simplified one-button control
● Waterproof, without housing, to 33 feet
● Compatible with existing mounts, including Karma
● Stereo audio recording
● Video Stabilization built-in
● Fish-eye-free wide-angle video
● RAW and WDR (wide dynamic range) photo modes
● GPS built-in!

Hero5 Session is $299 and offers these features:
● Same small design
● Up to 4K at 30fps
● 10 Megapixel photos
● Auto upload to GoPro Plus while charging
● Voice Control support for seven languages with more to come
● Simplified one-button control
● Waterproof, without housing, to 33 feet
● Compatible with existing mounts, including Karma
● Video Stabilization built in
● Fish-eye-free wide-angle video

Summing Up
GoPro has made power moves. They not only took the original action camera — the Hero — to the next level with upgrades like built-in image stabilization, waterproofing without a housing and simplified controls in the Hero5 Black and Hero5 Session, they also added 4K recording at 30fps and stereo audio recording with Advanced Wind Noise Reduction.

Not only did they upgrade their cameras, GoPro is attempting to revolutionize the drone market with the Karma. The Karma has potential to bring the limelight back to GoPro and steal some thunder from competitors, like DJI, with this foldable and compact drone whose three-axis gimbal can be held by the included Karma handle.

Hero5 Session

Remember that drone teaser video that everyone thought was fake!? Here it is just in case. Looks like that was real, and with some pre-planning you can recreate these awesome shots. What’s even more awesome is that later this year GoPro will be launching the “Quik Key,” a micro-USB card reader that plugs into your phone to transfer your videos and photos, as well as REMO — a voice-activated remote control for the Hero5 (think Apple TV, but for your camera: “GoPro, record video”).

Besides the incredible multimedia products GoPro creates, I really love the family feeling and camaraderie within the GoPro company and athletes they bring in to show off their tools. Coming from the airport to Squaw Valley, I was in the airport shuttle with some mega-pro athletes/content creators like Colin, and they were just as excited as I was.

It was kind of funny because the people who are usually in the projects I edit were next to me geeking out. GoPro has created this amazing, self-contained ecosphere of content creators and content manipulators who are fan-boys and fan-girls. The energy around the GoPro Karma and Hero5 announcement is incredible, and they’ve created their own ultra-positive culture. I wish I could bottle it up and give it out to everyone reading this news.

Check out some video I shot here.

Brady Betzel is an online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Earlier this year, Brady was nominated for an Emmy for his work on Disney’s Unforgettable Christmas Celebration.

‘Suicide Squad’: Imageworks VFX supervisor Mark Breakspear 

By Randi Altman

In Warner Bros.’ Suicide Squad, a band of captured super-villains are released from prison by the government and tasked with working together to fight a common enemy, the evil Enchantress. This film, which held top box office honors for weeks, has a bit of everything: comic book antiheroes, super powers, epic battles and redemption. It also features a ton of visual effects work that was supervised by Sony Imageworks’ Mark Breakspear, who worked closely with production supervisor Jerome Chen and director David Ayer (see our interview with Ayer).

Mark Breakspear

Breakspear is an industry veteran with more than 20 years of experience as a visual effects supervisor and artist, working on feature films, television and commercials. His credits include American Sniper, The Giver, Ender’s Game, Thor: The Dark World, The Great Gatsby… and that’s just to name a few.

Suicide Squad features approximately 1,200 shots, with Imageworks doing about 300, including the key fight at the end of the film between Enchantress, the Squad, Incubus and Mega Diablo. Imageworks also provided shots for several other sequences throughout the movie.

MPC worked on the majority of the other visual effects, with Third Floor creating postviz after the shoot to help with the cutting of the film.

I recently threw some questions at Breakspear about his process and work on Suicide Squad.

How early did you get involved in the project?
Jerome Chen, the production supervisor, involved us from the very beginning in the spring of 2015. We read the script and started designing one of the most challenging characters — Incubus. We spent a couple of months working with designer Tim Borgmann to finesse the details of his overall look, shape and, specifically, his skin and sub-surface qualities.


How did Imageworks prepare for taking on the film?
We spent time gathering as much information as we could about the work we were looking to do. That involved lengthy calls with Jerome to pick over every aspect of the designs that David Ayer wanted. As it was still pretty early, there was a lot more “something like” rather than “exactly like” when it came to the ideas. But this is what the prepro was for, and we were able to really focus on narrowing down the many ideas into key selections and give the crew something to work with during the shoot in Toronto.

Can you talk about being on set?
The main shoot was at Pinewood in Toronto. We had several soundstages that were used for the creation of the various sets. Shoot days are usually long and arduous, and this was no exception. For VFX crews, the days are typically hectic, quiet, hectic, very hectic, quiet and then suddenly very hectic again. After wrap, you still have to download all the data, organize it and prep everything for the next day.

I had fantastic help on set from Chris Hebert who was our on-set photographer. His job was to make sure we had accurate records (photographic and data sets) of anything that could be used in our work later on. That meant actors, props, witness cameras, texture photography and any specific one-off moments that occur 300 times a day. Every movie set needs a Chris Hebert or it’s going to be a huge struggle later on in post!

Ok, let’s dig into the workflow. Can you walk us through it?
Workflow is a huge subject, so I’ll keep the answer somewhat concise! The general day would begin with a team meet between all the various VFX departments here at Imageworks. The work was split across teams in both Culver and Vancouver, so we did regular video Hangouts to discuss the daily plan, the weekly targets and generally where we were at, plus specific needs that anyone had. We would usually follow this by department meetings prior to AM dailies where I would review the latest work from the department leads, give notes, select things to show Jerome and David, and give feedback that I may have received from production.

We tried our best to keep our afternoons meeting-free so actual work could get done! Toward the end of the day we would have more dailies, and the final day’s selection of notes and pulls to the client would take place. Most days ended fairly late, as we had to round off the hundreds of emails with meaningful replies, prep for the next day and catch any late submission arrivals from the artists that might benefit from notes before the morning.

What tool, or tools, did you use for remote collaboration?
We used Google Hangouts for video conferencing, and Itview for shot discussion and notes with Jerome and David. Itview is our own software that replaces the need to use [off-the-shelf tools], and allows a much faster, more secure and accurate way to discuss and share shots. Jerome had a system in post and we would place data on it remotely for him to view and comment on in realtime with us during client calls. The notes and drawings he made would go straight into our note tracker and then on to artists as required.

What was the most challenging shot or shots, and why?
Our most challenging work was in understanding and implementing fractals into the design of the characters and their weapons. We had to get up to speed on three-dimensional mandelbulbs and how we could render them into our body of work. We also had to create vortical flow simulations that came off the fractal weapons, which created their own set of challenges due to the nature of how particles uniquely behave when near high-velocity emissions.

So there wasn’t a specific shot that was more challenging than another, but the work that went into most of them required a very challenging pre-design and concept solve involving fractal physics to make them work.

Can you talk about tools — off-the-shelf or proprietary — you used for the VFX? Any rendering in the cloud?
We used Side Effects Houdini and Autodesk Maya for the majority of shots and The Foundry’s Nuke to comp everything. When it came to rendering we used Arnold, and in regards to cloud rendering, we did render remotely to our own cloud, which is about 1,000 miles away — does that count (smiles)?

VFX Supervisor Volker Engel: ‘Independence Day,’ technology and more

Uncharted Territory’s Volker Engel is one of Hollywood’s leading VFX supervisors, working on movies as diverse as White House Down, Hugo and Roland Emmerich’s Shakespeare movie Anonymous. Most recently he was in charge of the huge number of effects for Emmerich’s Independence Day: Resurgence.

Engel was kind enough to make time in his schedule to discuss his 28-year history with Emmerich, his favorite scenes from Independence Day, his experience with augmented reality on set and more.

When did you get involved with Independence Day?
I was probably the earliest person involved after Roland Emmerich himself! He kept me posted over the years while we were working on other projects because we were always going to do this movie.

I think it was 2009 when the first negotiations with 20th Century Fox started, but the important part was early 2014. Roland had to convince the studio regarding the visuals of the project. Everyone was happy with the screenplay, but they said it would be great to get some key images. I hired a company called Trixter — they are based in Germany, but also have an office in LA. They have a very strong art department. In about six weeks we finished 16 images that are what you can call “concept art,” but they are extremely detailed. Most of these concepts can be seen as finished shots in the movie. This artwork was presented to 20th Century Fox and the movie was greenlit.

Concept art via Trixter.

You have worked with Emmerich many times. You must have developed a sort of shorthand?
This is now a 28-year working relationship. Obviously, we haven’t done every movie as a team but I think this is our eighth movie together. There is a shorthand and that helps a lot. I don’t think we really know what the actual shorthand is other than things that we don’t need to talk about because we know what needs to happen.

Technology continues to advance. Does that make life easier, or because you have more options does it make it even more complex?
It’s less the fact that there’s more options, it’s that the audience is so much more sophisticated. We now have better tools available to make better pictures. We can do things now that we were not able to do before. So, for example, now we can imagine a mothership that’s 3,000 miles in diameter and actually lands on Earth. There is a reason we had a smaller mothership in the first movie and that it didn’t touch down anywhere on the planet.

The mothership touching down in DC.

So it changes the way you tell stories in a really fundamental way?
Absolutely. If you look at a movie like Ex Machina, for example, you can show a half-human/half-robot and make it incredibly visually convincing. So all of a sudden you can tell a story that you wouldn’t have been able to tell before.

If you look at the original Independence Day movie, you really only see glimpses of the aliens because we had to do it with practical effects and men in suits. For Independence Day: Resurgence we had the chance to go much further. What I like actually is that Roland decided not to make it too gratuitous, but at least we were able to fully show the aliens.

Reports vary, but they suggest about 1,700 effects shots in Independence Day: Resurgence. Is that correct?
It was 1,748. Close to two-thirds of the movie!

What was your previs process like?
We had two different teams: one joined us from Method Studios and the other was our own Uncharted Territory team, and we split the task in half. The Method artists were working in our facility, so we were all under one roof.

Method focused on the whole lunar sequence, for example, while our in-house team started with the queen/bus chase toward the end of the movie. Roland loves to work with two specific storyboard artists and has several sessions during the week with them, and we used this as a foundation for the previs.

Trixter concept art.

So Roland was involved at the previs stage looking at how it was all going to fit together?
He had an office where the previs team was working, so we could get him over and go literally from artist to artist. We usually did these sessions twice a day.

What tools were you using?
Our in-house artists are Autodesk 3D Studio Max specialists, and the good folks from Method worked with Autodesk Maya.

The live shoot used camera-tracking technology from Ncam to marry the previs graphics and the live action in realtime to give a precise impression of how the final married shot would work.

How were you using the Ncam exactly?
The advantage is that we took the assets we had already built for previs and then re-used them inside the Ncam set-up, doing this with Autodesk Motion Builder. But some of the animation had to be done right there on set.

After: Area 51

I’ll give you an example. When we’re inside the hangar at Area 51, Roland wanted to pan off an actor’s face to show 20 jet fighters lifting off and flying into the distance. The Ncam team and Marion [Spates, the on-set digital effects supervisor] had to do the animation for the fighters right there, on the spot. In about five minutes they came up with something and did the animation, and what’s more, it worked. That’s why Roland also loves to work with Ncam, because it gives him the flexibility to make some decisions right there in the moment.

So you’re actually updating or even creating shots on set?
Yes, exactly. We have the toolbox there — the assets like the interior of the hangar — but then we do it right there to the picture. Sometimes for both the A-camera and the B-camera.

We did a lot of extensions and augmentations on this movie and what really helped was our experience of working with Ncam on White House Down. For Roland, as the director, it helps him compose his images instead of just looking at a gigantic bluescreen. That’s really what it is, and he’s really good at that.

Ncam in use on set.

I explain it this way: imagine you already have your first composite right there, which goes straight to editorial. They immediately have something to work with. We just deliver two video files: the clean one with the bluescreen and another from Ncam that has the composite.

Did using Ncam add to the shooting time?
Working with AR on set always adds some shooting time, and it’s really important that the director is briefed and wants to use this tool. The Ncam prep often runs parallel to the rehearsals with the actors, but sometimes it adds two or three additional minutes. When you have someone who’s not prepared for it, two or three minutes can feel like a lifetime. It does, however, save a lot of time in post.

On White House Down, when we used Ncam for the first time, it actually took a little over a week until everything grooved and everyone was aware of it — especially the camera department. After a little while they just knew this is exactly what needed to be done. It all became instant teamwork. It is something that supports the picture and it’s not a hindrance. It’s something that the director really wants.

Do you have a favorite scene from Resurgence?
There is a sequence inside the mothership where our actors are climbing up one of these gigantic columns. We had a small set piece being built for the actors to climb, and it was really important for Roland to compose the whole image. He could ask for a landing platform to be removed and more columns to be added to create a sense of depth, then move the view around another 50 or 60 degrees.

He was creating his images right there, and that’s why the guys have to be really quick on their feet and build these things in and make it work. At the same time, the assistant director is there saying the cameras are ready, the actors are ready and we’re ready to shoot, and of course no one wants them to wait around, so they better have their stuff ready!

The destruction of Singapore.

Some of my other favorite sequences from the film are the destruction of Singapore while the mothership enters the atmosphere and the alien queen chasing the school bus!

What is next for you?
In 1999, when I started Uncharted Territory with my business partner Marc Weigert, we set it up as a production company and started developing our own projects. We joke that Roland interrupts us from developing our projects because he comes with projects of his own that we just cannot say no to! But we have just come back from a trip to Ireland where we scouted two studios and met with several potential production partners for a new project of our own. Stay tuned!

Talking with new Shade VFX NY executive producer John Parenteau

By Randi Altman

John Parenteau, who has a long history working in visual effects, has been named executive producer of Shade VFX’s New York studio. Shade VFX, which opened in Los Angeles in 2009, provides feature and television visual effects, as well as design, stereoscopic, VR and previs services. In 2014, they opened their New York office to take advantage of the state’s fairly aggressive tax incentives and all that they bring to the city.

“As a native New Yorker, with over a decade of working as an artist there, the decision to open an office back home was an easy one,” explains owner Bryan Godwin. “With John coming on board as our New York executive producer, I feel our team is complete and poised to grow — continuing to provide feature-film-level visuals. John’s deep experience running large facilities, working with top-tier tent-pole clients and access to even more potential talent convinced me that he is the right choice to helm the production efforts out east.”

Shade’s New York office is already flush with work, including Rock that Body for Sony, The OA and The Get Down for Netflix, Mosaic for HBO and Civil for TNT. Not long ago, the shop finished work on Daredevil and Jessica Jones, two of Marvel’s Netflix collaborations. As John helps grow the client list in NYC, he will be supporting NY visual effects supervisor Karl Coyner, and working directly with Shade’s LA-based EP/VP of production Lisa Maher.

John has a long history in visual effects, starting at Amblin Entertainment in the early ‘90s all the way through to his recent work with supercomputer company Silverdraft, which provides solutions for VFX, VR and more. I’ve known him for many years. In fact, I first started spelling John Parenteau’s name wrong when he was co-owner and VFX supervisor at Digital Muse back in the mid to late ‘90s — kidding, I totally know how to spell it… now.

We kept in touch over the years. His passion and love for filmmaking and visual effects has always been at the forefront of our conversations, along with his interest in writing. John even wrote some NAB blogs for me when he was managing director of Pixomondo (they won the VFX Oscar for Hugo during that time) and I was editor-in-chief of Post Magazine. We worked together again when he was managing director of Silverdraft.

“I’ve always been the kind of guy who likes a challenge, and who likes to push into new areas of entertainment,” says John. “But leaving visual effects was less an issue of needing a change and more of a chance to stretch my experience into new fields. After Pixomondo, Silverdraft was a great opportunity to delve into the technology behind VFX and to help develop some unique computer systems for visual effects artists.”

Making the decision to leave the industry a couple years ago to take care of his mother was difficult, but John knew it was the right thing to do. “While moving to Oregon led me away from Hollywood, I never really left the industry; it gets under your skin, and I think it’s impossible to truly get out, even if you wanted to.”

Parenteau realized quickly that the Portland scene wasn’t a hot-bed of film and television VFX, so he took the opportunity to apply his experience in entertainment to a new market, founding marketing boutique Bigfoot Robot. “I discovered a strong need for marketing for small- to mid-sized companies, including shooting and editing content for commercials and marketing videos. But I did keep my hand in media and entertainment thanks to one of my first clients, the industry website postPerspective. Randi and I had known each other for so many years, and our new relationship helped her out technically while allowing me to stay in touch with the industry.”

John’s mom passed over a year ago, and while he was enjoying his work at Bigfoot Robot, he realized how much he missed working in visual effects. “Shade VFX had always been a company I was aware of, and one that I knew did great work,” he says. “In returning to the industry, I was trying to avoid landing in too safe of a spot and doing something I’d already done before. That’s when Bryan Godwin and Dave Van Dyke (owner and president of Shade, respectively) contacted me about their New York office. I saw a great opportunity to help build an already successful company into something even more powerful. Bryan, Lisa and Dave have become known for producing solid work in both feature and television, and they were looking for a missing component in New York to help them grow. I felt like I could fill that role and work with a company that was fun and exciting. There’s also something romantic about living in Manhattan, I have to admit.”

And it’s not just about building Shade for John. “I’m the kind of guy who likes to become part of a community. I hope I can contribute in various ways to the success of visual effects for not only Shade but for the New York visual effects community as a whole.”

While I’ll personally miss working with John on a day-to-day basis, I’m happy for him and for Shade. They are getting a very talented artist, who also happens to be a really nice guy.

Blending Ursa Mini and Red footage for Aston Martin spec spot

By Daniel Restuccio

When producer/director Jacob Steagall set out to make a spec commercial for Aston Martin, he chose to lens it on the Blackmagic Ursa Mini 4.6k and the Red Scarlet. He says the camera combo worked so seamlessly he dares anyone to tell which shots are Blackmagic and which are Red.

L-R Blackmagic’s Moritz Fortmann and Shawn Carlson with Jacob Steagall and Scott Stevens.

“I had the idea of filming a spec commercial to generate new business,” says Steagall. He convinced the high-end car maker to lend him an Aston Martin 2016 V12 Vanquish for a weekend. “The intent was to make a nice product that could be on their website and also be a good-looking piece on the demo reel for my production company.”

Steagall immediately pulled together his production team, which consisted of co-director Jonathan Swecker and cinematographers Scott Stevens and Adam Pacheco. “The team and I collaborated on the vision for the spot, which was to be quick, clean and to the point, while also accentuating the luxury and sexiness of the car.”

“We had access to the new Blackmagic Ursa Mini 4.6k and an older Red Scarlet with the MX chip,” says Stevens. “I was really interested in seeing how both cameras performed.”

He set up the Ursa Mini to shoot ProRes HQ at Ultra HD (3840×2160) and the Scarlet at 8:1 compression at 4K (4096×2160). He used both Canon still camera primes and a 24-105mm zoom, switching them from camera to camera depending on the shot. “For some wide shots we set them up side by side,” explains Stevens. “We also would have one camera shooting the back of the car and the other camera shooting a close-up on the side.”

In addition to his shooting duties, Stevens also edited the spot, using Adobe Premiere, and exported the XML into Blackmagic Resolve Studio 12. Stevens notes that, in addition to loving cinematography, he’s also “really into” color correction. “Jacob (Steagall) and I liked the way the Red footage looked straight out of the camera in the RedGamma4 color space. I matched the Blackmagic footage to the Red footage to get a basic look.”

Blackmagic colorist Moritz Fortmann took Stevens’ basic color correction and finessed the grade even more. “The first step was to talk to Jacob and Scott and find out what they were envisioning, what feel and look they were going for. They had already established a look, so we saved a few stills as reference images to work off. The spot was shot on two different types of cameras, and in different formats. Step two was to analyze the characteristics of each camera and establish a color correction to match the two. Step three was to tweak and refine the look. We did what I would describe as a simple color grade, only relying on primaries, without using any Power Windows or keys.”

If you’re planning to shoot mixed footage, Fortmann suggests you use cameras with similar characteristics, matching resolution, dynamic range and format. “Shooting RAW and/or Log provides for the highest dynamic range,” he says. “The more ‘room’ a colorist has to make adjustments, the easier it will be to match mixed footage. When color correcting, the key is to make mixed footage look consistent. One camera may perform well in low light while another one does not. You’ll need to find that sweet spot that works for all of your footage, not just one camera.”
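
For what it’s worth, here is a toy numerical sketch of what “establishing a color correction to match the two” cameras can boil down to. This is my own simplified, primaries-only illustration in Python/NumPy, not how Fortmann graded the spot in Resolve; the frame data and function name are stand-ins.

import numpy as np

def match_primaries(source, reference):
    # Globally align the source's per-channel mean and spread to the reference's.
    src_mean, src_std = source.mean(axis=(0, 1)), source.std(axis=(0, 1))
    ref_mean, ref_std = reference.mean(axis=(0, 1)), reference.std(axis=(0, 1))
    matched = (source - src_mean) / (src_std + 1e-6) * ref_std + ref_mean
    return np.clip(matched, 0.0, 1.0)

# Stand-in frames (random pixels) just to show the call; real use would load one
# frame from each camera, already converted to a common working color space.
camera_a = np.random.rand(1080, 1920, 3).astype(np.float32) * 0.8   # the dimmer camera
camera_b = np.random.rand(1080, 1920, 3).astype(np.float32)

balanced = match_primaries(camera_a, camera_b)
print(camera_b.mean(axis=(0, 1)), balanced.mean(axis=(0, 1)))        # means now roughly agree

A real grade works shot by shot and by eye, of course; the point is only that a global, primaries-style shift is often enough to bring two sensors into the same ballpark before the creative look goes on top.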

Daniel Restuccio is a writer and chair of the multimedia department at California Lutheran University.

Sony at NAB with new 4K OLED monitor, 4K, 8X Ultra HFR camera

At last year’s NAB, Sony introduced its first 4K OLED reference monitor for critical viewing — the BVM-X300. This year, Sony added a new monitor, the PVM-X550, a 55-inch OLED panel with 12-bit signal processing, perfect for client viewing. The Trimaster EL PVM-X550 supports HDR through various Electro-Optical Transfer Functions (EOTF), such as S-Log3, SMPTE ST.2084 and Hybrid Log-Gamma, covering applications for both cinematography and broadcast. The PVM-X550 is a quad-view OLED monitor, which allows customized individual display settings across four distinct views in HD. It is equipped with the same signal-processing engine as the BVM-X300, providing a 12-bit output signal for picture accuracy and consistency. It also supports industry-standard color spaces, including the wider ITU-R BT.2020 for Ultra High Definition.

HFR Camera
At NAB 2016, Sony displayed its newest camera system, the HDC-4800, which combines 4K resolution with enhanced high-frame-rate capabilities, capturing up to 8X at 4K and 16X in full HD. “This camera system can do a lot of everything — very high frame rate, very high resolution,” said Rob Willox, marketing manager for content creation, Sony Electronics.

The HDC-4800 uses a new Super 35mm 4K CMOS sensor, supporting a wide color space (both BT.2020 and BT.709), and provides an industry-standard PL lens mount, giving the system the capability of using the highest-quality cinematic lenses for clear and crisp high-resolution images.

The new sensor brings the system into the cinematic family of Red and Alexa, making it well suited as a competitor to today’s modern, high-end cinematic digital solutions.

An added feature of the HDC-4800 is how it’s specifically designed to integrate with Sony’s companion system, the Sony HDC-4300, a 2/3-inch image sensor 4K/HD camera. Using matching colorimetry and deep-toolset camera adjustments, and with the ability to take advantage of existing build-up kits, remote control panels and master setup units, the two cameras can blend seamlessly.

Archive
Sony also showed the second generation of its Optical Disc Archive System, which adopts new, high-capacity optical media rated for a 100-year shelf life, with double the transfer rate and double the capacity of a single cartridge at 3.3TB. The Generation 2 Optical Disc Archive System also adds an 8-channel optical drive unit, doubling the read/write speeds of the previous generation and helping to meet the data needs of realtime 4K production.

Making our dialogue-free indie feature ‘Driftwood’

By Paul Taylor and Alex Megaro

Driftwood is a dialogue-free feature film that focuses on a woman and her captor in an isolated cabin. We chose to shoot entirely MOS… because we are insane. Or perhaps we were insane to shoot a dialogue-free feature in the first place, but our choice to remove sound recording from the set was both freeing and nerve wracking due to the potential post production nightmare that lay ahead.

Our decision was based on how, without speech to carry along the narrative, every sound would need to be enhanced to fill in the isolated world of our characters. We wanted draconian control over the soundscape, from every footstep to every door creak, but we also knew the sheer volume of work involved would put off all but the bravest post studios.

The film was shot in a week with a cast of three and a crew of three in a small cabin in Upstate New York. Our camera of choice was a Canon 5D Mark II with an array of Canon L-series lenses. We chose the 5D because we already owned it — so more bang for our buck — and also because it gave us a high-quality image, even with such a small body. Its ease of use allowed us to set up extremely quickly, which was important considering our extremely truncated shooting schedule. Having no sound team on set allowed us to move around freely without the concerns of planes passing overhead or cars rumbling in the distance delaying a shot.

The Audio Post
The editing was a wonderfully liberating experience in which we cut purely to image, never once needing to worry about speech continuity or a host of other factors that often come into play with dialogue-driven films. Driftwood was edited on Apple’s Final Cut Pro X, a program that can sometimes be a bit difficult for audio editing, but for this film it was a non-issue. The Magnetic Timeline was actually quite perfect for the way we constructed this film and made the entire process smooth and simple.

Once picture locked, we brought the project to New York City’s Silver Sound Studios, who jumped at the chance to design the atmosphere for an entire feature from the ground up. We sat with the engineers at Silver Sound and went through Driftwood shot-by-shot, creating a master list of all the sounds we thought necessary to include. Some were obvious, such as footsteps, breathing, clocks ticking and others less so, such as the humming of an old refrigerator or creaking of a wooden chair.

Once the initial list was set, we discussed whether or not to use stock audio or rerecord everything at the original location. Again, because we wanted complete control to create something wholly unique, we concluded it was important to return to the cabin and capture its particular character. Over the course of a few days, the Silver Sound gang rerecorded nearly every sound in the film, leaving only some basic Foley work to complete in their studio.

Once their library was complete, one of the last steps before mixing was to ADR all of the breathing. We had the actors come into the studio over a one-week period during which they breathed, moaned and sighed inside Silver Sound’s recording booth. These subtle sounds are taken for granted in most films, but for Driftwood they were of utter importance. The way the actors would sigh or breathe could change the meaning behind that sound and change the subtext of the scene. If the characters cannot talk, then their expressions must be conveyed in other ways, and in this case we chose a more physiological track.

By the time we completed the film we had spent over a year recording and mixing the audio. The finished product is a world unto itself, a testament to the laborious yet incredibly exciting work performed by Silver Sound.

Driftwood was written, directed and photographed by Paul Taylor. It was produced and edited by Alex Megaro.

Raytracing today and in the future

By Jon Peddie

More papers, patents and PhDs have been written and awarded on ray tracing than on any other computer graphics technique.

Ray tracing is a subset of the rendering market. The rendering market is a subset of software for larger markets, including media and entertainment (M&E), architecture, engineering and construction (AEC), computer-aided design (CAD), scientific, entertainment content creation and simulation-visualization. Not all users who have rendering capabilities in their products use it. At the same time there are products that have been developed solely as rendering tools and there are products that include 3D modeling, animation and rendering capabilities, and they may be used primarily for rendering, primarily for modeling or primarily for animation.

Because ray tracing is so important, and at the same time computationally burdensome, individuals and organizations have spent years and millions of dollars trying to speed things up. A typical ray traced scene on an old-fashioned HD screen can tax a CPU so heavily that the image can only be updated maybe every second or two — certainly not the 33ms needed for realtime rendering.

GPUs can’t help much because one of the characteristics of ray tracing is that it has no memory and every frame is a new frame, so the computational load is immutable. Also, the branching that occurs in ray tracing defeats the power of a GPU’s SIMD architecture.
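
To put rough numbers on that load, here is a minimal back-of-the-envelope sketch in Python. The sample and bounce counts are my own assumptions, not figures from any particular renderer, but the arithmetic shows why a 33ms frame budget is so punishing.

WIDTH, HEIGHT = 1920, 1080          # an "old-fashioned HD screen"
SAMPLES_PER_PIXEL = 4               # assumed anti-aliasing sample count
AVG_BOUNCES = 2                     # assumed average secondary bounces per path

primary_rays = WIDTH * HEIGHT * SAMPLES_PER_PIXEL
total_rays = primary_rays * (1 + AVG_BOUNCES)       # primaries plus bounce rays

frame_budget = 0.033                                # ~33ms per frame for realtime
print(f"rays per frame: {total_rays:,}")            # about 24.9 million
print(f"time budget per ray: {frame_budget / total_rays * 1e9:.2f} ns")  # about 1.3 ns

# And because every frame starts from scratch, none of that work can be reused
# for the next frame.

At roughly a nanosecond per ray, with each ray chasing its own unpredictable path through the scene, it is easy to see why brute force alone has not gotten ray tracing to realtime.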

Material Libraries Critical
Prior to 2015, all ray tracer engines came with their own materials libraries. Cataloging the characteristics of all the types of materials in the world is beyond the resources of any company’s ability to develop and support. And the lack of standards has held back any cooperative development in the industry. However, a few companies have agreed to work together and share their libraries.

I believe we will see an opening up of libraries and the ability of various ray tracing engines to avail themselves of a much larger library of materials. Nvidia is developing a standard-like capability it calls the Material Definition Language (MDL) and is using it to allow various libraries to work with a wide range of ray tracing engines.

Rendering Becomes a Function of Price
In the near future, I expect to see 3D rendering become a capability offered as an online service. While it’s not altogether clear how this will affect the market, I think it will boost the use of ray tracing and lower the cost to an as-needed basis. It also offers the promise of being able to apply huge quantities of processing power limited only by the amount of money the user is willing to pay. Ray tracing will resolve to time (to render a scene) divided by cost.

That will continue to bring down the time to generate a ray traced frame for an animation for example, but it probably won’t get us to realtime ray tracing at 4K or beyond.
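
As a hypothetical illustration of that time-versus-cost trade (the frame count, render times and prices below are invented, not quoted from any service), renting more nodes shortens the wait without changing the bill:

FRAMES = 1440                    # a one-minute animation at 24fps
CORE_HOURS_PER_FRAME = 2.0       # assumed ray tracing cost per frame
PRICE_PER_CORE_HOUR = 0.05       # assumed cloud price, in dollars

total_core_hours = FRAMES * CORE_HOURS_PER_FRAME
total_cost = total_core_hours * PRICE_PER_CORE_HOUR

for nodes in (8, 64, 512):       # each rented node assumed to provide 16 cores
    wall_clock_hours = total_core_hours / (nodes * 16)
    print(f"{nodes:>3} nodes: {wall_clock_hours:6.2f} hours of waiting, ${total_cost:,.2f} total")

The spend is fixed by the scene; only the turnaround changes, which is what makes an as-needed, pay-per-render service attractive.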

Shortcuts and Semiconductors
Work continues on finding clever ways to short-circuit the computational load by using intelligent algorithms to look at the scene and deterministically allocate which objects will be seen and which surfaces need to be considered.

Hybrid techniques are being improved and evolved where only certain portions of a scene are ray traced. Objects in the distance, for example, don’t need to be ray traced, and flat, dull-colored objects don’t need it either.
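
As a toy sketch of that kind of triage (my own made-up heuristic in Python, not how Chaos Group or any shipping renderer decides), a scene walk might tag only nearby reflective or refractive objects for ray tracing and leave everything else to a cheaper rasterized pass:

def needs_ray_tracing(obj, max_distance=50.0):
    # Cheap deterministic test: distant or flat, dull-surfaced objects are skipped.
    if obj["distance"] > max_distance:
        return False
    return obj["reflective"] or obj["refractive"]

scene = [
    {"name": "chrome bumper", "distance": 3.0,   "reflective": True,  "refractive": False},
    {"name": "glass bottle",  "distance": 8.0,   "reflective": False, "refractive": True},
    {"name": "matte wall",    "distance": 5.0,   "reflective": False, "refractive": False},
    {"name": "far hillside",  "distance": 900.0, "reflective": True,  "refractive": False},
]

for obj in scene:
    path = "ray traced" if needs_ray_tracing(obj) else "rasterized"
    print(f"{obj['name']}: {path}")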

Chaos Group says the use of variance-based adaptive sampling on this model of Christmas cookies from Autodesk 3ds Max provided a better final image in record time. (Source: Chaos Group)

Semiconductors are being developed specifically to accelerate ray tracing. Imagination Technologies, the company that designs Apple’s iPhone and iPad GPU, has a specific ray tracing engine that, when combined with the advanced techniques just described, can render an HD scene with partial ray traced elements several times a second. Siliconarts is a startup in Korea that has developed a ray tracing accelerator, and I have seen demonstrations of it generating images at 30fps. And Nvidia is working on ways to make a standard GPU more ray-tracing friendly.

All these ideas and developments will come together in the very near future and we will begin to realize realtime ray tracing.

Market Size
It is impossible to know how many users there are of ray tracing programs because the major 3D modeling and CAD programs, both commercial and free (e.g., Autodesk, Blender, etc.) have built-in ray tracing engines, as well as the ability to use pluggable add-on software programs for ray tracing.

The potentially available market vs. the totally available market (TAM).

Also, not all users make use of ray tracing on a regular basis — some use it every day, others maybe occasionally or once a project. Furthermore, some users will use multiple ray tracing programs in a project, depending upon their materials library, user interface, specific functional requirements or pipeline functionality.

Free vs. Commercial
A great deal of the raytracing software available on the market is the result of university projects. Some of the developers of such programs have formed companies, others have chosen to stay in academia or work as independent programmers.

The number of new suppliers has not slowed down, indicating a continued demand for ray tracing.

The non-commercial developers continue to offer their ray tracing rendering software as an open source and for free — and continue to support it, either individually or as part of a group.

Raytracing Engine Suppliers
The market for ray tracing is entering into a new phase. This is partially due to improved and readily available low-cost processors (thank you, Moore’s law), but more importantly it is because of the demand and need for accurate virtual prototyping and improved workflows.

Rendering in the cloud using GPUs (Source: OneRender).

As with any market, there is a 20/80 rule, where 20 percent of the suppliers represent 80 percent of the market. The ray tracing market may be even more unbalanced. There would appear to be too many suppliers in the market despite failures and merger and acquisition activities. At the same time many competing suppliers have been able to successfully coexist by offering features customized for their most important customers.

Conclusion
Ray tracing is to manufacturing what a storyboard is to film — the ability to visualize the product before it’s built. Movies couldn’t be made today with the quality they have without ray tracing. Think of how good the characters in Cars looked — that imagery made it possible for you to suspend disbelief and get into the story. It used to be: “Ray tracing — Who needs it?” Today it’s: “Ray tracing? Who doesn’t use it?”

Our Main Image: An example of different materials being applied to the same object (Source: Nvidia)

Dr. Jon Peddie is president of Jon Peddie Research, which just completed an in-depth market study on the ray tracing market. He is the former president of Siggraph Pioneers and serves on advisory boards of several companies. In 2015, he was given the Lifetime Achievement award from the CAAD society. His most recent book is “The History of Visual Magic in Computers.”

Quick Chat: Ian Stynes on mixing two Sundance films

By Kristine Pregot

A few years back, I had the pleasure of working with talented sound mixer Ian Stynes on a TV sketch comedy. It’s always nice working with someone you have collaborated with before. There is a comfort level and unspoken language that is hard to achieve any other way. This year we collaborated once again for So Yong Kim’s 2016 film Lovesong, which made its premiere at this year’s Sundance and had its grade at New York’s Nice Shoes via colorist Sal Malfitano.

Ian has been busy. In fact, another film he mixed recently had its premiere at Sundance as well — Other People, from director Chris Kelly.

Ian Stynes

Since we were both at the festival, I thought what better time to ask him how he approached mixing these two very different films.

Congrats on your two films at Sundance, Lovesong (which is our main image) and Other People. How did the screenings go?
Both screenings were great; it’s a different experience to see the movie in front of an excited audience. After working on a film for a few months it’s easy to slip into only watching it from a technical standpoint — wondering if a certain section is loud enough, or if a particular sound effect works — but seeing it with an engaged crowd (especially as a world premiere at a place like Sundance) is like seeing it with fresh eyes again. You can’t help but get caught up.

What was the process like to work with each director for the film?
I’ve been lucky enough to work with some wonderful directors, and these movies were no exception. Chris Kelly, the director of Other People, who is a writer on a bunch of TV shows including SNL and Broad City, is so down to earth and funny. The movie was based on the true story of his mother, who died from cancer, so he was emotionally attached to the film in a unique way. He was very focused on what he wanted but also knew when to sit back and let me do my thing. This was Chris’s first movie, but you wouldn’t know it.

For Lovesong, I worked with director So Yong Kim once again. She makes all her films with her husband Bradley Rust Gray. They switch off with directorial duties but are both extremely involved in each other’s movies. This is my third time working on a film with the two of them — the other two were For Ellen with Paul Dano and Jon Heder, and Exploding Girl with Zoe Kazan. So is an amazing director to work with; it feels like a real collaboration mixing with her. She is creative and extremely focused with her vision, but always inclusive and kind to everyone involved in the crew.

With both films a lot of work was done ahead of time. I try and get it to a very presentable place before the directors come in. This way we can focus on the creative tasks together. One of the fun parts of my job is that I get to sit in a room for a good while and work closely with creative and fun people on something that is very meaningful to them. It’s usually a bit of a bonding experience by the end of it.

How long did each film take you to mix?
I am also extremely lucky to work with some great people at Great City Post. I was the mixer, supervising sound editor and sound designer on both films, but I have an amazing team of people working with me.

Matt Schoenfeld did a huge amount of sound designing on both movies, as well as some of the mixing on Lovesong. Jay Culliton was the dialogue editor on Other People. Renne Bautista recorded Foley and dealt with various sound editing tasks. Shaun Brennan was the Foley artist, and additional editing was done by Daniel Heffernan and Houston Snyder. We are a small team but very efficient. We spent about eight to 10 weeks on each film.

Lovesong

How is it different to mix comedy than it is to mix a drama?
When you add sound to a film it’s important to think about how it is helping the story — how it augments or moves the story along. The first level of post sound work involves cleaning and removing anything that might take the viewer out of the world of the story (hearing mics, audio distortion, change in tone etc.).

Beyond that, different films need different things. Narrative features usually call for the sound to give energy to a film but not get in the way. Of course, there are always specific moments where the sound needs to stand out and take center stage. Most people aren’t aware of what post sound specifically entails, but they certainly notice when it is missing or a bad sound job was done. Dramas usually have more intensity to the story, and comedies can be a bit lighter. This often informs the sound design, edit and mix. That said, every movie is still different.

What is your favorite sound design on a film of all time?
I love Ben Burtt, who did all the Star Wars movies. He also did Wall-E, which is such a great sound design movie. The first 40 or so minutes have no direct dialogue — all the audio is sound design. You might not realize it, but it is very effective. On the DVD extra Ben Burtt did a doc about the sound for that movie. The documentary ends up being about the history of sound design itself. It’s so inspiring, even for non-sound people. Here is the link.

I urge anyone reading this to watch it. I guarantee it will get you thinking about sound for film in a way you never have before.

Kristine Pregot is a senior producer at New York City-based Nice Shoes.


Encore colorist Laura Jans Fazio goes dark with ‘Mr. Robot’

By Randi Altman

After watching Mr. Robot when it premiered on USA Network last year, I changed all of my computer passwords and added a degree of difficulty that I’m proud of. I’m also not 100 percent convinced that my laptop’s camera isn’t on even when there’s no green light. That’s right, I completely and gleefully bought into the paranoia, and I wasn’t alone. Mr. Robot won Best Television Series Drama at this year’s Golden Globes, and one of the show’s supporting actors, Christian Slater, took home a statue.

The show, about a genius New York-based computer hacker (Rami Malek) who believes corporations control, well, everything, has been getting its color grade from Laura Jans Fazio, lead colorist at Deluxe’s Encore, since its second episode.

Laura Jans Fazio

If you watch any TV at all, you’ve very likely seen some of Jans Fazio’s work. Her resume lists House of Cards, Hawaii 5-0, Proof, Empire and The Lottery, and she’s currently gearing up to work on the updated Gilmore Girls and Lady Dynamite.

Jans Fazio was kind enough to take some time out from grading this upcoming season of House of Cards to chat about her work on Mr. Robot.

Were you on Mr. Robot from the very start?
Sam Esmail, the show’s creator, asked me to help out with one of the first scenes in the pilot — the one that took place in Ron’s Coffee Shop. We made some changes, Sam loved it and wanted me to hit the whole show, so I did!

What kind of direction were you given about the look of that scene?
For Ron’s Coffee Shop, the direction was, “just do your thing.” So I was fortunate enough to do my own thing on it, and make it what I felt it should be.

What about when you started the season?
That’s part of what coloring has been — at least in my career — trying to interpret what the client, or the creator, is saying to me, because everybody has a different way of describing things, whether they’re technically savvy or not. I have to take that description and interpret it, and apply that to the image through my tool set on the computer.

That’s the process for this show, like many others I’ve worked on… I’ve been lucky enough to be entrusted to just do what I think feels right, and then I wait for notes. And more often than not, my notes are pretty minimal.

So minimal notes on Mr. Robot?
It was either “go darker” or “let’s change this room in its entirety — I want it to be colder, and I’m not feeling the emotion of the scene.” In other instances, I’ll take a scene that’s lit completely warm and I’ll go cool with it because I think it looks better. Then I’ll send it out and be happily pleased that it’s liked.

(Photo: David Giesbrecht/USA Network)

Can you describe a scene and give me an example?
The All Safe office, where Elliot worked, actually stayed similar to the pilot. The only difference was I took a lot of magenta out of it. So it had the feeling of a cold, sterile, distant corporate environment with a “working for the man” kind of feel. It’s not dark. It’s airy and lofty, but not airy in a good way. It basically allows the talent to come through — to see the emotion of what the characters are going through, and what they’re talking about. The rest just seems to melt behind them.

How do you build on what the DP Tod Campbell captures on set?
This is the way I approach all images — I take what I’ve got to work with, play with different styles of contrast, densities and color tones, and let the image take me where it wants to be. How it feels in the story, what it’s cut against, and where it’s going.

Usually I’ll tap into it straight away, but it’s not always that way on the first episode or two of a new show, because you don’t really know where it needs to be. It’s kind of like the first color of paint that you put on a canvas that has been prepped — that’s not always the color that’s going to come through. It’s going to start out one way, and evolve as you go.

Sometimes colorists talk about being given stills or told to emulate the look of a certain film. It’s pretty amazing that they’re just saying, “Go.”
But that’s not always the case. There are many times where people come in with a photography coffee table book, and say, “I want this, this or that.” Or they will reference a movie from 1972 or say, “Let’s make it look like this Japanese film shot in 1942,” and I reference those clips.

That’s a common practice. In this situation I was approached based on my work on House of Cards and entrusted with Mr. Robot.

Mr. Robot - Season 1     

How do you prefer to work? Or do you enjoy both?
I enjoy both. It’s always good to get feedback, and I need an idea of what it is. When I saw the pilot for Mr. Robot, I kind of knew automatically what I would do with it.

Is there anything that stuck out from the season that you are most proud of?
The fact that the show is super dark. Dark is good. People are hesitant to do dark because they need to see what’s going on, but I look at it this way: if you’re in a dark forest and see an opening of light, that’s when you want to see more. And going dark was well received, both by the audience and my peers. That was cool.

Your tool of choice is FilmLight Baselight. Why do you like this particular system?
It just makes sense, from the way it allows you to layer colors and grade inside/outside, therefore eliminating keystrokes. It allows me to be really fast, and it deals with different color spaces and gammas. Also, the development always seems to be on the cutting edge of the latest technology coming from the camera manufacturers. They are also great about keeping up with where our business is going, including paying attention to different color spaces, HDR and VR.

Mr. Robot - Pilot

Where do you find your inspiration?
It’s everywhere. I notice everything. I notice what somebody is wearing, what the colors are, where the contrasts lie and how the light is hitting them. I notice the paint sheens in a room and where the light is falling onto objects and creating depth. I get lost online viewing design and color palettes and architecture and photography and gardens. The list goes on.

Growing up in New York, I was walking all the time and was just immersed in visual stimulation — from people, buildings, objects, architecture, art and design. I look to all of the man-made things, but I also look to nature, landscapes and skies… the color contrasts of it all.

What’s next for you, and how many shows do you work on at the same time?
Sometimes I’m on multiple shows within a week, and that overlaps. Right now, I’m doing Hawaii 5-0, House of Cards and Lady Dynamite. House of Cards will end soon, but Hawaii 5-0 will still be going on. Gilmore Girls will start up. Lady Dynamite will still be going, and then Robot will start. Then who knows what else is going to come in between those times.

That’s a lot.
The more the merrier!

The Molecule: VFX for ‘The Affair’ and so much more

By Randi Altman

Luke DiTommaso, co-founder of New York City’s The Molecule, recalls “humble” beginnings when he thinks about the visual effects, motion graphics and VR studio’s launch as a small compositing shop. When The Molecule opened in 2005, New York’s production landscape was quite a bit different than the tax-incentive-driven hotbed that exists today.

“Rescue Me was our big break,” explains DiTommaso. “That show was the very beginning of this wave of production that started happening in New York. Then we got Damages and Royal Pains, but we were still just starting to get our feet wet with real productions.”

The Molecule partners (L-R) Andrew Bly, Chris Healer and Luke DiTommaso.

Then, thanks to a healthy boost from New York’s production and post tax incentives, things exploded, and The Molecule was at the right place at the right time. They had an established infrastructure, talent and experience providing VFX for television series.

Since then DiTommaso and his partners Chris Healer and Andrew Bly have seen the company grow considerably, doing everything from shooting and editing to creating VFX and animation, all under one roof. With 35 full-time employees spread between their New York and LA offices — oh, yeah, they opened an office in LA! — they also average 30 freelance artists a day, but can seat 65 if needed.

While some of these artists work on commercials, many are called on to create visual effects for an impressive list of shows, including Netflix’s Unbreakable Kimmy Schmidt, House of Cards and Bloodline, Showtime’s The Affair, HBO’s Ballers (pictured below), FX’s The Americans, CBS’ Elementary and Limitless, VH1’s The Breaks, Hulu’s The Path (for NBC and starring Aaron Paul) and the final season of USA’s Royal Pains. Also completed are the miniseries Madoff and Behind the Magic, a special on Snow White, for ABC.

Ballers: before and after

The Molecule’s reach goes beyond the small screen. In addition to having completed a few shots for Zoolander 2 and a big one involving a digital crowd for Barbershop 3, at the time of this interview the studio was gearing up for Jodie Foster’s Money Monster; they will be supplying titles, the trailer and a ton of visual effects.

There is so much for us to cover, but just not enough time, so for this article we are going to dig into The Molecule’s bread and butter: visual effects for TV series. In particular, the work they provided for Showtime’s The Affair, which had its season finale just a few weeks ago.

The Affair
Viewers of The Affair, a story of love, divorce and despair, might be surprised to know that each episode averages between 50 and 70 visual effects shots. The Molecule has provided shots that range from simple clean-ups to greenscreen driving and window shots — “We’ll shoot the plates and then composite a view of midtown Manhattan or Montauk Highway outside the car window scene,” says DiTommaso — to set extensions, location changes and digital fire and rain.

One big shot for this past season was burning down a cabin during a hurricane. “They had a burn stage so they could capture an amount of practical fire on a stage, but we enhanced that, adding more fire to increase the feeling of peril. The scene then cuts to a wide shot showing the location, which is meant to be on the beach in Montauk during a raging hurricane. We went out to the beach and shot the house day for night — we had flicker lighting on the location so the dunes and surrounding grass got a sort of flickering light effect. Later on, we shot the stage from a similar angle and inserted the burning stage footage into the exterior wide location footage, and then added a hurricane on top of all of that. That was a fun challenge.”

During that same hurricane, the lead character Noah gets his car stuck in the mud but they weren’t able to get the tires to spin practically, so The Molecule got the call. “The tires are spinning in liquid so it’s supposed to kick up a bunch of mud and water and stuff while rain is coming down on top of it, so we had our CG department create that in the computer.”

Another scene that featured a good amount of VFX took place on the patio outside of the fictitious Lobster Roll restaurant. “It was shot in Montauk in October and it wasn’t supposed to be cold in the scene, but it was about 30 degrees at 2:00am and Alison is in a dress. They just couldn’t shoot it there because it was just too cold. We shot plates, basically, of the location, without actors. Later we recreated that patio area, lined up the lighting and the angle, and basically took the stage footage and inserted it into the location footage. We were able to provide a solution so they could tell the story without having the actors’ breath and their noses all red and shivering.”

The Lobster Roll scene: before and after

Being on Set
While on-set VFX supervision is incredibly important, DiTommaso would argue “by the time you’re on set you’re managing decisions that have already been set into motion earlier in the process. The most important decisions are made on the tech scouts and in the production/VFX meetings.”

He offers up an example: “I was on a tech scout yesterday. They have a scene where a woman is supposed to walk onto a frozen lake and the ice starts to crack. They were going to build an elaborate catwalk into the water. I was like, ‘Whoa, aren’t we basically replacing the whole ground with ice? Then why does she need to be over water? Why don’t we find a lake that has a flat grassy area leading up to it?’ Now they’re building a much simpler catwalk — imagine an eight-foot-wide little platform. She’ll walk out on that with some blue screens and then we’ll extend the ice and dress the rest of the location with snow.”

According to DiTommaso, being there at the start saved a huge amount of time, money and effort. “By the time you’re on set they would have already built it into the water and all that stuff.”

But, he says, being on set for the shoot is also very important because you never know what might happen. “A problem will arise and the whole crew kind of turns and looks at you like, ‘You can fix this, right?’ Then we have to say, ‘Yeah. We’re going to shoot this plate. We’re going to get a clean plate, get the actors out, then put them back in.’ Whatever it is; you have to improvise sometimes. Hopefully that’s a rare instance and that varies from crew to crew. Some crews are very meticulous and others are more freewheeling.”

Tools
The Molecule is shooting more and more of their own plates these days, so they recently invested in a Ricoh S camera for shooting 360-degree HDR. “It has some limitations, but it’s perfect for CG HDRs,” explains DiTommaso. “It gives you a full 360-degree dome, instantly, and it’s tiny like a cell phone or a remote. We also have a Blackmagic 4K Cinema camera that we’ll shoot plates with. There are pros and cons to it, but I like the latitude and the simplicity of it. We use it for a quick run and gun to grab an element. If we need a blood spurt, we’ll set that up in the conference room and we’ll shoot a plate.”

The Molecule added Jon Hamm’s head to this scene for Unbreakable Kimmy Schmidt.

They call on a Canon 74 for stills. “We have a little VFX kit with little LED tracking points and charts that we bring with us on set. Then back at the shop we’re using Nuke to composite. Our CG department has been doing more and more stuff. We just submitted an airplane — a lot of vehicles, trains, planes and automobiles are created in Maya.”

They use Side Effects Houdini for simulations, like fire and rain; for rendering they call on Arnold, and crowds are created in Massive.

What’s Next?
Not ones to be sitting on the sidelines, The Molecule recently provided post on a few VR projects, but their interest doesn’t end there. Chris Healer is currently developing a single lens VR camera rig that DiTommaso describes as essentially “VR in a box.”

The pipeline experts behind Shotgun’s ‘Two Guys and a Toolkit’ blog

Jeff Beeland and Josh Tomlinson know pipelines, and we are not exaggerating. Beeland was a pipeline TD, lead pipeline TD and pipeline supervisor at Rhythm and Hues Studios for over nine years. After that, he was pipeline supervisor at Blur Studio for over two years. Tomlinson followed a similar path, working in the pipeline department at R&H starting in 2003. In 2010 he moved over to the software group at the studio and helped develop its proprietary toolset. In 2014 he took a job as senior pipeline engineer in the Digital Production Arts MFA program at Clemson University where he worked with students to develop an open source production pipeline framework.

This fall the pair joined Shotgun Software’s Pipeline Toolkit team, working on creating even more efficient — wait for it — pipelines! In the spirit of diving in head first, they decided to take on the complex challenge of deploying a working pipeline in 10 weeks — and blogging the good, the bad and the ugly of the process along the way. This was the genesis of their Two Guys and a Toolkit series of blogs, which ended last week.

Josh Tomlinson and Jeff Beeland.

Before we dig in to find out more, this is what you should know about the Pipeline Toolkit: The Shotgun Pipeline Toolkit (sgtk) is a suite of tools and building blocks designed to help users set up, customize and evolve their pipelines. Sgtk integrates with apps such as Maya, Photoshop and Nuke and makes it easy to access Shotgun data inside those environments.
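
To give a flavor of what accessing Shotgun data from Python can look like, here is a minimal sketch against the Toolkit API. It assumes a project that already has a Toolkit configuration on disk; the project path, template name and fields below are hypothetical, and the templates that actually exist depend entirely on how a studio has configured its pipeline.

import sgtk  # the Toolkit core API must be on the Python path

# Bootstrap Toolkit from a path inside a configured project (path is made up).
shot_path = "/projects/demo_show/sequences/seq010/sh0100"
tk = sgtk.sgtk_from_path(shot_path)

# Derive a context (project, entity, step) from the same path.
ctx = tk.context_from_path(shot_path)
print("entity:", ctx.entity, "project:", ctx.project)

# Resolve a concrete path from a template defined in the pipeline config.
# The template name and fields here are assumptions for illustration only.
template = tk.templates["maya_shot_publish"]
fields = {"Sequence": "seq010", "Shot": "sh0100", "name": "layout", "version": 3}
print("publish path:", template.apply_fields(fields))

Ok, let’s talk to the guys…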

What made you want to start the Two Guys and a Toolkit series?
Josh: Since we were both relatively new to Shotgun, this was originally just a four-week exercise for us to get up and running with Toolkit; there wasn’t really any discussion of a blog series. The goal of this exercise was to learn the ins and outs of Toolkit, identify what worked well, and point out things we thought could be improved.

After we got started, the word spread internally about what we were up to and the idea for the blog posts came up. It seemed like a really good way for us to meet and interact directly with the Shotgun community and try to get a discussion going about Toolkit and pipeline in general.

Did you guys feel exposed throughout this process? What if you couldn’t get it done in 10 weeks?
Jeff: The scope of the original exercise was fairly small in terms of the requirements for the pipeline. Coupled with the fact that Toolkit comes with a great set of tools out of the box, the 10-week window was plenty of time to get things up and running.

We had most of the functional bits working within a couple of weeks, and we were able to dive deep into that experience over the first five weeks of the blog series. Since then we’ve been able to riff a little bit in the posts and talk about some more sophisticated pipeline topics that we’re passionate about and that we thought might be interesting to the readers.

What would you consider the most important things you did to ensure success?
Josh: One of the most important ideas behind the blog series was that we couldn’t just talk about what worked well for us. The team really stressed the importance of being honest with the readers and letting them in on the good, the bad and the ugly bits of Toolkit. We’ve tried our best to be honest about our experience.

Jeff: Another important component of the series was the goal of starting up a dialogue with the readers. If we just talked about what we did each week, the readers would get bored quickly. In each post we made it a point to ask the readers how they’ve solved a particular problem or what they think of our ideas. After all, we’re new to Toolkit, so the readers are probably much more experienced than us. Getting their feedback and input has been critical to the success of the blog posts.

Josh: Now that the series is over, we’ll be putting together a tutorial that walks through the process of setting up a simple Toolkit pipeline from scratch. Hopefully users new to Toolkit will be able to take that and customize it to fit their needs. If we can use what we’ve learned over the 10 weeks and put together a tutorial that is helpful and gives people a good foundation with Toolkit, then the blog series will have been successful.

Do you feel like you actually produced a pipeline path that will be practical and realistic for applying in real-world production studios?
Jeff: The workflow designs that we model our simple pipeline off of are definitely applicable to a studio pipeline. While our implementations are often at a proof-of-concept level, the ideas behind how the system is designed are sound. Our hope has always been to present how certain workflows or features could be implemented using Toolkit, even if the code we’ve produced as part of that exercise might be too simplistic for a full-scale studio pipeline.

During the second half of the blog series we started covering some larger system designs that are outside of the scope of our simple pipeline. Those posts present some very interesting ideas that studios of any size — including the largest VFX and animation studios — could introduce into their pipelines. The purpose of the later posts was to evoke discussion and spread some possible solutions to very common challenges found in the industry. Because of that, we focused heavily on real-world scenarios that pipeline teams everywhere will have experienced.

What is the biggest mistake you made, what did you do to solve it and how much time did it set you back?
Josh: To be honest, we’ve probably made mistakes that we’ve not even caught yet. The fact that this started as an exercise to help us learn Toolkit means we didn’t know what we were doing when we dove in.

In addition, neither of us has a wealth of modern Maya experience, as R&H used mostly proprietary software and Blur’s pipeline revolved primarily around 3ds Max. As a result, we made a complete mess out of Maya’s namespaces on our first pass through getting the pipeline up and running. It took hours of time and frustration to unravel that mess and get a clean, manageable namespacing structure into place. In fact, we nearly eliminated Maya namespaces from the pipeline simply so we could move on to other things. In that regard, there would still be work to do if we wanted to make proper use of them in our workflow.
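
For anyone who has not fought this particular battle, the housekeeping Josh is describing mostly comes down to controlling which namespace each referenced file lands in. A minimal Maya Python sketch of that kind of cleanup might look like the following; it has to run inside Maya, and the file paths and namespace names are made up for illustration.

import maya.cmds as cmds

# Reference each asset under its own predictable namespace (paths are hypothetical).
assets = {
    "charA": "/projects/demo/assets/charA/model/charA_model_v003.ma",
    "propTable": "/projects/demo/assets/propTable/model/propTable_model_v001.ma",
}
for ns, path in assets.items():
    cmds.file(path, reference=True, namespace=ns)

# List every namespace that actually exists in the scene.
for ns in cmds.namespaceInfo(listOnlyNamespaces=True, recurse=True) or []:
    print("namespace:", ns)

# Fold an unwanted namespace back into the root instead of leaving it to pile up.
if cmds.namespace(exists="tempImport"):
    cmds.namespace(removeNamespace="tempImport", mergeNamespaceWithRoot=True)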

You spent 10 weeks building a pipeline essentially in a vacuum… how much time realistically would this take in an operational facility where you would need to integrate pipeline into existing tech infrastructure?
Jeff: That all depends on the scope of the pipeline being developed. It’s conceivable that a small team could get a Toolkit-driven pipeline up and running in weeks, if relying on mostly out-of-the-box functionality provided.

This would require making use of well-supported DCC applications, like Maya and Nuke, as custom integrations with others would require some development time. This sort of timeframe would also limit the pipeline to supporting a single physical studio location, as multi-location or cloud-based workflows would require substantial development resources and time.

It’s worth noting that R&H’s pipeline was initially implemented in a very short period of time by a small team of TDs and engineers, and was then continually evolved by a larger group of developers over the course of 10-plus years. Blur’s pipeline evolved similarly. This goes to show that developing a pipeline involves hitting a constantly moving target, and shouldn’t be viewed as a one-time development project. The job of maintaining and evolving the pipeline will vary in scope and complexity depending on a number of factors, but is something that studios should keep in mind. The requirements laid out by production and artists often change with time, so continued development is not uncommon.

Any lessons learned, parting words of wisdom for others out there taking on pipeline build-out?
Jeff: This really goes for software engineering in general — iterate quickly and set yourself up to fail as fast as possible. Not all of your ideas are going to pan out, and even when they do, your implementation of the good ones will often let you down. You need to know whether the direction you’re going in will work as early as possible so that you can start over quickly if things go wrong.

Josh: A second piece of advice is to listen to the users. Too often, developers think they know how artists should work and fail to vet their ideas with the people that are actually going to use the tools they’re writing. In our experience, many of the artists know more about the software they use than we do. Use that to your advantage and get them involved as early in the process as possible. That way you can get a better idea of whether the direction you’re going in aligns with the expectations of the people that are going to have to live with your decisions.

Checking in with Tattersall Sound & Picture’s Jane Tattersall

By Randi Altman

Toronto-based audio post house Tattersall Sound & Picture has been a fixture in audio post production since 2003, even though the origins of the studio go back further than that. Tattersall Sound & Picture’s work spans films, documentaries, television series, spots, games and more.

Now part of the SIM Group of companies, the studio is run by president/supervising sound editor Jane Tattersall and her partners Lou Solakofski, Peter Gibson and David McCallum. Tattersall is an industry veteran who found her way to audio post in a very interesting way. Let’s find out more…

(back row, L-R) David McCallum, Rob Sim, and Peter Gibson (front row) Jane Tattersall and Lou Solakofski.

How did you get your start in this business?
My start was an accident, but serendipitous. I had just graduated from university with a degree in philosophy and had begun to think of what options I might have — law and journalism were the only fields that came to mind, but then I got a call from my boyfriend’s sister who was an art director. She had just met a producer at a party who was looking for a philosopher to do research on a documentary series. I got the job, did all the research and ended up working with the picture editor. I found his work using sound brought the scenes to life, so decided I would try to learn that job. After that I apprenticed with an editor and learned on the job. I’m still learning!

When did you open Tattersall Sound & Picture?
I started the original Tattersall Sound, which was just sound editing, in 1992, but sold it in 1999 to run a larger full post facility. I opened Tattersall Sound & Picture in 2003, along with my partners.

Why did you open it?
After three years running a big post facility I missed the close involvement with projects that comes with being an editor. I was ready for a change and keen to be more hands on.

How has it evolved over the years?
When we started the company it was just sound editing. The first year we shared warehouse space with a framing factory. We had a big open workplace and we all worked with headphones. After a year we moved to where we are today. We had space for picture editing suites as well as sound editing. Over time we expanded our services and facilities. Now we have five mix stages including a Dolby Atmos stage, ADR, as well as offline and sound editorial.

How have you continued to be successful in what can be a tough business?
We focus simultaneously on good creative work and ensuring we have enough resources to continue to operate. Without good and detailed work we would lose our clients, but without earning enough money we couldn’t pay people properly, pay the rent and upgrade the stages and edit rooms. I like to think we attract good talent and workers because we care about doing great work, and the great work keeps the clients coming to us.

Does working on diverse types of projects play a role in that success?
Yes, that’s true as well. We have a diversity of projects — TV series, documentaries, independent feature films, some animation and some children’s TV series. Some years ago we were doing mostly indie features and a small amount of television, but our clients moved into television and brought us along with them. Now we are doing some wonderful higher-end series like Vikings, Penny Dreadful and Fargo (pictured below). We continue to do features and love doing them, but it is a smaller part of the business.

Fargo

If you had one tip about keeping staff happy and having them stay for the long-term, what would it be?
Listen to them, and keep them involved and make them feel like an appreciated part of the business.

What is the biggest change in audio post that you’ve seen since your time in the business?
The biggest change would be the change in technology — from Moviolas to Pro Tools and all the digital plug-ins that have become the regular way of editing and mixing. Related to that would be the time allotted to post sound. Our schedules are shorter because we can and do work faster.

The other change is that we work in smaller teams or even alone. This means fewer opportunities for more junior people and assistants to learn by doing their job in the same room. This applies to picture editing as well, of course.

There is no denying that our industry is filled with more males than females, and having one own an audio post house like yours is rare. Can you talk about that?
I certainly didn’t set out to own or run anything! Just to work on interesting projects for directors and producers who wanted to work with me. The company you see today has grown organically. I attracted like-minded co-workers and complementary team members and went after films and directors that I wanted to work with.

We would never have built any mix stages if we didn’t have re-recording mixer Lou Solakofski on board as a partner. And he in turn would never have got involved if he didn’t trust us to keep the values of good work and a respectful working environment that were essential to him. We all trusted one another to retain and respect our shared values.

It has not always been easy though! There were many projects that I just couldn’t get, which was immensely frustrating. Some of these projects were of the action/violent style. Possibly the producers thought a man might be able to provide the right sounds rather than a woman. No one ever said that, so there may have been other reasons.

However, not getting certain shows served to make me more determined to do great work for those producers and directors who did want me/us. So it seems that having customers with the same values is crucial. If there weren’t enough clients who wanted our quality and detail we wouldn’t have got to where we are today.

What type of gear do you have installed? How often do you update the tech?
Our facility is all Avid and Pro Tools, including the mix stages. We have chosen an all-Pro Tools workflow because we feel it provides the most flexibility and the easiest way to stay current with new service options. Staying current can be costly, but being up to date with equipment is advantageous for both our clients and creative team.

Hyena Road had a Dolby Atmos mix

We update frequently, usually driven by the requirements of a specific project. For example, in July 2015 we were scheduled to mix the Canadian war film Hyena Road, and the producer, distributor and director all wanted to work in Dolby Atmos. So our head tech engineer Ed Segeren and Lou investigated to see how feasible it would be to upgrade one of the stages to accommodate the Dolby requirements. It took some careful research and some time, but that stage was updated to facilitate that film.

Another example is when we began the Vikings series and knew the composer was going to deliver very wide — all separate stems as 5.0 — so we needed a dedicated music Pro Tools system. This meant we had to expand the console.

As a rule, when we update one mix stage, we know we will soon update the others in order to be able to move sessions between rooms transparently. This is an expense, but it also provides us flexibility — essential in post production, as project schedules inevitably shift from their original bookings.

David McCallum, fellow sound supervisor and partner, has a special interest in acoustic listening spaces and providing editors with the best environment to make good decisions. His focus on editorial upgrades helps to ensure we can send good tracks to the stage.

Our head tech engineer Ed Segeren attends NAB and AES every year to see new developments, and the staff is very interested in learning about what’s out there and how we might apply new technology. We try to be smart about our upgrades, and it’s always about improving workflow and work quality.

What are some recent projects completed at Tattersall?
We recently completed the series Fargo (mixing), and the feature films Beeba Boys (directed by Deepa Mehta) and Hyena Road (directed by Paul Gross), and we are in the midst of the TV series Sensitive Skin for HBO Canada. We are also doing Saving Hope and Vikings (pictured below) Season 4, and will start Season 3 of Penny Dreadful in early 2016.

Vikings

Are you still directing?
I’m surprised you even know about that! I’m trying to! Last spring I directed a very short film, a three-minute thriller called Wildlife. This month I am co-directing a short film about a young woman indirectly involved in a police shooting and her investigation into what really happened. I have an advantage, which is that I know when a story point can be made using sound rather than needing a shot to convey something, and I have a good idea of how ADR can be employed, so there is no need to worry about the production recording.

The wonderful thing about these non-work film projects is that I learn a huge amount every time, including just how hard producers must work to get something made, and just how vulnerable a director is when putting something of themselves out there for anyone to watch.

Quick Chat: Rampant’s Sean Mullen on new mograph release

Rampant Design Tools, which has been prolific about introducing new offerings and updates to its motion graphics products, is at it again. This time, two new Style Effects volumes for motion graphics artists offer 1,500 new 2K, 4K and 5K effects.

Rampant Motion Graphics for Editors v1 and v2 are QuickTime elements that users can drag and drop into the software of their choice; Rampant effects are not plug-ins and, therefore, not platform dependent.

The newly launched Rampant Textured Overlays library features 230 effects for editors, also in ultra-high resolution 2K, 4K and 5K elements.

This volume provides a large amount of overlay effects for editors or anyone else looking to add a unique and modern look to video projects. Rampant Textured Overlays are suited for editing, motion graphics, photography and graphic design.

We reached out to Rampant’s Sean Mullen, who runs the company with his wife Stefanie. They create all of the effects themselves. Ok, let’s find out more.

What’s important about this new release?
The Motion Graphics for Editors series is a completely new direction for us. We’ve designed thousands of animated elements so that busy editors, or anyone who doesn’t have time to design their own, can easily create great-looking motion graphics without having to start from scratch. These designs are the same kinds you see in current television and commercial trends.

Volume 1 is more of a base, something that you can use in just about any kind of situation. Volume 2 is more edgy and is similar to the kinds of designs that I’ve previously created for the X Games, MTV, Fuel and the National Guard. Motion Graphics for Editors is the beginning of a new trend at Rampant. You can expect to see a variety of different projects coming out of our shop in the near future, all vastly different from what people are used to seeing from Rampant. I’m super stoked about the next six to eight months.

Were the new offerings based on user feedback?
In part, yes. I have hundreds of project ideas on my whiteboards that I’d like to build out. We’re only limited by time and resources. We don’t have a studio full of artists cranking out our designs. Rampant is just Stefanie and myself. I’m always roughing out ideas and letting them percolate. The great thing about being a small company is that we get to travel and talk to editors and artists directly. We often visit with amazing groups like the Blue Collar Post Collective in NYC and talk with assistant editors, editors and colorists. This allows us to hear first hand about what people want and need in their respective workflows.

What do you expect to be the most used of the bunch?
Motion Graphics for Editors v1 was designed as a base. It’s got more of a universal appeal. It’s perfect for everything from corporate work to infographics and commercials. Volume 2 is a lot more edgy and has a specific feel.

Rampant Motion Graphics for Editors V2

What’s your process when developing a new release?
There are dozens of projects in various stages of development at any given time. When an idea pops into my head, I’ll start camera, compositing and animation tests right away. Everything starts at 5K resolution or higher. Typically, I’ll let a project sit for a while after the initial R&D. This allows the idea to mature and gives us time to attack the project from multiple angles. Once we decide that something is worth pursuing, I’ll shoot or animate every possible thing I can think of. This can take days or weeks, depending on the amount of post work and transcoding that is involved. From there we’ll have a vat of hundreds, or in some cases thousands, of elements.

We toss out the ones that don’t work or aren’t deemed as useful. Then we organize the elements and give them a proper naming structure. After the elements are named, we output 4K and 2K versions of our 5K master elements and begin the long process of zipping and uploading them to our servers for download delivery.  The final elements, camera masters and project files are then archived. Lastly, we cut a promo video showing our new products in use, build a new product page on our site and develop a newsletter to let our customers know about the latest release. Once that cycle is complete, it’s back to the whiteboard.

Anything you want to add that’s important?
It’s our mission to save editors time and money. If something normally takes hours or days to complete and our effects can help reduce that time, we’ve achieved our goal. There are many editors out there who use our effects in a pre-visual manner. They use our effects to quickly design something in order to get a green light from their producer or client, and this saves a ton of time and money.

Others look at our effects as a starting off point. They start with our elements and combine them to make something new. We receive emails every day from editors who just don’t have time to make anything from scratch. Their budgets are too tight, turnaround time is insane or they simply aren’t mograph designers but still want good-looking motion graphics. These are our people, they are why we work every single day. We read each and every email and take every phone call, even at 3am.

FotoKem’s Alastor Arnold helps set look for ‘Ash vs Evil Dead’

The colorist worked hand in hand with director Sam Raimi and editor Bob Murawski

By Randi Altman

Halloween is known for its ghosts, goblins and gruesome zombies, but this year we got an extra serving of the non-alive, dished up by Sam Raimi and Starz Network. Fans of Raimi’s The Evil Dead (1981) and its sequels (Evil Dead 2, Army of Darkness) were treated to the pilot episode of Ash vs Evil Dead. Many consider The Evil Dead films cult classics, but they are so much more than that. Yes, they are campy and gory and more bloody than necessary, but it’s all done in an effort to make people laugh.

Back for this comedy/action/horror series on Starz is Bruce Campbell as Ash, the man who lost his hand in battle and then cleverly replaced it with a chainsaw. His quick wit and sarcasm have amazingly not diminished over the years. You know, it’s not easy to keep your sense of humor when evil dead people are after you!

Alastor Arnold

Raimi, who directed the first episode, worked very closely with long-time editor and collaborator Bob Murawski and FotoKem colorist Alastor Arnold to create the look of the pilot.

While the show was shot digitally on Arri Alexa (with a couple of pickups shot via a Sony F55), Raimi wanted a filmic look, and that is a big part of what Murawski and Arnold worked to accomplish.

Arnold has some history with Raimi and Murawski — he remastered The Evil Dead for theatrical and Blu-ray release. While Murawski and Arnold work together often, Ash vs Evil Dead is only the second project for the colorist and Raimi.

“I do a lot of work with Bob. In addition to being an Oscar-winning editor (The Hurt Locker), he has a company called Grindhouse Releasing,” explains Arnold. “They specialize in the restoration and distribution of exploitation and horror films, and I’ve had the pleasure of remastering numerous titles with Bob over the years. When he can bring me in to work with him, he does. And that’s how we got to do the pilot of Ash vs Evil Dead.”

Let’s find out more about the color grade and creating the look for the pilot and series.

How early were you brought on?
Just after shooting — when they started cutting. They had some questions about what work could be accomplished in the color suite when they were doing their rough cuts for the executive screeners. There was one scene in particular… they wanted to see if we could accomplish a specific look without having to go to visual effects.

What was that look?
There was a scene in a room with no lights, and it needed to be lit by a spinning flashlight. So the actors would be coming in and out of darkness, illuminated by only a flashlight. Originally when they shot it, they intended it to be a visual effect, so it was shot brighter than intended. Through color correction, we were able to create the effect they were going for.

How did they describe the look that they wanted for the pilot and the series?
Bob and Sam are both fans of a “filmic” look. They like the image to stay warm and high contrast. Based on their relationship, Sam entrusted Bob with the first pass of color. When Sam walked in for his first day of grading, the show was already in a good place for dialing in looks and trims, with a focus on shaping the frame with Power Windows and integrating visual effects more thoroughly. The look of the pilot is very warm, saturated and punchy, very chromatic — not what I would call a typical kind of horror movie look. A lot of times horror movies are drab or pretty desaturated and a lot of the times they are very cool. This is against that grain.

The pilot was shot almost entirely with an Arri Alexa. How did that play a role in getting the filmic look?
Arri has done a fantastic job with their color science. It responds in a natural way. All the base grades started with a film emulation, internally built at FotoKem with our color scientist, and based on our film lab experience.

The series has a campy feel. Would you say that’s reflected in the look?
The first Evil Dead was much more of a horror movie when compared to Evil Dead 2 and Army of Darkness. The tone of the series has evolved. Sam always injects humor into his movies, even in the first Evil Dead. In the TV show, there’s lots of horror and definitely gore, but it’s actually really funny. There’s an ingrained sense of humor in what Sam does, and that really comes through. Maybe that is reflected in the chromatic, warm look. It may complement that.

What kind of terms or language do you like to use when talking to someone about a look? And do you get examples, such as stills?
I like to approach color from an instinctual, artistic level. When I start a project it’s important for me to engage with clients and discuss not only the literal side of what they might like to achieve but also what it is, emotionally, they’re going for, and how color might enhance that. In addition, visual references are always great. I’m always happy when they reference other movies or projects or bring in stills. It’s common these days for looks to be set somewhat in dailies. Any visual reference is always good, but for me, I find it more important to engage artistically and emotionally with people to derive a look for a project.

What about the technical aspects of the grade and the system, in your case Blackmagic’s DaVinci Resolve?
There’s an expectation when people walk into a room with a professional colorist that the technical side of things won’t be an issue; that the colorist is going to be able to help you reach your creative goals. Solidifying and understanding what those creative goals are in the beginning is very important. So, I’m generally less concerned with how to technically arrive somewhere than creatively. Often the technical side of things can be driven by the creative goals.

It’s very important to experiment and have fun; that’s what this process is all about. Engage creatively and artistically; that is the most important part. The technical will happen.

Were Sam and Bob open to suggestions and experimenting?
Bob has been involved in just about everything Sam has done since Darkman (1990), which was their first project together; they have a shorthand. Sam was very involved in this episode, and we spent probably two or three days together going through the show, but Sam is less technically driven. When he walked into the room, Bob had already gone through it and gotten it to a good starting place, based on his knowledge of Sam’s sensibilities.

Sam is generally more concerned with what is going to enhance the performances or the emotion of a scene. There’s lots of Windowing in different parts of the frame to either bring things up or down, or tinting things slightly to enhance an emotional feel. That’s where Sam comes from.

So the initial sessions with Bob are where you did the heavy lifting and decided on the overall look?
Yes, the technical grading — matching shots, fixes, general levels and looks. That’s what Bob focuses on during the pre-grading.

Ash vs Evil Dead

Can you talk about the lighting and working with the Resolve?
Lighting-wise, it’s actually pretty up, even though the intent may be to have it slightly darker in final color. The nice thing about Resolve is that its tracking tools are very good, so you can bring up parts of the frame individually while still keeping other areas very dark.

We did have to do some noise reduction in certain parts as well. The built-in noise reduction tool is very good. I find it very easy to use — I don’t find myself struggling to reach a look or correction, it generally happens quick and easy. That’s important when you have a client in the room. You don’t want to take too long to come up with something.

FotoKem used Resolve for the online as well?
Yes. With the exception of the visual effects, the entire online edit was completed in Resolve, in addition to the color and deliverables.

How does being able to do so much in that one system help you?
I came up working on a system that was more of a hero suite, so it did the color, it did the graphics, it did the minor visual effects work. So it’s nice to see Resolve now competing at that level.

Although I didn’t do the bulk of the editorial work, it was nice to be in the room with Bob and be able to slip a shot a couple of frames, or drop in the visual effects as they came in last minute along with their associated mattes… it all happens very quickly and easily in Resolve.

Where do you find your inspiration?
I love movies and find my inspiration in them. I always try to stay artistically engaged; I like to work on my own projects, in addition to enjoying and contributing to other people’s work. I make an effort to get to the theater two or three times a week. I’m a member of the Visual Effects Society, so I go to lots of their member screenings too. To me, it’s important to stay current in my craft and to be inspired by other people’s work. I enjoy seeing what people are doing with different cameras and how things hold up in different theaters. I like seeing films in a theater as they’re intended and viewing them with an audience. To see how other people are practicing the craft is important. If you’re a painter, you’re going to go to the museum. If you’re a colorist, you should go to the movies, and lots of them.

What have you seen recently that you respected?
I really liked the movie The Diary of a Teenage Girl. It was beautiful. Also Cartel Land, which was lovely, especially considering it was a documentary. Those are small movies, but I saw Sicario recently and that was a very impressive and pretty movie… beautifully shot.

Another movie I enjoyed this year was Tangerine, which was shot entirely on an iPhone. The artist in me wanted to see it for the story and craft. But it was also really important for me to view it in the theater on a large screen and see how well it held up technically. For a colorist it’s an artistic and technical exercise to watch movies.

—–
Ash vs. Evil Dead can be seen weekly on Starz at 9pm EST.

Mick Audsley: Editing ‘Everest’

This veteran editor walks us through his process

By Randi Altman

Mount Everest, the highest peak in the world and the Holy Grail for many climbers, often is the symbol of a personal struggle to achieve an incredibly difficult task. It also happens to be the subject of a new film from director Baltasar Kormákur that is based on the (sometimes contradictory) true story of two climbers who, in the spring of 1996, got caught in a violent blizzard — and fought to survive.

The goal of the filmmakers behind Universal’s Everest was to tell this story of tragedy and survival and, in doing so, make the audience feel the desperation of the characters on screen. To give us a glimpse into the process, we reached out to Everest’s editor, Mick Audsley, whose work includes Harry Potter and the Goblet of Fire, Twelve Monkeys, Interview with the Vampire: The Vampire Chronicles, Dangerous Liaisons and many more.

Starting 4th from left: Director Baltasar Kormákur, Glenn Freemantle and Mick Audsley, with the audio post crew.

He acknowledges that, “from a storytelling point, there was a huge responsibility to make a film that worked, but also to be as honest as possible. We wanted to help preserve the legacy of the story for the families and climbers who are still alive.”

Audsley cut the film — shot with the Arri Alexa by DP Salvatore Totino — on an Avid Media Composer in DNX36 over 55 weeks, which, amazingly, isn’t the longest edit he’s been involved in. Goblet of Fire takes that award, coming in at 73 weeks.

Let’s find out more about Audsley’s process on this film and his philosophy on editing.

How did you give the audience that “you are here” feeling of peril in your edit?
There’s a montage early on, which shows the sorts of dangers they had on the way up, including the altitude, which has a huge impact on your health. There’s a great deal of peril in the sheer physics of it all, but as the story unfolds, we never felt we had to overly dramatize what went wrong, because it’s a series of small, rather human, mistakes and misjudgments with catastrophic consequences. Editorially, we felt it should just relentlessly build up, tightening its grip around the audience’s throat, if possible, in order to engage them.

How did you work with director Baltasar Kormákur, and how early did you get involved in the film?
I began at the start of shooting, although we weren’t together. Baltasar and the crew spent 10 shooting days in Nepal while we were setting up in the mountains in Northern Italy — basically at a ski resort — where we were for about six to eight weeks doing the photography… with climbers in real snow. We were accessible to everybody and would show the work as it progressed. We then split up, because they built a base camp at Cinecittà Studios in Rome. That was only going to last two weeks, so it made sense to come back to London for the rest of the schedule, which was completed at Pinewood Studios on the big 007 stage.

We were all very busy, and I didn’t see a great deal of Baltasar during shooting, but we would meet. It was a very tough shoot, as you can imagine, and he was kind enough to trust me just to carry on.


When did you get into a room with him?
After they finished shooting and Balt had gone back home to Reykjavik. We met every day at RVX (https://www.rvx.is/), his visual effects company, in the center of Reykjavik. We then spent the best part of 14 weeks working together in Iceland.

Fourteen weeks, just in Iceland?
It was the director’s cut period, which is normally 10 weeks, but we stayed longer since it worked so well for Balt, as he was able to carry on with things and visit my team and me almost every day. We would get together in the afternoon and I would show the work I’d done the day before, discuss it and make the plan for the next day.

Were you given direction in terms of how it was going to be edited? Or were you given free rein?
I was given a large amount of free rein. Balt is extremely trusting of me, and we would just bat ideas around and constantly try to move the film to where we felt it was functioning in the way we needed it to. There were many strands of the story that were shot, which then had to be whittled down or re-balanced or changed or taken out. The editorial process was not just cutting; there was a certain amount of changing of dialogue, rewording things and re-recording things in order to make the narrative move forward in the right way. It was a big job, and we were constantly throwing things at each other.

I obviously had the task of doing the actual editorial work of realizing it, cutting the scenes and putting it all together, but I was given an enormous amount of freedom and could express myself very freely. So it was very much a joint venture.


Can you describe your editing setup?
We had three Avid Media Composers with shared storage. Actually, we had four because my visual effects editor, Keith Mason, joined us in Iceland for that period. We had to turn over material as quickly as we could so the visual effects work could be started and run in parallel with us as the cut progressed.

I had two assistants on Everest because it was a very labor-intensive film. There was a lot of material. On average I was receiving between five and six hours a day from each day’s shooting. So over a period of 16–18 weeks that builds up quite a big library of material to be evaluated, understood and cut. It worked very smoothly and efficiently.
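
For a rough sense of what that volume adds up to, here is a back-of-the-envelope calculation based only on the figures Audsley quotes; the six-day shooting week used below is an assumption made purely for illustration.

```python
# Figures from the interview: 5-6 hours of dailies per shooting day over 16-18 weeks.
# The six-day shooting week is an assumption for this estimate, not a quoted fact.
hours_per_day = 5.5
shooting_days_per_week = 6
weeks = 17
total_hours = hours_per_day * shooting_days_per_week * weeks
print(f"~{total_hours:.0f} hours of dailies")  # roughly 560 hours of material
```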

Do you typically cut on a Media Composer?
Yes, it’s a very good tool, one that I’ve been using for the last… God knows. What we need is something that’s reliable and fast and allows us the freedom to think and to store the many versions and permutations we need. A lot of the work that we do is keeping a very tidy cutting room in terms of organization of material and the different versions and what we’re doing and where we’re putting our efforts.

How do you work with your assistant editors, specifically on this film?
Pani Ahmadi-Moore is my first assistant, and we’ve worked together for about six years now. But she’s much more than just an assistant editor; she’s a filmmaker in her own right, in the sense of being a collaborator in the project. Although she’s not actually cutting the movie, she’s very much involved in it.

So, all of the prep work, and making things available to me, is handled by Pani and her assistant. They present bins for each scene of material and keep an absolutely precise log of what comes in, when it arrives and what it relates to. This frees me up to concentrate on cutting the scenes, putting the film together and aiming towards a first cut. We generally present this within two weeks of the end of principal photography.


The film was released in 3D stereo and Imax. Can you talk about that?
We didn’t put on 3D glasses, or anything like that, in the cutting room. When we got back to London and we had a cut, we started sending sections to the 3D house, Stereo D, and the stereo process began to run in parallel with us; those scenes would be updated as the cut changed.

It’s a bit like a VFX process in its own right. There are three strands of things going along in parallel on the pictorial side: the cut developing and being shaped and editorializing in the traditional way; the turning over of visual effects within that cut and keeping them up-to-date with the changes editorially; and, similarly, the same process going on for turnovers to Stereo D in Burbank, California.

After the conversions are made, do you get to see the final product?
Yes, we do. In fact, though, in this case, I was so busy with the cut that Baltasar, bless him, took the lion’s share of directing the 3D. We had to divide our labor, as it were, and I was still very busy shaping the content of the film. It comes to a point when it’s “How do we best use our time and what is the best distribution of our time?”

You mentioned VFX, were you working with temp VFX? How did that work?
We did have temp VFX, and we would be given early versions of critical shots. A lot of the summit material, where we had the climbers on the set without the backgrounds, took quite a while for us to get. For me, it was quite hard to judge the impact of some of these shots until they were available in a form where we could see how good they were going to be and how extreme the environment was. It takes time… it’s a slow-cooked meal.

Can you talk about the film’s sound?
We had extremely difficult audio. There was a high percentage of ADR and replacement on this film — we had wind machines, we had people on the real mountains with clothes blowing and making noise, so the audio in its early stages was very difficult to hear and use. It wasn’t until we got substantial ADR and tracks back that were clean that we could build it all up again. That was very challenging.

Who worked on the sound?
The sound designer was the wonderful Glenn Freemantle and the dialogue editor was my old friend Nina Hartstone. She did an amazing job scheduling the artists to come back for ADR. They also had to do very physical things — now in a studio environment — in order to get the vocalization and the physicality to sound convincing. The sound is quite extraordinary.
It wasn’t until we had a temp dub and temporary visuals that it started to feel like the film was being realized the way we had intended, and we could start to read it properly.

Is there any one scene or section that you found the most challenging?
I think the whole film was challenging. Obviously, the storm sequence is a visceral experience that you want to have the audience experience — the complexity of what was going on apart from the physical hardship, and the way in which the tragedy unfolded.

We had filmmaking issues to resolve as well. The identification of people was one, actually seeing the climbers’ faces, since they were hidden most of the time. There were lots of issues that we had to understand, to decide whether to solve or to accept. For example, the story of what happened with the oxygen is confusing, and nobody really understands exactly what went wrong. In filmmaking terms, that can be tricky to communicate. Whether we got away with that, I don’t know. Everybody was very confused about the oxygen, and that’s how it was.

It goes back to what I was saying at the beginning, Randi, which is this responsibility in the storytelling toward those who survived and the reality of what happened.

What’s next for you?
I was to be making another film for Terry Gilliam (we worked together previously on The Imaginarium of Doctor Parnassus), which is his long-awaited Don Quixote movie, but this has been postponed until the spring.

In the meantime, I’m helping set up a film networking organization here in London. It’s called Sprocket Rocket Soho (@srsoho). It’s an international endeavor aimed at bringing young filmmakers together with older filmmakers, because in the digital world we’re all feeling a bit isolated and people need to get into a room and talk. I’m very pro-education for young filmmakers, and this is part of that initiative.

Sam Daley on color grading HBO’s ‘Show Me a Hero’

By Ellen Wixted

David Simon’s newest and much-anticipated six-part series Show Me a Hero premiered on HBO in the US in mid-August. Like The Wire, which Simon created, Show Me a Hero explores race and community — this time through the lens of housing desegregation in late-‘80s Yonkers, New York. Co-written by Simon and journalist William F. Zorzi, the show was directed by Paul Haggis with Andrij Parekh as cinematographer, and produced by Simon, Haggis, Zorzi, Gail Mutrux and Simon’s long-time collaborator, Nina Noble. Technicolor PostWorks‘ Sam Daley served as the colorist. I caught up with him recently to talk about the show.

A self-described “film guy,” New York-based Daley has worked as a colorist on projects ranging from Martin Scorsese’s The Departed to Lena Dunham’s Girls, with commercial work rounding out his portfolio. When I asked Daley what stood out about his experience on Show Me a Hero, his answer was quick: “The work I did on the dailies paid off hugely when we got to finishing.” Originally brought into the project as the dailies colorist, Daley quickly saw his scope expand to include finishing — and his unusual workflow set the stage for high-impact results.

Sam Daley

Daley’s background positioned him perfectly for his role. After graduating from film school and working briefly in production, Daley worked in a film lab before moving into post production. Daley’s deep knowledge of photochemical processing, cameras and filters turned him into a resource for colorists he worked alongside and piqued his interest in the craft. He spent years paying his dues before eventually becoming known for his work as a colorist. “People tend to get pigeonholed, and I was known for my work on dailies,” Daley notes. “But ultimately the cinematographers I worked with insisted that I do both dailies and finishing, as Ed Lachman (cinematographer) did when we worked together on Mildred Pierce.”

The Look
Daley and Show Me a Hero’s cinematographer, Andrij Parekh, had collaborated on previous projects, and Parekh’s clear vision from the project’s earliest stages set the stage for success. “Andrij came up with this beautiful color treatment, and created a look book that included references to Giorgio de Chirico’s painted architecture, art deco artist Tamara de Lempicka’s highly stylized faces, and films including The Conformist, The Insider, The Assassination of Richard Nixon and The Yards. Sometimes look books are aspirational, but Andrij’s footage delivered the look he wanted, and that gave me permission to be aggressive with the grade,” says Daley. “Because we’ve worked together before, I came in with an understanding of where he likes his images to be.”


Parekh shot the series using the Arri Alexa and Leica Summilux-C lenses. Since the show is set in the late ’80s, a key goal for the production was to ground the look of the show firmly in that era. Another was to give the series’ two worlds different visual treatments to underscore how separate they are: the cool, stark political realm, and the warmer, brighter world of the housing projects. The team’s relatively simple test process validated the approach, and introduced Daley to the Colorfront On-Set Dailies system, which proved to be a valuable addition to his pipeline.

“Colorfront is really robust for dailies, but primitive for finishing — it offers simple color controls that can be translated by other systems later. Using it for the first time reminded me of when I was training to be a colorist — when everything tactile was very new to me — and it dawned on me that to create a period look you don’t have to add a nostalgic tint or grain. With Colorfront I was able to create the kind of look that would have been around in the ’80s with simple primary grades, contrast, and saturation adjustments.”


“This is the crazy thing: by limiting my toolset I was able to get super creative and deliver a look that doesn’t feel at all modern. In a sense, the system handcuffed me — but Andrij wasn’t looking for a lot of razzle-dazzle. Using Colorfront enabled me to create the spine of an appropriate period style that makes the show look like it was created in the ‘80s. Everyone loved the way the dailies looked, and they were watching them for months. By the time we got to finishing, we had something that was 90% of the way there.”

Blackmagic’s DaVinci Resolve 11 was used for finishing, a process that was unusually straightforward because of the up-front work done on the dailies. “Because all shots were already matched, final grading was done scene by scene. We changed the tone of some scenes, but the biggest decision we made was to desaturate everything by an additional 7% to make the flesh tones less buzzy and to set the look more firmly in the period.”
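
For readers curious what a global desaturation like that amounts to mathematically, here is a minimal Python sketch. It is illustrative only, not Daley’s Resolve pipeline: the Rec.709 luma weights are standard, but the function name and the 7% blend applied this way are assumptions made for the example.

```python
import numpy as np

# Illustrative only: the basic math behind a global desaturation, not the
# show's actual Resolve grade. `img` is a float RGB image with values in
# [0, 1]; the luma weights are the standard Rec.709 coefficients.
REC709_LUMA = np.array([0.2126, 0.7152, 0.0722])

def desaturate(img: np.ndarray, amount: float = 0.07) -> np.ndarray:
    """Blend each pixel toward its own luma by `amount` (0.07 = 7% less saturation)."""
    luma = (img @ REC709_LUMA)[..., np.newaxis]  # per-pixel luma, shape (H, W, 1)
    return img * (1.0 - amount) + luma * amount

# frame = desaturate(frame, amount=0.07)  # frame loading/saving is left out of the sketch
```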


Daley was enthusiastic about the production overall, and HBO’s role in setting a terrific stage for moving the art of TV forward. “HBO was awesome — and they always seem to provide the extra breathing space needed to do great work. This show in particular felt like a symphony, where everyone had the same goal.”

I asked Daley about his perspective on collaboration, and his answer was surprising. “‘The past is prologue.’ Everything you did in the past is preparation for what you’re doing now, and that includes relationships. Andrij and I had a high level of trust and confidence going into this project. I wasn’t nervous because I knew what he wanted, and he trusted that if I was pushing a look it was for a reason. We weren’t tentative, and as a result the project turned into a dream job that went smoothly from production through post.” He assures me this is true for every client — you always have to give 110 percent. “The project I’m working on today is the most important project I’ve ever worked on.”

Daley’s advice for aspiring colorists? “Embrace technology. I was a film guy who resisted digital for a long time, but working on Tiny Furniture threw all of my preconceptions about digital out the window. The feature was shot using a Canon 7D because the budget was micro and the producer already owned the camera. The success of that movie made me stop being an old school film snob — now I look at new tech and think ‘bring it on.’”


‘Sharknado 3’: The Asylum’s Mark Quod talks color, post

By Randi Altman

As I sat down to write about the post process on SyFy’s Sharknado 3, the news was full of shark sightings and attacks, including one on a surfer during a competition in South Africa.

While there is nothing funny about the latest happenings, the public’s fascination with these beasts of the ocean continues. Coming on the heels of Discovery’s Shark Week is the latest iteration of the Sharknado series from The Asylum, Sharknado 3: Oh Hell No!

This time our hero, Fin (Ian Ziering), makes his way from Washington, DC, to Orlando, all the while fighting this killer storm with teeth. In addition to the expected camp, also keep an eye out for those fun cameos, including David Hasselhoff, Bo Derek, Mark Cuban, Frankie Muniz, and so many more.

Mark Quod

Last year, around this time, I interviewed Sharknado 2 editor Vashi Nedomansky (Christopher Roth edited Sharknado 3 using Apple Final Cut Pro 7). This year, we touch base with colorist and post supervisor Mark Quod, who is based at The Asylum in Los Angeles. Quod, like many others, including director Anthony C. Ferrante, has worked on all three Sharknado films.

Quod, who has a post supervisor title on all three films and colored Sharknado 2, was also significantly involved in Oh Hell No!, which was shot on Red Epic. He provided the entire color grade via Blackmagic’s DaVinci Resolve. He spoke to us about the film’s color and post process.

How does the look of Sharknado 3 differ from the first two, if at all?
The films really don’t have a strong look stylistically; we just wanted them to look good. But what did come into play was the weather. These movies take place during a storm, but production takes place in all types of weather. A good portion of Sharknado 3 was shot in a very sunny Florida.

So that’s where you, in the color suite, come in?
Yes. There’s always the challenge of reducing the contrast, and making it look stormy even when it’s not. So my process involves taking blue skies and making them grey, so it looks as if there’s a storm coming, or even darker for when the storm is happening. Sometimes they’re shooting in sunny weather and sometimes they’re not, and you have to try to balance it out so it looks as if it was taking place at the same time. While this also happens on other films, because the storm is an integral part of the story of Sharknado 3, there was more of a need.
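
To make the idea concrete, here is a rough Python sketch of one way a “blue sky to grey” adjustment can work: qualify pixels that are blue-dominant and bright, then pull their saturation and brightness down. It is a simplified illustration of the general technique, not Quod’s actual Resolve secondaries, and the thresholds and strengths below are arbitrary assumptions.

```python
import numpy as np

# Simplified "sky" grade: desaturate and darken pixels that are blue-dominant
# and bright. Thresholds and strengths are arbitrary illustration values.
REC709_LUMA = np.array([0.2126, 0.7152, 0.0722])

def grey_down_sky(img: np.ndarray, desat: float = 0.6, darken: float = 0.85) -> np.ndarray:
    """img is a float RGB image in [0, 1]; returns a graded copy."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    luma = img @ REC709_LUMA                  # per-pixel luma, shape (H, W)
    mask = (b > r) & (b > g) & (luma > 0.5)   # crude qualifier for bright blue areas
    grey = luma[..., np.newaxis]              # fully desaturated version of the frame
    graded = img.copy()
    graded[mask] = (img[mask] * (1 - desat) + grey[mask] * desat) * darken
    return graded
```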

Was some of that visual effects?
Yes. When the storm is full on, that’s where we get some visual effects shots. But we can’t have a visual effect shot for every shot in the movie. So darkening the sky and that sort of thing does fall on color correction.

Sharknado 3: Oh Hell No - 2015

So it’s more about conformity than about desaturation or saturation, etc.?
At the beginning, I said to the director, Anthony Ferrante, “Tell me what you like and what you don’t like, and give me some of your notes right off the bat.” I also sent some stills asking if they looked good. He would provide notes about certain scenes. “It’s way too sunny, we need to bring it down, but don’t make it too desaturated.”

It’s a struggle to come up with a look that looks good but is also appropriate. You’re sort of riding the fine line between trying to make it look nice and have a pretty image, while also looking stormy at the same time. It’s a little bit like walking a tightrope to find that appropriateness for the color.

The partners at The Asylum have a say in the look of the film as well.

How did you work with the DP on this one?
There were actually a few DPs on Sharknado 3, which changed things up a little bit. Ben Demaree started off as the main DP, and he and I had conversations before the shoot. I asked him “to be careful on sunny days, and to do what you can to make sure that it’s not too contrasty, and doesn’t look bright and sunny when it’s supposed to be cloudy.” He knows that, of course, but it’s a low-budget movie, so there’s only so much anybody can do.

A big part of Quod’s job was to turn bright skies grey.

You had a tight deadline. When did you start getting footage to work on, and what was the workflow like?
I got footage near the end of May and started working on it then and pretty much through all of June. That sounds like a long time, but it’s actually not on a crazy movie like this, because there are just so many visual effect shots.

When I started on shots, not many of the visual effects were done. Sometimes it’s hard to do scenes when you don’t have all the shots yet. Or shots will come in, and then end up changing later, so I’ll get newer versions of them. I’m constantly color correcting the newer versions as we get them. It’s a lot of work.

You have been using DaVinci Resolve. Is that typically your system of choice?
I’ve been a colorist for 19 years, and I started working on the classic DaVinci. It wasn’t until recently that I started using the new Resolve. In fact, Sharknado 3 is only the second feature film that I’ve done on the DaVinci Resolve. 3 Headed Shark Attack, which is also going to be airing during Sharknado week, was the first.

It didn’t take me that long to get the hang of it, but any time you’re switching to a new system, you want to make sure you’ve got all the ins and outs conquered. I was pleasantly surprised by the tools it has. I’m very glad that I used DaVinci Resolve on this.

What did you use to color Sharknado 2?
Apple Color. I had used that for many years, even when they stopped supporting it. Once you get to be an old hand at a certain system, you’re always reluctant to change. Plus, with The Asylum’s schedule being so busy, I never found that I had time to sit down and learn a new system. I was always in a rush to get the next movie out.

Before and After: Quod called on Resolve to make this shot of Universal Studios Orlando brighter.

Did anything in particular about Resolve impress you?
Recently, I’ve been getting some shots that are a little under-exposed. With Resolve, I can push them a little further than I could in Color and still get a usable image out of it. Resolve has a pretty good noise reducer built into it. Every once in a while with Color, I’d have to bring a shot into After Effects, which has a pretty good noise reducer. It was a little more cumbersome, and when you don’t have a lot of time that’s an issue. With Resolve, the tools are right there, so it’s not adding any more time to my workload. I found I could push shots further than I could ever have done before.

Being able to track things very quickly and having more control over the shapes and mattes is a big help. I use those tools constantly while color correcting — I’m always using Power Windows and isolating colors and that sort of thing.

Sharknado 3: Oh Hell No - 2015

Does that mean you can do more?
Well, it’s a double-edged sword. Thanks to the powerful tools I can be more creative, but that’s when I run into, “How much time can I really afford to make these shots look good?” My instinct, because I’m a bit of a perfectionist, is to spend more time and make everything look as good as I possibly can, but at a certain point it’s got to get out the door. There’s only so much time I can spend tweaking and perfecting things.

I’m going to ask you to put on your post supervisor hat for a bit. The Asylum produces around 30 films a year; how tight does your post workflow need to be to handle all of that?
It’s definitely streamlined! For many of our movies, the rough cut is due only days after they’re done shooting. It wasn’t that fast with Sharknado because of all the other facets that go into it. With some of our other movies, the editing process is very quick. Typically we have assistant editors getting all the dailies to the editors — sometimes multiple editors — depending on the nature of the film and how much time we have for it.

When you say “all the facets,” you mean the visual effects?
If there are a lot of visual effects shots in the film, it takes longer in post. A lot of times they’re working on visual effect shots long before the movie is locked, because there just isn’t time to get them all done otherwise. In the case of Sharknado 3, which had about 800 visual effects shots, they had to get started right away.

Before and After: The White House without and with color correction.

What did you use in terms of storage?
We usually have Glyph drives on set where we keep all of our raw data, and they’ll keep an on-set copy of everything that is shot. The footage comes back to The Asylum on shuttle drives and gets copied to an in-house set of drives. This way we have duplicates of all the footage — one for in-house and one for on set. We also back that up to DLT tape.

Because there were so many drives on Sharknado 3, we actually bought a 24TB drive to copy everything onto. That’s what I used for the color correction. I could just move around one drive to move the footage.
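
As a generic illustration of that copy-and-duplicate step, the sketch below mirrors a footage tree onto a second drive and verifies each file with a checksum. The volume paths and the choice of MD5 are assumptions made for the example; the article does not say which data-management tools The Asylum actually uses.

```python
import hashlib
import shutil
from pathlib import Path

def md5(path: Path, chunk: int = 8 * 1024 * 1024) -> str:
    """Hash a file in chunks so large camera files don't fill memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def copy_and_verify(src_root: Path, dst_root: Path) -> None:
    """Mirror src_root into dst_root and confirm every copy matches its source."""
    for src in src_root.rglob("*"):
        if not src.is_file():
            continue
        dst = dst_root / src.relative_to(src_root)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)
        if md5(src) != md5(dst):
            raise IOError(f"Checksum mismatch on {src}")

# Example (hypothetical volume names):
# copy_and_verify(Path("/Volumes/shuttle_drive"), Path("/Volumes/asylum_24tb"))
```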

What’s next for you?
Working as post supervisor and colorist on the extended “Sharktacular” version of Sharknado 3, which will be on DVD, Blu-ray and VOD. I’m actually working on it right now, color correcting additional shots, extended scenes and new VFX shots.

A Closer Look: Interstate’s work on Master & Dynamic headphones short

By Randi Altman

To tell the story of how their high-end MH40 headphones are made, Master & Dynamic called on New York-based production company Interstate to create a film educating potential buyers. Interstate is the US branch of the Montreal-based integrated production company BLVD. It’s run by managing partner Danny Rosenbloom (formerly with Psyop, Brand New School) and creative director Yann Mabille (formerly with The Mill, Mill+).

This almost 1.5-minute piece talks about the materials that go into creating the headphones, describes the manufacturing process and explains why it’s all meant to maximize sound quality. There are images of molten metal, flowing liquids and magnetized metal shavings that transform into headphone components. To create the finished look, Interstate captured as much as it could in-camera, shooting with a bird’s-eye view and a mix of stop motion and visual effects.

For the liquid aluminum sequence, Interstate used a material called gallium — also used in the original Terminator movies — to create the melting aluminum effect, casting an ingot from it and melting it on camera.

According to Interstate EP Rosenbloom, “The material melts at roughly 80 degrees Fahrenheit. It’s the same stuff some magicians use to bend spoons with their minds — not all of them, of course, because the good ones really can bend spoons with their minds!”

Interstate’s Mabille, who co-directed the piece with Adam Levite, answered our questions and helped us dig a bit deeper.


How early did Interstate get involved with the project? 
We started to get involved during the final stages of setting up Interstate, which makes this project our very first. We thought it was a great way to start.


Were you involved in the creative, or did the client come with that already spelled out?
Miles Skinner, who is a freelance creative director for Master & Dynamic, wanted to create a sequence that suggested a building process that had a specific elegance and artistic value while showcasing the beautiful qualities of the raw materials used to build the headphones.

At the same time, the goal was to steer away from traditional pipeline representations, which are usually hands or machines interacting with objects, etc. We were tasked with finding creative solutions to implement Miles’ ideas. We conceived a semi-abstract representation of each of the main steps of the building process, starting by glorifying the raw materials, processing those materials in an interesting manner, and eventually ending in an elegant way to showcase the finished product.

How much is live-action versus VFX?
The product is very well designed and has a great finish, so we knew that it would look great on camera. Adam and I love macro-photography and were keen to feature the natural beauty of raw and noble materials on a small scale. This naturally led to trying to shoot as much as we could in-camera, therefore limiting the role of VFX in the sequence.

That said, CGI was used to animate certain elements that would have been challenging to puppeteer on such a small scale. We used 2D to add light interactions across the lens, and to clean up and retime shots. We wanted to retain a physical approach from the very beginning to keep all the wonderful qualities of the raw materials as genuine as possible.

Did you do any prepro?
Indeed. Most of the prepro was spent getting to know the materials we were going to work with and how to best represent the headphones, as well as all the components used for the construction process. For example, we ended up using gallium to simulate melting aluminum, and a specific metallic powder was brought to life to shape components, such as steel screws, which were also made out of wax that we then melted. Overall, it was obviously much easier to film deconstruction and reverse the footage to give the illusion of construction.

Can you walk us through the production and post workflow? What did Interstate shoot on?
Alexander Hankoff was the DP. I had worked with him when I was at The Mill, and I always wanted to work with him again as I knew he had a great eye for macro-photography. He can find beauty where you expect it the least. He did a great job over the two-day shoot.

We shot the whole spot on a Red Epic camera, most of it at about 120fps. Also, production designer Jaime Moore and her prop master, Gino Fortebuono, were indispensable to the process and did a great job bringing this to life. We shot the whole sequence in a fairly big studio to make sure we could use different set-ups at the same time.

new2

Interstate produced and provided some post, but you also worked with BLVD. What exactly did they provide in terms of post?
Most of the time we will do all the post internally, but in this case we could not do all of it as we were just starting the company. BLVD was the right choice to help with the 3D and some of the 2D components, but their audio experience was key, and they also did a great job with the sound design.

How did you work with them on approvals?
We had daily reviews, which were all remote but hassle free. Everyone was really responsive and engaged throughout the process.

What tools were used for the edit, VFX and color?
Apple Final Cut Pro for the edit, Autodesk Maya for VFX and Blackmagic DaVinci Resolve for color.

How did you describe to your colorist, Tristan Kneschke, the look and feel you wanted for the piece?
One of my favorite parts of the process is establishing a color look, but I also think it’s crucial to sleep on it. It’s important to step back when you do coloring, since your first pass will often be either too extreme or off tone. Keeping a fresh eye is the hardest thing to do while coloring. Luckily, we were able to do that with Tristan. We established a look, which we then refined over the course of a week.

What was the most challenging part of this project?
Besides figuring out how to get the most out of the materials we had — the components that make the headsets or the materials used to shape specific objects — the conceptual phase was crucial and the most challenging. It was key to find the right balance between an overly abstract and removed representation of the actual building process, and an elegant and somewhat explicit representation of that same process. It was important not to get too far away from a clear and palpable depiction of what happens to the materials in order to constantly keep the audience hooked and able to relate to the product.

What haven’t we asked that’s important?
The client was amazing — they really gave us total freedom. Miles is a rock star, every idea he had was great and everything we proposed he quickly came on board with. As a company, we really wanted to make sure that our first piece out of the gate was memorable. I think we got there.