Category Archives: Collaboration

Xytech intros MediaPulse Sky mobile interface

Xytech will be at NAB with its new MediaPulse Sky, a mobile interface for its MediaPulse facility management platform. The company is also showing over 400 new features for MediaPulse, a platform used by studios, networks, media companies and post facilities for functions such as scheduling, asset management, resource allocation, personnel, equipment and financials.

Expanding access and mobility for users, MediaPulse Sky is available for all portable devices. Greg Dolan, COO of Xytech, says, “Full and timely access to data is standard in today’s media business. We developed Sky to give our clients secure access to what they need.”

Configurable through MediaPulse Layout Editor, the MediaPulse Sky interface is available with new dashboards and charts for key performance indicators. Additionally, MediaPulse Sky is optimized for cellular networks. Users can actualize orders, confirm crewing assignments, provision video feeds, schedule sessions and review assignments wherever they may be, and with whatever device they choose.

Xytech's MediaPulse Transmission 2017 also has a new, configurable operation screen with on-screen alerts and notifications. A new auto-routing feature lets users quickly find the best route between two points, and it leverages the MediaPulse Rules Engine to add user-defined criteria to the routing choices. MediaPulse Transmission offers enterprise-class operations and financial management tools designed specifically for the transmission environment, and the new features help manage the complexities of circuits for the broadcast sector.
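Xytech hasn't published how its Rules Engine scores routes, so the following is only a rough sketch of the general idea: a shortest-path search where user-defined criteria are folded into each edge's cost. All names here are hypothetical.

# Minimal sketch of criteria-weighted auto-routing between two endpoints.
# Illustrative only; MediaPulse's Rules Engine and routing logic are
# proprietary, and every name here (best_route, "medium", etc.) is invented.
import heapq

def best_route(graph, start, goal, rules=()):
    """Dijkstra's shortest path where each user-defined rule adds a penalty.

    graph: dict mapping node -> list of (neighbor, edge_attrs) pairs.
    rules: callables taking edge_attrs and returning an extra cost.
    """
    frontier = [(0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, attrs in graph.get(node, []):
            extra = sum(rule(attrs) for rule in rules)
            heapq.heappush(frontier, (cost + attrs["cost"] + extra, neighbor, path + [neighbor]))
    return None

# Example: prefer fiber over satellite by penalizing satellite hops.
graph = {
    "NYC": [("LA", {"cost": 5, "medium": "satellite"}), ("CHI", {"cost": 3, "medium": "fiber"})],
    "CHI": [("LA", {"cost": 3, "medium": "fiber"})],
    "LA": [],
}
penalize_satellite = lambda attrs: 10 if attrs["medium"] == "satellite" else 0
print(best_route(graph, "NYC", "LA", rules=[penalize_satellite]))  # (6, ['NYC', 'CHI', 'LA'])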

At NAB, Xytech will also demonstrate an automated camera-to-distribution workflow encapsulating metadata management, order management, transcoding and quality control, all performed without requiring user actions. In addition, the company’s MediaPulse development platform now offers the ability to integrate easily with any other system in the client ecosystem, enabling MediaPulse to act as a single console.

Frame.io 2.0 offers 100 new features, improvements for collaboration

Frame.io, developer of the video review and collaboration platform for content creators, has unveiled Frame.io 2.0, an upgrade offering over 100 new features and improvements. This new version features new client Review Pages, which expand content review and sharing. In addition, the new release offers deeper workflow integration with Final Cut Pro X and Avid Media Composer, plus a completely re-engineered player.

“Frame.io 2 is based on everything we’ve learned from our customers over the past two years and includes our most-requested features,” says Emery Wells, CEO of Frame.io.

Just as internal teams can collaborate using Frame.io’s comprehensive annotation and feedback tools, clients can now provide detailed feedback on projects with Review Pages, which are designed to make the sharing experience simple, with no log-in required.

Review Pages give clients the same commenting ability as collaborators, without exposing them to the full Frame.io interface. Settings are highly configurable to meet specific customer needs, including workflow controls (approvals), security (password protection, setting expiration date) and communication (including a personalized message for the client).

The Review Pages workflow simplifies the exchange of ideas, consolidating feedback succinctly. For those using Adobe Premiere or After Effects, that feedback flows directly into the timeline, where you can immediately take action and upload a new version. Client Review Pages are also now available in the Frame.io iOS app, allowing collaboration via iPhones and iPads.

Exporting and importing comments and annotations into Final Cut Pro X and Media Composer has gotten easier with the upgraded, free desktop companion app, which allows users to open downloaded comment files and bring them into the editor as markers. There is now no need to toggle between Frame.io and the NLE.

Users can also now copy and paste comments from one version to another. The information is exportable in a variety of formats, whether that’s a PDF containing a thumbnail, timecode, comment, annotation and completion status that can be shared and reviewed with the team or as a .csv or .xml file containing tons of additional data for further processing.
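For pros who want to script against those exports, the "further processing" step might look something like the sketch below. The column names are assumptions for illustration; Frame.io's actual export schema may differ.

# Hypothetical example of processing an exported comment .csv, e.g. to
# list unresolved notes. Column names ("timecode", "completed", etc.)
# are assumptions, not Frame.io's documented schema.
import csv

def open_comments(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

comments = open_comments("comments_export.csv")
unresolved = [c for c in comments if c.get("completed", "").lower() != "true"]
for c in unresolved:
    print(f'{c["timecode"]}  {c["commenter"]}: {c["comment"]}')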

Also new to Frame.io 2.0 is a SMPTE-compliant source timecode display that works with both non-drop and drop-frame timecode. Users can now download proxies straight from Frame.io.
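Frame.io doesn't document its own implementation, but drop-frame timecode itself is standardized: at 29.97fps, two frame numbers (not actual frames) are skipped at the start of every minute except every tenth minute, keeping the displayed clock in step with wall time. A minimal sketch of the standard conversion:

# Convert a 29.97fps frame count to SMPTE drop-frame timecode.
# Standard algorithm: drop two frame *numbers* at the start of every
# minute except minutes divisible by ten; no actual frames are discarded.
def frames_to_dropframe(frame):
    fps = 30                      # nominal (rounded) frame rate
    drop = 2                      # frame numbers dropped per minute
    per_10min = 17982             # frames in ten real minutes at 29.97
    per_min = fps * 60 - drop     # frames in a dropped minute (1798)

    tens, rem = divmod(frame, per_10min)
    if rem > drop:
        frame += drop * 9 * tens + drop * ((rem - drop) // per_min)
    else:
        frame += drop * 9 * tens

    ff = frame % fps
    ss = (frame // fps) % 60
    mm = (frame // (fps * 60)) % 60
    hh = frame // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"  # ';' marks drop-frame

assert frames_to_dropframe(1800) == "00:01:00;02"   # labels ;00 and ;01 skipped
assert frames_to_dropframe(17982) == "00:10:00;00"  # tenth minute not dropped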

The Frame.io 2.0 player page now offers better navigation, efficiency and accountability. New “comment heads” let artists see at a glance who left a comment and where, so they can quickly find and prioritize feedback on any given project. Users can also preview the next comment, saving them time when one comment affects another.

The new looping feature, targeting motion and VFX artists, lets users watch the same short clip on loop. You can even select a range within a clip to really dive in deep. Frame.io 2.0’s asset slider makes it easy to navigate between assets from the player page.

The new Frame.io 2.0 dashboard has been redesigned for speed and simplicity. Users can manage collaborators for any given project from the new collaborator panel, where adding an entire team to a project takes one click. A simple search in the project search bar makes it easy to bring up a project. The breadcrumb navigation bar tracks every move deeper into a sub-sub-subfolder, helping artists stay oriented when they get lost in their work. The new list view option with mini-scrub gives users a bird’s-eye view of everything happening in Frame.io 2.0.

Copying and moving assets between projects takes up no additional storage, even when users make thousands of copies of a clip or project. Frame.io 2.0 also now offers the ability to publish direct to Vimeo, with full control over publishing options, so pros can create the description and set privacy permissions, right then and there.
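Frame.io hasn't published its storage internals, but the usual way copies cost nothing is content addressing: a "copy" is just a new reference to the same stored blob. A toy illustration of that general pattern, not Frame.io's actual implementation:

# Toy illustration of copy-by-reference (content-addressed) storage,
# a common way "copies" of an asset take no extra space. This is a
# general pattern, not Frame.io's actual implementation.
import hashlib

class AssetStore:
    def __init__(self):
        self.blobs = {}     # sha256 -> bytes, stored once
        self.projects = {}  # project -> {asset_name: sha256}

    def upload(self, project, name, data: bytes):
        digest = hashlib.sha256(data).hexdigest()
        self.blobs.setdefault(digest, data)           # dedupe identical content
        self.projects.setdefault(project, {})[name] = digest

    def copy(self, src_project, name, dst_project):
        digest = self.projects[src_project][name]     # copy = new reference only
        self.projects.setdefault(dst_project, {})[name] = digest

store = AssetStore()
store.upload("Spot A", "cut_v1.mov", b"...video bytes...")
for i in range(1000):
    store.copy("Spot A", "cut_v1.mov", f"Copy {i}")
print(len(store.blobs))  # 1 -- a thousand copies, one stored blob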


Building a workflow for The Great Wall

Bling Digital, part of the SIM Group, was called on to help establish the workflow on Legendary/Universal’s The Great Wall, starring Matt Damon as a European mercenary imprisoned within the wall. While being held, he sees exactly why the Chinese built this massive barrier in the first place — and it’s otherworldly. This VFX-heavy mystery/fantasy was directed by Yimou Zhang.

We reached out to Bling’s director of workflow services, Jesse Korosi, to talk us through the process on the film, including working with data from the Arri 65, which at that point hadn’t yet been used on a full-length feature film. Bling Digital is a post technology and services provider that specializes in on-set data management, digital dailies, editorial system rentals and data archiving.

Jesse Korosi

When did you first get involved on The Great Wall and in what capacity?
Bling received our first call from the unit production manager Kwame Parker about providing on-set data management, dailies, VFX and stereo pulls, Avid rentals and a customized process for the digital workflow for The Great Wall in December of 2014.

At this time the information was pretty vague, but outlined some of the bigger challenges, like the film being shot in multiple locations within China, and that the Arri 65 camera may be used, which had not yet been used on a full-length feature. From this point on I worked with our internal team to figure out exactly how we would tackle such a challenge. This also required a lot of communication with the software developers to ensure that they would be ready to provide updated builds that could support this new camera.

This meant early talks with DP Stuart Dryburgh, the studio and a few other members of production. A big part of my job, and of anyone’s on my workflow team, is to get involved as early as possible, so our role doesn’t necessarily start on day one of principal photography. We want to get in and start testing and communicating with the rest of the crew well ahead of time so that by the first day, the process runs like a well-oiled machine and the client never has to be concerned with “week-one kinks.”

Why did they opt for the Arri 65 camera and what were some of the challenges you encountered?
Many people who we work with love Arri. The cameras are known for recording beautiful images. Anyone who isn’t a huge Arri fan might dislike the lower resolution of some of the cameras, but it is very uncommon that someone doesn’t like the final look of the recorded files. Enter the Arri 65, a new camera that records 6.5K files (6560×3100) at a whopping 2.8TB per hour.

When dealing with this kind of data consumption, you really need to re-evaluate your pipeline. The cards can’t be offloaded with traditional card readers — you need to use vaults. Let’s say someone records three hours of footage in a day — that equals 8.7TB of data. If you’re sending that to another facility, even on a 500Mb/s Internet line, it would take 38 hours! LTO-ing this kind of media is also dreadfully slow. For The Great Wall we ended up setting up a dedicated LTO area that had eight decks running at any given time.
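To make that arithmetic concrete, here is a quick back-of-envelope check using the figures Korosi quotes (decimal terabytes, ignoring protocol overhead):

# Back-of-envelope check of the transfer math quoted above.
# Assumes 1TB = 10**12 bytes and ignores protocol overhead.
shoot_tb = 8.7                        # three hours of Arri 65 footage
line_mbps = 500                       # internet line, megabits per second

bits = shoot_tb * 10**12 * 8          # terabytes -> bits
seconds = bits / (line_mbps * 10**6)
print(f"{seconds / 3600:.1f} hours")  # ~38.7 hours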

Aside from data consumption, we faced the challenge of having no dailies software that could even read the files. We worked with Colorfront to get a new build that could, and luckily, having been through this same ordeal recording Arri Open Gate on Warcraft, we knew how to make this happen and set the client at ease.

Were you on set? Near set? Remote?
Our lab was located in the production office, which also housed editorial. Considering all of the traveling this job entailed, from Beijing and Qingdao to Gansu, we were mostly working remotely. We wanted to be as close to production as possible, but still within a controlled environment.

The dailies set-up was right beside editor Craig Wood’s suite, making for a close-knit workflow with editorial, which was great. Craig would often pull our dailies team into his suite to view how the edit was coming along, which really helped when assessing how the dailies color was working and referencing scenes in the cut when timing pickup shots.

How did you work with the director and DP?
At the start of the show we established some looks with the DP Stuart Dryburgh, ASC. The idea was that we would handle all of the dailies color in the lab. The DIT/DMT would note as much valuable information on set about the conditions that day and we would use our best judgment to fulfill the intended look. During pre-production we used a theatre at the China Film Group studio to screen and review all the test materials and dial in this look.

With our team involved from the very beginning of these color talks, we were able to ensure that decisions made on color and data flow would track through each department, all the way to the end of the job. It’s very common for color decisions made at the start of a job to get lost in the shuffle once production has wrapped. Plus, sometimes there isn’t anyone available in the post stage who recognizes why certain decisions were made up front.

Can you talk us through the workflow? 
In terms of workflow, the Arri 65 was recording media onto Codex cards, which were backed up on set with a Vault S. After this media was backed up, the Codex card would be forwarded on to the lab. Within the lab we had a Vault XL that would then be used to back this card up to the internal drive. Unfortunately, you can’t go directly from the card to your working drive; you need to do two separate passes on the card, a “Process” and a “Transfer.”

The Transfer moves the media off the card and onto an internal drive on the Vault. The Process then converts all the native camera files into .ARI files. Once this media was processed and on the internal drive, we were able to move it onto our SAN. From there we were able to run this footage through OSD and make LTO back-ups. We also made additional back-ups to G-Tech G-Speed Studio drives that would be sent back to LA. However, for security purposes as well as efficiency, we encrypted and shipped the bare drives rather than the entire chassis. This meant that when the drives were received in LA, we were able to mount them in our dock and work directly off of them, i.e., no need to wait on any copies.

Another thing that required a lot of back and forth with the DI facility was ensuring that our color pipeline followed the same path they would take once they hit final color. We ended up having input LUTs for any camera that recorded a non-LogC color space. As for my involvement, during production in China I had a few members of my team on the ground and I was overseeing things remotely. Once things came back to LA and we were working out of Legendary, I became much more hands-on.

What kind of challenges did providing offline editorial services in China bring, and how did that transition back to LA?
We sent a tech to China to handle the set-up of the offline editorial suites and also had local contacts to assist during the run of the project. Our dailies technicians also helped with certain questions or concerns that came up.

Shipping gear for the Avids is one thing; shipping consoles (desks) for the editors, however, would have been far too heavy. This was probably one of the bigger challenges — ensuring the editors were working with the same caliber of workspace they were used to in Los Angeles.

The transition of editorial from China to LA required Dave French, director of post engineering, and his team to mirror the China set-up in LA and have both up and running at the same time to streamline the process. Essentially, the editors needed to stop cutting in China and have the ability to jump on a plane and resume cutting in LA immediately.

Once back in LA, you continued to support VFX, stereo and editorial, correct?
Within the Legendary office we played a major role in building out the technology and workflow behind what was referred to as the Post Hub. This Post Hub was made up of a few different systems all KVM’d into one desk that acted as the control center for VFX and stereo reviews, VFX and stereo pulls and final stereo tweaks. All of this work was controlled by Rachel McIntire, our dailies, VFX and stereo management tech. She was a jack-of-all-trades who played a huge role in making the post workflow so successful.

For the VFX reviews, Rachel and I worked closely with ILM to develop a workflow ensuring that all of the original on-set/dailies color metadata would carry into the offline edit from the VFX vendors. It was imperative that during this editing session we could add or remove the color, make adjustments and match exactly what they saw on set, in dailies and in the offline edit. Automating this process with values from the VFX editor’s EDL was key.
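Korosi doesn't detail Bling's exact automation, but a common convention for carrying dailies color per EDL event is ASC CDL comment lines (*ASC_SOP for slope/offset/power, *ASC_SAT for saturation). A minimal sketch of pulling those values out of an EDL, with the sample event invented for illustration:

# Minimal sketch: pull ASC CDL color values out of an EDL's comment lines.
# *ASC_SOP / *ASC_SAT comments are a common convention for tracking dailies
# color per event; the exact pipeline Bling and ILM built is not public.
import re

SOP = re.compile(r"\*\s*ASC_SOP\s*\(([^)]*)\)\s*\(([^)]*)\)\s*\(([^)]*)\)")
SAT = re.compile(r"\*\s*ASC_SAT\s*([\d.]+)")

def cdl_events(edl_text):
    events, current = [], None
    for line in edl_text.splitlines():
        if re.match(r"^\d+\s", line):          # event lines start with their number
            current = {"event": line.split()[0]}
            events.append(current)
        elif current is not None:
            if m := SOP.search(line):
                current["slope"], current["offset"], current["power"] = (
                    [float(v) for v in grp.split()] for grp in m.groups())
            elif m := SAT.search(line):
                current["saturation"] = float(m.group(1))
    return events

sample = """001  A001C003 V  C  01:00:10:00 01:00:12:00 01:00:00:00 01:00:02:00
* ASC_SOP (1.02 0.99 1.00)(0.01 0.00 -0.01)(1.00 1.00 1.05)
* ASC_SAT 0.90
"""
print(cdl_events(sample))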

Looking back on the work provided, what would you have done differently knowing what you know now?
I think the area I would focus on next time around would be upgrading the jobs database. With any job we manage at Bling, we always ensure we keep a log of every file recorded and any metadata that we track. At the time, this was a little weak. Since then, I have been working on overhauling this database to allow creatives to access all camera metadata, script metadata, location data, lens data, etc. in one centralized location. We have just used this on our first job in a client-facing capacity, and I think it would have done wonders for our VFX and stereo crews on The Great Wall. All too often, people are digging around for information already captured by someone else. I want to make sure there is a central repository for that data.


Origins: The Creative Storage Conference

By Tom Coughlin

I was recently asked how the Creative Storage Conference came to be. So here I am to give you some background.

In 2006, the Storage Visions Conference that my colleagues and I had been organizing just before the CES show in January was in its fifth year. I had been doing more work on digital storage for professional media and entertainment, including a report on this important topic. It seemed that a conference focusing on digital storage for media and entertainment was in order, to increase my connections and interaction with media and entertainment professionals and with the digital storage and service companies that support them.

That same year, my partner Ron Dennison and I participated in the MediaTech Conference in the LA area, working with Bryan Eckus, the director of the group at the time. In 2007, we held the first Creative Storage Conference in conjunction with the MediaTech Conference in Long Beach, California. It featured a dynamite line-up of storage companies and end users.

The conference has grown in size over the years, and we have had a stream of great companies showing their stuff, media and entertainment professional attendees and speakers, informative sessions and insightful keynote talks on numerous topics related to M&E digital storage.

The 2017 Creative Storage Conference
This year, the Creative Storage Conference is taking place on May 24 in Culver City. Attendees can learn more about the use of Flash memory in M&E as well as the growth in VR content in professional video, and how this will drive new digital storage demand and technologies to support the high data rates needed for captured content and cloud-based VR services. This is the 11th year of the conference and we look forward to having you join us.

We are planning for six sessions and four keynotes during the day and a possible reception in the evening on May 24.

Here is a list of the planned sessions:
• Impact of 4K/HDR/VR on Storage Requirements From Capture to Studio
• Collaboration in the Clouds: Storing and Delivering Content Where it is Needed
• Content on the Move: Delivering Storage Content When and Where it is Needed
• Preserving Digital Content — the Challenges, Needs and Options
• Accelerating Workflows: Solid State Storage in Media and Entertainment
• Independent Panel — Protecting the Life of Content

Don’t miss this opportunity to meet giants in the field of VR content capture and post production, as well as the storage and service companies that can help make your next professional projects a big success.

• Hear how major media equipment suppliers and entertainment industry customers use digital storage technology in all aspects of content creation and distribution.
• Find out the role that digital storage plays in new content distribution and marketing opportunities for a rapidly evolving market.
• See presentations on digital storage in digital acquisition and capture, nonlinear editing and special effects.
• Find out how to convert and preserve content digitally and protect it in long-term dependable archives.
• Learn about new ways to create and use content metadata, making it easier to find and use.
• Discover how to combine and leverage hard disk drives, flash memory, magnetic tape and optical storage technology with new opportunities in the digital media market.
• Be at the juncture of digital storage and the next generation of the professional media market.

Online registration is open until May 23, 2017. As a media and entertainment professional you can register now with a $100 discount using this link:

—–
Thomas Coughlin, president of Coughlin Associates, is a storage analyst and consultant with over 30 years in the data storage industry. He is active with SNIA, SMPTE, IEEE and other professional organizations.


Creating the color of Hacksaw Ridge

Australian colorist Trish Cahill first got involved in the DI on Mel Gibson’s Hacksaw Ridge when cinematographer Simon Duggan enquired about her interest and availability for the film. She didn’t have to consider the idea long before saying yes.

Hacksaw Ridge, which earned Oscar nominations for Best Picture, Director, Lead Actor, Film Editing (won), Sound Editing and Sound Mixing (won), is about a real-life World War II conscientious objector, Desmond Doss, who refused to pick up a gun but instead used his bravery to save lives on the battlefield.

Trish Cahill

Let’s find out more about Cahill’s work and workflow on Hacksaw Ridge.

What was the collaboration like between you and director Mel Gibson and cinematographer Simon Duggan?
I first met Mel and the editor John Gilbert when I visited them in the cutting room halfway through the edit. We looked through the various scenes, the different battle sequences in particular, and discussed the different tone that was needed for each.

Simon had already talked through the Kodachrome idea, with a gradual and subtle desaturation as the film progressed, and it was very helpful to be spinning through the actual images and listening to Mel and John talk through their thoughts. We then chose a collection of shots that were representative of the different looks and turning points in the film to use in a look development session.

Simon was overseas at the time, but we had a few phone conversations and he sent through some reference stills prior to the session. The look development session not only gave us our look template for the film, it also gave us a better idea of how smoke continuity was shaping up and what could be done in the grade to help.

During the DI, Mel, John and producer Bill Mechanic came in to see my work every couple of days for a few hours to review spool-downs. Once the film was in good shape, Simon flew in with a nice fresh eye to help tighten it further.

What was the workflow for this project?
Being a war film, there are quite a few bullet hits, blood splatter, smoke elements and various other VFX to be completed across a large number of shots. One of the main concerns was the consistency of smoke levels, so it was important that the VFX team had a balanced set of shots put into sequence reflecting how they would appear in the film.

While the edit was still evolving, the film was conformed and assistant colorist Justin Tran started a balance grade of the war sequences on FilmLight Baselight at Definition Films. This gave VFX supervisor Chris Godfrey and the rest of the team a better idea of how each shot should be treated in relation to the shots around it, and whether additional treatment was required for shots not earmarked for VFX. The balance grading work was carried across to the DI grade in the form of BLGs and applied to the final edit using Baselight’s multi-paste, so I had full control and nothing was baked in.

Was there a particular inspiration or reference that you used for the look of this film?
Simon sent through a collection of vintage photograph references from the era to get me started. There were shots of old oxblood-red barns, mechanics and machinery, train yards and soldiers in uniform — a visual board of everyday pictures of real scenes from the 1930s and 1940s, which was an excellent starting point to spring from. Key words were “desaturated” and “Kodachrome,” and the phrase “twist the primaries a touch” was used a bit!

The film starts when our hero, Desmond Doss, is a boy in the 1930s. These scenes have a slight chocolaty sepia tone, which lessens when Doss becomes a young man and enters the military training camp. Colors become more desaturated again when he arrives in Okinawa and then climbs the ridge. We wanted the ridge to be a world unto itself — the desolate battlefield. Each battle from there occurs at different times of day in different environmental conditions, so each has been given its own color variation.

What were the main challenges in grading such a film?
Hacksaw Ridge is a war film. A big percentage of screen time is action-packed and fast-paced, with a high cutting rate. So there are many more shots to grade, varied cameras to balance between and fluctuating smoke levels to figure out. It’s more challenging to keep consistency in this type of film than in the average drama.

The initial attack on top of the ridge happens just after an aerial bombing raid, and it was important to the story for the grade to help the smoke enhance a sense of vulnerability and danger. We needed to keep visibility as low as possible, but at the same time we wanted it still to be interesting and foreboding. It needed analysis at an individual shot level: what can be done on this particular image to keep it interesting and tonal but still have the audience feel a sense of “I can’t see anything.”

Then on a global level — after making each shot as tonal and interesting as possible — do we still have the murkiness we need to sell the vulnerability and danger? If not, where is the balance to still provide enough visual interest and definition to keep the audience in the moment?

What part of the grading process do you spend most of your time on?
I would say I spend more time on the balancing and initial grade. I like to keep my look in a layer at the end of the stack that stays constant for every shot in the scene. If you have done a good job matching up, you have the opportunity to continue to craft the look, as well as add secondaries and global improvements, with confidence that you’re not upsetting the apple cart. It gives you better flexibility to change your mind or keep improving as the film evolves and as your instincts sharpen on where the color mood needs to sit. I believe tightening the match and improving each shot on the primary level is time very well spent.

What was the film shot on, and did this bring any challenges or opportunities to you during the grade?
The majority of Hacksaw Ridge was shot with an Arri Alexa. Red Dragon and Blackmagic pocket cameras were also used in the battle sequences. Whenever possible I worked with the original camera raw. I worked in LogC and used Baselight’s generalized color space to normalize the Red and Blackmagic cameras to match this.

Matching the flames between Blackmagic and Alexa footage was a little tricky. The color hues and dynamic range captured by each camera are quite different, so I used the hue shift controls often to twist the reds and yellows of each closer together. Also, on some shots I had several highlight keys in place to create as much dynamic range as possible.

Could you say more about how you dealt with delivering for multiple formats?
The main deliverables required for Hacksaw Ridge were an XYZ version and a Rec709 version. Baselight’s generalized color space was used to do the conversions from P3 to XYZ and Rec709. I then made minimal tweaks for the Rec709 version.
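Baselight's generalized color space handles these transforms internally; purely as an illustration of what a gamut conversion involves, here is a linear-light matrix conversion from P3 to Rec709 via XYZ. It assumes P3-D65 primaries and linear data, a deliberate simplification: an actual theatrical XYZ delivery also involves the DCI white point, 2.6 gamma and 12-bit encoding.

# Illustrative gamut conversion of linear-light RGB from P3 to Rec709 via XYZ.
# Simplified: assumes P3-D65 primaries and linear data; a real DCP delivery
# also involves the DCI white point, 2.6 gamma encoding and 12-bit X'Y'Z'.
import numpy as np

P3D65_TO_XYZ = np.array([[0.4866, 0.2657, 0.1982],
                         [0.2290, 0.6917, 0.0793],
                         [0.0000, 0.0451, 1.0439]])

XYZ_TO_REC709 = np.array([[ 3.2406, -1.5372, -0.4986],
                          [-0.9689,  1.8758,  0.0415],
                          [ 0.0557, -0.2040,  1.0570]])

def p3_to_rec709(rgb):
    """rgb: (..., 3) array of linear P3-D65 values in [0, 1]."""
    xyz = rgb @ P3D65_TO_XYZ.T
    out = xyz @ XYZ_TO_REC709.T
    return np.clip(out, 0.0, 1.0)   # P3 colors outside Rec709 clip at the gamut edge

# A fully saturated P3 red lands on the Rec709 gamut boundary after clipping.
print(p3_to_rec709(np.array([1.0, 0.0, 0.0])))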

Was there a specific scene or sequence you found particularly enjoyable or challenging?
I enjoyed working with the opening scene of the film, enhancing the golden warmth as the boys are walking through the forest in Virginia. The scenes within the Doss house were also a favorite. The art direction and lighting had a beautiful warmth, and I really enjoyed bringing out the chocolaty 1930s and 1940s tones.

On the flip side of that, I also loved working with the cooler, crisper dawn tones that we achieved in the second battle sequence. I find that when you minimize the color palette and let the contrast and light do the tonal work, it can take you to a unique and emotionally amplified place.

One of the greater challenges of grading the film was eliminating any hint of green plant life throughout the Okinawa scenes. With lush, green plants happily existing in the background, we were in danger of losing the audience’s belief that this was a bleak place. Unfortunately, the WWII US military uniforms were the same shade of green found in many parts of the surrounding landscape, making it impossible to get a clean key. There is one scene in particular where a convoy of military trucks rolls through a column of soldiers, adding clouds of dust to an already challenging situation.


The A-List: Manchester by the Sea director Kenneth Lonergan

By Iain Blair

It’s been 16 years since filmmaker and playwright Kenneth Lonergan made his prize-winning debut at Sundance with You Can Count on Me, which he wrote and directed. The film won the Sundance Grand Jury Prize and was an Academy Award and Golden Globe nominee for Best Screenplay.

Lonergan’s most recent film is also garnering award attention. Directed by one of the most distinctive writing talents on the American indie scene today, Manchester by the Sea fulfills that earlier promise and extends Lonergan’s artistic vision.

Kenneth Lonergan

Both an ensemble piece and an intense character study, Manchester by the Sea tells the story of how the life of Lee Chandler (Casey Affleck), a grieving and solitary Boston janitor, is transformed when he reluctantly returns to his hometown to take care of his teenage nephew Patrick (Lucas Hedges) after the sudden death of his older brother Joe (Kyle Chandler). It’s also the story of the Chandlers, a working-class family living in a Massachusetts fishing village for generations, and a deeply poignant, unexpectedly funny exploration of the power of familial love, community, sacrifice and hope.

Co-produced by Matt Damon, the film from Roadside Attractions and Amazon Studios — which received four SAG nominations, a crucial Oscars barometer — has a stellar behind-the-scenes list of collaborators, including DP Jody Lee Lipes (Trainwreck, Martha Marcy May Marlene), editor Jennifer Lame (Mistress America, Paper Towns), composer Lesley Barber (You Can Count on Me) and production designer Ruth De Jong (The Master, The Tree of Life).

I recently spoke with Lonergan about making the film and his workflow.

I heard Matt Damon was very involved in the genesis of this. How did this project come about?
Matt, his producer Chris Moore and John Krasinski were talking on the set of this film they were shooting about ideas for Matt’s directing debut. Matt and John brought me the basic idea and asked me to write it. So, I took some of their suggestions and went off and spent a couple of years working on it and expanding it. I don’t really start off with themes when I write. I always start with characters and stories that seem compelling, and then let the themes emerge as I go, and with this it became about people dealing with terrible loss, with the story of this man who’s carrying a weight that’s just too much to bear. It’s about loss, family and how people cope.

Is it true that Damon was going to star in it originally?
Yes, but what actually happened was that John was going to star and Matt was going to direct it, but then John’s schedule got too busy and then Matt was going to star and direct it, and then he also got too busy, so then I came onboard to also direct.

You ended up with a terrific cast. What did Casey Affleck, Michelle Williams and Lucas Hedges bring to their roles?
Casey’s a truly wonderful actor who brings tremendous emotional depth even without saying much in a scene. He’s very hard-working, never has a false moment and really has the ability to navigate the complicated relationships and the way Lee deals with people.

Michelle has a tremendous sense of character and is just brilliant, I think. She brings a beautiful characterization to the film and has to go through some pretty intense emotions. They’re both very generous actors, as there are a lot of people they have to interact with. They’re not show-boaters who just want to get up there and emote. And Lucas is this real find, a very talented young actor just starting out who really captured this character.

You shot this on location all over Cape Ann. How tough was it?
It was a bit grueling, as we shot from March until April and it was pretty cold a lot of the time, especially during prep and scouting in February. We had some schedule and budget pressures, but nothing out of the ordinary. I loved shooting around Cape Ann — the locals were great, and the place really seeped into the film in a way that I’m very happy about.

Do you like the post process?
I love post because of the quiet and the chance to really concentrate on making the film. I also like the lack of administrative duties and the sudden drop in the large number of people I’m responsible for on a set. It’s just you, the editor and editorial staff. Some of the technical finishing procedures can be a bit tiring after you’ve seen the film so many times, but overall post is very enjoyable for me.

I loved my editor, and doing all the sound mixing; it was so much fun putting it all together and seeing the story work, all without the stress of the shoot. You still have pressures, but not on the same scale. We did all the post in New York at Technicolor Postworks, and we worked from May through September so it was a pretty relaxed schedule. We had our basic template done by October, and then we did a bunch of little fixes from that point on so it would be ready for Sundance. Then we did a bit more work on it, but didn’t change much — we added four minutes.

Talk about working with editor Jennifer Lame. Was she on the set?
No, we sent her dailies to New York, and we never actually met face-to-face until after the shoot. I interviewed her on the phone when she was in LA working on another job, and we got along right away. She’s a wonderful editor. We began cutting on Avid Media Composer at Technicolor Postworks and then did some over the summer at my rental house on Long Island, where she’d come over and set up. Then we finished up back in New York.

How challenging were all the flashbacks to cut, as they’re quite abrupt?
All the flashbacks were very interesting to put together, but they didn’t really present more of a challenge than anything else because they’re such an intrinsic part of the whole story. We didn’t want to telegraph them and warn the audience by doing them differently. We discussed them a lot. Should they be color-timed differently? Should they be shot differently? Look and sound different?

In the end, we decided they should be indistinguishable from the rest, and it’s mainly only because of the content and behavior that you know they’re flashbacks. They were fun to weave into the story, and the more seamless they were the better we liked it. Jennifer actually pointed out that it was almost like telling two stories, not just one, because that’s how Lee experiences the world. He’s always dealing with memories which pop up when they’re least wanted, and when he returns home to Manchester he’s flooded by memories — for him the past and present are almost the same.

You shot in early spring, but there are a lot of winter scenes, so you must have needed some visual effects?
Some, but not that much. Hectic Electric in Amsterdam did them all. We had some snow enhancement, we added some smoke, clean-up and did some adjustments for light and weather, but scenes like the house fire were all real.

How important is sound and music to you?
It’s hard to overstate. For me, music has the biggest influence on the feeling of a scene after the acting — even more than the cinematography in how it can instantly change the tone and feeling. You can make it cheerful or sad or ominous or peaceful just with the right music, and it adds all these new layers to the story and goes right to your emotions. So I love working with my composer and finding the right music.

Then I asked [supervising sound editor/re-recording mixer] Jacob Ribicoff to record sounds up in Cape Ann at all our locations — the particular sound of the marina, the woods, the bars — so it was all grounded in reality. The whole idea of post sound, which we did at Technicolor Postworks with Jacob, was to support that verisimilitude. He used Avid Pro Tools. There’s no stylization, and it was also about the ocean and that feeling of never being far from water. So the sound design was all about placing you in this specific environment.

Where did you do the DI?
We did the color correction with Jack Lewars, also at Technicolor Postworks. He did the final grade on Autodesk Flame. We shot digitally but I think the film looks very filmic. They did a great job.

Did it turn out the way you first envisioned it?
Pretty much, but it always changes from the script to the screen, and once you bring in your team and all their contributions and the locations and so on, it just expands in every direction. That’s the magic of movies.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.


Quick Chat: Wipster’s Rollo Wenlock on Slack integration

By Randi Altman

Cloud-based review and approval tool Wipster, which lets you upload your latest edit, share it with clients and colleagues and have frame-accurate conversations directly on the video, now offers integration with Slack, allowing for realtime team messaging.

Wipster CEO/founder Rollo Wenlock says, “Now you can get your Wipster notifications directly in your team Slack channel, making it super-easy for the whole team to instantly see where a review is at.”

I reached out to Wenlock to find out more about Wipster, the Slack integration and what it means for users.

How old is Wipster now, and can you describe how it works?
Wipster was born in 2013. It is a content review and approval platform that lets creative teams and their stakeholders rapidly iterate on video projects by sharing work in progress for realtime, pin-point comments right on the content. Teams speed up their production by up to 60 percent and collaborate more closely with their workmates, enhancing the work. We like to say that Wipster is the “Google Docs of video.”

How has the tool evolved over the years?
In the beginning we were very focused on creating a very specific user experience to prove people wanted to share work-in-progress and talk all over it. Wipster only worked for single users, only certain types of video could be uploaded, and at the very start, when comments were made, you had no way of knowing who made them!

Now Wipster works for multiple integrated teams, comments are realtime, with replies, added imagery and social “likes.” All commentary becomes automatic to-do lists, and you can have the whole Wipster experience right inside Adobe Creative Cloud.

What types of pros have been taking advantage of Wipster?
In the early days it was freelancers and small studios working for large agencies and brands. Now we have the large agencies and brands as customers as well, companies like Red Bull, Delta Airlines and Intel. We have every type of creative team using Wipster every day to enhance their creative work.

There are many review and approval apps out there these days. What makes Wipster different? Is it suited to a particular workflow?
Since our launch, a number of other apps have appeared, some doing a great job, others not quite getting the user experience right. The reason brands and studios are coming to Wipster is our relentless focus on making the review experience work seamlessly between the creative and the stakeholder.

Oftentimes, these people have never worked together before, and creating a very easy and memorable experience heightens their relationship. For our customers, Wipster is a new way of working, which takes them 100x beyond the process they had before, which usually involved a disconnected collection of social video apps and email.

Can you talk about your Slack integration? What does it offer users that they didn’t have before? How does it enhance the process?
We talk to our customers every day, multiple times a day — and they tell us about all the apps and workflows they already have, and what they would like them to do with Wipster — which is insanely helpful.

Our customers want to use Wipster as their “pre-publish” platform, and anything we can do to make their lives simpler and more enjoyable is top of our list. Thousands of our users are working in Slack every day, so it was a no-brainer that we create a Wipster activity channel for them to access right inside Slack.

When using Slack and Wipster together, you can access all your Wipster activity right inside a Slack channel in realtime. This means people in your team can see when videos have been uploaded and shared. You can see when teammates and clients have viewed work, and made comments. You can even see what frame of the video they commented on, with a green dot showing you where they had clicked. This workflow is just another way we are rapidly speeding up the process in which creatives and stakeholders can work together.
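Wipster's integration is built in, but the underlying mechanism Slack exposes for this kind of notification is an incoming webhook: an app POSTs a small JSON payload to a channel-specific URL. A minimal sketch of that Slack side, with the webhook URL and message text as placeholders (Wipster's own payloads are not documented here):

# Minimal sketch of posting a review notification to a Slack channel via
# an incoming webhook. The URL and message text are placeholders; this is
# the generic Slack mechanism, not Wipster's actual integration code.
import json
import urllib.request

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def notify(text):
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # Slack replies 200 with body "ok"

notify('New comment on *Promo_v3.mp4* at 00:00:12;14: "Hold this shot longer."')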

Main Photo Caption: Rollo Wenlock (far right) and the Wipster team.


Quick Chat: Emery Wells discusses Frame.io for Adobe After Effects

By Randi Altman

Frame.io is a cloud-based video collaboration tool that was designed to combine the varied ways pros review and approve projects — think Dropbox, Vimeo or email. Frame.io allows you to create projects and add collaborators and files to share in realtime.

The company is now offering an integration with Adobe After Effects that includes features like realtime comments and annotations that sync to your comp, the ability to import comments and annotations into your comp as live shape layers, and uploads of project files and bins.

To find out more, I reached out to Frame.io’s co-founder/CEO Emery Wells.

You just launched a panel for Adobe After Effects. Why was this the next product you guys targeted?
We launched our first Adobe integration with Premiere Pro this past NAB. It was a huge amount of work to rebuild all the Frame.io collaboration features for the Adobe extension architecture, but it was worth the effort. The response to the Premiere integration was one of the best and biggest we’ve received. After Effects is Premiere’s best friend. It’s the workhorse of the post industry. From complex motion graphics and visual effects to simple comps and title sequences, After Effects is one of the key tools video pros rely on, so we knew we had to extend all of the capabilities into AE.

Can you discuss the benefits users get from this panel?
Workflow is often one of the biggest frustrations any post pro faces. You really just want to focus on making cool stuff, but inevitably that requires wrangling renders, uploading files everywhere, collecting feedback and generally doing a bunch of stuff that has nothing to do with what you’re good at and what you enjoy. Frame.io for Adobe After Effects allows you to focus on the work you do well in the tool you use to do it. When you need feedback from someone, just upload your comp to Frame.io from within AE. Those people will immediately get a notification via email or on their phone, and they can start leaving feedback immediately. That feedback then flows right back into your comp where you’re doing the work.

We just cut out all the inefficient steps in between. What it really provides, more than anything else, is rapid iteration. The absolute best work only comes through that creative iteration. We never nail something on our first try. It’s the 10th try, the 50th try. Being able to try things quickly and get feedback quickly not only saves time and money, but will actually produce better work.

Will there be more Adobe collaboration offerings to come?
The way we built the panel for Premiere and After Effects actually uses the entire Frame.io web application codebase. It essentially just has a different skin on it so it feels native to Adobe apps. What that means is that all the updates we do to the core web application get inherited by Premiere and After Effects, so there will be many more features to come.

Not long ago Frame.io got a huge infusion of cash thanks to some heavy-hitter investors. How has this changed the way you guys work?
It’s allowing us to move faster and in parallel. We’ve now shipped four really unique products in about a year and a half: the core web app, the Apple Design Award-winning iOS app, the full experiences that live inside Premiere and AE, and our desktop companion app that integrates with Final Cut Pro X. All these products require considerable resources to maintain and push forward, so the capital infusion will allow us to continue building a complete ecosystem of apps that work together to solve the most essential creative collaboration challenges.

What’s next for Frame.io?
The integrations are a really key part of our strategy, and you’ll see more of them moving forward. We want to embed Frame.io as deeply as we can in the creative apps so it just becomes a seamless part of your experience.

Check out this video for more:


Hands of Stone DP and colorist weigh in on film’s look and feel

By Randi Altman

“No mas! No mas!” Those famous words were uttered in desperation by legendary fighter Roberto Durán, putting an end to his rematch with Sugar Ray Leonard. But before that, Durán had impressively defeated the charismatic Sugar Ray, capturing the WBC welterweight title. Durán’s story — along with that of his trainer Ray Arcel — was recently told in The Weinstein Company’s feature Hands of Stone.

Written and directed by Jonathan Jakubowicz, the film’s DP was Miguel Ioan Littin Menz. He worked very closely with director Jakubowicz and FotoKem colorist Kostas Theodosiou to develop several different looks for the film, including for the different decades in which the story takes place, boxing versus training scenes in different locations (New York, Panama, Las Vegas) and flashback scenes.

Robert De Niro and Édgar Ramírez star in Hands of Stone.

The film stars Édgar Ramírez as Durán, Usher Raymond as Sugar Ray and Robert De Niro as Ray Arcel.

We were lucky enough to get some time from both Littin Menz and Theodosiou, albeit separately, for questions. First we caught up with Theodosiou.
Enjoy.

How early did you get involved with the film?
Theodosiou: Prior to my involvement in the project, FotoKem’s nextLAB was on location and involved in dailies acquisition and management. However, I started working with the filmmakers at the editorial stage, after the shoot was finished.

What kind of overall look/looks did the director and DP have in mind for the film, and how did they share that vision with you?
Theodosiou: Both director Jonathan Jakubowicz and director of photography Miguel Ioan Littin Menz were very hands-on. They supervised each session to make sure we created looks that best suited all the different time periods, as well as the variety of locations used in the production. The story involved multiple locations, including Panama, New York and Las Vegas.

Nearly every scene was shot on location to maintain authenticity, and it was important that we were true to the look and feel of each location. Jonathan and Miguel explained in detail what they wanted to achieve visually, so we created a unique look for each location.


Kostas Theodosiou

In addition, the story took us through many different time periods that spanned Roberto Duran’s life — from childhood through his entire career. Each time period also required a different treatment to establish its place in time. Every look we created had a purpose and is in the film for a reason. As a result, there are many different looks in this movie, but they all worked together to help tell the story.

You called on Resolve for this film. Can you talk about the tool and how it helps you in your work?
Theodosiou: Resolve is a great platform and allowed me to mix footage that was shot using a variety of different cameras, lenses and aspect ratios. The tools in Resolve helped me blend the footage seamlessly to enhance the filmmakers’ vision, and the results surpassed their expectations.

You mentioned that both the director and DP were in the room with you?
Theodosiou: Yes, Miguel and Jonathan were supervising the color correction from beginning to end. We all had great chemistry and worked together as a team. This was Jonathan’s passion project and he was very invested in the film, so he was deeply involved in the finishing process. And Miguel flew in from Chile to make sure he was here with us.

In the final stages of making the film, additional scenes were added and both filmmakers returned to FotoKem to work with me to make sure the new extended scenes fit in with the mood they were trying to portray. It was a very hands-on experience.

Now let’s hear from DP Miguel Ioan Littin Menz:

What were your first meetings like with Kostas?
Littin Menz: I was very pleased to hear that the color correction was to be done at FotoKem in Los Angeles. We chose Kostas because of his background — he’s worked for Paul Thomas Anderson; Robert Elswit, ASC; Christopher Nolan; and Hoyte van Hoytema, ASC. From the first meeting, we understood each other immediately when talking about aesthetics. Our ideas and feelings about how to adjust the palette of colors for the final look of the film were in sync. He did marvelous work.


Jonathan Jakubowicz and Miguel Ioan Littin Menz.

What was the general overall look the director had in mind for the film and how did he communicate that to you?
Littin Menz: In general, Jonathan talked about creating different looks between Panama and New York, and about creating a look where you can feel an epic and an intimate story at the same time. We wanted the audience to feel the wild, powerful and sensual colors around Roberto Durán’s life in Panama, and the more plain, elegant and sober colors around Ray Arcel’s life in New York. In our research, we looked at thousands of photographs from sports magazines of that period, and also many documentaries.

And for my personal research, I again read Norman Mailer’s book “The Fight” and Jack London’s “The Mexican.”

How would you describe the different looks and feel of the film — decade by decade, location by location?
Littin Menz: I worked very closely with Tomás Voth, the production designer, who did amazing work. We described two very different worlds — Durán’s life in Panama and Ray Arcel’s in New York — so as a general concept we tried to create an eclectic and powerful palette of colors for Durán’s life, to mimic his real personality.

For Ray Arcel, we used colors that were more serene and elegant, like he was throughout his entire life. Sometimes I used warm colors to evoke nostalgic times for Ray Arcel, and sometimes cool colors appeared in the sad times for both Duran and Arcel. Decade by decade, from the ‘60s to the ‘80s, we created different looks for timeline reasons but also as part of the intimate space for each character.

What cameras did you use, and why did you opt for three different ones? How did that affect the look and the grade?
Littin Menz: We relied on two Alexa XTs, one Alexa M and three Blackmagic cameras for VFX purposes. One of the Alexas, the B camera, was always prepared for Steadicam. The C camera and the Alexa M were used for the fights. We also used Hawk V-Lite anamorphic lenses. Kostas was thorough in making sure everything from the different shoots matched.

Can you talk about the shoot? Was there a DIT? If so, what role did they play? And what kind of on-set monitors were you using?
Littin Menz: The DIT was there mostly for making the back-ups and dailies. It was a lot of material every day. We also created LUTs for some scenes. The monitors were Asus VS197D-P 18.5-inch for video assist and a Flanders Scientific for the DIT station.

Was there anything unique or challenging about it that you are particularly proud of?
Littin Menz: On the technical side, it was very challenging to reproduce the big spaces and fights in places like Madison Square Garden in New York through three decades, the Olympic Stadium in Montreal and the Superdome in New Orleans, but I think we did it successfully.

Some of my favorite scenes were those of Durán when he was a kid in “El Chorrillo,” the poor neighborhood where he lived. We never forgot that the principal idea for the film was to tell the story through the clear and transparent eyes of that child — the story of a child who came from one of the poorest neighborhoods in Latin America and became a world champion. I’m very proud to have been a part of this project.

Collaboration app from Frame.io available for iPhone

Frame.io, which seems to be releasing new features and tools almost monthly, has developed an iPhone app for video review and collaboration. The all-new iOS app gives editors, producers, artists and filmmakers the ability to share, review and collaborate on videos wherever they are.

Frame.io for iOS includes: time-based comments and video annotations so you can draw directly on video frames to accurately communicate your feedback; video transcoding in the cloud so you can upload any video format and not have to worry about playback compatibility; version control so you can see what your video looked like one version ago or 100 versions ago; Comment Replay, which loops a four-second range around any comment so you can get a sense of what it means in context; TouchScrub, allowing users to slide their finger over a thumbnail to preview; and Touch ID for added security.

“We spent eight months perfecting the Frame.io experience for iPhone,” says Frame.io CEO Emery Wells. “The old way of working with email and 10 different file-sharing and video review services just wasn’t cutting it. We first solved that problem on the web, and now with Frame.io for iOS we’ve made the entire video review and collaboration experience accessible from anywhere.”

We reached out to Frame.io’s Emery Wells to find out more about the app. First we asked, why the eight-month timeframe for building the app?

“Doing an iOS app from scratch is a big undertaking. We wanted it to be 100 percent native. We didn’t choose to reuse any code, and we didn’t rewrap the web application. This allowed us to take advantage of all the latest features available in iOS, like Touch ID and 3D Touch,” he explains. “We completely redesigned three or four times before arriving at the final design. While designing, we started coding all the really essential parts of the app — the stuff we knew we would need even if we went through another drastic design change. We kept working like this until we came up with two to three magic moments. When we impress ourselves, we know we’re ready to start thinking about shipping.”

Wells believes the magic moments in this first release were three key things:
1. Pull-down collaborator animation. This is really custom UI/UX and animation.
2. TouchScrub with Peek and Pop support. “The first time we got TouchScrub working, everyone was giddy,” he says. “There is something so satisfying about scrubbing your finger over a clip to get a quick preview. Peek and Pop support was icing on the cake.”
3. Comment Replay. This was an idea Wells came up with when the team was working on their Premiere launch video. “Team members were leaving notes for me like, ‘Needs to come in on the beat.’ I’d be reading these notes on our yet-unreleased iPhone app and it would be really hard to experience that little moment where the note was left. I wanted to loop that little range a few times to get a sense of what he meant and understand the comment in context. We came up with the idea of Comment Replay, which loops four seconds around any comment. It’s insanely useful.”

We also asked Wells if a non-iOS mobile app was in his future. “We started with iOS probably because most of our team is iOS-centric. We all have iPhones and Macs. It’s not a religious decision. Android is great, and hopefully at some point in the future we’ll see Frame.io on Android, but our expertise and familiarity were more in line with iOS.”

Also available in French and German, the Frame.io iOS app can be downloaded now from the App Store. Check out their product video…