A Closer Look

On Hold: Making an indie web series

By John Parenteau

On Hold is an eight-episode web series, created and co-written by Craig Kuehne and me, about a couple of guys working at a satellite company for an India-based technology firm. They have little going for themselves except each other, and that’s not saying much. Season 1 is available now, and we are in prepro on Season 2.

While I personally identify as a filmmaker, I’ve worn a wide range of hats in the entertainment industry since graduating from the USC School of Cinematic Arts in the late ‘80s. As a visual effects supervisor, I’ve been involved in projects as diverse as Star Trek: Voyager and The Hunger Games. I have also filled management roles at companies such as Amblin Entertainment, Ascent Media, Pixomondo and Shade VFX.

That’s me in the chair, conferring on setup.

It was with my filmmaker hat on that I recently partnered with Craig, a long-time veteran of visual effects, whose credits include Westworld and Game of Thrones. We thought it might be interesting to share our experiences as we ventured into live-action production.

It’s not unique that Craig and I want to be filmmakers. I think most industry professionals who are not already working as directors or producers strive to eventually reach that goal. It’s usually the reason people like us get into the business in the first place, and what many of us continue to pursue. Often we’ve become successful in another aspect of entertainment and found it difficult to break out of those “golden handcuffs.” I know Craig and I have both felt that way for years, despite having led fairly successful lives as visual effects pros.

But regardless of our successes in other roles, we still identify ourselves as filmmakers, and at some point, you just have to make the big push or let the dream go. I decided to live by my own mantra that “filmmakers make film.” Thus, On Hold was born.

Why the web series format, you might ask? With so many streaming and online platforms focused on episodic material, doing a series would show we are comfortable with the format, even if ours was a micro-version of a full series. We had, for years, talked about doing a feature film, but that type of project takes so many resources and so much coordination. It just seemed daunting in a no-budget scenario. The web series concept allows us to produce something that resembles a marketable project, essentially on little or no budget. In addition, the format is easily recreated for an equally low budget, so we knew we could do a second season of the show once we had done the first.

This is Craig, pondering a shot.

The Story
We have been friends for years, and the idea for the series came from both our friendship and our own lives. Who hasn’t felt, as they were getting older, that maybe some of the life choices they made might not have been the best? That can be a serious topic, but we took a comedic angle, looking for the extremes. Our main characters, Jeff (Jimmy Blakeney) and Larry (Paul Vaillancourt), are subtle reflections of us (Craig is Jeff, the somewhat over-thinking, obsessive nerd, and I’m Larry, a bit of a curmudgeon who can take himself way too seriously), but they quickly took on a life of their own, as did the rest of the cast. We added Katy (Brittney Bertier), their over-energetic intern; Connie (Kelly Keaton), Jeff’s bigger-than-life sister; and Brandon (Scott Rognlien), the creepy and not-very-bright boss. The chemistry just clicked. They say casting is key, and we certainly discovered that on this project. We were very lucky to find the actors we did, and they played off each other perfectly.

So what does it take to do a web series? First off, writing was key. We spent a few months working out the overall storyline of the first season and then homed in on the basic outlines of each episode. We actually worked out a rough overall arc of the show itself, deciding on a four-season project, which gave us a target to aim for. It was just some basic imagery for an ultimate ending of the show, but it helped keep us focused and helped drive the structure of the early episodes. We split up writing duties, each working on alternate episodes and then sharing scripts with each other. We tried to be brutally honest; it was important that the show reflect both of our views. We spent many nights arguing over certain moments in each episode, both of us very passionate about the storyline.

In the end we could see we had something good; we just needed to add our talented actors to make it great.

On Hold

The Production
We shot on a Blackmagic Cinema Camera, which was fairly new at that point. I wanted the flexibility of different lenses but also a high-resolution, high-quality picture. I had never been thrilled with standard DSLR cameras, so I thought the Blackmagic camera would be a good option. To top it off, I could get one for free — always a deciding factor at our budget level. We ended up shooting with a single Canon zoom lens that Craig had, and for the most part it worked fine. I can’t tell you how important the “glass” you shoot with can be. If we had the budget I would have rented some nice Zeiss lenses or something equally professional, and the quality of the image reflects that lack of budget. But the beauty of the Blackmagic Cinema Camera is that it shoots such a nice image already, and at such a high resolution, that we knew we would have some flexibility in post. We recorded in Apple ProRes.

As a DP, I have shot everything from PBS documentaries to music videos, commercials and EPKs (a.k.a. behind-the-scenes projects), and I have had the luxury of working with loads of gear as well as, at times, a single light. At USC Film School, my alma mater, you learn to work with what you have, so I learned early to adapt my style to the gear on hand. I ended up using a single lighting kit (a Lowel DP three-head kit), which worked fine. Shooting comedy is always more about static angles and higher-key lighting, and my limited kit made that easily achievable. I would usually lift the ambience in the room by bouncing a light off a wall or ceiling area off camera, then use bounce cards on C-stands to give some source light from the top/side, complementing but not competing with the existing fluorescents in the office. The bigger challenges came when we shot toward the windows. The bright sunlight outside, even with the blinds closed, was a challenge, but we creatively scheduled those shots for early or late in the day.

Low-budget projects are always an exercise in inventiveness and flexibility, mostly by the crew. We had a few people helping off and on, but ultimately it came down to the two of us wearing most of the hats and our associate producer, Maggie Jones, filling in the gaps. She handled the SAG paperwork, some AD tasks, ordered lunch and even operated the boom microphone. That left me shooting all but one episode, while we alternated directing episodes. We shot an episode a day, using a friend’s office on the weekends for free. We made sure we created shot lists ahead of time, so I could see what Craig had in mind when I shot his episodes, but also so he could act as a backup check on my list when I was directing.

The Blackmagic camera at work.

One thing about SAG — we decided to go with the guild’s new media contract for our actors. Most of them were already SAG, and while they most likely would have been fine shooting such a small project non-union, we wanted them to be comfortable with the work. We also wanted to respect the guild. Many people complain that working under SAG, especially at this level, is a hassle, but we found it to be exactly the opposite. The key is keeping up with the paperwork each day you shoot. Unless you are working incredibly long hours, or plan to abuse your talent (not a good idea regardless), it’s fairly easy to remain compliant. Maggie managed the daily paperwork and ensured we broke for lunch as per the requirements. Other than that, it was a non-issue.

The Post
Much like our writing and directing, Craig and I split editorial tasks. We both cut on Apple Final Cut Pro X (he with pleasure, me begrudgingly), and shared edits with each other. It was interesting to note differences in style. I tended to cut long, letting scenes breathe. Craig, a much better editor than I, had snappier cuts that moved quicker. This isn’t to say my way didn’t work at times, but it was a nice balance as we made comments on each other’s work. You can tell my episodes are a bit longer than his, but I learned from the experience and managed to shorten my episodes significantly.

I did learn another lesson, one called “killing your darlings.” In one episode, we had a scene where Jeff enjoyed a box of donuts, fishing through them to find the fruit-filled one he craved. The process of him licking each one and putting it back, or biting into a few and spitting out pieces, was hilarious on set, but in editorial I soon learned that too much of a good thing can be bad. Craig persuaded me to trim the scene, and I realized quickly that having one strong beat is just as good as several.

We had a variety of issues with other areas of post, but with no budget we could do little about them. Our “mix” consisted of adjusting levels in our timeline. Our DI amounted to a little color correction. While we were happy with the end result, we realized quickly that we want to make season two even better.

On Hold

The Lessons
A few things pop out as areas needing improvement. First of all, shooting a comedy series with a great group of improv comedians mandates at least two cameras. Both Craig and I, as directors, would do improv takes with the actors after getting the “scripted version,” but some of it was not usable, since cutting between different improv takes from a single-camera shoot is nearly impossible. We also realized the importance of a real sound mixer on set. Our single-mic mono tracks, run by our unprofessional hands, definitely needed some serious fixing in post. Simply having more experienced hands would have made our days more efficient as well.

For post, I certainly wanted to use newer tools, and we called in some favors for finishing. A confident color correction really makes the image cohesive, and even a rudimentary audio mix can remove many sound issues.

All in all, we are very proud of our first season of On Hold. Despite the technical issues and challenges, what really came together were the performances, and, ultimately, that is what people are watching. We’ve already started development on Season 2, which we will start shooting in January 2018, and we couldn’t be more excited.

The ultimate lesson we’ve learned is that producing a project like On Hold is not as hard as you might think. Sure it has its challenges, but what part of entertainment isn’t a challenge? As Tom Hanks says in A League of Their Own, “It’s supposed to be hard. If it wasn’t hard everyone would do it.” Well, this time, the hard work was worth it, and has inspired us to continue on. Ultimately, isn’t that the point of it all? Whether making films for millions of dollars, or no-budget web series, the point is making stuff. That’s what makes us filmmakers.


SMPTE ST 2110 enables IP workflows

By Tom Coughlin

At IBC2017 and this year’s SMPTE Conference, IP-based workflows took center stage, with interoperability demonstrations and conference sessions. Clearly, proprietary media networking will be supplanted by IP-based workflows. This will enable new equipment economies and open up new opportunities for using and repurposing media. IP workflows will also impact the way we store and use digital content, and thus the storage systems where it lives.

SMPTE has just ratified the ST 2110 family of standards for IP transport in media workflows. The standard puts video, audio and ancillary data into separate routable streams. PCM audio streams are covered by SMPTE ST 2110-30, uncompressed video streams by ST 2110-20 and ancillary data by ST 2110-40. Other parts of the standard cover traffic shaping of uncompressed video (ST 2110-21) and AES3 transparent transport (ST 2110-31), while ST 2110-50 allows integration with the older ST 2022-6 specification, which covers legacy SDI over IP.

The separate streams carry timestamps that allow proper alignment when they are recombined; this common time base is provided by the ST 2059 standards, which are built on the IEEE 1588 Precision Time Protocol (PTP). Each stream contains metadata that tells the receiver how to interpret what is inside the stream. The uncompressed video stream supports images up to 32K x 32K, HDR and all common color systems and formats.
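To make that timing relationship concrete, here is a minimal conceptual sketch in Python (my own illustration, not anything from the standards documents). Each essence stream stamps its RTP packets against the same PTP-derived epoch (the common timing model of ST 2110-10), with a 90kHz media clock for video and 48kHz for audio, which is what lets a receiver realign streams that arrived over independent network paths:

    # Conceptual sketch: how a shared PTP time base lets separately routed
    # ST 2110 essence streams be realigned at the receiver. Clock rates come
    # from the standards (90 kHz video, 48 kHz audio); values are illustrative.
    VIDEO_CLOCK_HZ = 90_000
    AUDIO_CLOCK_HZ = 48_000

    def rtp_timestamp(ptp_seconds, media_clock_hz):
        """Map a PTP capture time to a 32-bit RTP timestamp."""
        return int(ptp_seconds * media_clock_hz) % 2**32

    # A video frame and its matching audio samples captured at the same
    # instant carry timestamps that refer to the same wall-clock moment,
    # even though they travel as independent IP streams.
    capture_time = 1234.5678  # seconds since the PTP epoch (made up)
    print(rtp_timestamp(capture_time, VIDEO_CLOCK_HZ))  # video packet stamp
    print(rtp_timestamp(capture_time, AUDIO_CLOCK_HZ))  # audio packet stamp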

The important thing about these IP standards is that they allow the use of conventional Ethernet cabling rather than special proprietary cables, which saves a lot of money on hardware. In addition, an IP-based workflow allows easy ingest into a core IP network and distribution of content using IP-based broadcast, telco, cable and broadband technologies, as well as satellite channels. As most consumers have IP content access, these IP networks connect directly to consumer equipment. The image below, from an Avid presentation by Shailendra Mathur at SMPTE 2017, illustrates the workflow.

At IBC and the SMPTE 2017 Conference there were interoperability demonstrations. Although the IBC interop demo had many more participants, the SMPTE demo was still pretty extensive. The photo below shows the SMPTE interoperability demonstration setup.

As many modern network storage systems, whether file- or object-based, use Ethernet connectivity, having the rest of the workflow on an IP network makes moving data through the workflow, to and from digital storage, easier. Since access to cloud-based assets is also through IP-based networks, and these can feed CDNs and other distribution networks, on-premises and cloud storage interact through IP networks and can be used to support working storage, archives and content distribution libraries.

IP workflows combined with IP-based digital storage provide end-to-end processing and storage of data. This provides hardware economies and access to a lot of software built to manage and monitor IP flows, helping optimize a media production and distribution system. By avoiding the overhead of converting from one type of network to another, overall system complexity is reduced and efficiency improved, resulting in faster projects and easier troubleshooting when problems arise.


Tom Coughlin is president of Coughlin Associates. He is the founder and organizer of the annual Storage Visions Conference as well as the Creative Storage Conference. He has also been the general chairman of the annual Flash Memory Summit.


Working with Anthropologie to build AR design app

By Randi Altman

Buying furniture isn’t cheap; it’s an investment. So imagine having an AR app that allows you to see what your dream couch looks like in paisley, or colored dots! Well, imagine no more. Anthropologie — which sells women’s clothing, shoes and accessories, as well as furniture, home décor, beauty and gifts — just launched its own AR app, which gives users the ability to design and customize their own pieces and then view them in real-life environments.

They called on production and post house CVLT to help design the app. The bi-coastal studio created over 96,000 assets, allowing users to combine products in many realistic ways. The app also accounts for environmental lighting and shadows in realtime.

We reached out to CVLT president Alberto Ruiz to find out more about how the studio worked with Anthropologie to create this app.

How early did CVLT get involved in the project?
Our involvement began in the spring of 2017. We collaborated early in the planning phases when Anthropologie was concepting how to best execute the collection. Due to our background in photography, video production and CGI, we discussed the positives and pitfalls of each avenue, ultimately helping them select CGI as the path forward.

We’re often approached by a brand with a challenge and asked to consult on the best way to create the assets needed for the campaign. With specialists in each category, we look at all available ways of executing a particular project and provide a recommendation as to the best way to build a campaign with longevity in mind.

How did CVLT work with Anthropologie? How much input did you have?
We worked in close collaboration with Anthropologie every step of the way. We helped design style guides and partnered with their development team to test and optimize assets for every platform.

Our creatives worked closely with Anthropologie to elevate the assets to a level of quality reflective of the product’s integrity. We presented CGI as a way to engage customers now and in the future through AR/VR platforms. Because of this partnership, we understood the vision for future executions and built our assets with those executions in mind. They were receptive to our suggestions and engaged in product feedback. All in all, it was a true partnership between companies.

Has CVLT worked on assets or materials for an app before? How much of your work is for apps or the web?
The majority of the work that we produce is for digital platforms, whether for the web, mobile or experiential platforms. In addition to film and photography projects, we produce highly complex CGI products for luxury jewelers, fragrance and retail companies.

More and more clients are looking to either supplement or run full campaigns digitally. We believe that investing in emerging technologies, such as augmented and virtual reality, is paramount in the age of digital and mobile content. Our commitment to emerging technologies connects our clients with the resources to explore new ways of communicating with their audience.

What were the challenges of creating so many assets? What did you learn that could be applicable moving forward?
The biggest challenge was unpacking all the variables within this giant puzzle. There are 138 unique pieces of furniture in 11 different fabrics, with 152 colorways, eight leg finishes and a variety of hardware options. Stylistically, colors of a similar family were to live on complementary backgrounds, adding yet another variable to the project. It was basically a Rubik’s Cube on steroids. Luckily, we really enjoy puzzles.
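To see the scale, consider a quick back-of-the-envelope calculation. This is a toy Python sketch using the counts quoted above; how the options actually map to each piece isn’t specified here, so the combination rule below is an assumption:

    # Rough combinatorics from the figures quoted above. The 11 fabrics are
    # assumed to be folded into the 152 colorways, and the way options
    # combine per piece is an assumption, since not every colorway or leg
    # finish applies to every item.
    pieces, colorways, leg_finishes = 138, 152, 8

    # If every piece came in every colorway and leg finish, the space of
    # variants would already exceed the ~96,000 assets actually produced:
    print(pieces * colorways * leg_finishes)  # 167,808 theoretical variants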

We always believed in having a strong production team and pipeline. It was the only way to achieve the scale and quality of this project. This was further reinforced as we raced toward the finish line. We’re now engaged in future seasons and are focused on refining the pipeline and the workflow tools therein.

Any interesting stories from working on the project?
One of the most interesting things about working on the project was how much we learned about furniture. The level of planning and detail that goes into each piece is amazing. We talk a lot about the variables in colors, fabrics and styles because they are the big factors. What remains hidden are the small details that have large impacts. We were given a crash course in stitching details, seam placements, tufting styles and more. Those design details are what set an Anthropologie piece apart.

Another interesting part of the project was working with such an iconic brand with a strong heritage. The rich history of design at Anthropologie permeates every aspect of their work. The same level of detail poured into product design is also visible in the way they communicate with and understand their customer.

What tools were used throughout the project?
Every time we approach a new project we assess the tools that we have in our arsenal and the custom tools that we can develop to make the process smoother for our clients. This project was no different in that sense. We combined digital project management tools with proprietary software to create a seamless experience for our client and staff.

We built a bi-coastal team for this project between our New York and Los Angeles offices. Between that and our Philadelphia-based client, we relied heavily on collaborative digital tools to manage reviews. It’s a workflow we’re accustomed to as many of our clients have a global presence, which was further refined to meet the scale of this project.

What was the most difficult part of the project?
The timeframe was really the biggest challenge in this project. The sheer volume of assets — 96,000 created in under five months — was definitely a monumental task, and one we’re very proud of.


Making 6 Below for Barco Escape

By Mike McCarthy

There is a new movie coming out this week that is fairly unique. Telling the true story of Eric LeMarque surviving eight days lost in a blizzard, 6 Below: Miracle on the Mountain is the first film shot and edited in its entirety for the new Barco Escape theatrical format. If you don’t know what Barco Escape is, you are about to find out.

This article is meant to answer just about every question you might have about the format and how we made the film, on which I was post supervisor, production engineer and finishing editor.

What is Barco Escape?
Barco Escape is a wraparound visual experience — it consists of three projection screens filling the width of the viewer’s vision, with a total aspect ratio of 7.16:1. The exact field of view will vary depending on where you are sitting in the auditorium, but it is usually 120-180 degrees. Similar to IMAX, the idea is not to fill the entire screen with your main object but to leave that in front of the audience and let the rest of the image surround them and fill their peripheral vision for a more immersive experience. Three separate 2K scope theatrical images play at once, resulting in 6144×858 pixels of imagery to fill the room.

Is this the first Barco Escape movie?
Technically, four other films have screened in Barco Escape theaters, the most popular one being last year’s release of Star Trek Beyond. But none of these films used the entire canvas offered by Escape throughout the movie. They had up to 20 minutes of content on the side screens, but the rest of the film was limited to the center screen that viewers are used to. Every shot in 6 Below was framed with the surround format in mind, and every pixel of the incredibly wide canvas is filled with imagery.

How are movies created for viewing in Escape?
There are two approaches that can be used to fill the screen with content. One is to place different shots on each screen in the process of telling the story. The other is to shoot a wide enough field of view, at high enough resolution, to stretch a single image across the screens. For 6 Below, director Scott Waugh wanted to shoot everything at 6K, with the intention of filling all the screens with the main image. “I wanted to immerse the viewer in Eric’s predicament, alone on the mountain,” he says.

Cinematographer Michael Svitak shot with the Red Epic Dragon. He says, “After testing both spherical and anamorphic lens options, I chose to shoot Panavision Primo 70 prime lenses because of their pristine quality across the entire imaging frame.” He recorded in 6K-WS (a 2.37:1 aspect ratio at 6144×2592), framing with both 7:1 Barco Escape and a 2.76:1 4K extraction in mind. Red does have 8:1 and 4:1 options that could work if Escape were your only deliverable, but since there are very few Escape theaters at the moment, you would literally be painting yourself into a corner. Having more vertical resolution available in the source footage opens up all sorts of workflow possibilities.
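For the curious, the framing math is easy to verify. This is my own back-of-the-envelope sketch based on the resolutions quoted above, not the production’s actual extraction settings:

    # Back-of-the-envelope framing math from the resolutions quoted above.
    src_w, src_h = 6144, 2592          # Red 6K-WS source, 2.37:1

    # Barco Escape uses the full source width at roughly 7.16:1:
    escape_h = round(src_w / 7.16)     # ~858 lines -> the 6144x858 canvas
    print(src_w, escape_h)

    # A 2.76:1 extraction also fits comfortably inside the 6K frame,
    # which is what left room for alternate deliverables:
    extract_h = round(src_w / 2.76)    # ~2226 of the 2592 available lines
    print(src_w, extract_h)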

This still left a few challenges in post: to adjust the framing for the most comfortable viewing and to create alternate framing options for other deliverables that couldn’t use the extreme 7:1 aspect ratio. Other projects have usually treated the three screens separately throughout the conform process, but we treated the entire canvas as a single unit until the very last step, breaking out three 2K streams for the DCP encode.

What extra challenges did Barco Escape delivery pose for 6 Below’s post workflow?
Vashi Nedomansky edited the original 6K R3D files in Adobe Premiere Pro, without making proxies, on some maxed-out Dell workstations. We did the initial edit with curved ultra-wide monitors and 4K TVs. “Once Mike McCarthy optimized the Dell systems, I was free to edit the source 6K Red RAW files and not worry about transcodes or proxies,” he explains. “With such a quick turnaround every day, and so much footage coming in, it was critical that I could jump on the footage, cut my scenes, see if they were playing well and report back to the director that same day if we needed additional shots. This would not have been possible time-wise if we were transcoding and waiting for footage to cut. I kept pushing the hardware and software, but it never broke or let me down. My first cut was 2 hours and 49 minutes long, and we played it back on one Premiere Pro timeline in realtime. It was crazy!”

All of the visual effects were done at the full shooting resolution of 6144×2592, as was the color grade. Once Vashi had the basic cut in place, there was no real online conform, just some cleanup work to do before sending it to color as an 8TB stack of 6K frames. At that point, we started examining it from the three-screen perspective with three TVs to preview it in realtime, courtesy of the Mosaic functionality built into Nvidia’s Quadro GPU cards. Shots were realigned to avoid having important imagery in the seams, and some areas were stretched to compensate for the angle of the side screens from the audience’s perspective.

DP Michael Svitak and director Scott Waugh

Once we had the final color grade completed (via Mike Sowa at Technicolor using Autodesk Lustre), we spent a day in an Escape theater analyzing the reflections between the screens and their effect on contrast. We made a lot of adjustments to keep the luminance of the side screens from washing out the darks on the center screen, something you can’t simulate on TVs in the edit bay. “It was great to be able to make the final adjustments to the film in realtime in that environment. We could see the results immediately on all three screens and how they impacted the room,” says Waugh.

Once we added the 7.1 mix, we were ready to export assets for our delivery in many different formats and aspect ratios. Making the three streams for Escape playback was as simple as using the crop tool in Adobe Media Encoder to trim the sides in 2K increments.
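As an aside, the same slicing can be expressed with open-source tools. The sketch below builds equivalent ffmpeg crop commands, one 2048-wide slice per screen; we used Adobe Media Encoder, so ffmpeg and the filenames here are purely illustrative:

    # Illustration only: the production used Adobe Media Encoder's crop tool.
    # This builds equivalent ffmpeg commands, one 2048x858 slice per screen,
    # cut from a hypothetical 6144x858 master file.
    SCREEN_W, SCREEN_H = 2048, 858

    for i, screen in enumerate(["left", "center", "right"]):
        x = i * SCREEN_W  # left edge of this screen's slice
        print(f"ffmpeg -i master_6144x858.mov "
              f"-vf crop={SCREEN_W}:{SCREEN_H}:{x}:0 -c:a copy {screen}.mov")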

How can you see movies in the Barco Escape format?
Barco maintains a list of theaters that have Escape screens installed, which can be found at ready2escape.com. But for readers in the LA area, the only opportunity to see a film in Barco Escape in the foreseeable future is to attend one of the Thursday night screenings of 6 Below at the Regal LA Live Stadium or the Cinemark XD at Howard Hughes Center. There are other locations available to see the film in standard theatrical format, but as a new technology, Barco Escape is only available in a limited number of locations. Hopefully, we will see more Escape films, and locations to watch them, in the future.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.


A closer look at some London-based audio post studios

By Mel Lambert

While in the UK recently for a holiday/business trip, I had the opportunity to visit several of London’s leading audio post facilities and catch up with developments among the Soho community.

‘Baby Driver’

I also met up with Julian Slater, a highly experienced supervising sound editor, sound designer and re-recording mixer who relocated to the US a couple of years ago, working first at Formosa Group and then at the Technicolor at Paramount facility in Hollywood. Slater was in London working on writer/director Edgar Wright’s action-drama Baby Driver, starring Lily James, Jon Hamm, Jon Bernthal and Jamie Foxx. The film follows the progress of a young getaway driver who, after being coerced into working for a crime boss, finds himself taking part in a heist that’s doomed to fail.

Goldcrest Films
Slater handled sound effects pre-dubs at Goldcrest Films on Dean Street in the heart of Soho’s film district, while co-mixer Tim Cavagin worked on dialog and Foley pre-mixes at Twickenham TWI Studios in Richmond, a London suburb west of the capital. Finals started just before Christmas at Goldcrest, with Slater handling music and SFX while Cavagin oversaw dialog and Foley. “We are using Goldcrest’s new Dolby Atmos-capable Theatre 1, which opened last May,” explains Slater. “The post crew includes sound effects editors Arthur Graley, Jeremy Price and Martin Cantwell, plus dialog/ADR supervisor Dan Morgan and Foley editor Peter Hanson.”

“I cannot reveal too much about my sound design for Baby Driver,” admits Slater, “but because the lead character [actor Ansel Elgort] has a hearing anomaly, I am working with pitch changes to interweave various elements of the film’s soundtrack.”

Baby Driver is scheduled for UK and US release in August, and will be previewed in mid-March at the SXSW Film Festival in Austin. Composer Steven Price’s score for the film was recorded at Abbey Road Studios in North London. Price wrote the music for writer/director Alfonso Cuarón’s Gravity (2013), which won him the Academy Award for Best Original Score.

British-born Wright is probably best known for comedies, such as Shaun of the Dead (2004), Hot Fuzz (2007) and The World’s End (2013), several of which featured Slater’s talents as supervising sound editor, sound designer and/or re-recording mixer.

Slater is a multiple BAFTA and Emmy Award nominee. After graduating from the School of Audio Engineering (now the SAE Institute) in London, at the age of 22 he co-founded the Hackenbacker post company and designed sound for his first feature film, director Mike Figgis’ Leaving Las Vegas (1995). Subsequent films include In Bruges (2008), Dark Shadows (2012), Scott Pilgrim Vs. the World (2010) and Attack the Block (2011).

Goldcrest Films, which has a NYC-based studio as well, provides post services for film and broadcast projects, including Carol (2015), The Danish Girl (2015) and Les Misérables (2012). The facility features three Dolby dubbing theaters with DCI-compliant projection, plus ADR and Foley recording stages, sound design and editing suites, offline editorial and grading suites. “Last May we opened Theatre 1,” reports studio manager Rob Weatherall, “a fully sound-isolated mixing theater that is Dolby Atmos Premier-certified.”

Goldcrest Films Theater 1 (L-R): Alex Green, Rowan Watson, Julian Slater, Rob Weatherall and Robbie Scott.

First used to re-record writer/director Paul Greengrass’ Jason Bourne (2016), the new room houses a hybrid Avid 32-fader S6 M40 Pro Tools control surface section within a 72-fader dual-engine AMS Neve DFC3D Gemini frame. By building interchangeable AMS and S6 “buckets” in a single console frame, the facility can mix and match formats according to the re-recording engineers’ requirements — either “in the box” using the S6 surface, or a conventional workflow using the DFC sections.

“I like working in the box,” says Slater, “since it lets me retain all my sound ideas right through print mastering. For Baby Driver we premixed to a 9.1-channel bed with Atmos objects and brought this submix here to Goldcrest where we could open everything seamlessly on the S6 console and refine all my dialog, music and effects submixes for the final Atmos immersive mix. Because I have so much sound design for the music being heard by our lead character, including sound cues for the earbuds and car radios, it’s the only way to work! We also had a lot of music playback on the set.”

The supervising sound editor needed to carefully prepare myriad sound cues. “Having worked on all of his films, I have come to recognize that Edgar [Wright] is an extremely sound-conscious director,” Slater reports. “The soundtrack for Baby Driver needed to work seamlessly and sound holistic — not forced in any way. In other words, while sound is important in this film — for obvious reasons — it is critical that we don’t distract the audience from the dramatic storyline.”

Theatre 1’s 55-loudspeaker Atmos array includes a mixture of Crown-powered JBL 5732 ScreenArray cabinets in the front with Meyer cabinets for the surrounds. Accommodated formats include 5.1, 7.1 and DTS:X. Five Pro Tools playback systems are available with Waves Platinum plug-in packages, plus a 192-channel Pro Tools HDX 3 recorder. Each Pro Tools rig features a DAD DX32 audio interface with both Audinate Dante- and MADI-format digital outputs. The latter can be routed to the DFC console for conventional mixing, or to a sixth rig with a DAD AX32 converter system for in-the-box mixing on the S6 control surface. Video projection is via a Barco DP2K-10SX with an integrated media server for DCP playback, plus Pro Tools Native with an AJA video card. Outboard gear includes a pair of Lexicon 960 reverbs, two TC Electronic System 6000 reverbs and four dbx Subharmonic Synthesizers.

Hackenbacker Audio Post
Around the corner from Goldcrest, Slater’s former facility, Hackenbacker Audio Post, is a multi-room post operation that was purchased in July 2015 by Molinare from e-Post Media, owners of Halo Post. Hackenbacker handled sound for the TV series Downton Abbey, Cold Feet and Thunderbirds Are Go, plus director Richard Ayoade’s film The Double (2013). Owner/founder Nigel Heath remains a director of the group management team for the facility’s three dubbing studios, five edit suites and a large Foley stage located a short distance away.

Hackenbacker’s Studio 2

Hackenbacker Studio 1 has been Heath’s home base for more than a decade. It houses a large-format 48-fader AMS Neve MMC console with three Avid Pro Tools HD3 systems, two iZ Technology RADAR 24-track recorder/players and a Dynaudio M3F 5.1 monitoring system, and it was used to re-record Hot Fuzz, In Bruges, Shaun of the Dead and many other projects.

Studio 2 features Dynaudio monitoring along with an Avid Icon 16-fader D-Control surface linked to a Pro Tools HDX system. It is used for 5.1 TV mixing and ADR, and it includes a large booth suitable for both ADR and voiceover. Also designed for TV mixing and ADR, Studio 3 features Quested monitoring and an Avid Icon 32-fader D-Control surface linked to a Pro Tools HDX system. Edit 1 and 2 handle a wide cross-section of sound effects editorial assignments, with access to a large sound library and other creative tools. Edit 3 and 4 are equipped for dialog and ADR editing. Edit 5 features a transfer bay and QC facility in which all sound material is verified and checked.

Twickenham TWI Studios
According to technology development manager/re-recording mixer Craig Irving, Twickenham TWI Studios recently completed mixing of the soundtrack for writer/director Stanley Tucci’s Final Portrait, the story of Swiss painter and sculptor Alberto Giacometti, starring Armie Hammer and Geoffrey Rush. The film was re-recorded by Tim Cavagin and Irving, with sound editorial by Tim Hands on dialog and Jack Gillies on effects.

The lounge at Twickenham TWI.

“Dialog tracks for Baby Driver were pre-mixed by Tim in our Atmos-capable Theatre 1,” explains Irving. “Paul Massey will be returning soon to complete the mix in Theatre 1 for director Ridley Scott’s Alien: Covenant, which reunites the same sound team that worked on The Martian — with Oliver Tarney supervising, Rachel Tate on dialog, and Mark Taylor and our very own Dafydd Archard on effects.” Massey also mixed Scott’s Exodus: Gods and Kings (2014) at Twickenham TWI, worked on director Rufus Norris’ London Road (2015) and director Tim Miller’s Deadpool (2016), and recently completed the upcoming Pirates of the Caribbean: Dead Men Tell No Tales. While normally based at Fox Post Production Services in West Los Angeles, Massey also spends time in his native England overseeing a number of film projects.

The facility’s stages have also been busy with production of Netflix’s Black Mirror series, which consists of six original films looking at the darker side of modern life; Episode 1 was directed by Jodie Foster. “To service an increase in production, we are investing in new infrastructure that will feature a TV mixing stage,” explains Irving. “The new room will be based around an Avid S6 control surface and used as a bespoke area to mix original TV programming, as well as creating TV mixes of our theatrical titles. Our picture post area is also being expanded with a second FilmLight Baselight Two color grading system with full 4K projection for both theatrical and broadcast projects.”

Twickenham TWI’s rooftop bar and restaurant opened its doors to clients and staff last year. “It has proved extremely popular and is open to membership from within the industry,” Irving says. The facility’s remodeled front office and reception area was designed by Barbarella Design. “We have chosen a ‘’60s retro, Mad Men theme in greys and red,” says the studio’s COO, Maria Walker. In addition to its two main re-recording theaters, TWI offers 40 cutting rooms, an ADR/Foley stage and three shooting stages.

Warner Bros. De Lane Lea
Just up the street from Goldcrest Films is Warner Bros. De Lane Lea, a multi-room studio with a rather unusual ancestry. In the 1940s, Major De Lane Lea was looking to improve the way dialog for film, and later TV, could be recorded and replaced in order to streamline dubbing between French and English. This resulted in his setting up a company called De Lane Lea Processes and a laboratory in Soho. The company also developed a number of other products aimed at post, and over the next 30 years it opened a variety of studios in London for voice recording, film, TV and jingle mixing, music recording and orchestral-score recording.

De Lane Lea’s Stage 1.

Around 1970, the operation moved into its current building on Dean Street and shifted its focus toward film and TV sound. The facility, which was purchased by Warner Bros. in 2012, currently includes four re-recording stages, two ADR stages for recording dialog, voiceovers and commentaries, plus 50 cutting rooms, a preview theater, transfer bay and a café/bar. Three of the Dolby-certified mixing stages are equipped with AMS Neve DFC Gemini consoles or Avid S6 control surfaces and Meyer monitoring. A TV mixing stage boasts an Avid Pro Tools control surface and JBL monitoring.

Stage 1 features an 80-fader AMS Neve DFC Gemini digital console, configured for two mixers, with an Avid control surface, linked to a Meyer Sound EXP system providing Dolby Atmos monitoring. Six Pro Tools playback systems are available — three 64-channel HDX and three 128-channel HDX2 rigs — together with a 128-channel HDX2 Pro Tools recorder. Film projection is from a Kinoton FP38ECII 35mm unit, with a Barco DP2K-23B digital cinema projector offering resolutions up to 2K. Video playback within Pro Tools is via a VCube HD nonlinear player or a Blackmagic card. Outboard gear includes a Lexicon 960 and a TC Electronic System 6000 reverb, plus two dbx Subharmonic Synthesizers. Stage 2 is centered around an Avid S6 M40 24-fader console linked to three Pro Tools playback systems — a pair of 64-channel HDX2 rigs and a single 128-channel HDX2 rig — plus a 64-channel HDX recorder. Monitoring is via a 7.1-channel Meyer Sound EXP system.

Warner Bros. Studios Leavesden
Located 20 miles northwest of Central London and serving as Warner Bros.’ UK-based shooting lot, Warner Bros. Studios Leavesden offers a number of well-equipped stages for large-scale productions, in addition to a large tank for aquatic scenes. The facility’s history dates back nearly 80 years, to 1939, when the site was acquired by the UK Ministry of Defence as a WWII production base for building aircraft, including the iconic Mosquito fighter and Halifax bomber. When hostilities ceased, the site was purchased by Rolls-Royce and continued as a base for aircraft manufacture, progressing on to large engines. It eventually closed in 1992.

Warner Bros. Leavesden’s studio layout.

In 1994, Leavesden began a new life as a film studio, and over the following decades it was home to a number of high-profile productions, including the James Bond film GoldenEye (1995), Mortal Kombat: Annihilation (1997), Star Wars: Episode I – The Phantom Menace (1999), An Ideal Husband (1999) and director Tim Burton’s Sleepy Hollow (1999).

By 2000, Heyday Films had acquired use of the site on behalf of Warner Bros. for what would be the first in a series of Harry Potter films — Harry Potter and the Philosopher’s Stone (2001) — with each subsequent film in the franchise during the following decade being shot at Leavesden. While other productions, almost exclusively Warner Bros. productions, made partial use of the complex, the site was mostly occupied by permanent standing sets for the Harry Potter films.

In 2010, as the eighth and final Harry Potter film was nearing completion, Warner Bros. announced its intention to purchase the studio as a permanent European base, making it the first Hollywood studio to establish such a base in the UK since MGM in the 1940s. By November of that year, the studio had completed its purchase of Leavesden Studios and announced plans to invest more than £100 million (close to $200 million at the time) in the site, converting Stages A through H into soundstages. As part of the redevelopment, Warner Bros. created two entirely new soundstages to house a permanent public exhibition called Warner Bros. Studio Tour London — The Making of Harry Potter, creating 300 new jobs. It opened to the public in early 2012.

With over 100 acres, WBSL features one of the most extensive backlots in Europe, with level, graded areas (including a former aircraft runway), a variety of open fields, woodlands, hills and clear horizons. It also offers bespoke art departments, dry-hire edit suites and VFX rooms, as well as a pair of the largest water tanks in Europe: a 60-by-60-foot filtered and heated indoor tank and a 250-by-250-foot exterior tank.

Main Image: Goldcrest London’s Theatre 1.


Mel Lambert is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.


Utopic editor talks post for David Lynch tribute Psychogenic Fugue

Director Sandro Miller called on Utopic partner and editor Craig Lewandowski to collaborate on Psychogenic Fugue, a 20-minute film starring John Malkovich in which the actor plays seven characters in scenes recreated from some of filmmaker David Lynch’s films and TV shows. These characters include The Log Lady, Special Agent Dale Cooper and even Lynch himself as narrator of the film.

It is part of a charity project called Playing Lynch that will benefit the David Lynch Foundation, which seeks to introduce at-risk populations affected by trauma to transcendental meditation.

Chicago-based Utopic handled all the post, including editing, graphics, VFX and sound design. The film is part of a multimedia fundraiser hosted by Squarespace and executed by Austin-based agency Preacher. The seven vignettes were released one at a time on playinglynch.com.

To find out more about Utopic’s work on the film, we reached out to Lewandowski with some questions.

How early were you brought in on the film?
We were brought in before the project was even finalized. There were a couple other ideas that were kicked around before this one rose to the top.

We cut together a timing board using all the pieces we would later be recreating. We also pulled some hallway scenes from an old PlayStation commercial that Lynch directed, and we then scratched in all the “Lynch” lines for timing.

You were on set. Can you talk about why and what the benefits were for the director and you as an editor?
My job on the set was to have our reference movie at the ready and make sure we were matching timing, framing, lighting, etc. Sandro would often check the reference to make sure we were on track.

For scenes like the particles in Eraserhead, I had the DP shoot at various frame rates and at the highest possible resolution, so we could shoot it vertically and use the falling particles. I also worked with the Steadicam operator to get a variety of shots in the hallway, since I knew we’d need to create some jarring cutaways.

How big of a challenge was it dealing with all those different iconic characters, especially in a 20-minute film?
Sandro was adamant that we not try to “improve” on anything that David Lynch originally shot. Having had a lot of experience with homages, Sandro knew that we couldn’t take liberties. So the sets and action were designed to be as close as possible to the original characters.

In shots where it was only one character originally (The Lady in the Radiator, Special Agent Dale Cooper, Elephant Man) it was easier, but in scenes where there were originally more characters and now it was just Malkovich, we had to be a little more creative (Frank Booth, Mystery Man). Ultimately, with the recreations, my job was to line up as closely as possible with what was originally done, and then with the audio do my best to stay true to the original.

Can you talk about your process and how you went about matching the original scenes? Did you feel much pressure?
Sandro and I have worked together before, so I didn’t feel a lot of pressure from him, but I think I probably put a fair amount on myself because I knew how important this project was for so many people. And, as is the case with anything I edit, I don’t take it lightly that all of that effort that went into preproduction and production now sits on my shoulders.

Again, with the recreations it was actually fairly straightforward. It was the corridor shots where Malkovich plays Lynch and recites lines taken from various interviews that offered the biggest opportunity, and challenge. Because there was no visual reference for this, I could have some more fun with it. Most of the recreations are fairly slow and ominous, so I really wanted these corridor shots to offset the vignettes, kind of jar you out of the trance you were just put in, make you uneasy and perhaps squirm a bit, before being thrust into the next recreation.

What about the VFX? Can you talk about how they fit in and how you worked with them?
Many of the VFX were either in-camera or achieved through editorial, but there were spots — like where he’s in the corridor and snaps from the front to the back — that I needed something more than I could accomplish on my own, so I used our team at Utopic. However, when cutting the trailer, I relied heavily on our motion graphics team for support.

Psychogenic Fugue is such an odd title, so the writer/creative director, Stephen Sayadin, came up with the idea of using the dictionary definition. We took it a step further, beginning the piece with the phonetic spelling and then seamlessly transitioning the whole thing. The motion graphics team then tried different options for titling the characters. I knew I wanted to use the hallway shot, close-ups of the characters and ending on Lynch/Malkovich in the chair. They gave me several great options.

What was the film shot on, and what editing system did you use?
The film was shot on Red at 6K. I worked in Adobe Premiere, using the native Red files. All of our edit machines at Utopic are custom-built, high-performance PCs assembled by the editors themselves.

What about tools for the visual effects?
Our compositor/creative finisher used an Autodesk Flame, and our motion graphics team used Adobe After Effects.

Can you talk about the sound design?
I absolutely love working on sound design and music, so this was a dream come true for me. With both the film and the trailer, our composer Eric Alexandrakis provided me with long, odd, disturbing tracks, complete with stems. So I spent a lot of time just taking his music and sound effects and manipulating them. I then had our sound designer Brian Lietner jump in and go crazy.

Is there a scene that you are most proud of, or that was most challenging, or both?
I really like the snap into the flame/cigarette at the very beginning. I spent a long time just playing with that shot, compositing a bunch of shots together, manipulating them, adjusting timing, coming back in the next morning and changing it all up again. I guess that and Eraserhead. We had so many passes of particles and layered so many throughout the piece. That shot was originally done with him speaking to camera, but we had this pass of him just looking around, and realized it was way more powerful to have the lines delivered as though they were internal monologue. It also allowed us to play with the timings in a way that we wouldn’t be able to with a one-take shot.

As far as what I’m most proud of, it’s the trailer. We worked really hard to get the recreations and full film done. Then I was able to take some time away from it all and come back fresh. I knew that there was a ton of great footage to work with and we had to do something that wasn’t just a cutdown. It was important to me that the trailer feel every bit as demented as the film itself, if not more. I think we accomplished that.



The creative process behind The Human Rights Zoetrope

By Sophia Kyriacou

As an artist working in the broadcast industry for almost 20 years, I’ve designed everything from opening title sequences to program brands to content graphics. About three years into my career, I was asked to redesign a program entirely in 3D. The rest, as they say, is history.

Over two years ago I was working full-time at the BBC, doing the same work I do now as a broadcast designer and 3D artist, but I decided it was time to cut my time in half and allow myself to focus on my own creative ventures. I wanted to work with external and varied clients, both here in the UK and internationally. I also wanted to use my spare time for development work. In an industry where technology is constantly evolving, it’s essential to keep ahead of the game.

One of those creative ventures was commissioned by Noon Visual Creatives — a London-based production and post company that serves several Arabic broadcasters in both the United Kingdom and worldwide — to create a television branding package for a program called Human Rights.

I had previously worked with Noon on a documentary about the ill-fated 1999 EgyptAir plane crash (which is still awaiting broadcast), so when I was approached again I was more than happy to create their Human Rights brand.

My Inspiration
I was very lucky in that my client essentially gave me free rein, which I find is a rarity these days. I have always been excited and inspired by the works of the creative illusionist M.C. Escher. His work has always made me think and explore how you can hook your viewer by giving them something to unravel and interact with. His 1960 lithograph Ascending and Descending was my initial starting point. There was something about the figures going round and round but getting nowhere.

While Escher’s work kickstarted my creative process, I also wanted to create something that was illusion-based, so I revisited Mark Gertler’s Merry-Go-Round. As a young art student I had his poster on my wall. Sometimes I would find myself staring at it for hours, looking at the people’s expressions and the movement Gertler had expressed in the figures with his onion-skin-style strokes. There was so much movement within the painting that it jumped out at me. I loved the contrasting colors of orange and blue; the composition was incredibly strong and animated.

I have always been fascinated by the mechanics of old hand-cranked metal toys, including zoetropes, and I have always loved how inanimate objects can come alive to tell you a story. It is very powerful. You can choose to take in the narrative or walk away from it; it’s about making a choice and being in control.

Once I had established I was going to build a 3D zoetrope, I explored the mechanics of building one. It was the perfect object to address the issue of human rights because without its trigger it would remain lifeless. I then started digging into the Universal Declaration of Human Rights to put forward a proposal of what I thought would work within the program. I shortlisted 10 rights and culled that down to a final eight. Everything had to be considered; the positioning of the final eight had its own hierarchy and place.

At the base of the zoetrope are water pumps, signifying the right to clean water and sanitation. This is the most important element of the entire zoetrope, grounding the entire structure, as without water there simply is no life, no existence. Above, a prisoner gestures for attention to the outside world, his bleak environment given hope by an energetic burst of comforting orange. The gavel references the right to justice and is subliminally inspired by the hammers marching defiantly in the Pink Floyd video Another Brick in the Wall. The gavel within the zoetrope becomes a monumental object of power, helped along by the dynamic camera, with repetitions of itself staggered over time like echoes on a loop. Surrounding the gavel of justice is a dove flying free from a metal birdcage in the shape of the world. This was my reference to the wonderful book I Know Why the Caged Bird Sings, by Maya Angelou.

My client wanted to highlight the crisis of the Syrian refugees, so I decided to depict an exhausted child wearing a life jacket, suggesting he had travelled across the Mediterranean Sea, while a young girl at his side, oblivious, happily plays with a spinning top. I wanted to show the negativity being cancelled out by optimism.

To hammer home the feeling of isolation and emptiness that the lack of human rights brings, I placed the zoetrope in a cold, almost brutal environment: an empty warehouse. My theme of positivity canceling out negativity is echoed once again as the sunlight penetrates through, hitting the cold floor, signifying hope and a reconnection with the outside world.

Every level of detail was broken up into sections. I created very simple one-second loops of animation that were subtle, but enough to tell the story. Once I had animated each section, it was a case of painstakingly pulling apart each object into a stop-frame animated existence, so that once the pieces were placed in position and spun, they would animate back into life again.
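To picture the mechanics: a zoetrope works because each frozen copy around the ring holds a successive frame of the loop, offset by its angular position, so spinning the ring plays the loop back. Here is a minimal conceptual sketch of that mapping (the counts are illustrative, not the scene’s actual figures):

    # Conceptual zoetrope math, not the actual Cinema 4D setup: each frozen
    # duplicate around the ring holds one frame of the one-second loop, so
    # spinning the ring plays the loop back as animation.
    COPIES = 24       # duplicates around the ring (assumed count)
    LOOP_FRAMES = 24  # frames in each one-second loop (assumed 24fps)

    for i in range(COPIES):
        angle = i * 360 / COPIES           # where this copy sits on the ring
        frame = i * LOOP_FRAMES // COPIES  # which pose it is frozen at
        print(f"copy {i:2d}: {angle:5.1f} deg, frozen at frame {frame}")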

My Workflow
For ease and budget, I used Poser Pro, character-animation software, to animate all the figures in isolation first. Using both the PoserFusion plug-in and the Alembic export, I was able to import each looping character into Maxon Cinema 4D, where I froze and separated each 3D object one by one. Any looping objects that were not figure-based were modelled and animated within Cinema 4D. Once the individual components were animated and positioned, I imported everything into a master 3D scene where I was able to focus on the lighting and camera shots.

For the zoetrope centrepiece, I built a simple lighting rig made up of two GSG Light Kit Pro softboxes, which I had adapted and placed within a null, plus an area omni light above. This allowed me to rotate the rig around according to my camera shot. Having a default position and brightness setup was great; it helped get me out of trouble if I got a little too carried away with the settings, and it kept the lighting from changing too dramatically on each camera shot. I also added a couple of visible area spotlights outside the warehouse pointing inwards to give the environment a foggy, distant feel.

I deliberately chose not to render using volumetric lighting because I didn’t want that specific look and did not want any light bursts hitting my zoetrope. The zoetrope was the star of the show and nothing else. Another lighting feature I tend to use within my work is the combination of the Physical Sky and the Sun. Both give a natural, warm feel, and I wanted sunlight to burst through the window; it was conceptually important and added balance to the composition.

The most challenging part of the entire project was getting the lighting to work seamlessly throughout, as well as the composition within some of the camera shots. Some shots were very tight in frame, so I could not rely on the default rig and needed additional lighting to catch objects where the three-point lights didn’t work so well. I decided very early on not to work from a single master file; as with the lighting, I kept a default “get me out of trouble” master, saving each shot with its own independent settings as I went along to keep my workflow clean. Each scene file was around a gigabyte in size, as none of the objects within the zoetrope were parametric anymore once they had been split, separated out and converted to polygons.

My working machine was a 3.2GHz 8-core Mac Pro with 24GB of RAM; final frames were rendered on a custom-built 3X3 PC with a water-cooled Intel Core i7-5960X (clockable to 4.5GHz) and 32GB of RAM.

Since completion, The Human Rights Zoetrope titles have won several awards, including a Gold at the Muse Creative Awards in the Best Motion Graphics category, a Platinum Best of Show in the Art Direction category, and a Gold in the Best Graphic Design category at the Aurora Awards.

The Human Rights Zoetrope is also a Finalist at the New York Festivals 2017 in the Animation: Promotion/Open & IDs category. The winners will be announced at the NAB Show.

 

Sophia Kyriacou is a London-based broadcast designer and 3D artist.

GoPro intros Karma foldable drone, Hero5 with voice-controlled recording

By Brady Betzel

“Hey, GoPro, start recording!” That’s right, voice-controlled recording is here. Does this mean pros can finally start all their GoPros at the same time? More on this in a bit…

I'm one of the lucky few journalists/reviewers who were brought out to Squaw Valley, California, to hear about GoPro's latest products firsthand — oh, and I got to play with them as well.

So, the long-awaited GoPro Karma drone is finally here, but it's not your ordinary drone. It is small and foldable, so it can fit in a backpack, and the three-axis camera stabilizer can be attached to the included Karma Grip, so you can grab the drone before it lands and carry it or mount it. This is huge! If worked out correctly, you can now fake a gigantic jib swing with a GoPro, or even create some ultra-long shots. One of the best parts is that the controller is a videogame-style remote that doesn't require you to use your phone or tablet! Thank you, GoPro! No, really, thank you.

The Karma is priced at $799, the Karma plus Session is $999, and the Karma plus Hero5 Black is $1,099. And it’s available one day before my birthday next month — hint, hint, nudge, nudge — October 23.

To the Cloud! GoPro Plus and Quik Apps
So you might have been wondering how GoPro intends to build a constant revenue stream. Well, it seems they are banking on the new GoPro Plus cloud-based subscription service. While your new Hero5 is charging, it can auto-upload photos and videos via a computer or phone. In addition, you will be able to access, edit and share everything from GoPro Plus. For us editing nerds, this is the hot topic, because we want to edit everything from anywhere.

My question is this: If everyone gets on the GoPro Plus train, are they prepared for the storage and bandwidth requirements? Time will tell. In addition to cloud uploads, a GoPro Plus subscription gives you a large music library at your disposal, 20 percent off accessories from GoPro.com, exclusive GoPro apparel and premium support.

The GoPro Plus subscription breaks down to $4.99 per month and is available in the US on October 2 — it will reach more markets in January 2017.

The Quik app is GoPro's ambitious attempt at creating an autonomous editing platform. I am really excited about this (even though it basically eliminates the need for an editor — more on this later). While many of you may be hearing about Quik for the first time, it has actually been around for a while. If you haven't tried it yet, now is the time. One of the most difficult parts of a GoPro's end-to-end workflow is the importing, editing and exporting. Now, with GoPro Plus and Quik, your Hero5 footage uploads automatically while charging, so you can be editing quickly (or Quik-ly. Ha! Sorry, I had to.)

Hero5 Black and Hero5 Session
It’s funny that the Hero5 Black and Session are last on my list. I guess I am kind of putting what got GoPro to the dance last, but last doesn’t in any way mean least!

Hero5 Black

Available on October 2, the Hero5 Black is $399, and includes the following:
● Two-inch touch display with simplified controls
● Up to 4K video at 30fps
● Auto-upload to GoPro Plus while charging
● Voice Control with support for seven languages, with more to come
● Simplified one-button control
● Waterproof, without housing, to 33 feet
● Compatible with existing mounts, including Karma
● Stereo audio recording
● Video Stabilization built-in
● Fish-eye-free wide-angle video
● RAW and WDR (wide dynamic range) photo modes
● GPS built-in!

Hero5 Session is $299 and offers these features:
● Same small design
● Up to 4K at 30fps
● 10 Megapixel photos
● Auto upload to GoPro Plus while charging
● Voice Control support for seven languages with more to come
● Simplified one-button control
● Waterproof, without housing, to 33 feet
● Compatible with existing mounts, including Karma
● Video Stabilization built in
● Fish-eye-free wide-angle video

Summing Up
GoPro has made power moves. They not only took the original action camera — the Hero — to the next level with upgrades like built-in image stabilization, housing-free waterproofing and simplified controls in the Hero5 Black and Hero5 Session, they also added 4K recording at 30fps and stereo audio recording with Advanced Wind Noise Reduction.

Not only did they upgrade their cameras, they are also attempting to revolutionize the drone market with the Karma. This foldable, compact drone, whose three-axis gimbal can be held via the included Karma Grip, has the potential to bring the limelight back to GoPro and steal some thunder from competitors like DJI.

Hero5 Session

Remember that drone teaser video everyone thought was fake? Here it is, just in case. Looks like it was real, and with some pre-planning you can recreate those awesome shots. What's even more awesome is that later this year GoPro will be launching the Quik Key, a micro-USB card reader that plugs into your phone to transfer your videos and photos, as well as REMO, a voice-activated remote control for the Hero5 (think Apple TV, but for your camera: “GoPro, record video.”).

Besides the incredible multimedia products GoPro creates, I really love the family feeling and camaraderie within the GoPro company and the athletes they bring in to show off their tools. Coming from the airport to Squaw Valley, I shared the shuttle with some mega-pro athletes/content creators like Colin, and they were just as excited as I was.

It was kind of funny, because the kind of people who are usually in the projects I edit were next to me geeking out. GoPro has created an amazing, self-contained ecosphere of content creators and content manipulators who are fan-boys and fan-girls. The energy around the Karma and Hero5 announcement is incredible, and they've built their own ultra-positive culture. I wish I could bottle it up and give it out to everyone reading this news.

Check out some video I shot here.

Brady Betzel is an online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Earlier this year, Brady was nominated for an Emmy for his work on Disney’s Unforgettable Christmas Celebration.


‘Suicide Squad’: Imageworks VFX supervisor Mark Breakspear 

By Randi Altman

In Warner Bros.' Suicide Squad, a band of captured super-villains is released from prison by the government and tasked with working together to fight a common enemy, the powerful Enchantress. This film, which held top box office honors for weeks, has a bit of everything: comic book antiheroes, super powers, epic battles and redemption. It also features a ton of visual effects work that was supervised by Sony Imageworks' Mark Breakspear, who worked closely with production supervisor Jerome Chen and director David Ayer (see our interview with Ayer).

Mark Breakspear

Breakspear is an industry veteran with more than 20 years of experience as a visual effects supervisor and artist, working on feature films, television and commercials. His credits include American Sniper, The Giver, Ender’s Game, Thor: The Dark World, The Great Gatsby… and that’s just to name a few.

Suicide Squad features approximately 1,200 shots, with Imageworks doing about 300, including the key fight at the end of the film between Enchantress, the Squad, Incubus and Mega Diablo. Imageworks also provided shots for several other sequences throughout the movie.

MPC worked on the majority of the other visual effects, with Third Floor creating postviz after the shoot to help with the cutting of the film.

I recently threw some questions at Breakspear about his process and work on Suicide Squad.

How early did you get involved in the project?
Jerome Chen, the production supervisor, involved us from the very beginning in the spring of 2015. We read the script and started designing one of the most challenging characters — Incubus. We spent a couple of months working with designer Tim Borgmann to finesse the details of his overall look, shape and, specifically, his skin and sub-surface qualities.


How did Imageworks prepare for taking on the film?
We spent time gathering as much information as we could about the work we were looking to do. That involved lengthy calls with Jerome to pick over every aspect of the designs that David Ayer wanted. As it was still pretty early, there was a lot more "something like" than "exactly like" when it came to the ideas. But this is what the prepro was for, and we were able to really focus on narrowing the many ideas down into key selections and give the crew something to work with during the shoot in Toronto.

Can you talk about being on set?
The main shoot was at Pinewood in Toronto. We had several soundstages that were used for the creation of the various sets. Shoot days are usually long and arduous, and this was no exception. For VFX crews, the days are typically hectic, quiet, hectic, very hectic, quiet and then suddenly very hectic again. After wrap, you still have to download all the data, organize it and prep everything for the next day.

I had fantastic help on set from Chris Hebert who was our on-set photographer. His job was to make sure we had accurate records (photographic and data sets) of anything that could be used in our work later on. That meant actors, props, witness cameras, texture photography and any specific one-off moments that occur 300 times a day. Every movie set needs a Chris Hebert or it’s going to be a huge struggle later on in post!

OK, let's dig into the workflow. Can you walk us through it?
Workflow is a huge subject, so I'll keep the answer somewhat concise! The general day would begin with a team meeting between all the various VFX departments here at Imageworks. The work was split across teams in both Culver City and Vancouver, so we did regular video Hangouts to discuss the daily plan, the weekly targets and generally where we were at, plus any specific needs people had. We would usually follow this with department meetings prior to AM dailies, where I would review the latest work from the department leads, give notes, select things to show Jerome and David, and pass along any feedback I had received from production.

We tried our best to keep our afternoons meeting-free so actual work could get done! Toward the end of the day we would have more dailies, and the day's final selection of notes and pulls for the client would take place. Most days ended fairly late, as we had to round off the hundreds of emails with meaningful replies, prep for the next day and catch any late submissions from artists that might benefit from notes before the morning.

What tool, or tools, did you use for remote collaboration?
We used Google Hangouts for video conferencing, and Itview for shot discussion and notes with Jerome and David. Itview is our own software; it replaces the need for [off-the-shelf tools] and allows a much faster, more secure and more accurate way to discuss and share shots. Jerome had a system in post, and we would place data on it remotely for him to view and comment on in realtime with us during client calls. The notes and drawings he made would go straight into our note tracker and then on to artists as required.

What was the most challenging shot or shots, and why?
Our most challenging work was in understanding and implementing fractals into the design of the characters and their weapons. We had to get up to speed on three-dimensional Mandelbulbs and how we could render them within our body of work. We also had to create vortical flow simulations coming off the fractal weapons, which created their own set of challenges due to the unique way particles behave near high-velocity emissions.

So there wasn’t a specific shot that was more challenging than another, but the work that went in to most of them required a very challenging pre-design and concept solve involving fractal physics to make them work.
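For readers curious about the math involved, below is a minimal Python sketch of the widely published power-8 Mandelbulb distance estimator that fractal raymarchers typically build on. It is an illustration of the general technique only, not Imageworks' production code:

    import math

    def mandelbulb_de(px, py, pz, power=8, max_iter=16, bailout=2.0):
        # Distance estimate from point (px, py, pz) to the Mandelbulb
        # surface: iterate z -> z^power + c in spherical coordinates,
        # tracking the running derivative dr for the standard
        # 0.5 * log(r) * r / dr estimate.
        x, y, z = px, py, pz
        dr, r = 1.0, 0.0
        for _ in range(max_iter):
            r = math.sqrt(x * x + y * y + z * z)
            if r > bailout or r < 1e-9:
                break
            theta = math.acos(max(-1.0, min(1.0, z / r))) * power
            phi = math.atan2(y, x) * power
            dr = (r ** (power - 1)) * power * dr + 1.0
            zr = r ** power
            x = zr * math.sin(theta) * math.cos(phi) + px
            y = zr * math.sin(theta) * math.sin(phi) + py
            z = zr * math.cos(theta) + pz
        return 0.5 * math.log(max(r, 1e-9)) * r / dr

A raymarcher steps each camera ray forward by this distance until it converges on the surface, which is how imagery of such fractals is ultimately rendered.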

Can you talk about tools — off-the-shelf or proprietary — you used for the VFX? Any rendering in the cloud?
We used Side Effects Houdini and Autodesk Maya for the majority of shots and The Foundry’s Nuke to comp everything. When it came to rendering we used Arnold, and in regards to cloud rendering, we did render remotely to our own cloud, which is about 1,000 miles away — does that count (smiles)?

VFX Supervisor Volker Engel: ‘Independence Day,’ technology and more

Uncharted Territory’s Volker Engel is one of Hollywood’s leading VFX supervisors, working on movies as diverse as White House Down, Hugo and Roland Emmerich’s Shakespeare movie Anonymous. Most recently he was in charge of the huge number of effects for Emmerich’s Independence Day: Resurgence.

Engel was kind enough to make time in his schedule to discuss his 28-year history with Emmerich, his favorite scenes from Independence Day, his experience with augmented reality on set and more.

When did you get involved with Independence Day?
I was probably the earliest person involved after Roland Emmerich himself! He kept me posted over the years while we were working on other projects because we were always going to do this movie.

I think it was 2009 when the first negotiations with 20th Century Fox started, but the important part was early 2014. Roland had to convince the studio regarding the visuals of the project. Everyone was happy with the screenplay, but they said it would be great to get some key images. I hired a company called Trixter — they are based in Germany, but also have an office in LA. They have a very strong art department. In about six weeks we finished 16 images that are what you can call “concept art,” but they are extremely detailed. Most of these concepts can be seen as finished shots in the movie. This artwork was presented to 20th Century Fox and the movie was greenlit.

Concept art via Trixter.

You have worked with Emmerich many times. You must have developed a sort of shorthand?
This is now a 28-year working relationship. Obviously, we haven’t done every movie as a team but I think this is our eighth movie together. There is a shorthand and that helps a lot. I don’t think we really know what the actual shorthand is other than things that we don’t need to talk about because we know what needs to happen.

Technology continues to advance. Does that make life easier, or because you have more options does it make it even more complex?
It's less that there are more options; it's that the audience is so much more sophisticated. We now have better tools available to make better pictures. We can do things now that we were not able to do before. So, for example, now we can imagine a mothership that's 3,000 miles in diameter and actually lands on Earth. There is a reason we had a smaller mothership in the first movie and that it didn't touch down anywhere on the planet.

The mothership touching down in DC.

So it changes the way you tell stories in a really fundamental way?
Absolutely. If you look at a movie like Ex Machina, for example, you can show a half-human/half-robot and make it incredibly visually convincing. So all of a sudden you can tell a story that you wouldn’t have been able to tell before.

If you look at the original Independence Day movie, you really only see glimpses of the aliens because we had to do it with practical effects and men in suits. For Independence Day: Resurgence we had the chance to go much further. What I like actually is that Roland decided not to make it too gratuitous, but at least we were able to fully show the aliens.

Reports vary, but they suggest about 1,700 effects shots in Independence Day: Resurgence. Is that correct?
It was 1,748. Close to two-thirds of the movie!

What was your previs process like?
We had two different teams: one joined us from Method Studios and the other was our own Uncharted Territory team, and we split the task in half. The Method artists were working in our facility, so we were all under one roof.

Method focused on the whole lunar sequence, for example, while our in-house team started with the queen/bus chase toward the end of the movie. Roland loves to work with two specific storyboard artists and has several sessions during the week with them, and we used this as a foundation for the previs.

Trixter concept art.

So Roland was involved at the previs stage looking at how it was all going to fit together?
He had an office where the previs team was working, so we could get him over and go literally from artist to artist. We usually did these sessions twice a day.

What tools were you using?
Our in-house artists are Autodesk 3ds Max specialists, and the good folks from Method worked with Autodesk Maya.

The live shoot used camera-tracking technology from Ncam to marry the previs graphics and the live action in realtime to give a precise impression of how the final married shot would work.

How were you using the Ncam exactly?
The advantage is that we took the assets we had already built for previs and re-used them inside the Ncam set-up, doing this with Autodesk MotionBuilder. But some of the animation had to be done right there on set.

After: Area 51

I'll give you an example. When we're inside the hangar at Area 51, Roland wanted to pan off an actor's face to reveal 20 jet fighters lifting off and flying into the distance. The Ncam team and Marion [Spates, the on-set digital effects supervisor] had to do the animation for the fighters right there, on the spot. In about five minutes they had come up with something, and what's more, it worked. That's why Roland loves to work with Ncam: it gives him the flexibility to make decisions right there in the moment.

So you’re actually updating or even creating shots on set?
Yes, exactly. We have the toolbox there — the assets like the interior of the hangar — but then we do it right there to the picture. Sometimes for both the A-camera and the B-camera.

We did a lot of extensions and augmentations on this movie and what really helped was our experience of working with Ncam on White House Down. For Roland, as the director, it helps him compose his images instead of just looking at a gigantic bluescreen. That’s really what it is, and he’s really good at that.

Ncam in use on set.

I explain it this way: imagine you already have your first composite right there, which goes straight to editorial. They immediately have something to work with. We just deliver two video files: the clean one with the bluescreen and another from Ncam that has the composite.

Did using Ncam add to the shooting time?
Working with AR on set always adds some shooting time, and it’s really important that the director is briefed and wants to use this tool. The Ncam prep often runs parallel to the rehearsals with the actors, but sometimes it adds two or three additional minutes. When you have someone who’s not prepared for it, two or three minutes can feel like a lifetime. It does, however, save a lot of time in post.

On White House Down, when we used Ncam for the first time, it actually took a little over a week until everything grooved and everyone was aware of it — especially the camera department. After a little while they just knew this is exactly what needed to be done. It all became instant teamwork. It is something that supports the picture and it’s not a hindrance. It’s something that the director really wants.

Do you have a favorite scene from Resurgence?
There is a sequence inside the mothership where our actors are climbing up one of these gigantic columns. We had a small set piece being built for the actors to climb, and it was really important for Roland to compose the whole image. He could ask for a landing platform to be removed and more columns to be added to create a sense of depth, then move the view around another 50 or 60 degrees.

He was creating his images right there, and that’s why the guys have to be really quick on their feet and build these things in and make it work. At the same time, the assistant director is there saying the cameras are ready, the actors are ready and we’re ready to shoot, and of course no one wants them to wait around, so they better have their stuff ready!

The destruction of Singapore.

Some of my other favorite sequences from the film are the destruction of Singapore while the mothership enters the atmosphere and the alien queen chasing the school bus!

What is next for you?
In 1999, when I started Uncharted Territory with my business partner Marc Weigert, we set it up as a production company and started developing our own projects. We joke that Roland keeps interrupting our development work because he comes to us with projects of his own that we just cannot say no to! But we have just come back from a trip to Ireland, where we scouted two studios and met with several potential production partners for a new project of our own. Stay tuned!