
Post Supervisor: Planning an approach to storage solutions

By Lance Holte

Like virtually everything in post production, storage is an ever-changing technology. Camera resolutions and media bitrates are constantly growing, requiring higher storage bitrates and capacities. Productions are increasingly mobile, demanding storage solutions that can live in an equally mobile environment. Yesterday's 4K cameras are being replaced by 8K cameras, and the trend shows no sign of slowing down.

Yet, at the same time, productions still vary greatly in size, budget, workflow and schedule, which has necessitated more storage options for post production every year. As a post production supervisor, when deciding on a storage solution for a project or set of projects, I always try to have answers to a number of workflow questions.

Let’s start at the beginning with production questions.

What type of video compression is production planning on recording?
Obviously, more storage will be required if the project is recording to Arriraw rather than H.264.

What camera resolution and frame rate?
Once you know the bitrate from the video compression specs, you can calculate the data size on a per-hour basis. If you don’t feel like sitting down with a calculator or spreadsheet for a few minutes, there are numerous online data size calculators, but I particularly like AJA’s DataCalc application, which has tons of presets for cameras and video and audio formats.

How many cameras and how many hours per day is each camera likely to be recording?
Data size per hour, multiplied by hours per day, multiplied by shoot days, multiplied by number of cameras gives a total estimate of the storage required for the shoot. I usually add 10-20% to this estimate to be safe.
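That back-of-the-envelope math is easy to sketch in a few lines of Python. The bitrate, hours and camera count below are illustrative assumptions, not quoted specs; a tool like AJA's DataCalc will give precise per-camera numbers.

```python
def shoot_storage_tb(bitrate_mbps, hours_per_day, shoot_days, cameras,
                     contingency=0.15):
    """Estimate total shoot storage in terabytes.

    bitrate_mbps -- recording data rate in megabits per second
    contingency  -- safety margin (10-20% is typical)
    """
    gb_per_hour = bitrate_mbps / 8 * 3600 / 1000  # Mb/s -> GB per hour
    total_gb = gb_per_hour * hours_per_day * shoot_days * cameras
    return total_gb * (1 + contingency) / 1000    # GB -> TB

# Two cameras at a hypothetical 1,000 Mb/s raw bitrate, recording
# six hours/day over a 20-day shoot, with a 15% safety margin:
print(round(shoot_storage_tb(1000, 6, 20, 2), 1))  # 124.2
```

The contingency parameter is the same 10-20% padding mentioned above; adjust it to taste.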

Let’s move on to post questions…

Is it an online/offline workflow?
The simplicity of editing online is awesome, and I’m holding out for the day when all projects can be edited with online media. In the meantime, most larger projects require online/offline editorial, so keep in mind the extra storage space for offline editorial proxies. The upside is that raw camera files can be stored on slower, more affordable (even archival) storage through editorial until the online process begins.

On numerous shows I’ve elected to keep the raw camera files on portable external RAID arrays (cloned and stored in different locations for safety) until picture lock. G-Tech, LaCie, OWC and Western Digital all make 48+ TB external arrays on which I’ve stored raw media during editorial. When you start the online process, copy the necessary media over to your faster online or grading/finishing storage, and finish the project with only the raw files that are used in the locked cut.

How many editorial staff need to work on the project simultaneously?
On smaller projects that require an editorial staff of only two or three people accessing the media at the same time, you may be able to get away with the editors and assistants sharing a storage array over the network and working in different projects. I’ve done numerous smaller projects in which a couple of editors connected to an external RAID (I’ve had great success with Proavio and QNAP arrays) that is plugged into one workstation and shared over the network. Of course, the network must have enough bandwidth for both machines to play back the media from the storage array, but that’s the case for any shared storage system.
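Whether the network has enough bandwidth is easy to sanity-check. The per-stream rate below is an approximate ProRes 422 HQ 1080p figure used only as an assumption; plug in the actual codec rates for your project.

```python
def aggregate_mb_s(clients, streams_per_client, stream_mb_s):
    """Sustained read bandwidth (MB/s) the shared array must supply."""
    return clients * streams_per_client * stream_mb_s

# Three editors each playing two ProRes 422 HQ 1080p streams
# (~28 MB/s per stream -- an approximate figure):
need = aggregate_mb_s(3, 2, 28)
print(need)  # 168
# A 10GbE link sustains roughly 1,000+ MB/s in practice, so this fits;
# plain gigabit Ethernet (~110 MB/s) would not.
```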

For larger projects that employ five, 10 or more editors and staff, storage that is designed for team sharing is almost a certain requirement. Avid has opened up integrated shared storage to outside storage vendors over the past few years, but Avid’s Nexis solution remains an excellent option. Aside from providing a solid solution for Media Composer and Symphony, Nexis can also be used with basically any other NLE, ranging from Adobe Premiere Pro to Blackmagic DaVinci Resolve to Final Cut Pro and others. The project sharing abilities within the NLEs vary depending on the application, but the clear trend is moving toward multiple editors and post production personnel working simultaneously in the same project.

Does editorial need to be mobile?
Increasingly, editorial tends to begin near the start of physical production, which can require editors to be on or near set. This is a pretty simple question to answer, but it is worth keeping in mind so that a shoot doesn’t end up without enough storage in a place where additional storage isn’t easily available — or where the power requirements can’t be met. It’s also a good moment to plan simple things like the number of shuttle or transfer drives that may be needed to ship media back to home base.

Does the project need to be compartmentalized?
For example, should proxy media be on a separate volume or workspace from the raw media/VFX/music/etc.? Compartmentalization is good. It’s safe. Accidents happen, and it’s a pain if someone accidentally deletes everything on the VFX volume or workspace on the editorial storage array. But it can be catastrophic if everything is stored in the same place and they delete all the VFX, graphics, audio, proxy media, raw media, projects and exports.

Split up the project onto separate volumes, and only give write access to the necessary parties. The bigger the project and team, the bigger the risk for accidents, so err on the side of safety when planning storage organization.

Finally, we move to finishing, delivery and archive questions…

Will the project color and mix in-house? What are the delivery requirements? Resolution? Delivery format? Media and other files?
Color grading and finishing often require the fastest storage speeds of the whole pipeline. By this point, the project should be conformed back to the camera media, and the colorist is often working with high bitrate, high-resolution raw media or DPX sequences, EXRs or other heavy file types. (Of course, there are as many workflows as there are projects, many of which can be very light, but let’s consider the trend toward 4K-plus and the fact that raw media generally isn’t getting lighter.) On the bright side, while grading and finishing arrays need to be fast, they don’t need to be huge, since they won’t house all the raw media or editorial media — only what is used in the final cut.

I’m a fan of using an attached SAS or Thunderbolt array, which is capable of providing high bandwidth to one or two workstations. Anything over 20TB shouldn’t be necessary, since the media will be removed and archived as soon as the project is complete, ready for the next project. Arrays like the Areca ARC-5028T2 or Proavio EB800MS offer read speeds of 2,000+ MB/s, which can play back 4K DPXs in real time.
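That real-time figure is straightforward to verify. A sketch, assuming 10-bit RGB DPX (which packs three 10-bit samples into 32 bits, i.e. 4 bytes per pixel) at DCI 4K and 24fps:

```python
def dpx_playback_mb_s(width, height, fps, bytes_per_pixel=4):
    """Sustained read rate (MB/s) needed to play a DPX sequence in
    real time. 10-bit RGB DPX packs three 10-bit samples into 32 bits,
    hence 4 bytes per pixel; per-frame headers are negligible."""
    return width * height * bytes_per_pixel * fps / 1e6

# DCI 4K at 24fps:
print(round(dpx_playback_mb_s(4096, 2160, 24)))  # 849
```

So a 2,000+ MB/s array plays a single 4K DPX stream with better than 2x headroom; higher frame rates or full-aperture rasters eat into that quickly.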

How should the project be archived?
There are a few follow-up questions to this one, like: Will the project need to be accessed with short notice in the future? LTO is a great long-term archival solution, but pulling large amounts of media off LTO tape isn’t exactly quick. For projects that I suspect will be reopened in the near future, I try to keep an external hard drive or RAID with the necessary media onsite. Sometimes it isn’t possible to keep all of the raw media onsite and quickly accessible, so keeping the editorial media and projects onsite is a good compromise. Offsite, in a controlled, safe, secure location, LTO-6 tapes house a copy of every file used on the project.

Post production technology changes with the blink of an eye, and storage is no exception. Once these questions have been answered, if you are spending any serious amount of money, get an opinion from someone who is intimately familiar with the cutting edge of post production storage. Emphasis on the “post production” part of that sentence, because the I/O demands of video are not the same as those of, say, a bank with the same storage capacity requirements. The more money devoted to your storage solution, the more opinions you should seek. Not all storage is created equal, so be 100% positive that the storage you select is optimal for the project’s particular workflow and technical requirements.

There is more than one good storage solution for any workflow, but the first step is always answering as many storage- and workflow-related questions as possible to start taking steps down the right path. Storage decisions are perhaps one of the most complex technical parts of the post process, but like the rest of filmmaking, an exhaustive, thoughtful, and collaborative approach will almost always point in the right direction.

Main Image: G-Tech, QNAP, Avid and Western Digital all make a variety of storage solutions for large and small-scale post production workflows.


Lance Holte is an LA-based post production supervisor and producer. He has spoken and taught at such events as NAB, SMPTE, SIGGRAPH and Createasphere. You can email him at lance@lanceholte.com.

What you should ask when searching for storage

Looking to add storage to your post studio? Who isn’t these days? Jonathan Abrams, chief technical officer at New York City’s Nutmeg Creative, was kind enough to put together a list that can help all in their quest for the storage solution that best fits their needs.

Here are some questions that customers should ask a storage manufacturer.

What is your stream count at RAID-6?
The storage manufacturer should have stream count specifications available for both Avid DNx and Apple ProRes at varying frame rates and raster sizes. Use this information to help determine which product best fits your environment.
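The vendor's published stream counts are the authoritative numbers, but you can cross-check them with rough arithmetic. The 70% headroom factor and per-stream rate below are assumptions for illustration, not any vendor's spec:

```python
def max_streams(array_mb_s, stream_mb_s, headroom=0.7):
    """Conservative estimate of concurrent streams a shared array can
    sustain. headroom discounts raw bandwidth for concurrent access --
    a rough rule of thumb, not a vendor figure."""
    return int(array_mb_s * headroom // stream_mb_s)

# A 1,000 MB/s array serving UHD ProRes 422 HQ 25p streams
# (~90 MB/s each, approximate):
print(max_streams(1000, 90))  # 7
```

If a vendor's quoted stream count is far above what this kind of estimate suggests, ask how the figure was measured (codec, raster, RAID level, client mix).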

How do I connect my clients to your storage?  
Gigabit Ethernet (copper)? 10 Gigabit Ethernet (50-micron fiber)? Fibre Channel (FC)? These are listed in ascending order of cost and performance. Combined with the answer to the question above, this narrows down which product a storage manufacturer has that fits your environment.

Can I use whichever network switch I want to and know that it will work, or must I be using a particular model in order for you to be able to support my configuration and guarantee a baseline of performance?
If you are using a Mac with Thunderbolt ports, then you will need a network adapter, such as a Promise SANLink2 10G SFP+ for your shared storage connection. Also ask, “Can I use any Thunderbolt network adapter, or must I be using a particular model in order for you to be able to support my configuration and guarantee a baseline of performance?”

If you are an Avid Media Composer user, ask, “Does your storage present itself to Media Composer as if it was Avid shared storage?”
This will allow the first person who opens a Media Composer project to obtain a lock on a bin.  Other clients can open the same project, though they will not have write access to said bin.

What is covered by support? 
Make certain that both the hardware (chassis and everything inside of it) and the software (client and server) are covered by support. This includes major version upgrades to the server and client software (i.e. v.11 to v.12). You do not want your storage manufacturer to announce a new software version at NAB 2018 and then find out that it’s not covered by your support contract. That upgrade is a separate cost.

For how many years will you be able to replace all of the hardware parts?
Will the storage manufacturer replace any part within three years of your purchase, provided that you have an active support contract? Will they charge you less for support if they cannot replace failed components during that year’s support contract? A variation of this question is, “What is your business model?” If the storage manufacturer will only guarantee availability of all components for three years, then their business model is based upon you buying another server from them in three years. Are you prepared to be locked into that upgrade cycle?

Are you using custom components that I cannot source elsewhere?
If you continue using your storage beyond the date when the manufacturer can replace a failed part, is the failed part a custom part that was only sold to the manufacturer of your storage? Is the failed part one that you may be able to find used or refurbished and swap out yourself?

What is the penalty for not renewing support? Can I purchase support incidents on an as-needed basis?
How many as-needed purchases would it take before you realize, “We should have renewed support instead”? If you cannot purchase support on an as-needed basis, then you need to ask what the penalty for reinstating support is. This information helps you determine your risk tolerance and whether there is a date in the future when you can say, “We did not incur a financial loss with that risk.”

Main Image: Nutmeg Creative’s Jonathan Abrams with the company’s 80TB of EditShare storage and two spare drives. Photo Credit: Larry Closs

Storage in the Studio: VFX Studios

By Karen Maierhofer

It takes talent and the right tools to generate visual effects of all kinds, whether it’s building breathtaking environments, creating amazing creatures or crafting lifelike characters cast in a major role for film, television, games or short-form projects.

Indeed, we are familiar with industry-leading content creation tools such as Autodesk’s Maya, Foundry’s Mari and more, which, when placed in the hands of creatives, result in pure digital magic. In fact, there is quite a bit of technological magic that occurs at visual effects facilities, including one kind in particular that may not have the inherent sparkle of modeling and animation tools but is just as integral to the visual effects process: storage. Storage solutions are the unsung heroes behind most projects, working behind the scenes to accommodate artists and keep their productive juices flowing.

Here we examine three VFX facilities and their use of various storage solutions and setups as they tackle projects large and small.

Framestore
Since it was founded in 1986, Framestore has placed its visual stamp on a plethora of Oscar-, Emmy- and British Academy Film Award-winning visual effects projects, including Harry Potter, Gravity and Guardians of the Galaxy. With increasingly more projects, Framestore expanded from its original UK location in London to North American locales such as Montreal, New York, Los Angeles and Chicago, handling films as well as immersive digital experiences and integrated advertisements for iconic brands, including Guinness, Geico, Coke and BMW.

Beren Lewis

As the company and its workload grew and expanded into other areas, including integrated advertising, so, too, did its storage needs. “Innovative changes, such as virtual-reality projects, brought on high demand for storage and top-tier performance,” says NYC-based Beren Lewis, CTO of advertising and applied technologies at Framestore. “The team is often required to swiftly accommodate multiple workflows, including stereoscopic 4K and VR.”

Without hesitation, Lewis believes storage is typically the most challenging aspect of technology within the VFX workflow. “If the storage isn’t working, then neither are the artists,” he points out. Furthermore, any issues with storage can potentially lead to massive financial implications for the company due to lost time and revenue.

According to Lewis, Framestore uses its storage solution — a Pixit Media PixStor General Parallel File System (GPFS) storage cluster running on NetApp E-Series hardware — for all its project data. This includes backups to remote co-location sites, video preprocessing, decompression, disaster recovery preparation, scalability and high performance for VFX, finishing and rendering workloads.

The studio moved all the integrated advertising teams over to the PixStor GPFS clusters this past spring. Currently, Framestore has five primary PixStor clusters using NetApp E-Series in use at each office in London, LA, Chicago and Montreal.

According to Lewis, Framestore partnered with Pixit Media and NetApp to take on increasingly complicated and resource-hungry VR projects. “This partnership has provided the global integrated advertising team with higher performance and nonstop access to data,” he says. “The Pixit Media PixStor software-defined scale-out storage solution running on NetApp E-Series systems brings fast, reliable data access for the integrated advertising division so the team can embrace performance and consistency across all five sites, take a cost-effective, simplified approach to disaster recovery and have a modular infrastructure to support multiple workflows and future expansion.”

BMW

Framestore selected its current solution after reviewing several major storage technologies. It was looking for a single namespace that was very stable, while providing great performance, but it also had to be scalable, Lewis notes. “The PixStor ticked all those boxes and provided the right balance between enterprise-grade hardware and support, and open-source standards,” he explains. “That balance allowed us to seamlessly integrate the PixStor into our network, while still maintaining many of the bespoke tools and services that we had developed in-house over the years, with minimum development time.”

In particular, the storage solution provides the required high performance so that the studio’s VFX, finishing and rendering workloads can all run “full-out with no negative effect on the finishing editors’ or graphic artists’ user experience,” Lewis says. “This is a game-changing capability for an industry that typically partitions off these three workloads to keep artists from having to halt operations. PixStor running on E-Series consolidates all three workloads onto a single IT infrastructure with streamlined end-to-end production of projects, which reduces both time to completion and operational costs, while both IT acquisition and maintenance costs are reduced.”

At Framestore, integrating storage into the workflow is simple. The first step after a project is green-lit is the establishment of a new file set on the PixStor GPFS cluster, where ingested footage and all the CG artist-generated project data will live. “The PixStor is at the heart of the integrated advertising storage workflow from start to finish,” Lewis says. Because the PixStor GPFS cluster serves as the primary storage for all integrated advertising project data, the division’s workstations, renderfarm, editing and finishing stations connect to the cluster for review, generation and storage of project content.

Prior to the move to PixStor/NetApp, Framestore had been using a number of different storage offerings. According to Lewis, they all suffered from the same issues in terms of scalability and degradation of performance under render load — and that load was getting heavier and more unpredictable with every project. “We needed a technology that scaled and allowed us to maintain a single namespace but not suffer from continuous slowdowns for artists due to renderfarm load during crunch times or project delivery.”

Geico

As Lewis explains, with the PixStor/NetApp solution, processing was running up to 270,000 IOPS (I/O operations per second), which was at least several times what Framestore’s previous infrastructure would have been able to handle in a single namespace. “Notably, the development workflow for a major theme-park ride was unhindered by all the VR preprocessing, while backups to remote co-location sites synched every two hours without compromising the artist, rendering or finishing workloads,” he says. “This provided a cost-effective, simplified approach to disaster recovery, and Framestore now has a fast, tightly integrated platform to support its expansion plans.”

To stay at the top of its game, Framestore is always reviewing new technologies, and storage is often part of that conversation. To this end, the studio plans to build on the success it has had with PixStor by expanding the storage to handle some additional editorial playback and render workloads using an all-Non-Volatile Memory Express (NVMe) flash tier. Other projects include a review of object storage technology for use as a long-term, off-premises storage target for archival data.

Without question, the industry’s visual demands are rapidly changing. Not long ago, Framestore could easily predict storage and render requirements for a typical project. But that is no longer the case, and the studio finds itself working in ever-increasing resolutions and frame rates. Whereas projects may have been as small as 3TB in the recent past, nowadays the studio regularly handles multiple projects of 300TB or larger. And the storage must be shared with other projects of varying sizes and scope.

“This new ‘unknowns’ element of our workflow puts many strains on all aspects of our pipeline, but especially the storage,” Lewis points out. “Knowing that our storage can cope with the load and can scale allows us to turn our attention to the other issues that these new types of projects bring to Framestore.”

As Lewis notes, working with high-resolution images and large renderfarms creates a unique set of challenges for storage technology that isn’t seen in many other fields. VFX will often test any storage technology well beyond what other industries demand. “If there’s an issue or a break point, we will typically find it in spectacular fashion,” he adds.

Rising Sun Pictures
As a contributor to the design and execution of computer-generated effects on more than 100 feature films since its inception 22 years ago, Rising Sun Pictures (RSP) has pushed the technical bar many times over in film as well as television projects. Based in Adelaide, South Australia, RSP has built a top team of VFX artists who have tackled such box-office hits as Thor: Ragnarok, X-Men and Game of Thrones, as well as the Harry Potter and Hunger Games franchises.

Mark Day

Such demanding, high-level projects require demanding, high-level effects, which, in turn, demand a high-performance, reliable storage solution capable of handling varying data I/O profiles. “With more than 200 employees accessing and writing files in various formats, the need for a fast, reliable and scalable solution is paramount to business continuity,” says Mark Day, director of engineering at RSP.

Recently, RSP installed an Oracle ZS5 storage appliance to handle this important function. This high-performance, unified storage system provides NAS and SAN cloud-converged storage capabilities that enable on-premises storage to seamlessly access Oracle Public Cloud. Its advanced hardware and software architecture includes a multi-threading SMP storage operating system for running multiple workloads and advanced data services without performance degradation. The offering also caches data on DRAM or flash cache for optimal performance and efficiency, while keeping data safely stored on high-capacity SSD (solid state disk) or HDD (hard disk drive) storage.

Previously, the studio had been using a Dell EMC Isilon storage cluster with Avere caching appliances, and it still employs that solution for parts of its workflow.

When it came time to upgrade to handle RSP’s increased workload, the facility ran a proof of concept with multiple vendors in September 2016 and benchmarked their systems. Impressed with Oracle, RSP began installation in early 2017. According to Day, RSP liked the solution’s ability to support larger packet sizes — now up to 1MB. In addition, he says its “exceptional” analytics engine gives introspection into a render job.

“It has a very appealing [total cost of ownership], and it has caching right out of the box, removing the need for additional caching appliances,” says Day. Storage is at the center of RSP’s workflow, storing all the relevant information for every department — from live-action plates that are turned over from clients, scene setup files and multi-terabyte cache files to iterations of the final product. “All employees work off this storage, and it needs to accommodate the needs of multiple projects and deadlines with zero downtime,” Day adds.

Machine Room

“Visual effects scenes are getting more complex, and in turn, data sizes are increasing. Working in 4K quadruples file sizes and, therefore, impacts storage performance,” explains Day. “We needed a solution that could cope with these requirements and future trends in the industry.”

According to Day, the data RSP deals with is broad, from small setup files to terabyte geocache files. A one-minute 2K DPX sequence is 17GB for the final pass, while 4K is 68GB. “Keep in mind this is only the final pass; a single shot could include hundreds of passes for a heavy computer-generated sequence,” he points out.
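Those figures are consistent with full-aperture 10-bit DPX at 24fps. A quick sketch of the arithmetic, assuming 2048×1556 and 4096×3112 full-aperture rasters (an assumption on my part; the article doesn't state the exact raster):

```python
def dpx_minute_gib(width, height, fps=24, bytes_per_pixel=4):
    """Approximate size in GiB of a one-minute 10-bit DPX sequence
    (three 10-bit samples packed into 4 bytes/pixel, headers ignored)."""
    return width * height * bytes_per_pixel * fps * 60 / 2**30

# Full-aperture 2K (2048x1556) and 4K (4096x3112) scans:
print(round(dpx_minute_gib(2048, 1556)))  # 17
print(round(dpx_minute_gib(4096, 3112)))  # 68
```

Doubling both dimensions quadruples the frame size, which is why 4K lands at exactly four times the 2K figure.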

Thus, high-performance storage is important to the effective operation of a visual effects company like RSP. In fact, storage helps the artists stay on the creative edge by enabling them to iterate through the creative process of crafting a shot and a look. “Artists are required to iterate their creative process many times to perfect the look of a shot, and if they experience slowdowns when loading scenes, this can have a dramatic effect on how many iterations they can produce. And in turn, this affects employees’ efficiency and, ultimately, the profitability of the company,” says Day.

Thor: Ragnarok

Most recently, RSP used its new storage solution for work on the blockbuster Thor: Ragnarok — in particular, for the Val’s Flashback sequence, which was extremely complex and involved extensive lighting and texture data, as well as high-frame-rate plates (sometimes more than 1,000fps for multiple live-action footage plates). “Before our storage refresh, early versions of this shot could take up to 24 hours to render on our server farm. But since installing our new storage, we saw this drastically reduced to six hours — that’s a 4x improvement, which is a fantastic outcome,” says Day.

Outpost VFX
A full-service VFX studio for film, broadcast and commercials, Outpost VFX, based in Bournemouth, England, has been operational since late 2012. Since that time, the facility has been growing by leaps and bounds, taking on major projects, including Life, Nocturnal Animals, Jason Bourne and 47 Meters Down.

Paul Francis

Due to this fairly rapid expansion, Outpost VFX has seen the need for increased capacity in its storage needs. “As the company grows and as resolution increases and HDR comes in, file sizes increase, and we need much more capacity to deal with that effectively,” says CTO Paul Francis.

When setting up the facility five years ago, the decision was made to go with PixStor from Pixit Media and Synology’s NAS for its storage solution. “It’s an industry-recognized solution that is extremely resilient to errors. It’s fast, robust and the team at Pixit provides excellent support, which is important to us,” says Francis.

Foremost, the solution had to provide high capacity and high speeds. “We need lots of simultaneous connections to avoid bottlenecks and ensure speedy delivery of data,” Francis adds. “This is the only one we’ve used, really. It has proved to be stable enough to support us through our growth over the last couple of years — growth that has included a physical office move and an increase in artist capacity to 80 seats.”

Outpost VFX mainly works with image data and project files for use with Autodesk’s Maya, Foundry’s Nuke, Side Effects’ Houdini and other VFX and animation tools. The challenge this presents is twofold: large file sizes on one hand and, on the other, the problems the group can face with large numbers of small files, such as metadata. Francis explains: “Sequentially loading small files can be time-consuming due to the current technology, so moving to something that can handle both of these areas will be of great benefit to us.”

Locally, artists use a mix of HDDs from a number of different manufacturers to store reference imagery and so forth — older-generation PCs have mostly Western Digital HDDs while newer PCs have generic SSDs. When replacing or upgrading equipment, Outpost VFX uses Samsung 900 Series SSDs, depending on the required performance and current market prices.

Life

Like many facilities, Outpost VFX is always weighing its options when it comes to finding the best solution for its current and future needs. Presently, it is looking at splitting up some of its storage solutions into smaller segments for greater resilience. “When you only have one storage solution and it fails, everything goes down. We’re looking to break our setup into smaller, faster solutions,” says Francis.

Additionally, security is a concern for Outpost VFX when it comes to its clients. According to Francis, certain shows need to be annexed, meaning the studio will need a separate storage solution outside of its main network to handle that data.

When Outpost VFX begins a job, the group ingests all the plates it needs to work on, and they reside in a new job folder created by production and assigned to a specific drive for active jobs. This folder then becomes the go-to for all assets, elements and shot iterations created throughout the production. For security purposes, these areas of the server are only visible to and accessible by artists, who in turn cannot access the Internet; this ensures that the files are “watertight and immune to leaks,” says Francis, adding that with PixStor, the studio is able to set up different partitions for different areas that artists can jump between easily.

How important is storage to Outpost VFX? “Frankly, there’d be no operation without storage!” Francis says emphatically. “We deal with hundreds of terabytes of data in visual effects, so having high-capacity, reliable storage available to us at all times is absolutely essential to ensure a smooth and successful operation.”

47 Meters Down

Because the studio delivers visual effects across film, TV and commercials simultaneously, storage is an important factor no matter what the crew is working on. A recent film project like 47 Meters Down required the full gamut of visual effects work, as Outpost VFX was the sole vendor for the project. So, the studio needed the space and responsiveness of a storage system that enabled them to deliver more than 420 shots, a number of which featured heavy 3D builds and multiple layers of render elements.

“We had only about 30 artists at that point, so having a stable solution that was easy for our team to navigate and use was crucial,” Francis points out.

Main Image: From Outpost VFX’s Domestos commercial out of agency MullenLowe London.

Storage in the Studio: Post Houses

By Karen Maierhofer

There are many pieces that go into post production, from conform, color, dubbing and editing to dailies and more. Depending on the project, a post house can be charged with one or two pieces of this complex puzzle, or even the entire workload. No matter the job, the tasks must be done on time and on budget. Unforeseen downtime is unacceptable.

That is why when it comes to choosing a storage solution, post houses are very particular. They need a setup that is secure, reliable and can scale. For them, one size simply does not fit all. They all want a solution that fits their particular needs and the needs of their clients.

Here, we look at three post facilities of various sizes and range of services, and the storage solutions that are a good fit for their business.

Liam Ford

Sim International
The New York City location of Sim has been in existence for over 20 years, operating under the name Post Factory NY until about a month ago, when Sim rebranded it and its seven other founding post companies as Sim International. Whether called by its new moniker or its previous one, the facility has grown into a premier space in the city for offline editorial teams, as well as one of the top high-end finishing studios in town — the list of feature films and episodic shows cut and finished at Sim is quite lengthy. And this past year, Sim launched a boutique commercial finishing division.

According to senior VP of post engineering Liam Ford, the vast majority of the projects at the NYC facility are 4K, much of which is episodic work. “So, the need is for very high-capacity, very high-bandwidth storage,” Ford says. And because the studio is located in New York, where space is limited, that same storage must be as dense as possible.

For its finishing work, Sim New York is using a Quantum Xcellis SAN, a StorNext-based appliance system that can be specifically tuned for 4K media workflow. The system, which was installed approximately two years ago, runs on a 16Gb Fibre Channel network. Almost half a petabyte of storage fits into just a dozen rack units. Meanwhile, an Avid Nexis handles the facility’s offline work.

The Sim SAN serves as the primary playback system for all the editing rooms. While there are SSDs in some of the workstations for caching purposes, the scheduling demands of clients do not leave much time for staging material back and forth between volumes, according to Ford. So, everything gets loaded back to the SAN, and everything is played back from the SAN.

As Ford explains, content comes into the studio from a variety of sources, whether drives, tapes or Internet transfers, and all of that is loaded directly onto the SAN. An online editor then soft-imports all that material into his or her conform application and creates an edited, high-resolution sequence that is rendered back to the SAN. Once at the SAN, that edited sequence is available for a supervised playback session with the in-house colorists, finishing VFX artists and so forth.

“The point is, our SAN is the central hub through which all content at all stages of the finishing process flows,” Ford adds.

Before installing the Xcellis system, the facility had been using local workstation storage only, but the huge growth in the finishing division prompted the transition to the shared SAN file system. “There’s no way we could do the amount of work we now have, and with the flexibility our clients demand, using a local storage workflow,” says Ford.

When it became necessary for the change, there were not a lot of options that met Sim’s demands for high bandwidth and reliable streaming, Ford points out, as Quantum’s StorNext and SGI’s CXFS were the main shared file systems for the M&E space. Sim decided to go with Quantum because of the work the vendor has done in recent years toward improving the M&E experience as well as the ease of installing the new system.

Nevertheless, with the advent of 25Gb and 100Gb Ethernet, Sim has been closely monitoring the high-performance NAS space. “There are a couple of really good options out there right now, and I can see us seriously looking at those products in the near future as, at the very least, an augmentation to our existing Fibre Channel-based storage,” Ford says.

At Sim, editors deal with a significant amount of Camera Raw, DPX and OpenEXR data. “Depending on the project, we could find ourselves needing 1.5GB/sec or more of bandwidth for a single playback session, and that’s just for one show,” says Ford. “We typically have three or four [shows] playing off the SAN at any one time, so the bandwidth needs are huge!”
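To put that 1.5GB/sec figure in context, uncompressed playback bandwidth can be estimated from resolution, bit depth and frame rate. Here is a minimal sketch (my own helper, not a Sim tool; it ignores DPX's 32-bit packing and container overhead, so treat the numbers as ballpark):

```python
def playback_bandwidth_gbps(width, height, bits_per_channel, channels=3, fps=24):
    """Rough uncompressed playback bandwidth in decimal GB/sec."""
    bytes_per_frame = width * height * channels * bits_per_channel / 8
    return bytes_per_frame * fps / 1e9

# 10-bit 2K vs. 16-bit 4K, both RGB at 24fps
print(round(playback_bandwidth_gbps(2048, 1080, 10), 2))  # ~0.2 GB/sec
print(round(playback_bandwidth_gbps(4096, 2160, 16), 2))  # ~1.27 GB/sec
```

The jump from 10-bit 2K to 16-bit 4K is roughly a 6x increase in sustained bandwidth per stream, which is why three or four concurrent shows push the SAN so hard.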

Master of None

And the editors’ needs continue to evolve, as does their need for storage. “We keep needing more storage, and we need it to be faster and faster. Just when storage technology finally got to the point that doing 10-bit 2K shows was pretty painless, everyone started asking for 16-bit 4K,” Ford points out.

Recently, Sim completed work on the feature American Made and the Netflix show Master of None, in addition to a number of other episodic projects. For these and other shows, the SAN acts as the central hub around which the color correction, online editing, visual effects and deliverables are created.

“The finishing portion of the post pipeline deals exclusively with the highest-quality content available. It used to be that we’d do our work directly from a film reel on a telecine, but those days are long past,” says Ford. “You simply can’t run an efficient finishing pipeline anymore without a lot of storage.”

DigitalFilm Tree
DigitalFilm Tree (DFT) opened its doors in 1999 and now occupies a 10,000-square-foot space in Universal City, California, offering full round-trip post services, including traditional color grading, conform, dailies and VFX, as well as post system rentals and consulting services.

While Universal City may be DFT's primary location, it has dozens of remote satellite systems (mini post houses for production companies and studios) around the world. Those remote post systems, along with the increase in camera resolution (Alexa, Raw, 4K), have multiplied DFT's storage needs and resulted in a sea change in the facility's storage solution.

According to CEO Ramy Katrib, most companies in the media and entertainment industry historically have used block storage, and DFT was no different. But four years ago, the company began looking at object storage, which is used by Silicon Valley companies like Dropbox and AWS to store large assets. After significant research, Katrib felt it was a good fit for DFT as well, believing it to be a more economical way to build petabytes of storage than using proprietary block storage.

Ramy Katrib

“We were unique among post houses in that respect,” says Katrib. “We were different from many of the other companies using object storage (they were tech firms, financial institutions, government agencies, health care; we were the rare one from M&E), but our need for extremely large, scalable and resilient storage was the same as theirs.”

DFT’s primary work centers around scripted television — an industry segment that continues to grow. “We do 15-plus television shows at any given time, and we encourage them to shoot whatever they like, at whatever resolution they desire,” says Katrib. “Most of the industry relies on LTO to back up camera raw materials. We do that too, but we also encourage productions to take advantage of our object storage, and we will store everything they shoot and not punish them for it. It is a rather Utopian workflow. We now give producers access to all their camera raw material. It is extremely effective for our clients.”

Over four years ago, DFT began using a cloud-based platform called OpenStack, which is open-source software that controls large pools of data, to build and design its own object storage system. “We have our own software developers and people who built our hardware, and we are able to adjust to the needs of our clients and the needs of our own workflow,” says Katrib.

DFT designs its custom PC- and Linux-based post systems, including chassis from Super Micro, CPUs from Intel and graphic cards from Nvidia. Storage is provided from a number of companies, including spinning-disc and SSD solutions from Seagate Technology and Western Digital.

DFT then deploys remote dailies systems worldwide, in proximity to where productions are shooting. Each day clients plug their production hard drives (containing all camera raw files) into DFT’s remote dailies system. From DFT’s facility, dailies technicians remotely produce editorial, viewing and promo dailies files, and transfer them to their destinations worldwide. All the while, the camera raw files are transported from the production location to DFT’s ProStack “massively scalable object storage.” In this case, “private cloud storage” consists of servers DFT designed that house all the camera raw materials, with management from DFT post professionals who support clients with access to and management of their files.

DFT provides color grading for Great News.

Recently, storage vendors such as Quantum and Avid have begun building and branding their own object storage solutions not unlike what DFT has constructed at its Universal City locale. And the reason is simple: Object storage provides a clear advantage because of its reliability and low cost. “We looked at it because the storage we were paying for, proprietary block storage, was too expensive to house all the data our clients were generating. And resolutions are only going up. So, every year we needed more storage,” Katrib explains. “We needed a solution that could scale with the practical reality we were living.”

Then, about four years ago when DFT started becoming a software company, one of the developers brought OpenStack to Katrib’s attention. “The open-source platform provided several storage solutions, networking capabilities and cloud compute capabilities for free,” he points out. Of course, the solution is not a panacea, as it requires a company to customize the offering for its own needs and even contribute back to the OpenStack community. But then again, that requirement enables DFT to evolve with the changing needs of its clients without waiting for a manufacturer to do it.

“It does not work out of the box like a solution from IBM, for instance. You have to develop around it,” Katrib says. “You have to have a lab mentality, designing your own hardware and software based on pain points in your own environment. And, sometimes it fails. But when you do it correctly, you realize it is an elegant solution.” However, there are vibrant communities, user groups and tech summits of those leveraging the technology who are willing to assist and collaborate.

DFT has evolved its object storage solution, extending its capabilities from an initial hundreds of terabytes (nothing to sneeze at) to hundreds of petabytes of storage. DFT also designs remote post systems and storage solutions for customers in remote locations around the world. And those remote locations can be as simple as a workstation running applications such as Blackmagic’s Resolve or Adobe After Effects and connected to object storage housing all the client’s camera raw material.

The key, Katrib notes, is to have great post and IT pros managing the projects and the system. “I can now place a remote post system with a calibrated 4K monitor and object storage housing the camera raw material, and I can bring the post process to you wherever you are, securely,” he adds. “From wherever you are, you can view the conform, color and effects, and sign off on the final timeline, as if you were at DFT.”

DFT posts American Housewife

In addition to the object storage, DFT is also using Facilis TerraBlock and Avid Nexis systems locally and on remote installs. The company uses those commercial solutions because they provide benefits, including storage performance and feature sets that optimize certain software applications. As Katrib points out, storage is not one flavor fits all, and different solutions work better for certain use cases. In DFT’s case, the commercial storage products provide performance for the playback of multiple 4K streams across the company’s color, VFX and conform departments, while its ProStack high-capacity object storage comes into play for storing the entirety of the files produced by its clients.

“Rather than retrieve files from an LTO tape, as most do when working on a TV series, with object storage, the files are readily available, saving hours in retrieval time,” says Katrib.

Currently, DFT is working on a number of television series, including Great News (color correction only) and Good Behavior (dailies only). For other shows, such as the Roseanne revival, NCIS: Los Angeles, American Housewife and more, it is performing full services such as visual effects, conform, color, dailies and dubbing. And in some instances, even equipment rental.

As the work expands, DFT is looking to extend upon its storage and remote post systems. “We want to have more remote systems where you can do color, conform, VFX, editorial, wherever you are, so the DP or producer can have a monitor in their office and partake in the post process that’s particular to them,” says Katrib. “That is what we are scaling as we speak.”

Broadway Video
Broadway Video is a global media and entertainment company that has been primarily engaged in post production services for television, film, music, digital and commercial projects for the past four decades. Located in New York and Los Angeles, the facility offers one-stop tools and talent for editorial, audio, design, color grading, finishing and screening, as well as digital file storage, preparation, aggregation and delivery of digital content across multiple platforms.

Since its founding in 1979, Broadway Video has grown into an independent studio. During this timeframe, content has evolved greatly, especially in terms of resolution, to where 4K and HD content — including HDR and Atmos sound — is becoming the norm. “Staying current and dealing with those data speeds are necessary in order to work fluidly on a 4K project at 60p,” says Stacey Foster, president and managing director, Broadway Video Digital and Production. “The data requirements are pretty staggering for throughput and in terms of storage.”

Stacey Foster

This led Broadway Video to begin searching a year ago for a storage system that would meet its needs now as well as in the foreseeable future — in short, it needed a system that is scalable. Its solution: an all-flash Hitachi Vantara Virtual Storage Platform (VSP) G series. Although quite expensive, a flash-based system is “ridiculously powerful,” says Foster. “Technology is always marching forward, and flash-based systems are going to become the norm; they are already the norm at the high end.”

Foster has had a long-standing relationship with Hitachi for more than a decade and has witnessed the company’s growth into M&E from the medical and financial worlds where it has been firmly ensconced. According to Foster, Hitachi’s VSP series will enhance Broadway Video’s 4K offerings and transform internal operations by allowing quick turnaround, efficient and cost-effective production, post production and delivery of television shows and commercials. And, the system offers workload scalability, allowing the company to expand and meet the changing needs of the digital media production industry.

“The systems we had were really not that capable of handling DPX files that were up to 50TB, and Hitachi’s VSP product has been handling them effortlessly,” says Foster. “I don’t think other [storage] manufacturers can say that.”

Foster explains that as Broadway Video continued to expand its support of the latest 4K content and technologies, it became clear that a more robust, optimized storage solution was needed as the company moved in this new direction. “It allows us to look at the future and create a foundation to build our post production and digital distribution services on,” Foster says.

Broadway Video’s work with Netflix sparked the need for a more robust system. Recently, Comedians in Cars Getting Coffee, an Embassy Row production, transitioned to Netflix, and one of the requirements by its new home was the move from 2K to 4K. “It was the perfect reason for us to put together a 4K end-to-end workflow that satisfies this client’s requirements for technical delivery,” Foster points out. “The bottleneck in color and DPX file delivery is completely lifted, and the post staff is able to work quickly and sometimes even faster than in real time when necessary to deliver the final product, with its very large files. And that is a real convenience for them.”

Broadway Video’s Hitachi Vantara Virtual Storage Platform G series.

As a full-service post company, Broadway Video in New York operates 10 production suites of Avids running Adobe Premiere and Blackmagic Resolve, as well as three full mixing suites. “We can have all our workstations simultaneously hit the [storage] system hard and not have the system slow down. That is where Hitachi’s VSP product has set itself apart,” Foster says.

For Comedians in Cars Getting Coffee, like many projects Broadway Video encounters, the cut is done in a lower-resolution Avid file. The 4K media is then imported into the Resolve platform, so the show is colored from its original material and format. In terms of storage, once the material is past the cutting stage, it is all stored on the Hitachi system. Once the project is completed, it is handed off on spinning disc for archival, though Foster foresees a limited future for spinning discs due to their inherently limited life span — “anything that spins breaks down,” he adds.

All the suites are fully HD-capable and are tied with shared SAN and ISIS storage; because work on most projects is shared between editing suites, there is little need to use local storage. Currently Broadway Video is still using its previous Avid ISIS products but is slowly transitioning to the Hitachi system only. Foster estimates that at this time next year, the transition will be complete, and the staff will no longer have to support the multiple systems. “The way the systems are set up right now, it’s just easier to cut on ISIS using the Avid workstations. But that will soon change,” he says.

Other advantages the Hitachi system provides are stability and uptime, which Foster maintains is “pretty much 100 percent guaranteed.” As he points out, there is no such thing as downtime in banking and medical, where Hitachi proved its mettle, and bringing that stability to the M&E industry “has been terrific.”

Of course, that is in addition to bandwidth and storage capacity, which are expandable. “There is no limit to the number of petabytes you can have attached,” notes Foster.

Considering that the majority of calls received by Broadway Video center on post work for 4K-based workflows, the new storage solution is a necessary technical addition to the facility’s other state-of-the-art equipment. “In the environment we work in, we spend more and more time on the creative side in terms of the picture cutting and sound mixing, and then it is a rush to get it out the door. If it takes you days to import, color correct, export and deliver (especially with the file sizes we are talking about), then having a fast system with the kind of throughput and bandwidth that is necessary really lifts the burden for the finishing team,” Foster says.

He continues: “The other day the engineers were telling me we were delivering 20 times faster using the Hitachi technology in the final cutting and coloring of a Jerry Seinfeld stand-up special we had done in 4K,” resulting in a DPX file that was about 50TB. “And that is pretty significant,” Foster adds.

Main Image: DigitalFilm Tree’s senior colorist Patrick Woodard.

Behind the Title: Butter Music and Sound’s Chip Herter

NAME: Chip Herter

COMPANY: NYC’s Butter Music+Sound/Haystack Music

CAN YOU DESCRIBE YOUR COMPANY?
Butter creates custom music compositions for advertising/film/TV. Haystack Music is the internal music catalog from Butter, featuring works from our composers, emerging artists and indie labels.

WHAT’S YOUR JOB TITLE?
Director of Creative Sync Services

WHAT DOES THAT ENTAIL?
The role was designed to be a catch-all for all things creative music licensing. This includes music supervision (curating music for projects from the music industry at large, by way of record labels and publishers) and creative direction from our own Haystack Music library.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Rights management is an understated aspect of the role. It requires the ability to immediately know who the key players are in the ownership of a song, so that we can estimate costs for using it on behalf of our clients and license a track with ease.

WHAT TOOLS DO YOU USE?
The best tool in my toolbox is the team that supports me every day.

WHAT’S YOUR FAVORITE PART OF THE JOB?
I have a keen interest in putting the spotlight on new and emerging music. Be it a new piece written by one of our composers or an emerging act that I want to introduce to a larger audience.

WHAT’S YOUR LEAST FAVORITE?
Losing work to anyone else. It is a natural part of the job, but I can’t help getting personally invested in every project I work on. So the loss feels real, but in turn I always learn something from it.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Morning, for sure. Coffee and music? Yes, please!

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Most likely working for a PR agency. I love to write, and I am good at it (so I’m told).

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I was a late bloomer. I was 26 when I took my first internship as a music producer at Crispin Porter+Bogusky. From my first day on the job, I knew this was my higher calling. Anyone who geeks out over the language in a music license like me is destined to do this for a living.

Lexus Innovations

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We recently worked on a campaign for Lexus with Team One USA called Innovations that was particularly great, and the response to the music was very positive. Recently, we also worked on projects for Levi’s, Nescafé, Starbucks and Keurig… coffee likes us, I guess!

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
I was fortunate to work with Wieden+Kennedy on their Coca-Cola Super Bowl ad in 2015. I placed a song from the band Hundred Waters, who have gone on to do remarkable things since. The spot carried a very positive message about anti-bullying, and it was great to work on something with such social awareness.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
WiFi, Bluetooth and Spotify.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I don’t take for granted that my favorite pastime — going to concerts — is a fringe benefit of the job. When I am not listening to music, I am almost always listening to a podcast or a standup comedian. I also enjoy acting like a child with my two-year-old son as much as I can. I learn a lot from him about not taking myself too seriously.

Richard Linklater on directing the film Last Flag Flying

By Iain Blair

Director Richard Linklater first made a name for himself back in 1991 with the acclaimed and influential independent release Slacker, an experimental narrative revolving around 24 hours in the lives of 100 characters. Since then he’s made the beloved Dazed and Confused, Before Sunrise, Before Sunset (for which he got an Oscar nod for Best Adapted Screenplay) and Boyhood (which received multiple BAFTA and Golden Globe awards, and an Oscar for actress Patricia Arquette).

He’s also directed such diverse films as the Western/gangster picture The Newton Boys, the animated feature Waking Life, the real-time drama Tape, the comedy School of Rock and Everybody Wants Some!!

L-R: Iain Blair and Richard Linklater

His new film is the timely Last Flag Flying, which deals with war, patriotism and friendship. Set in 2003, it tells the story of three soldiers — former Navy corpsman Larry “Doc” Shepherd (Steve Carell) and former Marines Sal Nealon (Bryan Cranston) and Richard Mueller (Laurence Fishburne) — who reunite 30 years after they served together in the Vietnam War to bury Doc’s son, a young Marine killed in the Iraq War. Doc decides to forgo a burial at Arlington Cemetery and, with the help of his old buddies, takes the casket on a bittersweet trip up the East Coast to his home in suburban New Hampshire. Along the way, Doc, Sal and Mueller reminisce and come to terms with shared memories of the war that continues to shape their lives.

Linklater co-wrote the screenplay with author Darryl Ponicsan, who wrote his novel Last Flag Flying as a sequel to his book The Last Detail, which was made into the acclaimed 1973 film starring Jack Nicholson.

I spoke with Linklater —whose other film credits include Suburbia, Bad News Bears (the 2005 version), A Scanner Darkly, Fast Food Nation, Inning by Inning: A Portrait of a Coach, Me and Orson Welles, Bernie and Before Midnight — about making the film, which is getting awards buzz, and why his image as a loose, improv-heavy director is so inaccurate.

Is it fair to say this is not a war movie, but it’s about war?
Yes, I think that’s right. It’s my kind of war movie. It’s not a battlefield movie but about the effects of war — that sub-category about what happens when the war comes home and how the combat survivors deal with it.

My dad was in the Navy in WWII, but he never talked about the war or his experiences.
Exactly. My dad was the same. Very stoic. They just held it all in. Now it’s even worse, I feel. The suicide rate among returning vets is so high today, and the big problem is the way the service breaks you down and trains you, but then they don’t really put you back together. When you’re done, the killing machine goes home, and you’re taught to be stoic and be a man, and you’re left with nothing. To me that’s so tragic. We should talk about it. And all these issues, along with all of the wars since Vietnam, are the subtext to this whole story. War is such a political minefield today.

Is it true you started to make a film of this a decade ago?
Yes, I tried to adapt the book, but I’m glad it didn’t happen back then. It wasn’t meant to be, and no one was really ready to deal with the war in Iraq. It takes a while to want to analyze a war and get a perspective. A little distance is very helpful.

You’ve got three great leads. What did they bring to it?
Everything — all of their intelligence, humor, experience and work ethic. They really dug in, and we spent a couple of weeks rehearsing and really talking about all the issues. Each character is very different, and the guys really found them and jumped in. The biggest contrast is probably between Sal and Doc. Sal is the life of the party type, clearly self-medicating, always talking, eating, drinking, while Doc’s very low-key and quiet. Then Mueller’s somewhere in the middle.

Technology has changed a lot since you started, with the whole digital revolution, but you still like to shoot on film? Was that the case with this one?
We shot Boyhood all on film, but we did this digitally. It was just more practical, and I think now you can get any look you want with digital. It’s pretty impossible to tell the difference between film and digital now. But I don’t think film’s dead, although economically it’s changed, obviously, but I don’t think it’s going away. I hope we always have it as a choice. Digital’s just one more tool.

Where did you do the post?
We posted in my offices in Austin, as usual, and we began cutting while I shot.

Do you like post?
I love post and all the stuff you can do to shape your film, but I actually feel the most creative in rehearsal and then shooting. That’s when I feel like I’m really making the film. I don’t feel like I’ve ever “found” the film in the edit and post, like some directors do. There’s a certain schematic at work that I’m trying to follow. I know certain kinds of films have to be deconstructed and then reconstructed in post, but mine aren’t like that. It sounds boring but I do all that in advance. I’m a big preparer.

So you’re not the big improv, loose guy people like to think you are?
(Laughs) No, no. I prepare everything. And with experience you just know what you’re going to shoot and actually use.

The film was edited by your longtime collaborator Sandra Adair. Tell us about that relationship and how it works.
She doesn’t need to be on set and we just send dailies and then we talk a bit. I don’t usually shoot more than necessary, as I tend to have pretty limited budgets and schedules. I usually have a fairly good cut done about a month into post. I used to cut my own stuff when I began, like all filmmakers, and then she cut Dazed and Confused for me 24 years ago and we’ve been a team ever since.

Linklater on set.

I think we share the same brain at this point, a certain shorthand; we have great chemistry, and she’s just really good. She can just look at the footage and know what I’m thinking. I don’t have to explain it. The big challenge on this was finding the right balance between all the heavy drama and then the moments of comedy, and keeping that tonal balance all the way down the line. So in the edit you go, “This is a bit dialogue-heavy, let’s cut to the joke,” and “this is redundant,” and so on. The re-writing never stops.

How many visual effects shots are there in the film?
A few hundred, all done by Savage VFX, which has offices in LA and Pittsburgh, Pennsylvania, where we shot — mainly greenscreen, train stuff, compositing, clean-up and so on. Hopefully, you don’t notice them at all.

Can you talk about the importance of music and sound?
It’s always been huge to me, and a couple of the songs — “Not Dark Yet” by Bob Dylan, and “Wide River to Cross” by Levon Helm — are so important. Big choices, with the Levon Helm song at the graveside, and Dylan at the very end. Again, it’s a tonal thing. There aren’t that many songs, but they’re all crucial, like the Eminem song “Without Me” and its humor.

Graham Reynolds, who’s done the score for a lot of my films, composed a beautiful score and I probably used it in places I usually wouldn’t because I felt the story needed it emotionally, and I wanted to give more clues in that area, and carry things through more.

The film has a very bleak look. Talk about the DI and how that process helped?
We did it at Light Iron with colorist Corinne Bogdanowicz (using Quantel Rio), and that bleak, rainy look was baked into the whole thing and started at the conceptual level — “We’re never going to see sunlight.” We’re going to have a lot of rain, grungy locations and a sort of texture and tone that’s fundamental to telling this story. Corinne did a great job, especially in the scenes where nature didn’t give us what we wanted. (From Corinne: “The movie was shot beautifully by Shane Kelly, who conveyed in the DI that the visuals needed to emphasize cool and dark tones.  At the same time, we worked to maintain a naturalistic feel throughout.”)

What’s next?

I’m in the middle of post on my next film, Where’d You Go, Bernadette, a comedy-drama starring Cate Blanchett, out next year. And I have a big TV project that began as a film, a huge, sprawling historical thing. TV is now this really viable medium for filmmakers.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Review: GoPro Fusion 360 camera

By Mike McCarthy

I finally got the opportunity to try out the GoPro Fusion camera I have had my eye on since the company first revealed it in April. The $700 camera uses two offset fish-eye lenses to shoot 360 video and stills, while recording ambisonic audio from four microphones in the waterproof unit. It can shoot a 5K video sphere at 30fps, or a 3K sphere at 60fps for higher motion content at reduced resolution. It records dual 190-degree fish-eye perspectives encoded in H.264 to separate MicroSD cards, with four tracks of audio. The rest of the magic comes in the form of GoPro’s newest application Fusion Studio.

Internally, the unit records dual 45Mb/s H.264 files to two separate MicroSD cards, with accompanying audio and metadata assets. This would be a logistical challenge to deal with manually, copying the cards into folders, sorting and syncing them, stitching them together and dealing with the audio. But with GoPro’s new Fusion Studio app, most of this is taken care of for you. Simply plug in the camera and it will automatically access the footage, and let you preview and select which parts of which clips you want processed into stitched 360 footage or flattened video files.
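Assuming those dual files are recorded at roughly 45Mb/s each (my reading of the spec; actual H.264 bitrate will vary with content), card space for a shoot is easy to estimate:

```python
def recorded_gb(mbps_per_stream, streams, minutes):
    """Approximate recorded data in decimal GB for constant-bitrate streams."""
    megabits = mbps_per_stream * streams * minutes * 60
    return megabits / 8 / 1000  # megabits -> megabytes -> GB

# One hour of Fusion recording across both MicroSD cards
print(round(recorded_gb(45, 2, 60), 1))  # ~40.5 GB total, ~20.25 GB per card
```

In other words, a 32GB card per slot covers well over an hour of 5K recording, before accounting for audio and metadata.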

It also processes the multi-channel audio into ambisonic B-Format tracks, or standard stereo if desired. The app is a bit limited in user-control functionality, but what it does do it does very well. My main complaint is that I can’t find a way to manually set the output filename, but I can rename the exports in Windows once they have been rendered. Trying to process the same source file into multiple outputs is challenging for the same reason.

Setting | Recorded Resolution (Per Lens) | Processed Resolution (Equirectangular)
5Kp30   | 2704×2624                      | 4992×2496
3Kp60   | 1568×1504                      | 2880×1440
Stills  | 3104×3000                      | 5760×2880

With the Samsung Gear 360, I researched five different ways to stitch the footage, because I wasn’t satisfied with the included app. Most of those will also work with Fusion footage, and you can read about those options here, but they aren’t really necessary when you have Fusion Studio.

You can choose between H.264, Cineform or ProRes, your equirectangular output resolution and ambisonic or stereo audio. That gives you pretty much every option you should need to process your footage. There is also a "Beta" option to stabilize your footage, which, once I got used to it, I really liked. It should be thought of more as a "remove rotation" option, since it's not for stabilizing out sharp motions — which still leave motion blur — but for maintaining the viewer's perspective even if the camera rotates in unexpected ways. Processing was about 6x run-time on my Lenovo ThinkPad P71 laptop, so a 10-minute clip would take an hour to stitch to 360.

The footage itself looks good, higher quality than my Gear 360, and the 60p stuff is much smoother, which is to be expected. While good VR experiences require 90fps to be rendered to the display to avoid motion sickness, that does not necessarily mean that 30fps content is a problem. When rendering the viewer's perspective, the same source frame can be sampled three times, shifting the image as the viewer moves their head. That said, 60p source content does give smoother results than the 30p footage I am used to watching in VR, but 60p did give me more issues during editorial. I had to disable CUDA acceleration in Adobe Premiere Pro to get Transmit to work with the WMR headset.

Once you have your footage processed in Fusion Studio, it can be edited in Premiere Pro — like any other 360 footage — but the audio can be handled a bit differently. Exporting as stereo will follow the usual workflow, but selecting ambisonic will give you a special spatially aware audio file. Premiere can use this in a 4-track multi-channel sequence to line up the spatial audio with the direction you are looking in VR, and if exported correctly, YouTube can do the same thing for your viewers.

In the Trees
Most GoPro products are intended for use capturing action moments and unusual situations in extreme environments (which is why they are waterproof and fairly resilient), so I wanted to study the camera in its “native habitat.” The most extreme thing I do these days is work on ropes courses, high up in trees or telephone poles. So I took the camera out to a ropes course that I help out with, curious to see how the recording at height would translate into the 360 video experience.

Ropes courses are usually challenging to photograph because of the scale involved. When you are zoomed out far enough to see an entire element, you can't see any detail, and if you are zoomed in close enough to see faces, you have no good concept of how high up they are. 360 photography is helpful here because it is designed to be panned through when viewed flat. This allows you to give the viewer a better sense of the scale, and they can still see the details of the individual elements or the people climbing. And in VR, you get a better feel for the height involved.

I had the Fusion camera and Fusion Grip extendable tripod handle, as well as my Hero6 kit, which included an adhesive helmet mount. Since I was going to be working at heights and didn’t want to drop the camera, the first thing I did was rig up a tether system. A short piece of 2mm cord fit through a slot in the bottom of the center post and a triple fisherman knot made a secure loop. The cord fit out the bottom of the tripod when it was closed, allowing me to connect it to a shock-absorbing lanyard, which was clipped to my harness. This also allowed me to dangle the camera from a cord for a free-floating perspective. I also stuck the quick release base to my climbing helmet, and was ready to go.

I shot segments in both 30p and 60p, depending on how I had the camera mounted, using higher frame rates for the more dynamic shots. I was worried that the helmet mount would be too close, since GoPro recommends keeping the Fusion at least 20cm away from what it is filming, but the helmet wasn’t too bad. Another inch or two would shrink it significantly from the camera’s perspective, similar to my tripod issue with the Gear 360.

I always climbed up with the camera mounted on my helmet and then switched it to the Fusion Grip to record the guy climbing up behind me and my rappel. Hanging the camera from a cord, even 30 feet below me, worked much better than I expected. It put GoPro's stabilization feature to the test, but it worked fantastically. With the camera rotating freely, the perspective is static, although you can see the seam lines constantly rotating around you. When I am holding the Fusion Grip, the extended pole is completely invisible to the camera, giving you what GoPro has dubbed "Angel View." It is as if the viewer is floating freely next to the subject, especially when viewed in VR.

Because I have ways to view 360 video in VR, and because I don't mind panning around on a flat screen view, I am personally less excited about GoPro's OverCapture functionality, but I recognize it is a useful feature that will greatly extend the use cases for this 360 camera. It is designed for people using the Fusion as a more flexible camera to produce flat content, rather than to produce VR content. I edited together a couple of OverCapture shots intercut with footage from my regular Hero6 to demonstrate how that would work.

Ambisonic Audio
The other new option that Fusion brings to the table is ambisonic audio. Editing ambisonics works in Premiere Pro using a 4-track multi-channel sequence. The main workflow kink is that every time you import a new clip with ambisonic audio, you have to manually override the audio settings to set the channels to Adaptive with a single timeline clip. Turn on Monitor Ambisonics by right-clicking in the monitor panel, and match the Pan, Tilt and Roll in the Panner-Ambisonics effect to the values in your VR Rotate Sphere effect (note that they are listed in a different order); your audio should then match the video perspective.

When exporting an MP4, set Channels to 4.0 in the audio panel and check the Audio Is Ambisonics box. From what I can see, the Fusion Studio conversion process compensates for changes in perspective, including "stabilization," when processing the raw recorded audio for ambisonic exports, so you only have to match changes you make in your Premiere sequence.
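For the curious, the difference between the ambisonic and stereo export options comes down to a decode step: a first-order B-format signal carries an omnidirectional channel plus directional figure-8 channels, and a stereo fold-down is essentially a mid-side matrix on two of them. Here's a minimal illustrative sketch — the channel naming follows the common ambiX convention (W, Y, Z, X), and this is emphatically not GoPro's or Premiere's actual decoder:

```python
def bformat_to_stereo(w, y, side_gain=0.5):
    """Naive stereo fold-down of first-order ambisonics.

    Takes the W (omnidirectional/mid) and Y (left-right figure-8/side)
    channels as lists of samples; the height (Z) and front-back (X)
    channels are simply discarded. Illustrative sketch only -- real
    decoders (GoPro's, Premiere's) are more sophisticated.
    """
    left = [wi + side_gain * yi for wi, yi in zip(w, y)]
    right = [wi - side_gain * yi for wi, yi in zip(w, y)]
    return left, right

# A source panned hard left (positive Y) comes out louder on the left.
w = [1.0, 1.0, 1.0]  # omni component
y = [1.0, 1.0, 1.0]  # fully left
left, right = bformat_to_stereo(w, y)
print(left[0], right[0])  # 1.5 0.5
```

The spatial export keeps all four channels intact instead, which is what lets YouTube rotate the sound field to follow the viewer's head.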

While I could have intercut the footage at both settings into a single 5Kp60 timeline, I ended up creating two separate 360 videos. This also makes it clear to the viewer which shots were recorded at 5Kp30 and which at 3Kp60. They are both available on YouTube, and I recommend watching them in VR for the full effect. But be warned that they were recorded at heights of up to 80 feet, so they may be uncomfortable for some people to watch.

Summing Up
GoPro's Fusion camera is not the first 360 camera on the market, but it brings more pixels and higher frame rates than most of its direct competitors, and, more importantly, it has the software package to assist users in the transition to processing 360 video footage. It also supports ambisonic audio and offers the OverCapture functionality for generating more traditional flat GoPro content.

I found it to be easier to mount and shoot with than my earlier 360 camera experiences, and it is far easier to get the footage ready to edit and view using GoPro’s Fusion Studio program. The Stabilize feature totally changes how I shoot 360 videos, giving me much more flexibility in rotating the camera during movements. And most importantly, I am much happier with the resulting footage that I get when shooting with it.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

FilmLight adds colorist Andy Minuth as workflow specialist

FilmLight has hired colorist Andy Minuth as color workflow specialist. Minuth, originally from Germany, was most recently lead colorist at 1000Volt Post Production in Istanbul. There he was responsible for the grade of commercials as well as feature films, and worked on FilmLight Baselight. He brings a deep technical knowledge of image processing and color management to the job, which will have him speaking to fellow colorists worldwide.

“I am looking forward to talking to other creatives around the world, sharing my experience,” he says. “I’m also excited to hear their stories about the productivity of the Baselight Linked Grade (BLG) workflow now that it’s reaching more artists — DITs, editors and compositors — throughout the production process.”

While Minuth will be based in FilmLight’s new office in Munich, he will have a global presence for the company, helping users develop unified color pipelines and enhance skills regardless of location.

“We need to have those conversations in their language, in the language of creative post production,” explains Mark Burton, head of EMEA sales for FilmLight. “That is why it is so valuable for us to add another highly experienced, highly regarded colorist to our team.”

Mercy Christmas director offers advice for indie filmmakers

By Ryan Nelson

After graduating from film school at The University of North Carolina School of the Arts, I was punched in the gut. I had driven into Los Angeles mere hours after the last day of school ready to set Hollywood on fire with my thesis film. But Hollywood didn’t seem to know I’d arrived. A few months later, Hollywood still wasn’t knocking on my door. Desperate to work on film sets and learn the tools of the trade, I took a job as a grip. In hindsight, it was a lucky accident. I spent the next few years watching some of the industry’s most successful filmmakers from just a few feet away.

Like a sponge, I soaked in every aspect of filmmaking that I could from my time on the sets of Avengers, Real Steel, Spider Man 3, Bad Boys 2, Seven Psychopaths, Smokin’ Aces and a slew of Adam Sandler comedies. I spent hours working, watching, learning and judging. How are they blocking the actors in this scene? What sort of cameras are they using? Why did they use that light? When do you move the camera? When is it static? When I saw the finished films in theaters, I ultimately asked myself, did it all work?

During that same time, I wrote and directed a slew of my own short films. I tried many of the same techniques I’d seen on set. Some of those attempts succeeded and some failed.

Recently, the stars finally aligned and I directed my first feature-length film, Mercy Christmas, from a script I co-wrote with my wife Beth Levy Nelson. After five years of writing, fundraising, production and post production, the movie is finished. We made the movie outside the Hollywood system, using crowdfunding, generous friends and loving family members to compile enough cash to make the ultra-low-budget version of the Mercy Christmas screenplay.

I say low budget because, financially, it was. But thanks to my time on set, years of practice and much trial and error, the finished film looks and feels like much more than it cost.

Mercy Christmas, by the way, features Michael Briskett, who meets the perfect woman and his ideal Christmas dream comes true when she invites him to her family’s holiday celebration. Michael’s dream shatters, however, when he realizes that he will be the Christmas dinner. The film is currently on iTunes.

My experience working professionally in the film business while I struggled to get my shot at directing taught me many things. I learned over those years that a mastery of the techniques and equipment used to tell stories for film was imperative.

The stories I gravitate towards tend to have higher concept set pieces. I really enjoy combining action and character. At this point in my career, the budgets are more limited. However, I can’t allow financial restrictions to hold me back from the stories I want to tell. I must always find a way to use the tools available in their best way.

Ryan Nelson with camera on set.

Two Cameras
I remember an early meeting with a possible producer for Mercy Christmas. I told him I was planning to shoot two cameras. The producer chided me, saying it would be a waste of money. Right then, I knew I didn’t want to work with that producer, and I didn’t.

Every project I do now and in the future will be two cameras. And the reason is simple: It would be a waste of money not to use two cameras. On a limited budget, two cameras offer twice the coverage. Yes, understanding how to shoot two cameras is key, but it's also simple to master. Cross coverage is not conducive to lower-budget lighting, so stacking the cameras on a single piece of coverage gives you a medium shot and a close shot at the same time. Or, for instance, when shooting the wide master shot, you can also get a medium master shot to give the editor another option to break away to while building a scene.

In Mercy Christmas, we have a fight scene that consists of seven minutes of screen time. It’s a raucous fight that covers three individual fights happening simultaneously. We scheduled three days to shoot the fight. Without two cameras it would have taken more days to shoot, and we definitely didn’t have more days in the budget.

Of course, two camera rentals and camera crews are budget concerns, so the key is to find a lower-budget but high-quality camera. For Mercy Christmas, we chose the Canon C300 Mark II. We found the image to be fantastic, and I was very happy with the final result. You can also save money by renting only one lens package to use for both cameras.

Editing
Good camera coverage doesn’t mean much without an excellent editor. Our editor for Mercy Christmas, Matt Evans, is a very good friend and also very experienced in post. Like me, Matt started at the bottom and worked his way up. Along the way, he worked on many studio films as apprentice editor, first assistant editor and finally editor. Matt’s preferred tool is Avid Media Composer. He’s incredibly fast and understands every aspect of the system.

Matt’s technical grasp is superb, but his story sense is the real key. Matt’s technique is a fun thing to witness. He approaches a scene by letting the footage tell him what to do on a first pass. Soaking in the performances with each take, Matt finds the story that the images want to tell. It’s almost as if he’s reading a new script based on the images. I am delighted each time I can watch Matt’s first pass on a scene. I always expect to see something I hadn’t anticipated. And it’s a thrill.

Color Grading
Another aspect that should be budgeted into an independent film is professional color grading. No, your editor doing color does not count. A professional post house with a professional color grader is what you need. I know this seems exorbitant for a small-budget indie film, but I’d highly recommend planning for it from the beginning. We budgeted color grading for Mercy Christmas because we knew it would take the look to professional levels.

Color grading is not only a tool for the cinematographer; it's a godsend for the director as well. First and foremost, it can save a shot, making a preferred take that has an inferior look actually become a usable take. Second, I believe strongly that color is another tool for storytelling. An audience can be as moved by color as by music. Every detail coming to the audience is information they'll process to understand the story. I learned very early in my career how shots I saw created on set were accentuated in post by color grading. We used Framework post house in Los Angeles on Mercy Christmas. The colorist was David Sims, who did the color and conform in DaVinci Resolve 12.

In the end, my struggle over the years did gain me one of my best tools: experience. I've taken the time to absorb all the filmmaking I've been surrounded by. Watching movies. Working on sets. Making my own.

After all that time chasing my dream, I kept learning, refining my skills and honing my technique. For me, filmmaking is a passion, a dream and a job. All of those elements made me the storyteller I am today and I wouldn’t change a thing.

Mixing the sounds of history for Marshall

By Jennifer Walden

Director Reginald Hudlin’s courtroom drama Marshall tells the story of Thurgood Marshall (Chadwick Boseman) during his early career as a lawyer. The film centers on a case Marshall took in Connecticut in the early 1940s. He defended a black chauffeur named Joseph Spell (Sterling K. Brown) who was charged with attempted murder and sexual assault of his rich, white employer Eleanor Strubing (Kate Hudson).

At that time, racial discrimination and segregation were widespread even in the North, and Marshall helped to shed light on racial inequality by taking on Spell’s case and making sure he got a fair trial. It’s a landmark court case that is not only of huge historical consequence but is still relevant today.

Mixers Anna Behlmer and Craig Mann

"Marshall is so significant right now with what's happening in the world," says Oscar-nominated re-recording mixer Anna Behlmer, who handled the effects on the film. "It's not often that you get to work on a biographical film of someone who lived and breathed and did amazing things as far as freedom for minorities. Marshall began the NAACP's legal fight and argued Brown v. Board of Education, stopping the segregation of the schools. So, in that respect, I felt the weight and the significance of this film."

Oscar-winning supervising sound editor/re-recording mixer Craig Mann handled the dialogue and music. Behlmer and Mann mixed Marshall in 5.1 surround on a Euphonix System 5 console on Stage 2 at Technicolor at Paramount in Hollywood.

In the film, crowds gather on the steps outside the courthouse — a mixture of supporters and opponents shouting their opinions on the case. When dealing with shouting crowds in a film, Mann likes to record the loop group for those scenes outside. “We recorded in Technicolor’s backlot, which gives a nice slap off all the buildings,” says Mann, who miked the group from two different perspectives to capture the feeling that they’re actually outside. For the close-mic rig, Mann used an L-C-R setup with two Schoeps CMC641s for left and right and a CMIT 5U for center, feeding into a TASCAM HSP-82 8-channel recorder.

“We used the CMIT 5U mic because that was the production boom mic and we knew we’d be intermingling our recordings with the production sound, because they recorded some sound on the courthouse stairs,” says Mann. “We matched that up so that it would anchor everything in the center.”

For the distant rig, Mann went with a Sanken CSS-5 set to record in stereo, feeding a Sound Devices 722. Since they were running two setups simultaneously, Mann says they beeped everyone with a bullhorn to get slate sync for the two rigs. Then to match the timing of the chanting with production sound, they had a playback rig with eight headphone feeds out to chosen leaders from the 20-person loop group. “The people wearing headphones could sync up to the production chanting and those without headphones followed along with the people who had them on.”

Inside the courtroom, the atmosphere is quiet and tense. Mann recorded the loop group (inside the studio this time) reacting as non-verbally as possible. “We wanted to use the people in the gallery as a tool for tension. We do all of that without being too heavy handed, or too hammy,” he says.

Sound Effects
On the effects side, the Foley — provided by Foley artist John Sievert and his team at JRS Productions in Toronto — was a key element in the courtroom scenes. Each chair creak and paper shuffle plays to help emphasize the drama. Behlmer references a quiet scene in which Thurgood is arguing with his other attorney defending the case, Sam Friedman (Josh Gad). “They weren’t arguing with their voices. Instead, they were shuffling papers and shoving things back and forth. The defendant even asks if everything is ok with them. Those sounds helped to convey what was going on without them speaking,” she says.

You can hear the chair creak as Judge Foster (James Cromwell) leans forward and raises an eyebrow and hear people in the gallery shifting in their seats as they listen to difficult testimony or shocking revelations. “Something as simple as people shifting on the bench to underscore how uncomfortable the moment was, those sounds go a long way when you do a film like this,” says Behlmer.

During the testimony, there are flashback sequences that illustrate each person’s perception of what happened during the events in question. The flashback effect is partially created through the picture (the flashbacks are colored differently) and partially through sound. Mann notes that early on, they made the decision to omit most of the sounds during the flashbacks so that the testimony wouldn’t be overshadowed.

“The spoken word was so important,” adds Behlmer. “It was all about clarity, and it was about silence and tension. There were revelations in the courtroom that made people gasp and then there were uncomfortable pauses. There was a delicacy with which this mix had to be done, especially with regards to Foley. When a film is really quiet and delicate and tense, then every little nuance is important.”

Away from the courthouse, the film has a bit of fun. There’s a jazz club scene in which Thurgood and his friends cut loose for the evening. A band and a singer perform on stage to a packed club. The crowd is lively. Men and women are talking and laughing and there’s the sound of glasses clinking. Behlmer mixed the crowds by following the camera movement to reinforce what’s on-screen.

Music
On the music side, Mann’s challenge was to get the brass — the trumpet and trombone — to sit in a space that didn’t interfere too much with the dialogue. On the other hand, Mann still wanted the music to feel exciting. “We had to get the track all jazz-clubbed up. It was about finding a reverb that was believable for the space. It was about putting the vocals and brass upfront and having the drums and bass be accompaniment.”

Having the stems helped Mann to not only mix the music against the dialogue but to also fit the music to the image on-screen. During the performance, the camera is close-up and sweeping along the band. Mann used the music stems to pan the instruments to match the scene. The shot cuts away from the performance to Thurgood and his friends at a table in the back of the club. Using the stems, Mann could duck out of the singer’s vocals and other louder elements to make way for the dialogue. “The music was very dynamic. We had to be careful that it didn’t interfere too much with the dialogue, but at the same time we wanted it to play.”

On the score, Mann used Exponential Audio’s R4 reverb to set the music back into the mix. “I set it back a bit farther than I normally would have just to give it some space, so that I didn’t have to turn it down for dialogue clarity. It got it to shine but it was a little distant compared to what it was intended to be.”

Behlmer and Mann feel the mix was pretty straightforward. Their biggest obstacle was the schedule. The film had to be mixed in just ten days. “I didn’t even have pre-dubs. It was just hang and go. I was hearing everything for the first time when I sat down to mix it — final mix it,” explains Behlmer.

With Mann working the music and dialogue faders, co-supervising sound editor Bruce Tanis was supplying Behlmer with elements she needed during the final mix. “I would say Bruce was my most valuable asset. He’s the MVP of Marshall for the effects side of the board,” she says.

On the dialogue side, Mann says his gear MVP was iZotope RX 6. With so many quiet moments, the dialogue was exposed. It played prominently, without music or busy backgrounds to help hide any flaws. And the director wanted to preserve the on-camera performances so ADR was not an option.

"We tried to use alts to work our way out of a few problems, and we were successful. But there were a few shots in the courtroom that began as tight shots on the boom and then cut wide, so the boom had to pull back and we had to jump onto the lavs," concludes Mann. "Having iZotope to help tie those together, so that the cut was imperceptible, was key."


Jennifer Walden is a NJ-based audio engineer and writer. Follow her on Twitter @audiojeney.