
Storage Roundtable

By Randi Altman

Every year in our special Storage Edition, we poll those who use storage and those who make storage. This year is no different. The users we’ve assembled for our latest offering weigh in on how they purchase gear and how they employ storage and cloud-based solutions. Storage makers talk about what’s to come from them, how AI and ML are affecting their tools, NVMe growth and more.

Enjoy…

Periscope Post & Audio, GM, Ben Benedetti

Periscope Post & Audio is a full-service post company with facilities in Hollywood and Chicago’s Cinespace. Both facilities provide a range of sound and picture finishing services for TV, film, spots, video games and other media.

Ben Benedetti

What types of storage are you using for your workflows?
For our video department, we have a large, high-speed Quantum media array supporting three color bays, two online edit suites, a dailies operation, two VFX suites and a data I/O department. The 15 systems in the video department are connected via 16Gb Fibre Channel.

For our sound department, we are using an Avid Nexis system over Cat 6e Ethernet supporting three Atmos mix stages, two sound design suites, an ADR room and numerous sound-edit bays. All the CPUs in the facility are securely located in two isolated machine rooms (one for video on our second floor and one for audio on the first) and are tied together via an IHSE KVM system, giving us incredible flexibility to move and deliver assets however our creatives and clients need them. We aren’t interested in being the biggest. We just want to provide the best and most reliable services possible.

Cloud versus on-prem – what are the pros and cons?
We are blessed with a robust pipe into our facility in Hollywood and are actively discussing potential cloud-based storage solutions with our engineering staff. We are already using some cloud-based solutions for our building’s security and CCTV systems, as well as for the management of our firewall. But the concept of placing client intellectual property in the cloud sparks some interesting conversations. We always need immediate access to the raw footage and sound recordings of our client productions, so I sincerely doubt we will ever completely rely on a cloud-based solution for the storage of our clients’ original footage. We have many redundancy systems in place to avoid slowdowns in production workflows. This is so critical. Any potential interruption in connectivity that is beyond our control gives me great pause.

How often are you adding or upgrading your storage?
Obviously, we need to be as proactive as we can so that we are never caught unready to take on projects of any size. It involves continually ensuring that our archive system is optimized correctly and requires our data management team to constantly analyze available space and resources.

How do you feel about the use of ML/AI for managing assets?
Any AI or ML automated process that helps us monitor our facility is vital. Technology advancements over the past decade have allowed us to achieve amazing efficiencies. As a result, we can give the creative executives and storytellers we service the time they need to realize their visions.

What role might the different tiers of cloud storage play in the lifecycle of an asset?
As we have facilities in both Chicago and Hollywood, our ability to take advantage of Google cloud-based services for administration has been a real godsend. It’s not glamorous, but it’s extremely important to keeping our facilities running at peak performance.

The level of coordination we have achieved in that regard has been tremendous. Those low-tiered storage systems provide simple and direct solutions to our administrative and accounting needs, but when it comes to the high-performance requirements of our facility’s color bays and audio rooms, we still rely on the high-speed on-premises storage solutions.

For simple archiving purposes, a cloud-based solution might work very well, but for active work currently in production … we are just not ready to make that leap … yet. Of course, given Moore’s Law and the exponential advancement of technology, our position could change rapidly. The important thing is to remain open and willing to embrace change as long as it makes practical sense and never puts your client’s property at risk.

Panasas, Storage Systems Engineer, RW Hawkins

RW Hawkins

Panasas offers a scalable high-performance storage solution. Its PanFS parallel file system, delivered on the ActiveStor appliance, accelerates data access for VFX feature production, Linux-based image processing, VR/AR and game development, and multi-petabyte active media archives.

What kind of storage are you offering, and will that be changing in the coming year?
We just announced that we are now shipping the next generation of the PanFS parallel file system on the ActiveStor Ultra turnkey appliance, which is already in early deployment with five customers.

This new system offers unlimited performance scaling in 4GB/s building blocks. It uses multi-tier intelligent data placement to maximize storage performance by placing metadata on low-latency NVMe SSDs, small files on high IOPS SSDs and large files on high-bandwidth HDDs. The system’s balanced-node architecture optimizes networking, CPU, memory and storage capacity to prevent hot spots and bottlenecks, ensuring high performance regardless of workload. This new architecture will allow us to adapt PanFS to the ever-changing variety of workloads our customers will face over the next several years.
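
As a rough illustration of the idea, here is a generic sketch of such a placement policy in Python. It is not Panasas’ actual PanFS logic; the tier names and the 64KB small-file threshold are invented for the example.

```python
# Generic sketch of multi-tier intelligent data placement. Not Panasas'
# actual PanFS logic; tier names and the threshold are invented.
SMALL_FILE_LIMIT = 64 * 1024  # 64KB cutoff between "small" and "large" files

def choose_tier(kind: str, size_bytes: int) -> str:
    """Route data to the medium best suited to its access pattern."""
    if kind == "metadata":
        return "nvme_ssd"   # low-latency NVMe SSD for metadata
    if size_bytes <= SMALL_FILE_LIMIT:
        return "sata_ssd"   # high-IOPS SSD for small files
    return "hdd"            # high-bandwidth HDD for large streaming files

print(choose_tier("metadata", 512))      # nvme_ssd
print(choose_tier("file", 4 * 1024))     # sata_ssd
print(choose_tier("file", 2 * 1024**3))  # hdd
```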

Are certain storage tiers more suitable for different asset types, workflows, etc.?
Absolutely. However, too many tiers can lead to frustration around complexity, loss of productivity and poor reliability. We take a hybrid approach, whereby each server has multiple types of storage media internal to one server. Using intelligent data placement, we put data on the most appropriate tier automatically. Using this approach, we can often replace a performance tier and a tier two active archive with one cost-effective appliance. Our standard file-based client makes it easy to gateway to an archive tier such as tape or an object store like S3.

What do you see as the big technology trends that can help storage for M&E? ML? AI?
AI/ML is so widespread, it seems to be all encompassing. Media tools will benefit greatly because many of the mundane production tasks will be optimized, allowing for more creative freedom. From a storage perspective, machine learning is really pushing performance in new directions; low latency and metadata performance are becoming more important. Large amounts of unstructured data with rich metadata are the norm, and today’s file systems need to adapt to meet these requirements.

How has NVMe advanced over the past year?
Everyone is taking notice of NVMe; it is easier than ever to build a fast array and connect it to a server. However, there is much more to making a performant storage appliance than just throwing hardware at the problem. My customers are telling me they are excited about this new technology but frustrated by the lack of scalability, the immaturity of the software and the general lack of stability. The proven way to scale is to build a file system on top of these fast boxes and connect them into one large namespace. We will continue to augment our architecture with these new technologies, all the while keeping an eye on maintaining our stability and ease of management.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
Today’s modern NAS can take on all the tasks that historically could only be done with SAN. The main thing holding back traditional NAS has been the client access protocol. With network-attached parallel clients, like Panasas’ DirectFlow, customers get advanced client caching, full POSIX semantics and massive parallelism over standard ethernet.

Regarding cloud, my customers tell me they want all the benefits of cloud (data center consolidation, inexpensive power and cooling, ease of scaling) without the vendor lock-in and metered data access of the “big three” cloud providers. A scalable parallel file system forms the core of a private cloud model that yields the benefits without the drawbacks. File-based access to the namespace will continue to be required for most non-web-based applications.

Goldcrest Post, New York, Technical Director, Ahmed Barbary

Goldcrest Post is an independent post facility, providing solutions for features, episodic TV, docs, and other projects. The company provides editorial offices, on-set dailies, picture finishing, sound editorial, ADR and mixing, and related services.

Ahmed Barbary

What types of storage are you using for your workflows?
Storage performance in the post stage is tremendously demanding. We are using multiple SAN systems across our office locations; they provide centralized storage and easy access to disk arrays, servers and other dedicated playout applications, meeting storage needs throughout all stages of the workflow.

While backup refers to duplicating the content for peace of mind, short-term retention, and recovery, archival signifies transferring the content from the primary storage location to long-term storage to be preserved for weeks, months, and even years to come. Archival storage needs to offer scalability, flexible and sustainable pricing, as well as accessibility for individual users and asset management solutions for future projects.

LTO has been a popular choice for archival storage for decades because it offers affordable, high-capacity solutions for the low-write/high-read workloads typical of cold storage. The increased need for instant access to archived content today, coupled with the slow roll-out of LTO-8, has made tape a less favorable option.

Cloud versus on-prem – what are the pros and cons?
The fact is each option has its positives and negatives, and understanding that and determining how both cloud and on-premises software fit into your organization are vital. So, it’s best to be prepared and create a point-by-point comparison of both choices.

When looking at the pros and cons of cloud vs. on-premises solutions, everything starts with an understanding of how these two models differ. With a cloud deployment, the vendor hosts your information and offers access through a web portal. This enables more mobility and flexibility of use for cloud-based software options. When looking at an on-prem solution, you are committing to local ownership of your data, hardware, and software. Everything is run on machines in your facility with no third-party access.

How often are you adding or upgrading your storage?
We keep track of new technologies and continuously upgrade our systems, but when it comes to storage, it’s a huge expense. When deploying a new system, we do our best to future-proof and ensure that it can be expanded.

How do you feel about the use of ML/AI for managing assets?
For most M&E enterprises, the biggest potential of AI lies in automatic content recognition, which can drive several path-breaking business benefits. For instance, most content owners have thousands of video assets.

Cataloging, managing, processing, and re-purposing this content typically requires extensive manual effort. Advancements in AI and ML algorithms have now made it possible to drastically cut down the time taken to perform many of these tasks. But there is still a lot of work to be done — especially as ML algorithms need to be trained, using the right kind of data and solutions, to achieve accurate results.

What role might the different tiers of cloud storage play in the lifecycle of an asset?
Data sets have unique lifecycles. Early in the lifecycle, people access some data often, but the need for access drops drastically as the data ages. Some data stays idle in the cloud and is rarely accessed once stored. Some data expires days or months after creation, while other data sets are actively read and modified throughout their lifetimes.
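
To make that lifecycle concrete, here is a minimal sketch of how such aging rules can be encoded as cloud tiering policy, using AWS S3 lifecycle configuration via boto3 as one example. The bucket name, prefix and day counts are hypothetical; other providers offer equivalent mechanisms.

```python
# Minimal sketch: encode an asset lifecycle as S3 lifecycle rules.
# Bucket name, prefix and day counts are hypothetical examples.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-media-archive",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "age-out-finished-assets",
            "Filter": {"Prefix": "masters/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # access drops off
                {"Days": 180, "StorageClass": "GLACIER"},     # rarely touched
            ],
            "Expiration": {"Days": 1825},  # delete after ~5 years, if policy allows
        }]
    },
)
```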

Rohde & Schwarz, Product Manager, Storage Solutions, Dirk Thometzek

Rohde & Schwarz offers broadcast and media solutions to help companies grow in media production, management and delivery in the IP and wireless age.

Dirk Thometzek

What kind of storage are you offering, and will that be changing in the coming year?
The industry is constantly changing, so we monitor market developments and key demands closely. We will be adding new features to the R&S SpycerNode in the next few months that will enable our customers to get their creative work done without focusing on complex technologies. The R&S SpycerNode will be extended with JBODs, which will allow seamless integration with our erasure coding technology, guaranteeing complete resilience and performance.

Are certain storage tiers more suitable for different asset types, workflows, etc.?
Each workflow is different, so, consequently, almost no two systems are alike. The real artistry is to tailor storage systems to real requirements without over-provisioning hardware or over-stressing budgets. Using different tiers can be very helpful in building effective systems, but they might introduce additional difficulties into the workflows if the system isn’t properly designed.

Rohde & Schwarz has developed R&S SpycerNode in a way that its performance is linear and predictable. Different tiers are aggregated under a single namespace, and our tools allow seamless workflows while complexity remains transparent to the users.

What do you see as the big technology trends that can help storage for M&E? ML? AI?
Machine learning and artificial intelligence can be helpful to automate certain tasks, but they will not replace human intervention in the short term. It might not be helpful to enrich media with too much data because doing so could result in imprecise queries that return far too much content.

However, clearly defined changes in sequences or recurring objects — such as bugs and logos — can be used as a trigger to initiate certain automated workflows. Certainly, we will see many interesting advances in the future.

How has NVMe advanced over the past year?
NVMe has very interesting aspects. Data rates and reduced latencies are admittedly quite impressive and are garnering a lot of interest. Unfortunately, we do see a trend inside our industry to be blinded by pure performance figures and exaggerated promises without considering hardware quality, life expectancy or proper implementation. Additionally, if well-designed and proven solutions exist that are efficient enough, then it doesn’t make sense to embrace a technology just because it is available.

R&S is dedicated to bringing high-end devices to the M&E market. We think that reliability and performance build the foundation for user-friendly products. Next year, we will update the market on how NVMe can be used in the most efficient way within our products.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
We definitely see a trend away from classic Fibre Channel to Ethernet infrastructures for various reasons. For many years, NAS systems have been replacing central storage systems based on SAN technology for a lot of workflows. Unfortunately, standard NAS technologies will not support all necessary workflows and applications in our industry. Public and private cloud storage systems play an important role in overall concepts, but they can’t fulfil all necessary media production requirements or simplify workflows by default. Plus, when it comes to subscription models, [sometimes there could be unexpected fees]. In fact, we do see quite a few customers returning to their previous services, including on-premises storage systems such as archives.

When it comes to the very high data rates necessary for high-end media productions, NAS will relatively quickly reach its technical limits. Only block-level access can deliver the reliable performance necessary for uncompressed productions at high frame rates.

That does not necessarily mean Fibre Channel is the only solution. The R&S SpycerNode, for example, features a unified 100Gb/s Ethernet backbone, wherein clients and the redundant storage nodes are attached to the same network. This allows the clients to access the storage over industry-leading NAS technology or native block level while enabling true flexibility using state-of-the-art technology.

MTI Film, CEO, Larry Chernoff

Hollywood’s MTI Film is a full-service post facility, providing dailies, editorial, visual effects, color correction, and assembly for film, television, and commercials.

Larry Chernoff

What types of storage are you using for your workflows?
MTI uses a mix of spinning-disk and SSD volumes. Our volumes range from 700TB to 1,000TB and are assigned to projects depending on the volume of expected camera files. The SSD volumes are substantially smaller and are used to play back ultra-large-resolution files when several users need access to the same file.

Cloud versus on-prem — what are the pros and cons?
MTI only uses on-prem storage at the moment due to the real-time, full-resolution nature of our playback requirements. There is certainly a place for cloud-based storage but, as a finishing house, it does not apply to most of our workflows.

How often are you adding or upgrading your storage?
We are constantly adding storage to our facility and have added or replaced storage annually for each of the last five years. We now have approximately 8+ PB, with plans for more in the future.

How do you feel about the use of ML/AI for managing assets?
Sounds like fun!

What role might the different tiers of cloud storage play in the lifecycle of an asset?
For a post house like MTI, we consider cloud storage to be used only for “deep storage” since our bandwidth needs are very high. The amount of Internet connectivity we would require to replicate the workflows we currently have using on-prem storage would be prohibitively expensive for a facility such as MTI. Speed and ease of access is critical to being able to fulfill our customers’ demanding schedules.

OWC, Founder/CEO, Larry O’Connor

Larry O’Connor

OWC offers storage, connectivity, software, and expansion solutions designed to enhance, accelerate, and extend the capabilities of Mac- and PC-based technology. Their products range from the home desktop to the enterprise rack to the audio recording studio to the motion picture set and beyond.

What kind of storage are you offering, and will that be changing in the coming year?
OWC will be expanding our Jupiter line of NAS storage products in 2020 with an all-new external flash-based array. We will also be launching the OWC ThunderBay Flex 8, a three-in-one Thunderbolt 3 storage, docking, and PCIe expansion solution for digital imaging, VFX, video production, and video editing.

Are certain storage tiers more suitable for different asset types, workflows etc?
Yes. SSD and NVMe are better for on-set storage and editing. Once you are finished and looking to archive, HDDs are a better solution for long-term storage.

What do you see as the big technology trends that can help storage for M&E? ML? AI?
We see U.2 SSDs as a trend that can help storage in this space, along with solutions that allow external docking of U.2 drives across different workflow needs.

How has NVMe advanced over the past year?
We have seen NVMe technology become higher in capacity, higher in performance, and substantially lower in power draw. Yet even with all the improving performance, costs are lower today versus 12 months ago.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
I see both still having their place — I can’t speak to whether one will overtake the other. SANs provide other services that typically go hand in hand with M&E needs.

As for cloud, I can see some more cloud coming in, but for M&E on-site needs, it doesn’t come anywhere near the data rates demanded for editing and similar work. Everything independently has its place.

EditShare, VP of Product Management, Sunil Mudholkar

EditShare offers a range of media management solutions, from ingest to archive with a focus on media and entertainment.

Sunil Mudholkar

What kind of storage are you offering and will that be changing in the coming year?
EditShare currently offers RAID- and SSD-based storage, along with our nearline SATA HDD-based storage. We are on track to deliver NVMe- and cloud-based solutions in the first half of 2020. The latest major upgrade of our file system and management console, EFS2020, enables us to migrate to emerging technologies, including cloud deployment and NVMe hardware.

EFS can manage and use multiple storage pools, enabling clients to use the most cost-effective tiered storage for their production, all while keeping that single namespace.

Are certain storage tiers more suitable for different asset types, workflows etc?
Absolutely. It’s clearly financially advantageous to have varying performance tiers of storage in line with the workflows the business requires. This also extends to the cloud, where we are seeing public cloud-based solutions augment or replace both high-performance and long-term storage needs. Tiered storage enables clients to be at their most cost-effective by including parking storage and cloud storage for DR, while keeping SSD and NVMe storage ready and primed for their high-end production.

What do you see as the big technology trends that can help storage for M&E? ML? AI?
AI and ML offer a clear advantage for storage in areas like algorithms designed to automatically move content between storage tiers to optimize costs; this has been commonplace on the distribution side of the ecosystem for a long time with CDNs. ML and AI also have a great ability to impact the opex side of asset management and metadata by helping to automate very manual, repetitive data-entry tasks through audio and image recognition, for example.

AI can also assist by removing mundane human-centric repetitive tasks, such as logging incoming content. AI can assist with the growing issue of unstructured and unmanaged storage pools, enabling the automatic scanning and indexing of every piece of content located on a storage pool.

How has NVMe advanced over the past year?
Like any other storage medium, when NVMe was first introduced there were limited use cases that made sense financially, and only a certain few could afford to deploy it. As the technology scales, form factors change and pricing becomes more competitive and in line with other storage options, it becomes more mainstream. This is what we are starting to see with NVMe.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
Yes, NAS has overtaken SAN. It’s easier technology to deal with — this is fairly well acknowledged. It’s also easier to find people/talent with experience in NAS. Cloud will start to replace more NAS workflows in 2020, as we are already seeing today. For example, our ACL media spaces project options within our management console were designed for SAN clients migrating to NAS. They liked the granular detail that SAN offered, but wanted to migrate to NAS. EditShare’s ACL enables them to work like a SAN but in a NAS environment.

Zoic Studios, CTO, Saker Klippsten

Zoic Studios is an Emmy-winning VFX company based in Culver City, California, with sister offices in Vancouver and NYC. It creates computer-generated special effects for commercials, films, television and video games.

Saker Klippsten

What types of projects are you working on?
We work on a range of projects for series, film, commercial and interactive games (VR/AR). Most of the live-action projects are mixed with CG/VFX and some full-CG animated shots. In addition, there is typically some form of particle or fluid effects simulation going on, such as clouds, water, fire, destruction or other surreal effects.

What types of storage are you using for those workflows?
Cryogen – Off-the-shelf tape/disk/chip. Access time: more than a day. Mostly tape-based and completely offline, which requires human intervention to load tapes or restore from drives.
Freezing – Tape robot library. Access time: less than half a day. Tape-based and in the robot; no human intervention required.
Cold – Spinning disk. Access time: slow (online). Disaster recovery and long-term archiving.
Warm – Spinning disk. Access time: medium (online). Data that still needs to be accessed promptly and transferred quickly (asset depot).
Hot – Chip-based. Access time: fast (online). Generic SSD active production storage.
Blazing – Chip-based. Access time: uber-fast (online). Dedicated NVMe storage for 4K and 8K playback, databases and specific simulation workflows.
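
As a rough sketch of how a ladder like this might be driven programmatically, the following Python maps an asset’s idle time to a tier. The tier names follow the list above; the idle-time thresholds are invented for illustration.

```python
# Sketch: demote assets down Zoic-style tiers by last-access age.
# Tier names follow the list above; thresholds are invented.
from datetime import timedelta

TIERS = [  # (tier, maximum idle time before moving down a rung)
    ("blazing",  timedelta(days=7)),
    ("hot",      timedelta(days=30)),
    ("warm",     timedelta(days=90)),
    ("cold",     timedelta(days=365)),
    ("freezing", timedelta(days=3 * 365)),
    ("cryogen",  None),  # offline tape/disk: no further demotion
]

def tier_for(idle: timedelta) -> str:
    """Return the highest tier whose idle window still covers this asset."""
    for tier, max_idle in TIERS:
        if max_idle is None or idle <= max_idle:
            return tier

print(tier_for(timedelta(hours=4)))   # blazing
print(tier_for(timedelta(days=45)))   # warm
print(tier_for(timedelta(days=400)))  # freezing
```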

Cloud versus on-prem – what are the pros and cons?
The great debate! I tend not to look at it as pro versus con but as a question of where you are as a company. Many factors are involved; there is no one size that fits all, despite what many are led to believe, and neither cloud nor on-prem alone can solve all your workflow and business challenges.

Cinemax’s Warrior (Credit: HBO/David Bloomer)

There are workflows that are greatly suited to the cloud and others that are potentially cost-prohibitive for a number of reasons, such as the size of the data set being generated. Dynamics cache simulations are a good example; they can quickly generate tens or sometimes hundreds of terabytes, and if the workflow requires you to transfer that data on-premises for review, it can take a very long time. Other workflows, such as 3D CG-generated data, can take better advantage of the cloud. They typically have small source-file payloads that need to be uploaded and then only require final frames to be downloaded, which is much more manageable. Depending on the size of your company and the level of technical people on hand, the cloud can be as much a problem as a solution.

What triggers buying more storage in your shop?
Storage tends to be one of the largest and most significant purchases at many companies. End users do not have a clear concept of what happens at the other end of the wire from their workstation.

All they know is that there is never enough storage and it’s never fast enough. Not investing in the right storage can not only be detrimental to the delivery and production of a show, but also to the mental focus and health of the end users. If artists are constantly having to stop and clean up/delete, it takes them out of their creative rhythm and slows down task completion.

If the storage is not performing properly and is slow, this will not only have an impact on delivery, but the end user might be afraid they are being perceived as being slow. So what goes into buying more storage? What type of impact will buying more storage have on the various workflows and pipelines? Remember, if you are a mature company you are buying 2TB of storage for every 1TB required for DR purposes, so you have a complete up-to-the-hour backup.

Do you see ML/AI as important to your content strategy?
We have been using various layers of ML and heuristics sprinkled throughout our content workflows and pipelines. As an example, we look at the storage platforms we use to understand what’s on our storage, how and when it’s being used, what it’s being used for and how it’s being accessed. We look at the content to see what it contains and its characteristics. What are the overall costs to create that content? What insights can we learn from it for similarly created content? How can we reuse assets to be more efficient?

Dell Technologies, CTO, Media & Entertainment, Thomas Burns

Thomas Burns

Dell offers technologies across workstations, displays, servers, storage, networking and VMware, and partnerships with key media software vendors to provide media professionals the tools to deliver powerful stories, faster.

What kind of storage are you offering, and will that be changing in the coming year?
Dell Technologies offers a complete range of storage solutions from Isilon all-flash and disk-based scale-out NAS to our object storage, ECS, which is available as an appliance or a software-defined solution on commodity hardware. We have also developed and open-sourced Pravega, a new storage type for streaming data (e.g. IoT and other edge workloads), and continue to innovate in file, object and streaming solutions with software-defined and flexible consumption models.

Are certain storage tiers more suitable for different asset types, workflows etc?
Intelligent tiering is crucial to building a post and VFX pipeline. Today’s global pipelines must include software that distinguishes between hot data on the fastest tier and cold or versioned data on less performant tiers, especially in globally distributed workflows. Bringing applications to the media rather than unnecessarily moving media into a processing silo is the key to an efficient production.

What do you see as the big technology trends that can help storage for M&E? ML? AI?
New developments in storage class memory (SCM) — including the use of carbon nanotubes to create a nonvolatile, standalone memory product with speeds rivaling DRAM without needing battery backup — have the potential to speed up media workflows and eliminate AI/ML bottlenecks. New protocols such as NVMe allow much deeper I/O queues, overcoming today’s bus bandwidth limits.

GPUDirect enables direct paths between GPUs and network storage, bypassing the CPU for lower latency access to GPU compute — desirable for both M&E and AI/ML applications. Ethernet mesh, a.k.a. Leaf/Spine topologies, allow storage networks to scale more flexibly than ever before.

How has NVMe advanced over the past year?
Advances in I/O virtualization make NVMe useful in hyper-converged infrastructure, by allowing different virtual machines (VMs) to share a single PCIe hardware interface. Taking advantage of multi-stream writes, along with vGPUs and vNICs, allows talent to operate more flexibly as creative workstations start to become virtualized.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
IP networks scale much better than any other protocol, so NAS allows on-premises workloads to be managed more efficiently than SAN. Object stores (the basic storage type for cloud services) support elastic workloads extremely well and will continue to be an integral part of public, hybrid and private cloud media workflows.

ATTO, Manager, Products Group, Peter Donnelly

ATTO network and storage connectivity products are purpose-made to support all phases of media production, from ingest to final archiving. ATTO offers an ecosystem of high-performance connectivity adapters, network interface cards and proprietary software.

Peter Donnelly

What kind of storage are you offering, and will that be changing in the coming year?
ATTO designs and manufactures storage connectivity products, and although we don’t manufacture storage, we are a critical part of the storage ecosystem. We regularly work with our customers to find the best solutions to their storage workflow and performance challenges.

ATTO designs products that use a wide variety of storage protocols. SAS, SATA, Fibre Channel, Ethernet and Thunderbolt are all part of our core technology portfolio. We’re starting to see more interest in NVMe solutions. While NVMe has already seen some solid growth as an “inside-the-box” storage solution, scalability, cost and limited management capabilities continue to limit its adoption as an external storage solution.

Data protection is still an important criterion in every data center. We are seeing a shift from traditional hardware RAID and parity RAID to software RAID and parity code implementations. Disk capacity has grown so quickly that it can take days to rebuild a RAID group with hardware controllers. Instead, we see our customers taking advantage of rapidly dropping storage prices and using faster, reliable software RAID implementations with basic HBA hardware.
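
As a rough worked example with assumed figures: rebuilding a single 16TB drive at a sustained 200MB/s takes 16,000,000MB ÷ 200MB/s = 80,000 seconds, a little over 22 hours, and that is before parity computation or competing production I/O stretches it further.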

How has NVMe advanced over the past year?
For inside-the-box storage needs, we have absolutely seen adoption skyrocket. It’s hard to beat the price-to-performance ratio of NVMe drives for system boot, application caching and similar use cases.

ATTO is working independently and with our ecosystem partners to bring those same benefits to shared, networked storage systems. Protocols such as NVMe-oF and FC-NVMe are enabling technologies that are starting to mature, and we see these getting further attention in the coming year.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
We see customers looking for ways to more effectively share storage resources. Acquisition and ongoing support costs, as well as the ability to leverage existing technical skills, seem to be important factors pulling people toward Ethernet-based solutions.

However, there is no free lunch, and these same customers aren’t able to compromise on performance and latency concerns, which are important reasons why they used SANs in the first place. So there’s a lot of uncertainty in the market today. Since we design and market products in both the NAS and SAN spaces, we spend a lot of time talking with our customers about their priorities so that we can help them pick the solutions that best fit their needs.

Masstech, CTO, Mike Palmer

Masstech creates intelligent storage and asset lifecycle management solutions for the media and entertainment industry, focusing on broadcast and video content storage management with IT technologies.

Mike Palmer

What kind of storage are you offering, and will that be changing in the coming year?
Masstech products are used to manage a combination of any or all of today’s storage technologies, from on-prem disk and tape to object stores and cloud services. Masstech allows content to move without friction across and through all of these technologies, most often using automated workflows and unified interfaces that hide the complexity otherwise required to directly manage content across so many different types of storage.

Are certain storage tiers more suitable for different asset types, workflows, etc.?
One of the benefits of having such a wide range of storage technologies to choose from is that we have the flexibility to match application requirements with the optimum performance characteristics of different storage technologies in each step of the lifecycle. Users now expect that content will automatically move to storage with the optimal combination of speed and price as it progresses through workflow.

In the past, HSM was designed to handle this task for on-prem storage. The challenge is much wider now with the addition of a plethora of storage technologies and services. Rather than moving between just two or three tiers of on-prem storage, content now often needs to flow through a hybrid environment of on-prem and cloud storage, often involving multiple cloud services, each with three or four sub-tiers. Making that happen in a seamless way, both to users and to integrated MAMs and PAMs, is what we do.

What do you see as the big technology trends that can help storage for M&E?
Cloud storage pricing continues to drop, alongside advances in storage density in both spinning disk and solid state. These trends are interrelated and have the general effect of lowering costs for the end user. For those whose business requirements drive on-prem storage, the availability of higher-density tape and optical disks is enabling petabytes of very efficient cold storage in less space than a single rack.

How has NVMe advanced over the past year?
In addition to the obvious application of making media available more quickly, the greatest value of NVMe within M&E may be found in enabling faster search of both structured and unstructured metadata associated with media. Yes, we need faster access to media, but in many cases we must first find the media before it can be accessed. NVMe can make that search experience, particularly for large libraries, federated data sets and media lakes, lightning quick.

Do you see NAS overtaking SAN for larger workgroups? How about cloud taking on some of what NAS used to do?
Just as AWS, Azure and Wasabi, among other large players, have replaced many instances of on-prem NAS, so have Box, Dropbox, Google Drive and iCloud replaced many (but not all) of the USB drives gathering dust in the bottom of desk drawers. As NAS is built on top of faster and faster performing technologies, it is also beginning to put additional pressure on SAN – particularly for users who are sensitive to price and the amount of administration required.

Backblaze, Director of Product Marketing, M&E, Skip Levens

Backblaze offers easy-to-use cloud backup, archive and storage services. With over 12 years of experience and more than 800 petabytes of customer data under management, Backblaze offers cloud storage to anyone looking to create, distribute and preserve their content forever.

What kind of storage are you offering and will that be changing in the coming year?
At Backblaze, we offer a single class, or tier, of storage where everything’s active and immediately available wherever you need it, and it’s protected better than it would be on spinning disk or RAID systems.

Skip Levens

Are certain storage tiers more suitable for different asset types, workflows, etc?
Absolutely. For example, animators need different storage than a team of editors all editing a 4K project at the same time. And keeping your entire content library on your shared storage could get expensive indeed.

We’ve found that users can give up all that unneeded complexity and cost that gets in the way of creating content in two steps:
– Step one is getting off the “shared storage expansion treadmill” and buying just enough on-site shared storage to fit your team. If you’re delivering a TV show every week and need a SAN, make it just large enough for your work in process and no larger.

– Step two is to get all of your content into active cloud storage. This not only frees up space on your shared storage but makes all of your content highly protected and highly available at the same time. Since most of your team probably uses a MAM to find and discover content, the storage that assets actually live on is completely transparent.

Now life gets very simple for creative support teams managing that workflow: your shared storage stays fast and lean, and you can stop paying for storage that doesn’t fit that model. This could include getting rid of LTO, big JBODs or anything with a limited warranty and a maintenance contract.

What do you see as the big technology trends that can help storage for M&E?
For shooters and on-set data wranglers, the new class of ultra-fast flash drives dramatically speeds up collecting massive files with extremely high resolution. Of course, raw content isn’t safe until it’s ingested, so even after moving shots to two sets of external drives or a RAID cart, we’re seeing cloud archive on ingest. Uploading files from a remote location, before you get all the way back to the editing suite, unlocks a lot of speed and collaboration advantages — the content is protected faster, and your ingest tools can start making proxy versions that everyone can start working on, such as grading, commenting, even rough cuts.

We’re also seeing cloud-delivered workflow applications. The days of buying and maintaining a server and storage in your shop to run an application may seem old-fashioned, especially when that entire experience can now be delivered from the cloud, on demand.

Iconik, for example, is a complete, personalized deployment of a project collaboration, asset review and management tool – but it lives entirely in the cloud. When you log in, your app springs to life instantly in the cloud, so you only pay for the application when you actually use it. Users just want to get their creative work done and can’t tell it isn’t a traditional asset manager.

How has NVMe advanced over the past year?
NVMe means flash storage can completely ditch legacy storage controllers like the ones on traditional SATA hard drives. When you can fit 2TB of storage on a stick that’s only 22 millimeters by 80 millimeters — not much larger than a stick of gum — and it’s 20 times faster than an external spinning hard drive while drawing only about 3.5 watts, that’s a game changer for data wrangling and camera-cart offload right now.

And that’s on PCIe 3. The PCI Express standard is evolving faster and faster, too: PCIe 4 motherboards are starting to come online now, PCIe 5 was finalized in May, and PCIe 6 is already in development. When every generation doubles the available bandwidth that can feed that NVMe storage, the future is very, very bright for NVMe.
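
For rough context, the per-lane numbers behind that doubling: PCIe 3.0 delivers about 1GB/s per lane, so a typical x4 NVMe drive tops out near 4GB/s; PCIe 4.0 doubles that to roughly 8GB/s, and PCIe 5.0 to roughly 16GB/s over the same four lanes.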

Do you see NAS overtaking SAN for larger workgroups? How about cloud taking on some of what NAS used to do?
For users who work in widely distributed teams, the cloud is absolutely eating NAS. When the solution driving your team’s projects and collaboration is the dashboard and focus of the team — and active cloud storage seamlessly supports all of the content underneath — it no longer needs to be on a NAS.

But for large teams that do fast-paced editing and creation, the answer to “what is the best shared storage for our team” is still usually a SAN, or tightly-coupled, high-performance NAS.

Either way, by moving content and project archives to the cloud, you can keep SAN and NAS costs in check and have a more productive workflow, and more opportunities to use all that content for new projects.

Creative Outpost buys Dolby-certified studios, takes on long-form

After acquiring the studio assets from now-closed Angell Sound, commercial audio house Creative Outpost is now expanding its VFX and audio offerings by entering the world of long-form audio. Already in picture post on its first Netflix series, the company is now open for long-form ADR, mix and review bookings.

“Space is at a premium in central Soho, so we’re extremely privileged to have been able to acquire four studios with large booths that can accommodate crowd sessions,” say Creative Outpost co-founders Quentin Olszewski and Danny Etherington. “Our new friends in the ADR world have been super helpful in getting the word out into the wider community, having seen the size, build quality and location of our Wardour Street studios and how they’ll meet the demands of the growing long-form SVOD market.”

With the Angell Sound assets in place, the team at Creative Outpost has completed a number of joint picture and sound projects for online and TV. Focusing two of its four studios primarily on advertising work, Creative Outpost has provided sound design and mix on campaigns including Barclays’ “Team Talk,” Virgin Mobile’s “Sounds Good,” Icee’s “Swizzle, Fizzle, Freshy, Freeze,” Green Flag’s “Who The Fudge Are Green Flag,” Santander’s “Antandec” and Coca-Cola’s “Coaches.” Now the team’s ambition is to apply its experience from the commercial world to long-form broadcast and feature work. Its Dolby-approved studios were built by studio architect Roger D’Arcy.

The studios are running Avid Pro Tools Ultimate, Avid hardware controllers and Neumann U87 microphones. They are also set up for long-form/ADR work with EdiCue and EdiPrompt, Source-Connect Pro and ISDN capabilities, and Sennheiser MKH 416 and DPA D:screet microphones.

“It’s an exciting opportunity to join Creative Outpost with the aim of helping them grow the audio side of the company,” says Dave Robinson, head of sound at Creative Outpost. “Along with Tom Lane — an extremely talented fellow ex-Angell engineer — we have spent the last few months putting together a decent body of work to build upon, and things are really starting to take off. As well as continuing to build our core short-form audio work, we are developing our long-form ADR and mix capabilities and have a few other exciting projects in the pipeline. It’s great to be working with a friendly, talented bunch of people, and I look forward to what lies ahead.”

 


Video: The Irishman’s focused and intimate sound mixing

Martin Scorsese’s The Irishman, starring Robert De Niro, Al Pacino and Joe Pesci, tells the story of organized crime in post-war America as seen through the eyes of World War II veteran Frank Sheeran (De Niro), a hustler and hitman who worked alongside some of the most notorious figures of the 20th century. In the film, the actors have famously been de-aged, thanks to VFX house ILM, but it wasn’t just their faces that needed to be younger.

In this video interview, Academy Award-winning re-recording sound mixer and decades-long Scorsese collaborator Tom Fleischman — who will receive the Cinema Audio Society’s Career Achievement Award in January — talks about de-aging actors’ voices as well as the challenges of keeping the film’s sound focused and intimate.

“We really had to try and preserve the quality of their voices in spite of the fact we were trying to make them sound younger. And those edits are sometimes difficult to achieve without it being apparent to the audience. We tried to do various types of pitch changing, and we used different kinds of plugins. I listened to scenes from Serpico for Al Pacino and The King of Comedy for Bob De Niro and tried to match the voice quality of what we had from The Irishman to those earlier movies.”

Fleischman worked on the film at New York’s Soundtrack.

Enjoy the video:


2019 HPA Award winners announced

The industry came together on November 21 in Los Angeles to celebrate its own at the 14th annual HPA Awards. Awards were given to individuals and teams working in 12 creative craft categories, recognizing outstanding contributions to color grading, sound, editing and visual effects for commercials, television and feature film.

Rob Legato receiving Lifetime Achievement Award from presenter Mike Kanfer. (Photo by Ryan Miller/Capture Imaging)

As was previously announced, renowned visual effects supervisor and creative Robert Legato, ASC, was honored with this year’s HPA Lifetime Achievement Award; Peter Jackson’s They Shall Not Grow Old was presented with the HPA Judges Award for Creativity and Innovation; acclaimed journalist Peter Caranicas was the recipient of the very first HPA Legacy Award; and special awards were presented for Engineering Excellence.

The winners of the 2019 HPA Awards are:

Outstanding Color Grading – Theatrical Feature

WINNER: “Spider-Man: Into the Spider-Verse”
Natasha Leonnet // Efilm

“First Man”
Natasha Leonnet // Efilm

“Roma”
Steven J. Scott // Technicolor

Natasha Leonnet (Photo by Ryan Miller/Capture Imaging)

“Green Book”
Walter Volpatto // FotoKem

“The Nutcracker and the Four Realms”
Tom Poole // Company 3

“Us”
Michael Hatzer // Technicolor

 

Outstanding Color Grading – Episodic or Non-theatrical Feature

WINNER: “Game of Thrones – Winterfell”
Joe Finley // Sim, Los Angeles

 “The Handmaid’s Tale – Liars”
Bill Ferwerda // Deluxe Toronto

“The Marvelous Mrs. Maisel – Vote for Kennedy, Vote for Kennedy”
Steven Bodner // Light Iron

“I Am the Night – Pilot”
Stefan Sonnenfeld // Company 3

“Gotham – Legend of the Dark Knight: The Trial of Jim Gordon”
Paul Westerbeck // Picture Shop

“The Man in The High Castle – Jahr Null”
Roy Vasich // Technicolor

 

Outstanding Color Grading – Commercial  

WINNER: Hennessy X.O. – “The Seven Worlds”
Stephen Nakamura // Company 3

Zara – “Woman Campaign Spring Summer 2019”
Tim Masick // Company 3

Tiffany & Co. – “Believe in Dreams: A Tiffany Holiday”
James Tillett // Moving Picture Company

Palms Casino – “Unstatus Quo”
Ricky Gausis // Moving Picture Company

Audi – “Cashew”
Tom Poole // Company 3

 

Outstanding Editing – Theatrical Feature

Once Upon a Time… in Hollywood

WINNER: “Once Upon a Time… in Hollywood”
Fred Raskin, ACE

“Green Book”
Patrick J. Don Vito, ACE

“Rolling Thunder Revue: A Bob Dylan Story by Martin Scorsese”
David Tedeschi, Damian Rodriguez

“The Other Side of the Wind”
Orson Welles, Bob Murawski, ACE

“A Star Is Born”
Jay Cassidy, ACE

 

Outstanding Editing – Episodic or Non-theatrical Feature (30 Minutes and Under)

VEEP

WINNER: “Veep – Pledge”
Roger Nygard, ACE

“Russian Doll – The Way Out”
Todd Downing

“Homecoming – Redwood”
Rosanne Tan, ACE

“Withorwithout”
Jake Shaver, Shannon Albrink // Therapy Studios

“Russian Doll – Ariadne”
Laura Weinberg

 

Outstanding Editing – Episodic or Non-theatrical Feature (Over 30 Minutes)

WINNER: “Stranger Things – Chapter Eight: The Battle of Starcourt”
Dean Zimmerman, ACE, Katheryn Naranjo

“Chernobyl – Vichnaya Pamyat”
Simon Smith, Jinx Godfrey // Sister Pictures

“Game of Thrones – The Iron Throne”
Katie Weiland, ACE

“Game of Thrones – The Long Night”
Tim Porter, ACE

“The Bodyguard – Episode One”
Steve Singleton

 

Outstanding Sound – Theatrical Feature

WINNER: “Godzilla: King of the Monsters”
Tim LeBlanc, Tom Ozanich, MPSE // Warner Bros.
Erik Aadahl, MPSE, Nancy Nugent, MPSE, Jason W. Jennings // E Squared

“Shazam!”
Michael Keller, Kevin O’Connell // Warner Bros.
Bill R. Dean, MPSE, Erick Ocampo, Kelly Oxford, MPSE // Technicolor

“Smallfoot”
Michael Babcock, David E. Fluhr, CAS, Jeff Sawyer, Chris Diebold, Harrison Meyle // Warner Bros.

“Roma”
Skip Lievsay, Sergio Diaz, Craig Henighan, Carlos Honc, Ruy Garcia, MPSE, Caleb Townsend

“Aquaman”
Tim LeBlanc // Warner Bros.
Peter Brown, Joe Dzuban, Stephen P. Robinson, MPSE, Eliot Connors, MPSE // Formosa Group

 

Outstanding Sound – Episodic or Non-theatrical Feature

WINNER: “The Haunting of Hill House – Two Storms”
Trevor Gates, MPSE, Jason Dotts, Jonathan Wales, Paul Knox, Walter Spencer // Formosa Group

“Chernobyl – 1:23:45”
Stefan Henrix, Stuart Hilliker, Joe Beal, Michael Maroussas, Harry Barnes // Boom Post

“Deadwood: The Movie”
John W. Cook II, Bill Freesh, Mandell Winter, MPSE, Daniel Colman, MPSE, Ben Cook, MPSE, Micha Liberman // NBC Universal

“Game of Thrones – The Bells”
Tim Kimmel, MPSE, Onnalee Blank, CAS, Mathew Waters, CAS, Paula Fairfield, David Klotz

“Homecoming – Protocol”
John W. Cook II, Bill Freesh, Kevin Buchholz, Jeff A. Pitts, Ben Zales, Polly McKinnon // NBC Universal

 

Outstanding Sound – Commercial 

WINNER: John Lewis & Partners – “Bohemian Rhapsody”
Mark Hills, Anthony Moore // Factory

Audi – “Life”
Doobie White // Therapy Studios

Leonard Cheshire Disability – “Together Unstoppable”
Mark Hills // Factory

New York Times – “The Truth Is Worth It: Fearlessness”
Aaron Reynolds // Wave Studios NY

John Lewis & Partners – “The Boy and the Piano”
Anthony Moore // Factory

 

Outstanding Visual Effects – Theatrical Feature

WINNER: “The Lion King”
Robert Legato
Andrew R. Jones
Adam Valdez, Elliot Newman, Audrey Ferrara // MPC Film
Tom Peitzman // T&C Productions

“Avengers: Endgame”
Matt Aitken, Marvyn Young, Sidney Kombo-Kintombo, Sean Walker, David Conley // Weta Digital

“Spider-Man: Far From Home”
Alexis Wajsbrot, Sylvain Degrotte, Nathan McConnel, Stephen Kennedy, Jonathan Opgenhaffen // Framestore

“Alita: Battle Angel”
Eric Saindon, Michael Cozens, Dejan Momcilovic, Mark Haenga, Kevin Sherwood // Weta Digital

“Pokemon Detective Pikachu”
Jonathan Fawkner, Carlos Monzon, Gavin Mckenzie, Fabio Zangla, Dale Newton // Framestore

 

Outstanding Visual Effects – Episodic (Under 13 Episodes) or Non-theatrical Feature

Game of Thrones

WINNER: “Game of Thrones – The Bells”
Steve Kullback, Joe Bauer, Ted Rae
Mohsen Mousavi // Scanline
Thomas Schelesny // Image Engine

“Game of Thrones – The Long Night”
Martin Hill, Nicky Muir, Mike Perry, Mark Richardson, Darren Christie // Weta Digital

“The Umbrella Academy – The White Violin”
Everett Burrell, Misato Shinohara, Chris White, Jeff Campbell, Sebastien Bergeron

“The Man in the High Castle – Jahr Null”
Lawson Deming, Cory Jamieson, Casi Blume, Nick Chamberlain, William Parker, Saber Jlassi, Chris Parks // Barnstorm VFX

“Chernobyl – 1:23:45”
Lindsay McFarlane
Max Dennison, Clare Cheetham, Steven Godfrey, Luke Letkey // DNEG

 

Outstanding Visual Effects – Episodic (Over 13 Episodes)

Team from The Orville – Outstanding VFX, Episodic, Over 13 Episodes (Photo by Ryan Miller/Capture Imaging)

WINNER: “The Orville – Identity: Part II”
Tommy Tran, Kevin Lingenfelser, Joseph Vincent Pike // FuseFX
Brandon Fayette, Brooke Noska // Twentieth Century FOX TV

“Hawaii Five-O – Ke iho mai nei ko luna”
Thomas Connors, Anthony Davis, Chad Schott, Gary Lopez, Adam Avitabile // Picture Shop

“9-1-1 – 7.1”
Jon Massey, Tony Pirzadeh, Brigitte Bourque, Gavin Whelan, Kwon Choi // FuseFX

“Star Trek: Discovery – Such Sweet Sorrow Part 2”
Jason Zimmerman, Ante Dekovic, Aleksandra Kochoska, Charles Collyer, Alexander Wood // CBS Television Studios

“The Flash – King Shark vs. Gorilla Grodd”
Armen V. Kevorkian, Joshua Spivack, Andranik Taranyan, Shirak Agresta, Jason Shulman // Encore VFX

The 2019 HPA Engineering Excellence Awards were presented to:

Adobe – Content-Aware Fill for Video in Adobe After Effects

Epic Games — Unreal Engine 4

Pixelworks — TrueCut Motion

Portrait Displays and LG Electronics — CalMan LUT based Auto-Calibration Integration with LG OLED TVs

Honorable Mentions were awarded to Ambidio for Ambidio Looking Glass, Grass Valley for creative grading, and Netflix for Photon.


Review: Nugen Audio’s VisLM2 loudness meter plugin

By Ron DiCesare

In 2010, President Obama signed the CALM Act (Commercial Advertisement Loudness Mitigation), regulating the audio levels of TV commercials. At that time, many “laypeople” complained to me about how commercials were often so much louder than the TV programs themselves. Over the past 10 years, I have seen the rise of audio meter plugins built to meet the requirements of the CALM Act, and they have reduced this complaint dramatically.

A lot has changed since the 2010 FCC mandate of -24LKFS +/-2dB. LKFS was the scale name at the time, but we will get into this more later. Today, we have countless viewing options, including cable networks, a large variety of streaming services, the internet and movie theaters utilizing 7.1 or Dolby Atmos. Add to that new metering standards such as True Peak, and you have the likelihood of confusing and possibly even conflicting audio standards.

Nugen Audio has updated its VisLM to address today’s complex world of audio levels and audio metering. The VisLM2 is a Mac and Windows plugin compatible with Avid Pro Tools and any DAW that uses RTAS, AU, AAX, VST or VST3. It can also be installed as a standalone application for Windows and macOS. With its many presets, its Loudness History Mode and countless parameters to view and customize, the VisLM2 can help an audio mixer see when a program drifts in and out of audio level spec.

VisLM2

The Basics
The first thing I needed to see was how it handled the 2010 audio standard of -24LKFS, now known as LUFS. LKFS (Loudness K-weighted relative to Full Scale) was the term used in the United States. LUFS (Loudness Units relative to Full Scale) was the term used in Europe. The difference is in name only, and the audio level measurement is identical. Now all audio metering plugins use LUFS, including the VisLM2.

I work mostly on TV commercials, so it was pretty easy for me to fire up the VisLM2 and get my LUFS reading right away. Accessing the US audio standard dictated by the CALM Act is simple if you know the preset name for it: ITU-R BS.1770-4. I know, not a name that rolls off the tongue, but it is the current spec. The VisLM2 has four presets of ITU-R BS.1770 — revisions 01, 02, 03 and the current revision 04. Accessing the presets is easy once you realize that they are not in the preset section of the plugin, as one might think, but in the options section of the meter.

While this was my first time using anything from Nugen Audio, I was immediately able to run my 30-second TV commercial and get my LUFS reading. The preset gave me a few important default readings to view while mixing. There are three numeric displays showing Short-Term, Loudness Range and Integrated, which is how the average loudness is determined for most audio level specs. There are two meters that show Momentary and Short-Term levels, which are helpful when trying to pinpoint any section that could be putting your mix out of audio spec. The difference is that Momentary is used for short bursts, such as an impact or gunshot, while Short-Term is used for the last three-second “window” of your mix. Knowing the difference between the two readings is important. Whether you work on short- or long-format mixes, knowing how to interpret both Momentary and Short-Term readings is very helpful in determining where trouble spots might be.
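
For those curious about the mechanics: per ITU-R BS.1770/EBU R128, Momentary is measured over a 400ms window and Short-Term over a 3s window. The Python sketch below illustrates the windowing only; K-weighting, channel weighting and the overlapped sliding of real meters are omitted, so its numbers are merely illustrative.

```python
# Illustrative sketch of momentary (400ms) vs short-term (3s) windows.
# Real BS.1770 meters K-weight the audio and slide overlapping windows;
# both are omitted here for brevity.
import numpy as np

def windowed_loudness(samples: np.ndarray, rate: int, window_s: float) -> np.ndarray:
    """Crude LUFS-style loudness of each consecutive window of window_s seconds."""
    n = int(rate * window_s)
    windows = samples[: len(samples) // n * n].reshape(-1, n)
    mean_square = np.mean(windows ** 2, axis=1)
    return -0.691 + 10 * np.log10(mean_square + 1e-12)

rate = 48000
tone = 0.1 * np.sin(2 * np.pi * 997 * np.arange(rate * 6) / rate)  # 6s test tone
print(windowed_loudness(tone, rate, 0.4))  # momentary-style readings
print(windowed_loudness(tone, rate, 3.0))  # short-term-style readings
```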

Have We Outgrown LUFS?
Most, if not all, deliverables now specify a True Peak reading. True Peak has slowly but firmly crept its way into audio specs, and it can be confusing. For US TV broadcast, True Peak spec can range as high as -2dBTP and as low as -6dBTP, but I have seen it spec out even lower at -8dBTP for some of my clients. That means a TV network can reject or “bounce back” any TV programming or commercial that exceeds its LUFS spec, its True Peak spec or both.

VisLM2

In most cases, LUFS and True Peak readings work well together. I find that -24LUFS Integrated gives a mixer plenty of headroom to stay below the True Peak maximum. However, a few factors can work against you. The higher the LUFS Integrated spec (say, for an internet project) and/or the lower the True Peak spec (say, for a major TV network), the more difficult it becomes to manage both readings. For anyone like me — who often has a client watching over my shoulder telling me to make the booms and impacts louder — it’s critical to make sure you are not going to have a problem keeping your mix within spec for both measurements. This is where the VisLM2 can help you work within both True Peak and LUFS standards simultaneously.

To do that using the VisLM2, let’s first understand the difference between True Peak and LUFS. Integrated LUFS is an average reading over the duration of the program material. Whether the program material is 15 seconds or two hours long, hitting -24LUFS Integrated, for example, is always the average reading over time. That means a 10-second loud segment in a two-hour program could be much louder than a 10-second loud segment in a 15-second commercial. That same loud 10 seconds can practically be averaged out of existence during a two-hour period with LUFS Integrated. Flawed logic? Possibly. Is that why TV networks are requiring True Peak? Well, maybe yes, maybe no.
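
A little arithmetic makes that dilution concrete. Integrated loudness averages power (not dB) over the whole program, so, setting aside BS.1770’s gating stages for simplicity, the same 10 loud seconds barely dent a two-hour average but dominate a 15-second one. A rough Python sketch:

```python
import numpy as np

def lufs_to_power(lufs):
    return 10 ** ((lufs + 0.691) / 10)

def power_to_lufs(power):
    return -0.691 + 10 * np.log10(power)

def integrated(duration_s, loud_s, base_lufs=-24.0, loud_lufs=-14.0):
    """Integrated loudness of a program sitting at base_lufs except for
    loud_s seconds at loud_lufs (BS.1770 gating ignored for brevity)."""
    avg_power = (loud_s * lufs_to_power(loud_lufs) +
                 (duration_s - loud_s) * lufs_to_power(base_lufs)) / duration_s
    return power_to_lufs(avg_power)

print(integrated(2 * 3600, 10))  # two-hour show: about -23.9 LUFS
print(integrated(15, 10))        # 15-second spot: about -15.5 LUFS
```

The two-hour show still reads about -23.9LUFS, comfortably in spec, while the 15-second spot carrying the same loud material reads roughly -15.5LUFS, far out of spec.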

True Peak is forever. Once the highest True Peak is detected, it remains the final True Peak reading for the entire length of the program material. That means a loud segment in the last five minutes of a two-hour program will dictate the True Peak reading of the entire mix. Let’s say you have a two-hour show with dialogue only, and in the final minute a single loud gunshot is heard. That one-second gunshot determines the True Peak reading for the entire program, the other one hour, 59 minutes and 59 seconds included. Flawed logic? I can see how it could be. Spotify’s recommended levels are -14LUFS and -2dBTP, which gives you a much smaller range for dynamics than, say, network TV.
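
Mechanically, a True Peak meter is a max-hold: the signal is oversampled so peaks hiding between samples are caught, and the largest absolute value ever seen becomes the reading. A bare-bones sketch, not Nugen’s code (BS.1770-4 specifies 4x oversampling at 48kHz, which scipy’s polyphase resampler approximates here):

```python
import numpy as np
from scipy.signal import resample_poly

def true_peak_dbtp(samples, oversample=4):
    """Max-hold True Peak estimate: oversample to expose inter-sample
    peaks, then keep the highest absolute value ever seen."""
    up = resample_poly(samples, oversample, 1)
    return 20 * np.log10(np.max(np.abs(up)))

sr = 48000
show = np.random.uniform(-0.05, 0.05, sr * 60)  # stand-in for quiet dialogue
show[-100] = 0.9                                # one loud "gunshot" sample
print(true_peak_dbtp(show))                     # about -0.9 dBTP
```

One sample near the end sets the reading for the entire clip, which is exactly the “forever” behavior described above.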

VisLM2

Here’s where the VisLM2 really excels. For those new to Nugen Audio, the clear standout for me is the detailed and large history graph display known as Loudness History Mode. It is a realtime, continuously updating display of the mix levels. What it shows is up to you. There are multiple tabs to choose from, such as Integrated, True Peak, Short-Term, Momentary, Variance, Flags and Alerts, to name a few. Selecting any of these tabs shows or hides the corresponding line along the timeline of the history graph as the audio plays.

When any of the VisLM2’s presets are selected, there are a whole host of parameters that come along with it. All are customizable, but I like to start with the defaults. My thinking is that the default values were chosen for a reason, and I always want to know what that reason is before I start customizing anything.

For example, the target for the ITU-R BS.1770-4 preset is -24LUFS Integrated and -2dBTP. By default, both show on the history graph. The history graph will also show default over and under audio levels based on the alerts you have selected, in the form of min and max LUFS. But, much to my surprise, the default alert max was not what I expected. It wasn’t -24LUFS, which seemed to be the logical choice to me. It was 4dB higher at -20LUFS, which is 2dB above the +/-2dB tolerance. That’s because these min and max alert values are not for Integrated or average loudness, as I had originally thought; these values are for Short-Term loudness. The history graph lines, with their corresponding min and max alerts, are a visual cue to let the mixer know if he or she is in the right ballpark. This is not a hard-and-fast rule. Simply put, if your Short-Term value stays somewhere between -20 and -28LUFS throughout most of a project, then you have a good chance of meeting your target of -24LUFS for the overall Integrated measurement. That is why the value range is often set up as a “green” zone on the loudness display.
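
Because those alerts are just a band around the target, the logic behind the green zone fits in a few lines. A sketch of the default -24LUFS target with its +/-4dB Short-Term alert band (values as described above; this is the concept, not Nugen’s code):

```python
def short_term_zone(st_lufs, target=-24.0, tolerance=4.0):
    """Classify a Short-Term reading against the default alert band
    (target +/- 4 dB, i.e. -20 to -28 LUFS for a -24 target)."""
    if st_lufs > target + tolerance:
        return "over"   # above the max alert
    if st_lufs < target - tolerance:
        return "under"  # below the min alert
    return "green"      # in the ballpark for hitting the Integrated target
```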

VisLM2

The folks at Nugen point out that it isn’t practical to set up an alert or “red zone” for Integrated loudness because this value is measured over the entire program. For that, you simply view the main reading of your Integrated loudness. Even so, I will know whether I am getting there by viewing my history graph while working. Compare that to the impractical approach of running the entire mix before having any idea of where you are going to net out. The VisLM2’s max and min alerts help keep you working within audio spec right from the start.

Another nice feature of the large history graph window is the Macro tab. Selecting the Macro feature gives you the ability to move back and forth anywhere along the duration of your mix displayed in the Loudness History Mode. That way you can check for problem spots long after they have happened. Easily accessing any part of the audio level display within the history graph is essential. Say you have a trouble spot somewhere within a 30-minute program; select the Macro feature and scroll through the history graph to spot any overages. If an overage turns out to be at, say, eight minutes in, cue up your DAW to that same eight-minute mark to address changes in your mix.

Another helpful feature designed for this same purpose is the use of flags. Flags can be added anywhere in your history graph while the audio is running. Again, this can be helpful for spotting, or flagging, any problem spots. For example, you can flag a loud action scene in an otherwise quiet dialogue-driven program that you know will be tricky to balance properly. Once flagged, you will have the ability to quickly cue up your history graph to work with that section. Both the Macro and Flag functions are aided by tape-machine-like controls for cueing up the Loudness History Mode display to any problem spots you might want to view.

Presets, Presets, Presets
The VisLM2 comes with 34 presets for selecting what loudness spec you are working with. Here is where I need to rely on the knowledge of Nugen Audio to get me going in the right direction. I do not know all of the specs for all of the networks, formats and countries. I would venture a guess that very few audio mixers do either. So I was not surprised when I saw many presets that I was not familiar with. Common presets in addition to ITU-R BS.1770 are six versions of EBU R128 for European broadcast and two Netflix presets (stereo and 5.1), which we will dive into later on. The manual does its best to describe some of the presets, but it falls short. The descriptions lack any kind of real-world language; it’s all techno-garble. I have no idea what AGCOM 219/9/CSP LU is and, after reading the manual, I still don’t! I hope a better source of what’s what regarding each preset will become available sometime soon.

MasterCheck

But why is there no preset for an Internet audio level spec? Could mixing for AGCOM 219/9/CSP LU really be more popular than mixing for the Internet? Unlikely. So let’s follow Nugen’s logic here. I have always been in the -18LUFS range for Internet-only mixes. However, ask 10 different mixers and you will likely get 10 different answers. That is likely why there is no Internet preset included with the VisLM2, as I had hoped there would be. Even so, Nugen offers its MasterCheck plugin for platforms such as Spotify and YouTube. MasterCheck is something I have been hoping for, and it would be the perfect companion to the VisLM2.

The folks at Nugen have pointed out a very important difference between broadcast TV and many Internet platforms: Most of the streaming services (YouTube, Spotify, Tidal, Apple Music, etc.) will perform their own loudness normalization after the audio is submitted. They do not expect audio engineers to mix to their standards. In contrast, Netflix and most TV networks will expect mixers to submit audio that already meets their loudness standards. VisLM2 is aimed more toward engineers who are mixing for platforms in the second category.

Streaming Services… the Wild West?
Streaming services are the new frontier, at least to me. I would call them the Wild West compared to broadcast TV. With so many streaming services popping up, particularly “off-brand” services, I have to ask whether we have gone back in time to the loudness wars of the late 2000s. Many streaming services do have an audio level spec, but I don’t know of any consensus among them like there is with network TV.

That aside, one of the most popular streaming services is Netflix, so let’s look at the VisLM2’s Netflix preset in detail. Netflix is slightly different from broadcast TV because its spec is based on dialogue. In addition to -2dBTP, Netflix has an LUFS spec of -27 +/-2dB Integrated Dialogue. That means the dialogue level is averaged out over time, rather than all program material such as music and sound effects. Remember my gunshot example? Netflix’s spec is more forgiving of that mixing scenario. This allows for more dynamic or more cinematic mixes, which I can see as a nice advantage when mixing.
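
Conceptually, dialogue-gating just restricts the average to the stretches where speech is present. In the sketch below the dialogue regions are supplied by hand; a real dialogue-gated meter detects speech automatically, and the spans and numbers here are purely illustrative:

```python
import numpy as np

def dialogue_gated_lufs(kw, sr, dialogue_spans):
    """Average loudness over dialogue-only regions of K-weighted mono
    audio; a simplified stand-in for dialogue-gated measurement."""
    chunks = [kw[int(a * sr):int(b * sr)] for a, b in dialogue_spans]
    voiced = np.concatenate(chunks)
    return -0.691 + 10 * np.log10(np.mean(voiced ** 2))

sr = 48000
kw = np.random.uniform(-0.04, 0.04, sr * 90)  # stand-in for a 90-second mix
kw[45 * sr:46 * sr] *= 20                     # loud one-second effect at 0:45
lufs = dialogue_gated_lufs(kw, sr, [(0, 40), (55, 90)])
print(lufs, abs(lufs - (-27.0)) <= 2.0)       # the effect never enters the average
```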

Netflix currently supports Dolby Atmos on selected titles, but word on the street is that Netflix will require Atmos deliverables for all titles. I have not confirmed this, but I can only hope it will be backward-compatible for non-Atmos mixes. I was lucky enough to speak directly with Tomlinson Holman of THX fame (Tomlinson Holman eXperiment) about his 10.2 format, which included height channels long before Atmos was available. In the case of 10.2, Holman said it was possible to deliver a single mono channel audio mix in 10.2 by simply leaving all the other channels empty. I can only hope the same holds for Netflix’s Atmos deliverables, so you can simply add or subtract the number of channels needed when outputting your final mix. Regardless, we can surely look to Nugen Audio to keep its Netflix preset in the VisLM2 updated should this become a reality.

True Peak within VisLM2

VisLM Updates
For anyone familiar with the original VisLM, there are three updates worth looking at. First is the ability to resize the display and select what shows in it. That helps with keeping the window active on your screen as you work. It can be a small window so it doesn’t interfere with your other operations, or you can choose to show only one value, such as Integrated, to keep things really small. On the flip side, you can expand the display to fill the screen when you really need to get the microscope out. This is very helpful with the history graph for spotting trouble spots. The detail displayed in the Loudness History Mode is by far the most helpful thing I have experienced using the VisLM2.

Next is the ability to display both LUFS and True Peak meters simultaneously. Before, it was one or the other and now it is both. Simply select the + icon between the two meters. With the importance of True Peak, having that value visible at all times is extremely valuable.

Third is the ability to “punch in,” as I call it, to update your Integrated reading while you are working. Let’s say you have your overall Integrated reading, and you see one section that is making you go over. You can adjust your levels on your DAW as you normally would and then simply “punch in” that one section to calculate the new Integrated reading. Imagine how much time you save by not having to run a one-hour show every time you want to update your Integrated reading. In fact, this “punch in” feature is actually the VisLM2 constantly updating itself. This is just another example of how the VisLM2 helps keep you working within audio spec right from the start.
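
One plausible way to make that work (this is speculation about the mechanics, not a description of Nugen’s code) is to keep the measurement as a list of short analysis blocks, re-measure only the blocks you re-mixed and then re-average:

```python
import numpy as np

def integrated_from_blocks(block_power):
    """Integrated loudness from stored per-block mean-square power
    values (BS.1770 gating omitted for brevity)."""
    return -0.691 + 10 * np.log10(np.mean(block_power))

# A one-hour show stored as 100 ms analysis blocks (36,000 of them):
rng = np.random.default_rng(0)
block_power = rng.uniform(0.003, 0.006, 36_000)  # stand-in measurements

# After re-mixing 8:00-8:30, re-measure just those 300 blocks:
block_power[4800:5100] *= 0.5                    # section pulled down 3 dB
print(integrated_from_blocks(block_power))       # updated without a full pass
```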

Multi-Channel Audio Mixing
The one area I can’t test the VisLM2 on is multi-channel audio, such as 5.1 and Dolby Atmos. I work mostly on TV commercials, Internet programming, jazz records and the occasional indie film. So my world is all good old-fashioned stereo. Even so, the VisLM2 can measure 5.1, 7.1, and 7.1.2, which is the channel count for Dolby Atmos bed tracks. For anyone who works in multi-channel audio, the VisLM2 will measure and display audio levels just as I have described it working in stereo.
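
Measurement-wise, multichannel loudness is the same formula with channel weighting: BS.1770 sums the K-weighted power of each channel, weighting the surrounds up by roughly 1.5dB and excluding the LFE. A sketch for 5.1 (the 7.1 and 7.1.2 cases extend the same weight table):

```python
import numpy as np

# BS.1770 channel weights for 5.1: L/R/C at 1.0, surrounds at 1.41
# (about +1.5 dB); the LFE channel is excluded from the measurement.
WEIGHTS_5_1 = {"L": 1.0, "R": 1.0, "C": 1.0, "Ls": 1.41, "Rs": 1.41}

def multichannel_lufs(channels):
    """Loudness of K-weighted multichannel audio: a weighted sum of
    per-channel mean-square power (gating omitted for brevity)."""
    total = sum(w * np.mean(channels[name] ** 2)
                for name, w in WEIGHTS_5_1.items())
    return -0.691 + 10 * np.log10(total)
```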

Summing Up
With the changing landscape of TV networks, streaming services and music-only platforms, the resulting deliverables have opened the floodgates of audio specs like never before. Long gone are the days of -24LUFS being the one and only number you need to know.

To help manage today’s complicated and varied deliverables, along with the audio specs that go with them, Nugen Audio’s VisLM2 absolutely delivers.


Ron DiCesare is a NYC-based freelance audio mixer and sound designer. His work can be heard on national TV campaigns, Vice and the Viceland TV network. He is also featured in the doc “Sing You A Brand New Song” talking about the making of Coleman Mellett’s record album, “Life Goes On.”


Harbor crafts color and sound for The Lighthouse

By Jennifer Walden

Director Robert Eggers’ The Lighthouse tells the tale of two lighthouse keepers, Thomas Wake (Willem Dafoe) and Ephraim Winslow (Robert Pattinson), who lose their minds while isolated on a small rocky island, battered by storms, plagued by seagulls and haunted by supernatural forces/delusion-inducing conditions. It’s an A24 film that hit theaters in late October.

Much like his first feature-length film The Witch (winner of the 2015 Sundance Film Festival Directing Award for a dramatic film and the 2017 Independent Spirit Award for Best First Feature), The Lighthouse is a tense and haunting slow descent into madness.

But “unlike most films where the crazy ramps up, reaching a fever pitch and then subsiding or resolving, in The Lighthouse the crazy ramps up to a fever pitch and then stays there for the next hour,” explains Emmy-winning supervising sound editor/re-recording mixer Damian Volpe. “It’s like you’re stuck with them, they’re stuck with each other and we’re all stuck on this rock in the middle of the ocean with no escape.”

Volpe, who’s worked with director Eggers on two short films — The Tell-Tale Heart and Brothers — thought he had a good idea of just how intense the film and post sound process would be going into The Lighthouse, but it ended up exceeding his expectations. “It was definitely the most difficult job I’ve done in over two decades of working in post sound for sure. It was really intense and amazing,” he says.

Eggers chose Harbor’s New York City location for both sound and final color. This was colorist Joe Gawler’s first time working with Eggers, but it couldn’t have been a more fitting film. The Lighthouse was shot on 35mm black & white (Double-X 5222) film with a 1.19:1 aspect ratio, and as it happens, Gawler is well versed in the world of black & white. He’s remastered a tremendous number of classic movie titles for The Criterion Collection, such as Breathless, Seven Samurai and several Fellini films, including 8½. “To take that experience from my Criterion title work and apply that to giving authenticity to a contemporary film that feels really old, I think it was really helpful,” Gawler says.

Joe Gawler

The advantage of shooting on film versus shooting digitally is that film negatives can be rescanned as technology advances, making it possible to take a film from the ‘60s and remaster it into 4K resolution. “When you shoot something digitally, you’re stuck in the state-of-the-moment technology. If you were shooting digitally 10 years ago and want to create a new deliverable of your film and reimagine it with today’s display technologies, you are compromised in some ways. You’re having to up-res that material. But if you take a 35mm film negative shot 100 years ago, the resolution is still inside that negative. You can rescan it with a new scanner and it’s going to look amazing,” explains Gawler.

While most of The Lighthouse was shot on black & white film (with Baltar lenses designed in the 1930s for that extra dose of authenticity), there were a few stock footage shots of the ocean with big storm waves and some digitally rendered elements, such as the smoke, that had to be color corrected and processed to match the rich, grainy quality of the film. “Those stock footage shots we had to beat up to make them feel more aged. We added a whole bunch of grain into those and the digital elements so they felt seamless with the rest of the film,” says Gawler.

The digitally rendered elements were separate VFX pieces composited into the black & white film image using Blackmagic’s DaVinci Resolve. “Conforming the movie in Resolve gave us the flexibility to have multiple layers and allowed us to punch through one layer to see more or less of another layer,” says Gawler. For example, to get just that right amount of smoke, “we layered the VFX smoke element on top of the smokestack in the film and reduced the opacity of the VFX layer until we found the level that Rob and DP Jarin Blaschke were happy with.”

In terms of color, Gawler notes The Lighthouse was all about exposure and contrast. The spectrum of gray rarely goes to true white and the blacks are as inky as they can be. “Jarin didn’t want to maintain texture in the blackest areas, so we really crushed those blacks down. We took a look at the scopes and made sure we were bottoming out so that the blacks were pure black.”

From production to post, Eggers’ goal was to create a film that felt like it could have been pulled from a 1930s film archive. “It feels authentically antique, and that goes for the performances, the production design and all the period-specific elements — the lights they used and the camera, and all the great care we took in our digital finish of the film to make it feel as photochemical as possible,” says Gawler.

The Sound
This holds true for post sound, too. So much so that Eggers and Volpe kicked around the idea of making the soundtrack mono. “When I heard the first piece of score from composer Mark Korven, the whole mono idea went out the door,” explains Volpe. “His score was so wide and so rich in terms of tonality that we never would’ve been able to make this difficult dialogue work if we had to shove it all down one speaker’s mouth.”

The dialogue was difficult on many levels. First, Volpe describes the language as “old-timey, maritime,” delivered in two different accents — Dafoe has an Irish-tinged seasoned-sailor accent and Pattinson has an up-east Maine accent. Additionally, the production location made it difficult to record the dialogue, with wind, rain and dripping water sullying the tracks. Re-recording mixer Rob Fernandez, who handled the dialogue and music, notes that when it’s raining, the lighthouse is leaking. You see the water in the shots because they shot it that way. “So the water sound is married to the dialogue. We wanted to have control over the water, so the dialogue had to be looped. Rob wanted to save as much of the amazing on-set performances as possible, so we tried to go to ADR for specific syllables and words,” says Fernandez.

Rob Fernandez

That wasn’t easy to do, especially toward the end of the film during Dafoe’s monologue. “That was very challenging because at one point all of the water and surrounding sounds disappear. It’s just his voice,” says Fernandez. “We had to do a very slow transition into that so the audience doesn’t notice. It’s really focusing you in on what he is saying. Then you’re snapped out of it and back into reality with full surround.”

Another challenging dialogue moment was a scene in which Pattinson is leaning on Dafoe’s lap, and their mics are picking up each other’s lines. Plus, there’s water dripping. Again, Eggers wanted to use as much production as possible so Fernandez tried a combination of dialogue tools to help achieve a seamless match between production and ADR. “I used a lot of Synchro Arts’ Revoice Pro to help with pitch matching and rhythm matching. I also used every tool iZotope offers that I had at my disposal. For EQ, I like FabFilter. Then I used reverb to make the locations work together,” he says.

Volpe reveals, “Production sound mixer Alexander Rosborough did a wonderful job, but the extraneous noises required us to replace at least 60% of the dialogue. We spent several months on ADR. Luckily, we had two extremely talented and willing actors. We had an extremely talented mixer, Rob Fernandez. My dialogue editor William Sweeney was amazing too. Between the directing, the acting, the editing and the mixing they managed to get it done. I don’t think you can ever tell that so much of the dialogue has been replaced.”

The third main character in the film is the lighthouse itself, which lives and breathes with a heartbeat and lungs. The mechanism of the Fresnel lens at the top of the lighthouse has a deep, bassy gear-like heartbeat and rasping lungs that Volpe created from wrought iron bars drawn together. Then he added reverb to make the metal sound breathier. In the bowels of the lighthouse there is a steam engine that drives the gears to turn the light. Ephraim (Pattinson) is always looking up toward Thomas (Dafoe), who is in the mysterious room at the top of the lighthouse. “A lot of the scenes revolve around clockwork, which is just another rhythmic element. So Ephraim starts to hear that and also the sound of the light that composer Korven created, this singing glass sound. It goes over and over and drives him insane,” Volpe explains.

Damian Volpe

Mermaids make a brief appearance in the film. To create their vocals, Volpe and his wife did a recording session in which they made strange sea creature call-and-response sounds to each other. “I took those recordings and beat them up in Pro Tools until I got what I wanted. It was quite a challenge and I had to throw everything I had at it. This was more of a hammer-and-saw job than a fancy plug-in job,” Volpe says.

He captured other recordings too, like the sound of footsteps on the stairs inside a lighthouse on Cape Cod, marine steam engines at an industrial steam museum in northern Connecticut and more at Mystic Seaport… seagulls and waves. “We recorded so much. We dug a grave. We found an 80-year-old lobster pot that we smashed about. I recorded the inside of conch shells to get drones. Eighty percent of the sound in the film is material that I and Filipe Messeder (assistant and Foley editor) recorded, or that I recorded with my wife,” says Volpe.

But one of the trickiest sounds to create was a foghorn that Eggers originally liked from a lighthouse in Wales. Volpe tracked down the keeper there but the foghorn was no longer operational. He then managed to locate a functioning steam-powered diaphone foghorn in Shetland, Scotland. He contacted the lighthouse keeper Brian Hecker and arranged for a local documentarian to capture it. “The sound of the Sumburgh Lighthouse is a major element in the film. I did a fair amount of additional work on the recordings to make them sound more like the original one Rob [Eggers] liked, because the Sumburgh foghorn had a much deeper, bassier, whale-like quality.”

The final voice in The Lighthouse’s soundtrack is composer Korven’s score. Since Volpe wanted to blur the line between sound design and score, he created sounds that would complement Korven’s. Volpe says, “Mark Korven has these really great sounds that he generated with a ball on a cymbal. It created this weird, moaning whale sound. Then I created these metal creaky whale sounds and those two things sing to each other.”

In terms of the mix, nearly all the dialogue plays from the center channel, helping it stick to the characters within the small frame of this antiquated aspect ratio. The Foley, too, comes from the center and isn’t panned. “I’ve had some people ask me (bizarrely) why I decided to do the sound in mono. There might be a psychological factor at work where you’re looking at this little black & white square and somehow the sound glues itself to that square and gives you this idea that it’s vintage or that it’s been processed or is narrower than it actually is.

“As a matter of fact, this mix is the farthest thing from mono. The sound design, effects, atmospheres and music are all very wide — more so than I would do in a regular film as I tend to be a bit conservative with panning. But on this film, we really went for it. It was certainly an experimental film, and we embraced that,” says Volpe.

The idea of having the sonic equivalent of this 1930s film style persisted. Since mono wasn’t feasible, other avenues were explored. Volpe suggested recording the production dialogue onto a Nagra to “get some of that analog goodness, but it just turned out to be one thing too many for them in the midst of all the chaos of shooting on Cape Forchu in Nova Scotia,” says Volpe. “We did try tape emulator software, but that didn’t yield interesting results. We played around with the idea of laying it off to a 24-track or shooting in optical. But in the end, those all seemed like they’d be expensive and we’d have no control whatsoever. We might not even like what we got. We were struggling to come up with a solution.”

Then a suggestion from Harbor’s Joel Scheuneman (who’s experienced in the world of music recording/producing) saved the day. He recommended the outboard Rupert Neve Designs 542 Tape Emulator.

The Mix
The film was final mixed in 5.1 surround on a Euphonix S5 console. Each channel was sent through an RND 542 module and then into the speakers. The units’ magnetic heads added saturation, grain and a bit of distortion to the tracks. “That is how we mixed the film. We had all of these imperfections in the track that we had to account for while we were mixing,” explains Fernandez.

“You couldn’t really ride it or automate it in any way; you had to find the setting that seemed good and then just let it rip. That meant in some places it wasn’t hitting as hard as we’d like and in other places it was hitting harder than we wanted. But it’s all part of Rob Eggers’s style of filmmaking — leaving room for discovery in the process,” adds Volpe.

“There’s a bit of chaos factor because you don’t know what you’re going to get. Rob is great about being specific but also embracing the unknown or the unexpected,” he concludes.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.


The gritty and realistic sounds of Joker

By Jennifer Walden

The grit of Gotham City in Warner Bros.’ Joker is painted on in layers, but not in broad strokes of sound. Distinct details are meticulously placed around the Dolby Atmos surround field, creating a soundtrack that is full but not crowded and muddy — it’s alive and clear. “It’s critical to try to create a real feeling world so Arthur (Joaquin Phoenix) is that much more real, and it puts the audience in a place with him,” says re-recording mixer Tom Ozanich, who mixed alongside Dean Zupancic at Warner Bros. Sound in Burbank on Dub Stage 9.

L-R: Tom Ozanich, Unsun Song and Dean Zupancic on Dub Stage 9. Photo: Michael Dressel.

One main focus was to make a city that was very present and oppressive. Supervising sound editor Alan Robert Murray created specific elements to enhance this feeling, while dialogue supervisor Kira Roessler created loop group crowds and callouts that Ozanich could sprinkle throughout the film.

During the street scene near the beginning of the film, Arthur is dressed as a clown and dancing on the sidewalk, spinning a “Going Out of Business” sign. Traffic passes to the left and pedestrians walk around Arthur, who is on the right side of the screen. The Atmos mix reflects that spatiality.

“There are multiple layers of sounds, like callouts of group ADR, specific traffic sounds and various textures of air and wind,” says Zupancic. “We had so many layers that afforded us the ability to play sounds discretely, to lean the traffic a little heavier into the surrounds on the left and use layers of voices and footsteps to lean discretely to the right. We could play very specific dimensions. We just didn’t blanket a bunch of sounds in the surrounds and blanket a bunch of sounds on the front screen. It was extremely important to make Gotham seem gritty and dirty with all those layers.”

The sound effects and callouts didn’t always happen conveniently between lines of principal dialogue. Director Todd Phillips wanted the city to be conspicuous… to feel disruptive. Ozanich says, “We were deliberate with Todd about the placement of literally every sound in the movie. There are a few spots where the callouts were imposing (but not quite distracting), and they certainly weren’t pretty. They didn’t occur in places where it doesn’t matter if someone is yelling in the background. That’s not how it works in real life; we tried to make it more like real life and let these voices crowd in on our main characters.”

Every space feels unique with Gotham City filtering in to varying degrees. For example, in Arthur’s apartment, the city sounds distant and benign. It’s not as intrusive as it is in the social worker’s (Sharon Washington) office, where car horns punctuate the strained conversation. Zupancic says, “Todd was very in tune with how different things would sound in different areas of the city because he grew up in a big city.”

Arthur’s apartment was further defined by director Phillips, who shared specifics like: The bedroom window faces an alley so there are no cars, only voices, and the bathroom window looks out over a courtyard. The sound editorial team created the appropriate tracks, and then the mixers — working in Pro Tools via Avid S6 consoles — applied EQ and reverb to make the sounds feel like they were coming from those windows three stories above the street.

In the Atmos mix, the clarity of the film’s apposite reverbs and related processing simultaneously helped to define the space on-screen and pull the sound into the theater, immersing the audience in the environment. “Tom [Ozanich] did a fabulous job with all of the reverbs and all of the room sound in this movie,” says Zupancic. “His reverbs on the dialogue in this movie are just spectacular and spot on.”

For instance, Arthur is waiting in the green room before going on the Murray Franklin Show. Voices from the corridor filter through the door, and when Murray (Robert De Niro) and his stage manager open it to ask Arthur what’s with the clown makeup, the filtering changes on the voices. “I think a lot about the geography of what is happening, and then the physics of what is happening, and I factor all of those things together to decide how something should sound if I were standing right there,” explains Ozanich.

Zupancic says that Ozanich’s reverbs are actually multistep processes. “Tom’s not just slapping on a reverb preset. He’s dialing in and using multiple delays and filters. That’s the key. Sounds of things change in reality — reverbs, pitches, delays, EQ — and that is what you’re hearing in Tom’s reverbs.”

“I don’t think of reverb generically,” elaborates Ozanich, “I think of the components of it, like early reflections, as a separate thought related to the reverb. They are interrelated for sure, but that separation may be a factor of making it real.”

One reason the reverbs were so clear is because Ozanich mixed Joker’s score — composed by Hildur Guðnadóttir — wider than usual. “The score is not a part of the actual world, and my approach was to separate the abstract from the real,” explains Ozanich. “In Arthur’s world, there’s just a slight difference between the actual world, where the physical action is taking place, and Arthur’s headspace where the score plays. So that’s intended to have an ever-so-slight detachment from the real world, so that we experience that emotionally and leave the real space feeling that much more real.”

Atmos allows for discrete spatial placement, so Ozanich was able to pull the score apart, pull it into the theater (so it’s not coming from just the front wall), and then EQ each stem to enhance its defining characteristic — what Ozanich calls “tickling the ear.”

“When you have more directionality to the placement of sound, it pulls things wider because rather than it being an ambiguous surround space, you’re now feeling the specificity of something being 33% or 58% back off the screen,” he says.

Pulling the score away from the front and defining where it lived in the theater space gave more sonic real estate for the sounds coming from the L-C-Rs, like the distinct slap of a voice bouncing off a concrete wall or Foley sounds like the delicate rustling scratches of Arthur’s fingertips passing over a child’s paintings.

One of the most challenging scenes to mix in terms of effects was the bus ride, in which Arthur makes funny faces at a little boy, trying to make him laugh, only to be admonished by the boy’s mother. Director Phillips and picture editor Jeff Groth had very specific ideas about how that ‘70s-era bus should sound, and Zupancic wanted those sounds to play in the proper place in the space to achieve the director’s vision. “Buses of that era had an overhead rack where people could put packages and bags; we spent a lot of time getting those specific rattles where they should be placed, and where the motor should be and how it would sound from Arthur’s seat. It wasn’t a hard scene to mix; it was just complex. It took a lot of time to get all of that right. Now, the scene just goes by and you don’t pay attention to the little details; it just works,” says Zupancic.

Ozanich notes the opening was a challenging scene as well. The film begins in the clowns’ locker room. There’s a radio broadcast playing, clowns playing cards, and Arthur is sitting in front of a mirror applying his makeup. “Again, it’s not a terribly complex scene on the surface, but it’s actually one of the trickiest in the movie because there wasn’t a super clear lead instrument. There wasn’t something clearly telling you what you should be paying attention to,” says Ozanich.

The scene went through numerous iterations. One version had source music playing the whole time. Another had bits of score instead. There are multiple competing elements, like the radio broadcast and the clowns playing cards and sharing anecdotes. All those voices compete for the audience’s ear. “If it wasn’t tilted just the right way, you were paying attention to the wrong thing or you weren’t sure what you should be paying attention to, which became confusing,” says Ozanich.

In the end, the choice was made to pull out all the music and then shift the balance from the radio to the clowns as the camera passes by them. It then goes back to the radio briefly as the camera pushes in closer and closer on Arthur. “At this point, we should be focusing on Arthur because we’re so close to him. The radio is less important, but because you hear this voice it grabs your attention,” says Ozanich.

The problem was there were no production sounds for Arthur there, nothing to grab the audience’s ear. “I said, ‘He needs to make sound. It has to be subtle, but we need him to make some sound so that we connect to him and feel like he is right there.’ So Kira found some sounds of Joaquin from somewhere else in the film, and Todd did some stuff on a mic. We put the Foley in there and we cobbled together all of these things,” says Ozanich. “Now, it unquestionably sounds like there was a microphone open in front of him and we recorded that. But in reality, we had to piece it all together.”

“It’s a funny little dichotomy of what we are trying to do. There are certain things we are trying to make stick on the screen, to make you buy that the sound is happening right there with the thing that you’re looking at, and then at the same time, we want to pull sounds off of the screen to envelop the audience and put them into the space and not be separated by that plane of the screen,” observes Ozanich.

The Atmos mix on Joker is a prime example of how effective that dichotomy can be. The sounds of the environments, like standing on the streets of Gotham or riding in the subway car, are distinct, dynamic and ever-changing, and the sounds emanating from the characters are realistic and convincing. All of this serves to pull the audience into the story and get them emotionally invested in the tale of this sad, psychotic clown.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.


Review: Accusonus Era 4 Pro audio repair plugins

By Brady Betzel

With each passing year, it seems that the job title of “editor” changes. An editor is no longer just responsible for shaping the story of the show but also for certain aspects of finishing, including color correction and audio mixing.

In the past, when I was offline editing more often, I learned just how important sending a properly mixed and leveled offline cut was. Whether it was a rough cut, fine cut or locked cut — the mantra to always put my best foot forward was constantly repeating in my head. I am definitely a “video” editor but, as I said, with editors becoming responsible for so many aspects of finishing, you have to know everything. For me this means finding ways to take my cuts from the middle of the road to polished with just a few clicks.

On the audio side, that means using tools like the Accusonus Era 4 Pro audio repair plugins. Accusonus advertises the Era 4 plugins as one-button solutions, and they are as easy as one button, but you can also nuance the audio if you like. The Era 4 Pro plugins work not only with your typical DAW, like Pro Tools 12.x and higher, but also within nonlinear editors like Adobe Premiere Pro CC 2017 or higher, FCP X 10.4 or higher and Avid Media Composer 2018.12.

Digging In
Accusonus’ Era 4 Pro Bundle will cost you $499 for the eight plugins included in its audio repair offering. This includes De-Esser Pro, De-Esser, Era-D, Noise Remover, Reverb Remover, Voice Leveler, Plosive Remover and De-Clipper. There is also an Era 4 (non-pro) bundle for $149 that includes everything mentioned previously except for De-Esser Pro and Era-D. I will go over a few of the plugins in this review and why the Pro bundle might warrant the additional $350.

I installed the Era 4 Pro Bundle on a Wacom MobileStudio Pro tablet that is a few years old but can still run Premiere. I did this intentionally to see just how light the plugins would run. To my surprise, my system was able to toggle each plugin off and on without any issue, and playback was seamless when all plugins were applied. Granted, I wasn’t playing anything but video, but sometimes when I do an audio pass I turn off video monitoring to be extra sure I am concentrating on the audio only.

De-Esser
First up is the De-Esser, which tackles harsh sounds resulting from “s,” “z,” “ch,” “j” and “sh.” So if you run into someone who has some ear-piercing “s” pronunciations, apply the De-Esser plugin and choose from narrow, normal or broad mode. Once you find which mode helps remove the harsh sounds (otherwise known as sibilance), you can enable “intense” to add more processing power (though doing this can potentially require rendering). In addition, there is an output gain setting, plus a “Diff” mode that plays only the parts De-Esser is affecting. If you want to try the “one button” approach, the Processing dial is really all you need to touch. In realtime, you can hear the sibilance diminish. I personally like a little reality in my work, so I might dial the processing to the “perfect” amount and then back it off 5% or 10%.
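
Accusonus doesn’t publish what’s behind the dial, so as a point of reference, here is the classic split-band approach a de-esser is built on: isolate the sibilance band, then duck that band whenever its level jumps. Everything here (band edges, threshold, smoothing time) is an illustrative guess, not the Era 4 algorithm:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def de_ess(x, sr, amount=0.5, band=(4000.0, 9000.0), thresh=0.02):
    """Minimal split-band de-esser. `amount` plays the role of the
    one-knob Processing dial (0 = bypass, 1 = full reduction)."""
    sos = butter(4, band, btype="bandpass", fs=sr, output="sos")
    sib = sosfilt(sos, x)                # sibilance band only
    env = np.abs(sib)
    k = int(sr * 0.005)                  # ~5 ms envelope smoothing
    env = np.convolve(env, np.ones(k) / k, mode="same")
    gain = np.where(env > thresh, 1.0 - amount, 1.0)
    # Subtract the band and add back the ducked version. (Filter phase
    # shift makes this an approximation; fine for a sketch.)
    return x - sib + sib * gain
```

Dialing the processing to “perfect” and then backing off 5% or 10%, as described above, amounts to lowering `amount` slightly so a hint of natural sibilance survives.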

De-Esser Pro
Next up is De-Esser Pro. This one is for the editor who wants one-touch processing but also the ability to dive into the specific audio spectrum being affected and see how the falloff is performed. In addition, there are presets such as male vocals, female speech, etc., to jump immediately to where you need help. I personally find De-Esser Pro more useful than the De-Esser because I can really shape the processing. However, if you don’t want to be bothered with the more intricate settings, the De-Esser is still a great solution. Is it worth the extra $350? I’m not sure, but combining it with Era-D might make you want to shell out the cash for the Era 4 Pro bundle.

Era-D
Speaking of Era-D: funnily enough, it’s the only plugin not described by its own title. It is a joint de-noise and de-reverberation plugin, but it goes way beyond simple hum or hiss removal. With Era-D, you get “regions” (I love saying that because of the audio mixers who constantly talk in regions and not timecode) that can not only be split at certain frequencies — with a different percentage of processing applied to each region — but can also have individual frequency cutoff levels.

Something I had never heard of before is the ability to use two mics to fix a suboptimal recording on one of the two mics, which can be done in the Era-D plugin. There is a signal path window that you can use to mix the amount of de-noise and de-reverb; it’s possible to use only one or the other, and you can even run the plugin in parallel or cascade. If that isn’t enough, there is an advanced window with artifact control and more. Era-D is really the reason for the extra $350 between the standard Era 4 bundle and the Era 4 Pro bundle — and it is definitely worth it if you find yourself removing tons of noise and reverb.

Noise Remover
My second favorite plugin in the Era 4 Pro Bundle is the Noise Remover. Not only is the noise removal pretty high-quality (again, I dial it back to avoid robot sounds), but it is painless. Dial in the amount of processing and you are 80% done. If you need to go further, there are five buttons that let you focus where the processing occurs: all frequencies (flat), high frequencies, low frequencies, high and low frequencies, and mid frequencies. I love clicking the power button to hear the difference with and without the noise removal, but also dialing the knob around to really get the noise removed without going overboard. Whether removing noise from video or audio, there is a fine art to noise reduction, and the Era 4 Noise Remover makes it easy … even for an online editor.
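
For a sense of why overdoing it produces “robot sounds,” here is a bare-bones spectral-subtraction denoiser, the textbook ancestor of tools like this (Accusonus’s actual method is proprietary, so this is only a sketch). Over-attenuating frequency bins is what creates watery, metallic artifacts, so the sketch floors the gain rather than letting bins hit zero:

```python
import numpy as np
from scipy.signal import stft, istft

def denoise(x, sr, noise_clip, reduction=0.8):
    """Textbook spectral subtraction: learn a per-bin noise floor from a
    noise-only clip, then pull every STFT bin toward that floor.
    `reduction` stands in for the one-knob processing dial."""
    _, _, X = stft(x, fs=sr, nperseg=1024)
    _, _, N = stft(noise_clip, fs=sr, nperseg=1024)
    floor = np.mean(np.abs(N), axis=1, keepdims=True)  # noise level per bin
    mag = np.abs(X) + 1e-12
    # Never attenuate below (1 - reduction): a gain floor avoids the
    # hollowed-out, robotic sound of full subtraction.
    gain = np.maximum(1.0 - reduction * floor / mag, 1.0 - reduction)
    _, y = istft(X * gain, fs=sr, nperseg=1024)
    return y
```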

Reverb Remover
The Reverb Remover operates very much like the Noise Remover, but instead of noise, it removes echo. Have you ever gotten a line of ADR clearly recorded on an iPhone in a bathtub? I’ve worked on my fair share of reality, documentary, stage and scripted shows, and at some point, someone will send you this — and then the producers will wonder why it doesn’t match the professionally recorded interviews. With Era 4 Noise Remover, Reverb Remover and Era-D, you will get much closer to matching the audio between different recording devices than without plugins. Dial that Reverb Remover processing knob to taste and then level out your audio, and you will be surprised at how much better it will sound.

Voice Leveler
To level out your audio, Accusonus has also included the Voice Leveler, which does just what it says: It levels your audio so you won’t get one line blasting in your ears while the next one disappears because the speaker backed away from the mic. Much like the De-Esser, you get a waveform visual of what is being affected in your audio. In addition, there are two modes, tight and normal, to help normalize your dialogue. Think of the tight mode as being much more distinctive than a normal interview conversation; Accusonus describes tight as a more focused “radio” sound. The Emphasis button helps address issues when the speaker turns away from the microphone and introduces tonal problems, and Breath control is a simple way to keep breaths from being boosted along with the dialogue.
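
The leveling idea itself is simple: measure the short-term level and ride the gain toward a target, with a cap on boost so pauses (room tone, breaths) don’t get dragged up. The sketch below is that idea at its crudest; the window size, target and cap are illustrative, and a real leveler smooths the gain between windows instead of stepping it:

```python
import numpy as np

def level_voice(x, sr, target_rms=0.1, win_s=0.5, max_boost=4.0):
    """Crude voice leveler: per-window RMS gain riding toward a target.
    Capping the boost keeps quiet pauses from being pumped up."""
    n = int(sr * win_s)
    y = np.copy(x)
    for start in range(0, len(x), n):
        seg = x[start:start + n]
        rms = np.sqrt(np.mean(seg ** 2)) + 1e-12
        y[start:start + n] = seg * min(target_rms / rms, max_boost)
    return y
```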

De-Clipper and Plosive Remover
The final two plugins in the Era 4 Pro Bundle are the Plosive Remover and De-Clipper. De-Clipper is an interesting little plugin that tries to restore audio lost to clipping. If you recorded audio at too high a gain and it came out horribly distorted, it has probably been clipped. De-Clipper tries to salvage clipped audio by recreating the oversaturated audio segments. While it’s always better to monitor your audio recording on set and re-record if possible, sometimes it is just too late; that’s when you should try De-Clipper. There are two modes: one for normal/standard use and one for trickier cases that takes a little more processing power.

The final plugin, Plosive Remover, focuses on artifacts typically caused by “p” and “b” sounds. These can happen if no pop screen is used and/or if the person being recorded is too close to the microphone. There are two modes: normal and extreme. Subtle pops are easily repaired in normal mode, but extreme pops will definitely need the extreme mode. Much like the De-Esser, Plosive Remover has an audio waveform display to show what is being affected, while the “Diff” mode plays back only what is being affected. If you just want to stick to the “one button” mantra, the Processing dial is really all you need to touch. The Plosive Remover is another amazing plugin that, when you need it, does a great job quickly and easily.

Summing Up
In the end, I downloaded all of the Accusonus audio demos found on the Era 4 website, along with installers. This is the same place you can download the installers if you want to take part in the 14-day trial. I purposely limited my audio editing time to under one minute per clip and plugin to see what I could do. Check out my work with the Accusonus Era 4 Pro audio repair plugins on YouTube and see if anything jumps out at you. In my opinion, the Noise Remover, Reverb Remover and Era-D are worth the price of admission, but each plugin from Accusonus does great work.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.


True Detective’s quiet, tense Emmy-nominated sound

By Jennifer Walden

When there’s nothing around, there’s no place to hide. That’s why quiet soundtracks can be the most challenging to create. Every flaw in the dialogue — every hiss, every off-mic head turn, every cloth rustle against the body mic — stands out. Every incidental ambient sound — bugs, birds, cars, airplanes — stands out. Even the noise-reduction processing to remove those flaws can stand out, particularly when there’s a minimalist approach to sound effects and score.

That is why the sound editing and mixing on Season 3 of HBO’s True Detective have been recognized with Emmy nominations. The sound team put together a quiet, tense soundtrack that perfectly matched the tone of the show.

L to R: Micah Loken, Tateum Kohut, Mandell Winter, David Esparza and Greg Orloff.

We reached out to the team at Sony Pictures Post Production Services to talk about the work — supervising sound editor Mandell Winter; sound designer David Esparza, MPSE; dialogue editor Micah Loken; and re-recording mixers Tateum Kohut and Greg Orloff (who mixed the show in 5.1 surround on an Avid S6 console at Deluxe Hollywood Stage 5).

Of all the episodes in Season 3 of True Detective, why did you choose “The Great War and Modern Memory” for award consideration for sound editing?
Mandell Winter: This episode had a little bit of everything. We felt it represented the season pretty well.

David Esparza: It also sets the overall tone of the season.

Why this episode for sound mixing?
Tateum Kohut: The episode had very creative transitions, and it set up the emotion of our main characters. It establishes the three timelines that the season takes place in. Even though it didn’t have the most sound or the most dynamic sound, we chose it because, overall, we were pleased with the soundtrack, as was HBO. We were all pleased with the outcome.

Greg Orloff: We looked at Episode 5 too, “If You Have Ghosts,” which had a great seven-minute set piece with great action and cool transitions. But overall, Episode 1 was more interesting sonically. As an episode, it had great transitions and tension all throughout, right from the beginning.

Let’s talk about the amazing dialogue on this show. How did you get it so clean while still retaining all the quality and character?
Winter: Geoffrey Patterson was our production sound mixer, and he did a great job capturing the tracks. We didn’t do a ton of ADR because our dialogue editor, Micah Loken, was able to do quite a bit with the dialogue edit.

Micah Loken: Both the recordings and acting were great. That’s one of the most crucial steps to a good dialogue edit. The lead actors — Mahershala Ali and Stephen Dorff — had beautiful and engaging performances and excellent resonance to their voices. Even at a low-level whisper, the character and quality of the voice was always there; it was never too thin. By using the boom, the lav, or a special combination of both, I was able to dig out the timbre while minimizing noise in the recordings.

What helped me most was that Mandell and I had the opportunity to watch the first two episodes before we started really digging in, which provided a macro view into the content. Immediately, some things stood out, like the fact that it was wall-to-wall dialogue in each episode, and that became our focus. I noticed that on set it was hot; the exterior shots were full of bugs, and the actors would get dry mouths, which caused them to smack their lips — something that is commonly over-accentuated in recordings. It was important to minimize anything that wasn’t dialogue while being mindful to maintain the quality and level of the voice. Plus, the story was so well-written that it became a personal endeavor to bring my A game to the team. After completion, I would hand off the episode to Mandell and our dialogue mixer, Tateum.

Kohut: I agree. Geoffrey Patterson did an amazing job. I know he was faced with some challenges and environmental issues there in northwest Arkansas, especially on the exteriors, but his tracks were superbly recorded.

Mandell and Micah did an awesome job with the prep, so it made my job very pleasurable. Like Micah said, the deep booming voices of our two main actors were just amazing. We didn’t want to go too far with noise reduction in order to preserve that quality, and it did stand out. I did do more de-essing and de-ticking using iZotope RX 7 and FabFilter Pro-Q 2 to knock down some syllables and consonants that were too sharp, just because we had so much close-up, full-frame face dialogue that we didn’t want to distract from the story and the great performances that they were giving. But very little noise reduction was needed due to the well-recorded tracks. So my job was an absolute pleasure on the dialogue side.

Their editing work gave me more time to focus on the creative mixing, like weaving in the music just the way that series creator Nic Pizzolatto and composer T Bone Burnett wanted, and working with Greg Orloff on all these cool transitions.

We’re all very happy with the dialogue on the show and very proud of our work on it.

Loken: One thing that I wanted to remain cognizant of throughout the dialogue edit was making sure that Tateum had a smooth transition from line to line on each of the tracks in Pro Tools. Some lines might have had more intrinsic bug sounds or unwanted ambience but, in general, during the moments of pause, I knew the background ambience of the show was probably going to be fairly mild and sparse.

Mandell, how does your approach to the dialogue on True Detective compare to Deadwood: The Movie, which also earned Emmy nominations this year for sound editing and mixing?
Winter: Amazingly enough, we had the same production sound mixer on both — Geoffrey Patterson. That helps a lot.

We had more time on True Detective than on Deadwood. Deadwood was just “go.” We did the whole film in about five or six weeks. For True Detective, we had 10 days of prep time before we hit a five-day mix. We also had less material to get through on an episode of True Detective within that time frame.

Going back to the mix on the dialogue, how did you get the whispering to sound so clear?
Kohut: It all boils down to how well the dialogue was recorded. We were able to preserve that whispering and get a great balance around it. We didn’t have to force anything through. So, it was well-recorded, well-prepped and it just fit right in.

Let’s talk about the space around the dialogue. What was your approach to world building for “The Great War And Modern Memory?” You’re dealing with three different timelines from three different eras: 1980, 1990, and 2015. What went into the sound of each timeline?
Orloff: It was tough in a way because the different timelines overlapped sometimes. We’d have a transition happening, but with the same dialogue. So the challenge became how to change the environments on each of those cuts. One thing that we did was to make the show as sparse as possible, particularly after the discovery of the body of the young boy Will Purcell (Phoenix Elkin). After that, everything in the town becomes quiet. We tried to take out as many birds and bugs as possible, as though the town had died along with the boy. From that point on, anytime we were in that town in the original timeline, it was dead-quiet. As we went on later, we were able to play different sounds for that location, as though the town is recovering.

The use of sound on True Detective is very restrained. Were the decisions on where to have sound and how much sound happening during editorial? Or were those decisions mostly made on the dub stage when all the elements were together? What were some factors that helped you determine what should play?
Esparza: Editorially, the material was definitely prepared with a minimalistic aesthetic in mind. I’m sure it got pared down even more once it got to the mix stage. The aesthetic of the True Detective series in general tends to be fairly minimalistic and atmospheric, and we continued with that in this third season.

Orloff: That’s purposeful, from the filmmakers on down. It’s all about creating tension. Sometimes the silence helps more to create tension than having a sound would. Between music and sound effects, this show is all about tension. From the very beginning, from the first frame, it starts and it never really lets up. That was our mission all along, to keep that tension. I hope that we achieved that.

That first episode — “The Great War And Modern Memory” — was intense even the first time we played it back, and I’ve seen it numerous times since, and it still elicits the same feeling. That’s the mark of great filmmaking and storytelling and hopefully we helped to support that. The tension starts there and stays throughout the season.

What was the most challenging scene for sound editorial in “The Great War And Modern Memory?” Why?
Winter: I would say it was the opening sequence with the kids riding the bikes.

Esparza: It was a challenge to get the bike spokes ticking and deciding what was going to play and what wasn’t going to play and how it was going to be presented. That scene went through a lot of work on the mix stage, but editorially, that scene took the most time to get right.

What was the most challenging scene to mix in that episode? Why?
Orloff: For the effects side of the mix, the most challenging part was the opening scene. We worked on that longer than any other scene in that episode. That first scene is really setting the tone for the whole season. It was about getting that right.

We had brilliant sound design for the bike spokes ticking that transitions into a watch ticking that transitions into a clock ticking. Even though there’s dialogue that breaks it up, you’re continuing with different transitions of the ticking. We worked on that both editorially and on the mix stage for a long time. And it’s a scene I’m proud of.

Kohut: That first scene sets up the whole season — the flashback, the memories. It was important to the filmmakers that we got that right. It turned out great, and I think it really sets up the rest of the season and the intensity that our actors have.

What are you most proud of in terms of sound this season on True Detective?
Winter: I’m most proud of the team. The entire team elevated each other and brought their A-game all the way around. It all came together this season.

Orloff: I agree. I think this season was something we could all be proud of. I can’t be complimentary enough about the work of Mandell, David and their whole crew. Everyone on the crew was fantastic and we had a great time. It couldn’t have been a better experience.

Esparza: I agree. And I’m very thankful to HBO for giving us the time to do it right and spend the time, like Mandell said. It really was an intense emotional project, and I think that extra time really paid off. We’re all very happy.

Winter: One thing we haven’t talked about was T Bone and his music. It really brought a whole other level to this show. It brought a haunting mood, and he always brings such unique tracks to the stage. When Tateum would mix them in, the whole scene would take on a different mood. The music at times danced that thin line, where you weren’t sure if it was sound design or music. It was very cool.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Behind the Title: One Thousand Birds sound designer Torin Geller

This sound pro was initially interested in working in a music studio, but once he got a taste of audio post, there was no turning back.

NAME: Torin Geller

COMPANY: NYC’s One Thousand Birds (OTB)

CAN YOU DESCRIBE YOUR COMPANY?
OTB is a bi-coastal audio post house specializing in sound design and mixing for commercials, TV and film. We also create interactive audio experiences and installations.

One Thousand Birds

WHAT’S YOUR JOB TITLE?
Sound and Interactive Designer

WHAT DOES THAT ENTAIL?
I work on every part of our sound projects — dialogue editing, sound design and mixing — and I also help direct and build our interactive installation work.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Operating a scissor lift!

WHAT’S YOUR FAVORITE PART OF THE JOB?
Working with my friends. The atmosphere at OTB is like no other place I’ve worked; many of the people working here are old friends. I think it helps us a lot in terms of being creative since we’re not afraid to take risks and everyone here has each other’s backs.

WHAT’S YOUR LEAST FAVORITE?
Unexpected overtime.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
In the morning, right after my first cup of coffee.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Making ambient music in the woods.

JBL spot with Aaron Judge

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I went to school for music technology hoping to work in a music studio, but fell into working in audio post after getting an internship at OTB during school. I still haven’t left!

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Recently, we worked on a great mini doc for Royal Caribbean that featured chef Paxx Caraballo Moll, whose story is really inspiring. We also recently did sound design and Foley for an M&Ms spot, and that was a lot of fun.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
We designed and built a two-story tall interactive chandelier at a hospital in Kansas City — didn’t see that one coming. It consists of a 20-foot-long spiral of glowing orbs that reacts to the movements of people walking by and also incorporates reactive sound. Plus, I got to work on the design of the actual structure with my sister who’s an artist and landscape architect, which was really cool.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
– headphones
– music streaming
– synthesizers

Hospital installation

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I love following animators on Instagram. I find that kind of work especially inspiring. Movement and sound are so integral to each other, and I love seeing how they can interplay in abstract, interesting ways of animation that aren’t necessarily possible in film.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I’ve recently started rock climbing and it’s an amazing way to de-stress. I’ve never been one to exercise, but rock climbing feels very different. It’s intensely challenging but totally non-competitive and has a surprisingly relaxed pace to it. Each climb is a puzzle with a very clear end, which makes it super satisfying. And nothing helps you sleep better than being physically exhausted.

The sounds of HBO’s Divorce: Keeping it real

HBO’s Divorce, which stars Sarah Jessica Parker and Thomas Haden Church, focuses on a long-married couple who just can’t do it anymore. It follows them from divorce through their efforts to move on with their lives, and what that looks like. The show deftly tackles a very difficult subject with a heavy dose of humor mixed in with the pain and angst. The story takes place in various Manhattan locations and a nearby suburb. And, as you can imagine, the sounds of the neighborhoods vary.

L-R: Eric Hirsch, David Briggs

Sound post production for the third season of HBO’s comedy Divorce was completed at Goldcrest Post in New York City. Supervising sound editor David Briggs and re-recording mixer Eric Hirsch worked together to capture the ambiances of upscale Manhattan neighborhoods that serve as the backdrop for the story of the tempestuous breakup between Frances and Robert.

As is often the case with comedy series, the imperative for Divorce’s sound team was to support the narrative by ensuring that the dialogue is crisp and clear, and jokes are properly timed. However, Briggs and Hirsch go far beyond that in developing richly textured soundscapes to achieve a sense of realism often lacking in shows of the genre.

“We use sound to suggest life is happening outside the immediate environment, especially for scenes that are shot on sets,” explains Hirsch. “We work to achieve the right balance, so that the scene doesn’t feel empty but without letting the sound become so prominent that it’s a distraction. It’s meant to work subliminally so that viewers feel that things are happening in suburban New York, while not actually thinking about it.”

Season three of the show introduces several new locations and sound plays a crucial role in capturing their ambience. Parker’s Frances, for example, has moved to Inwood, a hip enclave on the northern tip of Manhattan, and background sound effects help to distinguish it from the woodsy village of Hastings-on-Hudson, where Haden Church’s Robert continues to live. “The challenge was to create separation between those two worlds, so that viewers immediately understand where we are,” explains series producer Mick Aniceto. “Eric and David hit it. They came up with sounds that made sense for each part of the city, from the types of cars you hear on the streets to the conversations and languages that play in the background.”

Meanwhile, Frances’ friend Diane (Molly Shannon) has taken up residence in a Manhattan high-rise, and it, too, required a specific sonic treatment. “The sounds that filter into a high-rise apartment are much different from those in a street-level structure,” Aniceto notes. “The hum of traffic is more distant, while you hear things like the whir of helicopters. We had a lot of fun exploring the different sonic environments. To capture the flavor of Hastings-on-Hudson, our executive producer and showrunner came up with the idea of adding distant construction sounds to some scenes.”

A few scenes from the new season are set inside a prison. Aniceto says the sound team was able to help breathe life into that environment through the judicious application of very specific sound design. “David Briggs had just come off of Escape at Dannemora, so he was very familiar with the sounds of a prison,” he recalls. “He knew the kind of sounds that you hear in communal areas, not only physical sounds like buzzers and bells, but distant chats among guards and visitors. He helped us come up with amusing bits of background dialogue for the loop group.”

Most of the dialogue came directly from the production tracks, but the sound team hosted several ADR sessions at Goldcrest for crowd scenes. Hirsch points to an episode from the new season that involves a girls’ basketball team. ADR mixer Krissopher Chevannes recorded groups of voice actors (provided by Dann Fink and Bruce Winant of Loopers Unlimited) to create background dialogue for a scene on a team bus and another that happens during a game.

“During the scene on the bus, the girls are talking normally, but then the action shifts to slo-mo. At that point the sound design goes away and the music drives it,” Hirsch recalls. “When it snaps back to reality, we bring the loop-group crowd back in.”

The emotional depth of Divorce marks it as different from most television comedies, and it also creates more interesting opportunities for sound. “The sound portion of the show helps take it over the line and make it real for the audience,” says Aniceto. “Sound is a big priority for Divorce. I get excited by the process and the opportunities it affords to bring scenes to life. So, I surround myself by smart and talented people like Eric and David, who understand how to do that and give the show the perfect feel.”

All three seasons of Divorce are available on HBO Go and HBO Now.

Dialects, guns and Atmos mixing: Tom Clancy’s Jack Ryan

By Jennifer Walden

Being an analyst is supposed to be a relatively safe job. A paper cut is probably the worst job-related injury you’d get… maybe, carpal tunnel. But in Amazon Studios/Paramount’s series Tom Clancy’s Jack Ryan, CIA analyst Jack Ryan (John Krasinski) is hauled away from his desk at CIA headquarters in Langley, Virginia, and thrust into an interrogation room in Syria where he’s asked to extract info from a detained suspect. It’s a far cry from a sterile office environment and the cuts endured don’t come from paper.

Benjamin Cook

Four-time Emmy award-winning supervising sound editor Benjamin Cook, MPSE — at 424 Post in Culver City — co-supervised Tom Clancy’s Jack Ryan with Jon Wakeham. Their sound editorial team included sound effects editors Hector Gika and David Esparza, MPSE, dialogue editor Tim Tuchrello, music editor Alex Levy, Foley editor Brett Voss, and Foley artists Jeff Wilhoit and Dylan Tuomy-Wilhoit.

This is Cook’s second Emmy nomination this season; he was also nominated for sound editing on HBO’s Deadwood: The Movie.

Here, Cook talks about the aesthetic approach to sound editing on Jack Ryan and breaks down several scenes from the Emmy-nominated “Pilot” episode in Season 1.

Congratulations on your Emmy nomination for sound editing on Tom Clancy’s Jack Ryan! Why did you choose the first episode for award consideration?
Benjamin Cook: It has the most locations, establishes the CIA involvement, and has a big battle scene. It was a good all-around episode. There were a couple other episodes that could have been considered, such as Episode 2 because of the Paris scenes and Episode 6 because it’s super emotional and had incredible loop group and location ambience. But overall, the first episode had a little bit better balance between disciplines.

The series opens up with two young boys in Lebanon, 1983. They’re playing and being kids; it’s innocent. Then the attack happens. How did you use sound to help establish this place and time?
Cook: We sourced a recordist to go out and record material in Syria and Turkey. That was a great resource. We also had one producer who recorded a lot of material while he was in Morocco. Some of that could be used and some of it couldn’t because the dialect is different. There was also some pretty good production material recorded on-set and we tried to use that as much as we could as well. That helped to ground it all in the same place.

The opening sequence ends with explosions and fire, which makes an interesting juxtaposition to the tranquil water scene that follows. What sounds did you use to help blend those two scenes?
Cook: We did a muted effect on the water when we first introduced it and then it opens up to full fidelity. So we were going from the explosions and that concussive blast to a muted, filtered sound of the water and rowing. We tried to get the rhythm of that right. Carlton Cuse (one of the show’s creators) actually rows, so he was pretty particular about that sound. Beyond that, it was filtering the mix and adding design elements that were downplayed and subtle.

The next big scene is in Syria, when Sheikh Al Radwan (Jameel Khoury) comes to visit Sheikh Suleiman (Ali Suliman). How did you use sound to help set the tone of this place and time?
Cook: It was really important that we got the dialects right. Whenever we were in the different townships and different areas, one of the things that the producers were concerned about was authenticity with the language and dialect. There are a lot of regional dialects in Arabic, but we also needed Kurdish, Turkish — Kurmanji, Chechen and Armenian. We had really good loop group, which helped out tremendously. Caitlan McKenna, our group leader, cast several multilingual voice actors who were familiar with the area and could give us a couple different dialects; that really helped to sell location for sure. The voices — probably more than anything else — are what helped to sell the location.

Another interesting juxtaposition of sound was going from the sterile CIA office environment to this dirty, gritty, rattly world of Syria.
Cook: My aesthetic for this show — besides going for the authenticity that the showrunners were after — was trying to get as much detail into the sound as possible (when appropriate). So, even when we’re in the thick of the CIA bullpen there is lots of detail. We did an office record where we set mics around an office and moved papers and chairs and opened desk drawers. This gave the office environment movement and life, even when it is played low.

That location seems sterile next to the grittiness of the black-ops site in Yemen, with its sand gusts blowing, metal shacks rattling and tents flapping in the wind. You also have on- and off-screen vehicles and helicopters. Those textures were really helpful in differentiating those two worlds.

Tell me about Jack Ryan’s panic attack at 4:47am. It starts with that distant siren and then an airplane flyover before flashing back to the kid in Syria. What went into building that sequence?
Cook: A lot of that was structured by the picture editor, and we tried to augment what they had done and keep their intention. We changed out a few sounds here and there, but I can’t take credit for that one. Sometimes that’s just the nature of it. They already have an idea of what they want to do in the picture edit and we just augment what they’ve done. We made it wider, spread things out, added more elements to expand the sound more into the surrounds. The show was mixed in Dolby Home Atmos, so we created extra tracks to play in the Atmos sound field. The soundtrack still has a lot of detail in the 5.1 and 7.1 mixes, but the Atmos mix sounds really good.

Those street scenes in Syria, as we’re following the bank manager through the city, must have been a great opportunity to work with the Atmos surround field.
Cook: That is one of my favorite scenes in the whole show. The battles are fun but the street scene is a great example of places where you can use Atmos in an interesting way. You can use space to your advantage to build the sound of a location and that helps to tell the story.

At one point, they’re in the little café and we have glass rattles and discrete sounds in the surround field. Then it pans across the street to a donkey pulling a cart and a Vespa zips by. We use all of those elements as opportunities to increase the dynamics of the scene.

Going back to the battles, what were your challenges in designing the shootout near the end of this episode? It’s a really long conflict sequence.
Cook: The biggest challenge was that it was so long and we had to keep it interesting. You start off by building everything, you cut everything, and then you have to decide what to clear out. We wanted to give the different sides — the areas inside and outside — a different feel. We tried to do that as much as possible, but the director wanted to take it even farther. We ended up pulling the guns back, perspective-wise, making them sound even farther away than we had them. Then we stripped some out to make it less busy. That worked out well. In the end, we had a good compromise and everyone was really happy with how it plays.

Were the guns original recordings or library sounds?
Cook: There were sounds in there that are original recordings, and also some library sounds. I’ve gotten material from sound recordist Charles Maynes — he is my gun guru. I pretty much copy his gun recording setups when I go out and record. I learned everything I know from Charles in terms of gun recording. Watson Wu had a great library that recently came out and there is quite a bit of that in there as well. It was a good mix of original material and library.

We tried to do as much recording as we could, schedule permitting. We outsourced some recording work to a local guy in Syria and Turkey. It was great to have that material, even if it was just to use as a reference for what that place should sound like. Maybe we couldn’t use the whole recording but it gave us an idea of how that location sounds. That’s always helpful.

Locally, for this episode, we did the office shoot. We recorded an MRI machine and Greer’s car. Again, we always try to get as much as we can.

There are so many recordists out there who are a great resource, who are good at recording weapons, like Charles, Watson and Frank Bry (at The Recordist). Frank has incredible gun sounds. I use his libraries all the time. He’s up in Idaho and can capture these great long tails that are totally pristine and clean. The quality is so good. These guys are recording on state-of-the-art, top-of-the-line rigs.

Near the end of the episode, we’re back in Lebanon, 1983, with the boys coming to after the bombing. How did you use sound to help enhance the tone of that scene?
Cook: In the Avid track, they had started with a tinnitus ringing and we enhanced that. We used filtering on the voices and delays to give it more space and add a haunting aspect. When the older boy really wakes up and snaps to, we’re playing up the wailing of the younger kid as much as possible. Even when the older boy lifts the burning log off the younger boy’s legs, we really played up the creak of the wood and the fire. You hear the gore of charred wood pulling the skin off his legs. We played those elements up to make a very visceral experience in that last moment.

The music there is very emotional, and so is seeing that young boy in pain. Those kids did a great job and that made it easy for us to take that moment further. We had a really good source track to work with.

What was the most challenging scene for sound editorial? Why?
Cook: Overall, the battle was tough. It was a challenge because it was long and it was a lot of cutting and a lot of material to get together and go through in the mix. We spent a lot of time on that street scene, too. Those two scenes were where we spent the most time for sure.

For the opening sequence with the bombs, there was debate on whether we should hear the bomb sounds in sync with the explosions happening visually, or whether the sound should be delayed. That always comes up. It’s weird when the sound doesn’t match the visual, when in reality you’d hear the sound of an explosion that happened miles away much later than you’d see it.

Again, those are the compromises you make. One of the great things about this medium is that it’s so collaborative. No one person does it all… or rarely is it one person. It does take a village, and we had great support from the producers. They were very intentional about sound. They wanted sound to be a big player. Right from the get-go they gave us the tools and support that we needed, and that was really appreciated.

What would you want other sound pros to know about your sound work on Tom Clancy’s Jack Ryan?
Cook: I’m really big into detail on the editing side, but the mix on this show was great too. It’s unfortunate that the mixers didn’t get an Emmy nomination for mixing. I usually don’t get recognized unless the mixing is really done well.

There’s more to this series than the pilot episode. There are other super-good-sounding episodes; it’s a great-sounding season. I think we did a great job of finding ways of using sound to help tell the story and have it be an immersive experience. There is a lot of sound in it, and as a sound person, that’s usually what we want to achieve.

I highly recommend that people listen to the show in Dolby Atmos at home. I’ve been doing Atmos shows now since Black Sails. I did Lost in Space in Atmos, and we’re finishing up Season 2 in Atmos as well. We did Counterpart in Atmos. Atmos for home is here and we’re going to see more and more projects mixed in Atmos. You can play something off your phone in Atmos now. It’s incredible how the technology has changed so much. It’s another tool to help us tell the story. Look at Roma (my favorite mix last year). That film really used Atmos mixing; they really used the sound field and used extreme panning at times. In my honest opinion, it made the film more interesting and brought another level to the story.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

ADR, loop groups, ad-libs: Veep‘s Emmy-nominated audio team

By Jennifer Walden

HBO wrapped up its seventh and final season of Veep back in May, so sadly, we had to say goodbye to Julia Louis-Dreyfus’ morally flexible and potty-mouthed Selina Meyer. And while Selina’s political career was a bit rocky at times, the series was rock-solid — as evidenced by its 17 Emmy wins and 68 nominations over the show’s seven-year run.

For re-recording mixers William Freesh and John W. Cook II, this is their third Emmy nomination for Sound Mixing on Veep. This year, they entered the series finale — Season 7, Episode 7 “Veep” — for award consideration.

L-R: William Freesh, Sue Cahill, John W. Cook, II

Veep post sound editing and mixing was handled at NBCUniversal Studio Post in Los Angeles. In the midst of Emmy fever, we caught up with re-recording mixer Cook (who won a past Emmy for the mix on Scrubs) and Veep supervising sound editor Sue Cahill (winner of two past Emmys for her work on Black Sails).

Here, Cook and Cahill talk about how Veep’s sound has grown over the years, how they made the rapid-fire jokes crystal clear, and the challenges they faced in crafting the series’ final episode — like building the responsive convention crowds, mixing the transitions to and from the TV broadcasts, and cutting that epic three-way argument between Selina, Uncle Jeff and Jonah.

You’ve been with Veep since 2016? How has your approach to the show changed over the years?
John W. Cook II: Yes, we started when the series came to the states (having previously been posted in England with series creator Armando Iannucci).

Sue Cahill: Dave Mandel became the showrunner, starting with Season 5, and that’s when we started.

Cook: When we started mixing the show, production sound mixer Bill MacPherson and I talked a lot about how together we might improve the sound of the show. He made some tweaks, like trying out different body mics and negotiating with our producers to allow for more boom miking. Notwithstanding all the great work Bill did before Season 5, my job got consistently easier over Seasons 5 through 7 because of his well-recorded tracks.

Also, some of our tools have changed in the last three years. We installed the Avid S6 console. This, along with a handful of new plugins, has helped us work a little faster.

Cahill: In the dialogue editing process this season, we started using a tool called Auto-Align Post from Sound Radix. It’s a great tool that allowed us to cut both the boom and the ISO mics for every clip throughout the show and put them in perfect phase. This allowed John the flexibility to mix both together to give it a warmer, richer sound throughout. We lean heavily on the ISO mics, but being able to mix in the boom more helped the overall sound.

Cook: You get a bit more depth. Body mics tend to be more flat, so you have to add a little bit of reverb and a lot of EQing to get it to sound as bright and punchy as the boom mic. When you can mix them together, you get a natural reverb on the sound that gives the dialogue more depth. It makes it feel like it’s in the space more. And it requires a little less EQing on the ISO mic because you’re not relying on it 100%. When the Auto-Align Post technology came out, I was able to use both mics together more often. Before Auto-Align, I would shy away from doing that if it was too much work to make them sound in-phase. The plugin makes it easier to use both, and I find myself using the boom and ISO mics together more often.
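For readers curious about the mechanics, the core of that kind of phase alignment can be sketched in a few lines of Python: estimate the delay between the boom and the lav by cross-correlation, then shift the lav to match. This is only a toy illustration of the concept — Auto-Align Post itself tracks delays that drift as actors and booms move, which a single static offset can’t do — and the file names and the soundfile dependency are assumptions for the sketch.

import numpy as np
import soundfile as sf  # assumed dependency for reading/writing WAV files

# Read mono boom and lav (ISO) recordings -- file names are hypothetical.
boom, sr = sf.read("boom.wav")
lav, _ = sf.read("lav_iso.wav")

# Cross-correlate a window of the two mics to estimate the delay.
n = min(len(boom), len(lav), 10 * sr)  # analyze up to 10 seconds
corr = np.correlate(boom[:n], lav[:n], mode="full")
lag = int(corr.argmax() - (n - 1))  # positive lag: lav arrives early

# Shift the lav so the two tracks sum without comb filtering.
# np.roll wraps samples around the ends -- fine for a sketch, not for production.
aligned = np.roll(lav, lag)
sf.write("lav_aligned.wav", aligned, sr)
print(f"shifted lav by {lag} samples ({1000 * lag / sr:.2f} ms)")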

The dialogue on the show has always been rapid-fire, and you really want to hear every joke. Any tools or techniques you use to help the dialogue cut through?
Cook: In my chain, I’m using FabFilter Pro-Q 2 a lot, EQing pretty much every single line in the show. FabFilter’s built-in spectrum analyzer helps get at that target EQ that I’m going for, for every single line in the show.

In terms of compression, I’m doing a lot of gain staging. I have five different points in the chain where I use compression. I’m never trying to slam it too much, just trying to tap it at different stages. It’s a music technique that helps the dialogue to never sound squashed. Gain staging allows me to get a little more punch and a little more volume after each stage of compression.
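As a rough illustration of that staged approach — several gentle compressors in series, each taking only a little gain reduction, with makeup gain after each stage — here is a minimal Python sketch. The thresholds, ratios and the instantaneous (no attack/release) gain computer are all invented for brevity; this shows the shape of the idea, not Cook’s actual chain.

import numpy as np

def gentle_stage(x, threshold_db, ratio, makeup_db):
    # Instantaneous downward compression: reduce only the overshoot
    # above threshold, then add a little makeup gain.
    level_db = 20 * np.log10(np.abs(x) + 1e-12)
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio) + makeup_db
    return x * 10 ** (gain_db / 20.0)

def dialogue_chain(x):
    # Five light stages, each shaving only a few dB,
    # instead of one compressor slamming the signal.
    stages = [(-24, 1.5, 1.0), (-20, 1.5, 1.0), (-18, 2.0, 1.0),
              (-15, 2.0, 1.0), (-12, 2.5, 1.0)]
    for threshold_db, ratio, makeup_db in stages:
        x = gentle_stage(x, threshold_db, ratio, makeup_db)
    return x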

Cahill: On the editing side, it starts with digging through the production mic tracks to find the cleanest sound. The dialogue assembly on this show is huge. It’s 13 tracks wide for each clip, and there are literally thousands of clips. The show is very cutty, and there are tons of overlaps. Weeding through all the material to find the best lav mics, in addition to the boom, really takes time. It’s not necessarily the character’s lav mic that’s the best for a line. They might be speaking more clearly into the mic of the person that is right across from them. So, listening to every mic choice and finding the best lav mics requires a couple days of work before we even start editing.

Also, we do a lot of iZotope RX work in editing before the dialogue reaches John’s hands. That helps to improve intelligibility and clear up the tracks before John works his magic on it.

Is it hard to find alternate production takes due to the amount of ad-libbing on the show? Do you find you do a lot of ADR?
Cahill: Exactly, it’s really hard to find production alts in the show because there is so much improv. So, yeah, it takes extra time to find the cleanest version of the desired lines. There is a significant amount of ADR in the show. In this episode in particular, we had 144 lines of principal ADR. And, we had 250 cues of group. It’s pretty massive.

There must’ve been so much loop group in the “Veep” episode. Every time they’re in the convention center, it’s packed with people!
Cook: There was the larger convention floor to consider, and the people that were 10 to 15 feet away from whatever character was talking on camera. We tried to balance that big space with the immediate space around the characters.

This particular Veep episode has a chaotic vibe. The main location is the nomination convention. There are huge crowds, TV interviews (both in the convention hall and also playing on Selina’s TV in her skybox suite and hotel room) and a big celebration at the end. Editorially, how did you approach the design of this hectic atmosphere?
Cahill: Our sound effects editor Jonathan Golodner had a lot of recordings from prior national conventions. So those recordings are used throughout this episode. It really gives the convention center that authenticity. It gave us the feeling of those enormous crowds. It really helped to sell the space, both when they are on the convention floor and from the skyboxes.

The loop group we talked about was a huge part of the sound design. There were layers and layers of crafted walla. We listened to a lot of footage from past conventions and found that there is always a speaker on the floor giving a speech to ignite the crowd, so we tried to recreate that in loop group. We did some speeches that we played in the background so we would have these swells of the crowd and crowd reactions that gave the crowd some movement so that it didn’t sound static. I felt like it gave it a lot more life.

We recreated chanting in loop group. There was a chant for Tom James (Hugh Laurie), which was part of production. They were saying, “Run Tom Run!” We augmented that with group. We changed the start of that chant from where it was in production. We used the loop group to start that chant sooner.

Cook: The Tom James chant was one instance where we did have production crowd. But most of the time, Sue was building the crowds with the loop group.

Cahill: I used casting director Barbara Harris for loop group, and throughout the season we had so many different crowds and rallies — both interior and exterior — that we built with loop group because there wasn’t enough from production. We had to hit on all the points that they are talking about in the story. Jonah (Timothy Simons) had some fun rallies this season.

Cook: Those moments of Jonah’s were always more of a “call-and-response”-type treatment.

The convention location offered plenty of opportunity for creative mixing. For example, the episode starts with Congressman Furlong (Dan Bakkedahl) addressing the crowd from the podium. The shot cuts to a CBSN TV broadcast of him addressing the crowd. Next the shot cuts to Selina’s skybox, where they’re watching him on TV. Then it’s quickly back to Furlong in the convention hall, then back to the TV broadcast, and back to Selina’s room — all in the span of seconds. Can you tell me about your mix on that sequence?
Cook: It was about deciding on the right reverb for the convention center and the right reverbs for all the loop group and the crowds and how wide to be (how much of the surrounds we used) in the convention space. Cutting to the skybox, all of that sound was mixed to mono, for the most part, and EQ’d a little bit. The producers didn’t want to futz it too much. They wanted to keep the energy, so mixing it to mono was the primary way of dealing with it.

Whenever there was a graphic on the lower third, we talked about treating that sound like it was news footage. But we decided we liked the energy of it being full fidelity for all of those moments we’re on the convention floor.

Another interesting thing was the way that Bill Freesh and I worked together. Bill was handling all of the big cut crowds, and I was handling the loop group on my side. We were trying to walk the line between a general crowd din on the convention floor — where you always felt like it was busy and crowded and huge — and specific reactions from the loop group to something that Furlong would say or, later in the show, to Selina’s acceptance speech. We always wanted to play reactions to the specifics, but on the convention floor it never seems to get quiet. There was a lot of discussion about that.

Even though we cut from the convention center into the skybox, those considerations about crowd were still in play — whether we were on the convention floor or watching the convention through a TV monitor.

You did an amazing job on all those transitions — from the podium to the TV broadcast to the skybox. It felt very real, very natural.
Cook: Thank you! That was important to us, and certainly important to the producers. All the while, we tried to maintain as much energy as we could. Once we got the sound of it right, we made sure that the volume was kept up enough so that you always felt that energy.

It feels like the backgrounds never stop when they’re in the convention hall. In Selina’s skybox, when someone opens the door to the hallway, you hear the crowd as though the sound is traveling down the hallway. Such a great detail.
Cook and Cahill: Thank you!

For the background TV broadcasts feeding Selina info about the race — like Buddy Calhoun (Matt Oberg) talking about the transgender bathrooms — what was your approach to mixing those in this episode? How did you decide when to really push them forward in the mix and when to pull back?
Cook: We thought about panning. For the most part, our main storyline is in the center. When you have a TV running in the background, you can pan it off to the side a bit. It’s amazing how you can keep the volume up a little more without it getting in the way and masking the primary characters’ dialogue.

It’s also about finding the right EQ so that the TV broadcast isn’t sharing the same EQ bandwidth as the characters in the room.

Compression plays a role too, whether that’s via a plugin or me riding the fader. I can manually do what a side-chained compressor can do by just riding the fader and pulling the sound down when necessary or boosting it when there’s a space between dialogue lines from the main characters. The challenge is that there is constant talking on this show.
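What that fader move automates is essentially sidechain ducking: follow the envelope of the foreground dialogue and dip the background TV track while the dialogue is active. A minimal Python sketch of the idea, with placeholder time constants and ducking depth rather than anything from Cook’s actual mix:

import numpy as np

def envelope(x, sr, attack_ms=5.0, release_ms=250.0):
    # One-pole envelope follower: fast attack, slow release.
    att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env, e = np.zeros(len(x)), 0.0
    for i, v in enumerate(np.abs(x)):
        coeff = att if v > e else rel
        e = coeff * e + (1.0 - coeff) * v
        env[i] = e
    return env

def duck(tv, dialogue, sr, threshold=0.05, depth_db=-12.0):
    # Dip the TV track in proportion to how hot the dialogue runs,
    # letting it back up in the gaps between lines.
    amount = np.clip(envelope(dialogue, sr) / threshold, 0.0, 1.0)
    return tv * 10 ** (amount * depth_db / 20.0)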

Going back to what has changed over the last three years, one of the things that has changed is that we have more time per episode to mix the show. We got more and more time from the first mix to the last mix. We have twice as much time to mix the show.

Even with all the backgrounds happening in Veep, you never miss the dialogue lines. Except, there’s a great argument that happens when Selina tells Jonah he’s going to be vice president. His Uncle Jeff (Peter MacNicol) starts yelling at him, and then Selina joins in. And Jonah is yelling back at them. It’s a great cacophony of insults. Can you tell me about that scene?
Cahill: Those 15 seconds of screen time took us several hours of work in editorial. Dave (Mandel) said he couldn’t understand Selina clearly enough, but he didn’t want to loop the whole argument. Of course, all three characters are overlapped — you can hear all of them on each other’s mics — so how do you just loop Selina?

We started with an extensive production alt search that went back and forth through the cutting room a few times. We decided that we did need to ADR Selina. So we ended up using a combination of mostly ADR for Selina’s side with a little bit of production.

For the other two characters, we wanted to save their production lines, so our dialogue editor Jane Boegel (she’s the best!) did an amazing job using iZotope RX’s De-bleed feature to clear Selina’s voice out of their mics, so we could preserve their performances.

We didn’t loop any of Uncle Jeff, and it was all because of Jane’s work cleaning out Selina. We were able to save all of Uncle Jeff. It’s mostly production for Jonah, but we did have to loop a few words for him. So it was ADR for Selina, all of Uncle Jeff and nearly all of Jonah from set. Then, it was up to John to make it match.

Cook: For me, in moments like those, it’s about trying to get equal volumes for all the characters involved. I tried to make Selina’s yelling and Uncle Jeff’s yelling at the exact same level so the listener’s ear can decide what it wants to focus on rather than my mix telling you what to focus on.

Another great mix sequence was Selina’s nomination for president. There’s a promo video of her talking about horses that’s playing back in the convention hall. There are multiple layers of processing happening — the TV filter, the PA distortion and the convention hall reverb. Can you tell me about the processing on that scene?
Cook: Oftentimes, when I do that PA sound, it’s a little bit of futzing, like rolling off the lows and highs, almost like you would do for a small TV. But then you put a big reverb on it, with some pre-delay on it as well, so you hear it bouncing off the walls. Once you find the right reverb, you’re also hearing it reflecting off the walls a little bit. Sometimes I’ll add a little bit of distortion as well, as if it’s coming out of the PA.

When Selina is backstage talking with Gary (Tony Hale), I rolled off a lot more of the highs on the reverb return on the promo video. Then, in the same way I’d approach levels with a TV in the room, I was riding the level on the promo video to fit around the main characters’ dialogue. I tried to push it in between little breaks in the conversation, pulling it down lower when we needed to focus on the main characters.
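The PA treatment Cook describes — rolled-off lows and highs, a touch of distortion, and a big reverb with pre-delay — maps onto a simple processing chain. The sketch below is a generic approximation in Python with invented crossover points and drive settings, not his settings, and it assumes a hall impulse response is supplied by the caller:

import numpy as np
from scipy.signal import butter, sosfilt, fftconvolve

def futz_pa(voice, hall_ir, sr, predelay_ms=60.0, wet=0.5, drive=4.0):
    # 1. Roll off lows and highs, as for a small PA horn.
    sos = butter(4, [300.0, 4000.0], btype="bandpass", fs=sr, output="sos")
    x = sosfilt(sos, voice)
    # 2. Mild distortion, as if the PA is being pushed.
    x = np.tanh(drive * x) / np.tanh(drive)
    # 3. Big hall reverb with pre-delay, so the speech reads as
    #    bouncing off the convention-center walls.
    predelay = np.zeros(int(sr * predelay_ms / 1000.0))
    wet_sig = fftconvolve(np.concatenate([predelay, x]), hall_ir)[: len(x)]
    return (1.0 - wet) * x + wet * wet_sig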

What was the most challenging scene for you to mix?
Cook: I would say the Tom James chanting was challenging because we wanted to hear the chant from inside the skybox, to the balcony of the skybox and then down on the convention floor. There was a lot of conversation about the microphones from Mike McLintock’s (Matt Walsh) interview. The producers decided that since there was a little bit of bleed in the production already, they wanted Mike’s microphone to be going out to the PA speakers in the convention hall. You hear a big reverb on Tom James as well. Then there was the level of all the loop group specifics and chanting — the ramp-up of the chanting from zero to full volume — which we negotiated with the producers. That was one of the more challenging scenes.

The acceptance speech was challenging too, because of all of the cutaways. There is that moment with Gary getting arrested by the FBI; we had to decide how much of that we wanted to hear.
There was the Billy Joel song “We Didn’t Start the Fire” that played over all the characters’ banter following Selina’s acceptance speech. We had to balance the dialogue with the desire to crank up that track as much as we could.

There were so many great moments this season. How did you decide on the series finale episode, “Veep,” for Emmy consideration for Sound Mixing?
Cook: It was mostly about story. This is the end of a seven-year run (a three-year run for Sue and me), but the fact that every character gets a moment — a wrap-up on their character — makes me nostalgic about this episode in that way.

It also had some great sound challenges that came together nicely, like all the different crowds and the use of loop group. We’ve been using a lot of loop group on the show for the past three years, but this episode had a particularly massive amount of loop group.

The producers were also huge fans of this episode. When I talked to Dave Mandel about which episode we should put up, he recommended this one as well.

Any other thoughts you’d like to add on the sound of Veep?
Cook: I’m going to miss Veep a lot. The people on it, like Dave Mandel, Julia Louis-Dreyfus and Morgan Sackett … everyone behind the credenza. They were always working to create an even better show. It was a thrill to be a team member. They always treated us like we were in it together to make something great. It was a pleasure to work with people that recognize and appreciate the time and the heart that we contribute. I’ll miss working with them.

Cahill: I agree with John. On that last playback, no one wanted to leave the stage. Dave brought champagne, and Julia brought chocolates. It was really hard to say goodbye.

Harbor expands to LA and London, grows in NY

New York-based Harbor has expanded into Los Angeles and London and has added staff and locations in New York. Industry veteran Russ Robertson joins Harbor’s new Los Angeles operation as EVP of sales, features and episodic after a 20-year career with Deluxe and Panavision. Commercial director James Corless and operations director Thom Berryman will spearhead Harbor’s new UK presence following careers with Pinewood Studios, where they supported clients such as Disney, Netflix, Paramount, Sony, Marvel and Lucasfilm.

Harbor’s LA-based talent pool includes color grading from Yvan Lucas, Elodie Ichter, Katie Jordan and Billy Hobson. Some of the team’s projects include Once Upon a Time … in Hollywood, The Irishman, The Hunger Games, The Maze Runner, Maleficent, The Wolf of Wall Street, Snow White and the Huntsman and Rise of the Planet of the Apes.

Paul O’Shea, formerly of MPC Los Angeles, heads the visual effects teams, tapping lead CG artist Yuichiro Yamashita for 3D out of Harbor’s Santa Monica facility and 2D creative director Q Choi out of Harbor’s New York office. The VFX artists have worked with brands such as Nike, McDonald’s, Coke, Adidas and Samsung.

Harbor’s Los Angeles studio supports five grading theaters for feature film, episodic and commercial productions, offering private connectivity to Harbor NY and Harbor UK, with realtime color-grading sessions, VFX reviews and options to conform and final-deliver in any location.

The new UK operation, based out of London and Windsor, will offer in-lab and near-set dailies services along with automated VFX pulls and delivery through Harbor’s Anchor system. The UK locations will draw from Harbor’s US talent pool.

Meanwhile, the New York operation has grown its talent roster and Soho footprint to six locations, with a recently expanded offering for creative advertising. Veteran artists on the commercial team include editors Bruce Ashley and Paul Kelly, VFX supervisor Andrew Granelli, colorist Adrian Seery, and sound mixers Mark Turrigiano and Steve Perski.

Harbor’s feature and episodic offering continues to expand, with NYC-based artists available in Los Angeles and London.

Goosing the sound for Allstate’s action-packed ‘Mayhem’ spots

By Jennifer Walden

While there are some commercials you’d rather not hear, there are some you actually want to turn up, like those of Leo Burnett Worldwide’s “Mayhem” campaign for Allstate Insurance.

John Binder

The action-packed and devilishly hilarious ads have been going strong since April 2010. Mayhem (played by actor Dean Winters) is a mischievous guy who goes around breaking things that cut-rate insurance won’t cover. Fond of your patio furniture? Too bad for all that wind! Been meaning to fix that broken front porch step? Too bad the dog walker just hurt himself on it! Parked your car in the driveway and now it’s stolen? Too bad — and the thief hit your mailbox and motorcycle too!

Leo Burnett Worldwide’s go-to for “Mayhem” is award-winning post sound house Another Country, based in Chicago and Detroit. Sound designer/mixer John Binder (partner of Cutters Studios and managing director of Another Country) has worked on every single “Mayhem” spot to date. Here, he talks about his work on the latest batch: Overly Confident Dog Walker, Car Thief and Bunch of Wind. And Binder shares insight on a few of his favorites over the years.

In Overly Confident Dog Walker, Mayhem is walking an overwhelming number of dogs. He can barely see where he’s walking. As he’s going up the front stairs of a house, a brick comes loose, causing Mayhem to fall and hit his head. As Mayhem delivers his message, one of the dogs comes over and licks Mayhem’s injury.

Overly Confident Dog Walker

Sound-wise, what were some of your challenges or unique opportunities for sound on this spot?
A lot of these “Mayhem” spots put the guy in ridiculous situations. There’s often a lot of noise happening during production, so we have to do a lot of cleanup in post using iZotope RX 7. When we can’t get the production dialogue to sound intelligible, we hook up with a studio in New York to record ADR with Dean Winters. For this spot, we had to ADR quite a bit of his dialogue while he is walking the dogs.

For the dog sounds, I added my own dog in there. I recorded his panting (he pants a lot), the dog chain and straining sounds. I also recorded his licking for the end of the spot.

For when Mayhem falls and hits his head, we had a really great sound for him hitting the brick. It was wonderful. But we sent it to the networks, and they felt it was too violent. They said they couldn’t air it because of both the visual and the sound. So, instead of changing the visuals, it was easier to change the sound of his head hitting the brick step. We had to tone it down. It’s neutered.

What’s one sound tool that helped you out on Overly Confident Dog Walker?
In general, there’s often a lot of noise from location in these spots. So we’re cleaning that up. iZotope RX 7 is key!


In Bunch of Wind, Mayhem represents a windy rainstorm. He lifts the patio umbrella and hurls it through the picture window. A massive tree falls on the deck behind him. After Mayhem delivers his message, he knocks over the outdoor patio heater, which smashes on the deck.

Bunch of Wind

Sound-wise, what were some of your challenges or unique opportunities for sound on Bunch of Wind?
What a nightmare for production sound. This one, understandably, was all ADR. We did a lot of Foley work, too, for the destruction to make it feel natural. If I’m doing my job right, then nobody notices what I do. When we’re with Mayhem in the storm, all that sound was replaced. There was nothing from production there. So, the rain, the umbrella flapping, the plate-glass window, the tree and the patio heater, that was all created in post sound.

I had to build up the storm every time we cut to Mayhem. When we see him through the phone, it’s filtered with EQ. As we cut back and forth between on-scene and through the phone, it had to build each time we’re back on him. It had to get more intense.

What are some sound tools that helped you put the ADR into the space on screen?
Sonnox’s Oxford EQ helped on this one. That’s a good plugin. I also used Audio Ease’s Altiverb, which is really good for matching ambiences.


In Car Thief, Mayhem steals cars. He walks up onto a porch, grabs a decorative flagpole and uses it to smash the driver-side window of a car parked in the driveway. Mayhem then hot-wires the car and peels out, hitting a motorcycle and mailbox as he flees the scene.

Car Thief

Sound-wise, what were some of your challenges or unique opportunities for sound on Car Thief?
The location sound team did a great job of miking the car window break. When Mayhem puts the wooden flagpole through the car window, they really did that on-set, and the sound team captured it perfectly. It’s amazing. If you hear safety glass break, it’s not like a glass shatter. It has this texture to it. The car window break was the location sound, which I loved. I saved the sound for future reference.

What’s one sound tool that helped you out on Car Thief?
Jeff, the car owner in the spot, is at a sports game. You can hear the stadium announcer behind him. I used Altiverb on the stadium announcer’s line to help bring that out.

What have been your all-time favorite “Mayhem” spots in terms of sound?
I’ve been on this campaign since the start, so I have a few. There’s one called Mayhem is Coming! that was pretty cool. I did a lot of sound design work on the extended key scrape against the car door. Mayhem is in an underground parking garage, and so the key scrape reverberates through that space as he’s walking away.

Deer

Another favorite is Fast Food Trash Bag. The edit of that spot was excellent; the timing was so tight. Just when you think you’ve got the joke, there’s another joke and another. I used the Sound Ideas library for the bear sounds. And for the sound of Mayhem getting dragged under the cars, I can’t remember how I created that, but it’s so good. I had a lot of fun playing perspective on this one.

Often on these spots, the sounds we used were too violent, so we had to tone them down. On the first campaign, there was a spot called Deer. There’s a shot of Mayhem getting hit by a car as he’s standing there on the road like a deer in headlights. I had an excellent sound for that, but it was deemed too violent by the network.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Review: iZotope’s Neutron 3 Advanced with Mix Assistant

By Tim Wembly

iZotope has been doing more to elevate and simplify the workflows of this generation’s audio pros than any of its competitors. It’s a bold statement, but I stand behind it. From the range of audio restoration tools within RX to the measurement and visualization tools in Ozone to a creative approach to VST effects and instruments like Iris, Breaktweaker and DDLY… the company has shown time and time again that it knows what audio post pros need.

iZotope breaks its products out into categories aimed at different levels of professionalism by providing Essential, Standard and Advanced tiers. This lowers the barrier of entry for users who can’t rationalize the Advanced price tag but still want some of its features. In the newest edition of Neutron 3 Advanced, iZotope has added a tool that might make the extra investment a little more attractive. It’s called Mix Assistant, and for some users this feature will cut session-prep time considerably.

iZotope Neutron 3 Advanced ($279) is a collection of six modules — Sculptor, Exciter, Transient Shaper, Gate, Compressor and Equalizer — aimed at making the mix process less of a daunting technical task and more of a fun, creative endeavor. In addition to the modules, there is the new Mix Assistant, which has two modes: Track Enhance and Balance. Track Enhance analyzes a track’s audio content and, based on the instrument profile you select, uses the modules to make your track sound like the best version of that instrument. This can be useful if you don’t want to spend time tweaking the sound of an instrument to get it to sound like itself. I believe the philosophy behind providing this feature is that the creative energy you would otherwise spend tweaking can now be reserved for other tasks that complete your sonic vision.

The Balance mode is a virtual mix prep technician, and for some engineers it will be a revolutionary tool when used in the preliminary stages of their mix. Through groundbreaking machine learning, it analyzes every track containing iZotope’s Relay plugin and sets a trim gain at the appropriate level based on what you choose as your “Focus.” For example, if you’re mixing an R&B song with a strong vocal, you would choose your main vocal track as your Focus.

Alternatively, if you were mixing a virtuosic guitar song à la Al Di Meola or Santana, you might choose your guitar track as your Focus. Once Neutron analyzes your tracks, it sets the level of each track and then provides you with five groups (Focus, Voice, Bass, Percussion, Musical) that you can further adjust at a macro level. Once you’ve got everything to your preference, you simply click “Accept” and are left with a much more manageable session. Depending on your workflow, getting your gain staging set up correctly can be an arduous and repetitive task; this tool streamlines and simplifies it considerably.
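iZotope hasn’t published how Balance computes its levels, but the overall shape of the task is easy to sketch: measure each Relay-tagged track’s loudness, then trim every track to a target offset below whatever you chose as the Focus. The Python sketch below substitutes plain RMS for a real loudness model and invents the per-group offsets purely for illustration — it is not iZotope’s algorithm.

import numpy as np

# Invented offsets below the Focus group, purely for illustration.
GROUP_OFFSET_DB = {"focus": 0.0, "voice": -3.0, "bass": -6.0,
                   "percussion": -6.0, "musical": -9.0}

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-12)

def balance(tracks):
    """tracks: {name: (samples, group)} -> {name: trim_gain_db}."""
    focus_db = max(rms_db(x) for x, g in tracks.values() if g == "focus")
    return {name: focus_db + GROUP_OFFSET_DB[group] - rms_db(x)
            for name, (x, group) in tracks.items()}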

As you may have noticed, the categories you’re given in the penultimate step of the process target engineers mixing a music session. Since this is a giant portion of the market, it makes sense that the geniuses over at iZotope give people mixing music their attention, but that doesn’t mean you can’t use Neutron for other audio post scenarios.

For example, if someone delivers a commercial with stems for music, a VO track and several sound effect tracks, you can still use the Balance feature; you’ll just have to be a little creative with how you classify each track. Perhaps you can set the VO as your focus and divide the sound effects between the other categories as you see fit considering their timbre.

Since this process happens at the beginning of the mix, you’re handed a session that is already prepped in the gain-staging department, so you can start making creative decisions. You can still tweak to your heart’s content; you’ll just have one of the more time-intensive processes simplified considerably. Neutron 3 Advanced is available from iZotope.


Tim Wembly is an audio post pro and connoisseur of fine and obscure cheeses working at New York City’s Silver Sound Studios.

Digital Arts expands team, adds Nutmeg Creative talent

Digital Arts, an independently owned New York-based post house, has added several former Nutmeg Creative talent and production staff members to its roster — senior producer Lauren Boyle, sound designer/mixers Brian Beatrice and Frank Verderosa, colorist Gary Scarpulla, finishing editor/technical engineer Mark Spano and director of production Brian Donnelly.

“Growth of talent, technology, and services has always been part of the long-term strategy for Digital Arts, and we’re fortunate to welcome some extraordinary new talent to our staff,” says Digital Arts owner Axel Ericson. “Whether it’s long-form content for film and television, or working with today’s leading agencies and brands creating dynamic content, we have the talent and technology to make all of our clients’ work engaging, and our enhanced services bring their creative vision to fruition.”

Brian Donnelly, Lauren Boyle and Mark Spano.

As part of this expansion, Digital Arts will unveil additional infrastructure featuring an ADR stage/mix room. The current facility boasts several state-of-the-art audio suites, a 4K finishing theater/mixing dubstage, four color/finishing suites and expansive editorial and production space, which is spread over four floors.

The former Nutmeg team has hit the ground running, working with their long-time ad agency, network, animation and film studio clients. Gary Scarpulla worked on color for HBO’s Veep and Los Espookys, while Frank Verderosa has been working with agency Ogilvy on several Ikea campaigns. Beatrice mixed spots for Tom Ford’s cosmetics line.

In addition, Digital Arts’ in-house theater/mixing stage has proven to be a valuable resource for some of the most popular TV productions, including recording recent commentary sessions for the legendary HBO series Game of Thrones and the final season of Veep.

Especially noteworthy is colorist Ericson’s and finishing editor Mark Spano’s collaboration with Oscar-winning directors Karim Amer and Jehane Noujaim to bring to fruition the Netflix documentary The Great Hack.

Digital Arts also recently expanded its offerings to include production services. The company has already delivered projects for agencies Area 23, FCB Health and TCA.

“Digital Arts’ existing infrastructure was ideally suited to leverage itself into end-to-end production,” Donnelly says. “Now we can deliver from shoot to post.”

Tools employed across post include Avid Pro Tools, D-Control ES and S3 for audio post, and Avid Media Composer, Adobe Premiere and Blackmagic Resolve for editing. Color grading is via Resolve.

Main Image: (L-R) Frank Verderosa, Brian Beatrice and Gary Scarpulla


Blackmagic: Resolve 16.1 in public beta, updates Pocket Cinema Camera

Blackmagic Design has announced DaVinci Resolve 16.1, an updated version of its edit, color, visual effects and audio post software that features updates to the new cut page, further speeding up the editing process.

With Resolve 16, introduced at NAB 2019, now in final release, the Resolve 16.1 public beta is now available for download from the Blackmagic Design website. This new public beta will help Blackmagic continue to develop new ideas while collaborating with users to ensure those ideas are refined for real-world workflows.

The Resolve 16.1 public beta features changes to the bin that now make it possible to place media in various folders and isolate clips from being used when viewing them in the source tape, sync bin or sync window. Clips will appear in all folders below the current level, and as users navigate around the levels in the bin, the source tape will reconfigure in real time. There’s even a menu for directly selecting folders in a user’s project.

Also new in this public beta is the smart indicator. The new cut page in DaVinci Resolve 16 introduced multiple new smart features, which work by estimating where the editor wants to add an edit or transition and then applying it without the editor having to waste time placing exact in and out points. The software guesses what the editor wants to do and just does it — it adds the insert edit or transition to the edit point closest to where the editor has placed the CTI.

But a problem can arise in complex edits, where it is hard to know what the software would do and which edit it would place the effect or clip into. That’s the reason for the beta version’s new smart indicator. The smart indicator provides a small marker in the timeline so users get constant feedback and always know where DaVinci Resolve 16.1 will place edits and transitions. The new smart indicator constantly live-updates as the editor moves around the timeline.
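In code terms, the rule the smart indicator visualizes is simply “snap to the edit point nearest the CTI.” A minimal sketch with a hypothetical frame-number data model (Resolve’s internals are not public):

def nearest_edit(edit_points, cti_frame):
    # The smart features land on whichever existing cut is closest
    # to the CTI; the indicator just shows that choice in advance.
    return min(edit_points, key=lambda e: abs(e - cti_frame))

cuts = [0, 240, 512, 899]        # frame numbers of existing edit points
print(nearest_edit(cuts, 500))   # -> 512: where a transition would land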

One of the most common items requested by users was a faster way to cut clips in the timeline, so now DaVinci Resolve 16.1 includes a “cut clip” icon in the user interface. Clicking on it will slice the clips in the timeline at the CTI point.

Multiple changes have also been made to the new DaVinci Resolve Editor Keyboard, including a new adaptive scroll feature on the search dial, which will automatically slow down the jog when editors are hunting for an in point. The live trimming buttons have been renamed to match the labels of the functions in the edit page: trim in, trim out, transition duration, slip in and slip out. The function keys along the top of the keyboard are now used for various editing functions.

There are additional edit modes on the function keys, allowing users to access more types of editing directly from dedicated keys on the keyboard. There's also a new transition window on the F4 key; pressing and rotating the search dial allows instant selection from all the transition types in DaVinci Resolve. Users who need quick picture-in-picture effects can use F5 and apply them instantly.

Sometimes when editing projects with tight deadlines, there is little time to keep replaying the edit to see where it drags. DaVinci Resolve 16.1 features something called a Boring Detector that highlights the timeline where any shot is too long and might be boring for viewers. The Boring Detector can also show jump cuts, where shots are too short. This tool allows editors to reconsider their edits and make changes. The Boring Detector is helpful when using the source tape. In that case, editors can perform many edits without playing the timeline, so the Boring Detector serves as an alternative live source of feedback.
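
Blackmagic hasn't said how the Boring Detector makes its call, but the behavior described, flagging shots that run too long or cut too fast, reduces to a simple duration scan over the timeline. A hypothetical Python sketch, with invented thresholds:

```python
def flag_shots(shot_durations, boring_secs=30.0, jump_cut_secs=0.5):
    """Scan shot lengths and flag those outside the thresholds.
    Threshold values here are invented; Resolve's are not published."""
    flags = []
    for index, duration in enumerate(shot_durations):
        if duration > boring_secs:
            flags.append((index, duration, "possibly boring"))
        elif duration < jump_cut_secs:
            flags.append((index, duration, "possible jump cut"))
    return flags

timeline = [4.2, 41.0, 0.3, 7.5, 65.8]  # shot lengths in seconds
for index, duration, label in flag_shots(timeline):
    print(f"shot {index}: {duration:.1f}s -> {label}")
```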

Another one of the most requested features of DaVinci Resolve 16.1 is the new sync bin. The sync bin is a digital assistant editor that constantly sorts through thousands of clips to find only what the editor needs and then displays them synced to the point in the timeline the editor is on. The sync bin will show the clips from all cameras on a shoot stacked by camera number. Also, the viewer transforms into a multi-viewer so users can see their options for clips that sync to the shot in the timeline. The sync bin uses date and timecode to find and sync clips, and by using metadata and locking cameras to time of day, users can save time in the edit.
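
The core matching step the sync bin performs, finding each camera's clip that covers the timecode under the playhead, can be sketched in a few lines. The clip tuples below are invented for illustration; in practice the times would come from time-of-day timecode in the camera metadata.

```python
def clips_at(playhead, clips):
    """Stack the clips covering a timeline position, grouped by camera.

    Each clip is (camera, start_seconds, end_seconds, name), where the
    times stand in for time-of-day timecode from camera metadata."""
    stacked = {}
    for camera, start, end, name in clips:
        if start <= playhead < end:
            stacked.setdefault(camera, []).append(name)
    return dict(sorted(stacked.items()))

shoot = [
    ("Cam 1", 36000.0, 36120.0, "A001"),  # 10:00:00 to 10:02:00
    ("Cam 2", 36010.0, 36090.0, "B001"),
    ("Cam 2", 36100.0, 36200.0, "B002"),
]
print(clips_at(36050.0, shoot))  # {'Cam 1': ['A001'], 'Cam 2': ['B001']}
```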

According to Blackmagic, the sync bin changes how multi-camera editing can be completed. Editors can scroll off the end of the timeline and keep adding shots. When using the DaVinci Resolve Editor Keyboard, editors can hold the camera number and rotate the search dial to “live overwrite” the clip into the timeline, making editing faster.

The closeup edit feature has been enhanced in DaVinci Resolve 16.1. It now does face detection and analysis and will zoom the shot based on face positioning to ensure the person is nicely framed.

If pros are using shots from cameras without timecode, the new sync window lets them sort and sync clips from multiple cameras. The sync window supports sync by timecode and can also detect audio and sync clips by sound. These clips will display a sync icon in the media pool so editors can tell which clips are synced and ready for use. Manually syncing clips using the new sync window allows workflows such as multiple action cameras to use new features such as source overwrite editing and the new sync bin.
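
Syncing by sound is classically done with cross-correlation: the lag at which two cameras' audio tracks line up best is taken as the sync offset. The NumPy sketch below demonstrates the principle on a synthetic clap; it is not Blackmagic's implementation.

```python
import numpy as np

def audio_offset_seconds(ref, other, sample_rate=48000):
    """Estimate how far `other` lags `ref` from the peak of their
    full cross-correlation."""
    corr = np.correlate(other, ref, mode="full")
    lag_samples = np.argmax(corr) - (len(ref) - 1)
    return lag_samples / sample_rate

rate = 48000
t = np.arange(rate) / rate
clap = np.sin(2 * np.pi * 880 * t) * np.exp(-40 * t)  # decaying burst
cam_a = np.concatenate([np.zeros(2400), clap])        # clap at 0.05 s
cam_b = np.concatenate([np.zeros(9600), clap])        # clap at 0.20 s
print(f"offset: {audio_offset_seconds(cam_a, cam_b):.3f} s")  # ~0.150
```

Real material would typically be filtered and windowed before peak-picking, but the idea is the same.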

Blackmagic Pocket Cinema Camera
Besides releasing the DaVinci Resolve 16.1 public beta, Blackmagic also updated the Blackmagic Pocket Cinema Camera. Blackmagic not only upgraded the camera from 4K to 6K resolution, but also changed the mount to the widely used Canon EF style. Previous iterations of the Pocket Cinema Camera used a Micro Four Thirds mount, but many users chose to purchase a Micro Four Thirds-to-Canon EF adapter, which easily runs over $500 new. Because of the mount change in the Pocket Cinema Camera 6K, users can avoid buying the adapter and, if they already shoot with Canon EF, can use the same lenses.

Avid’s new control surfaces for Pro Tools, Media Composer, other apps

By Mel Lambert

During a recent come-and-see MPSE Sound Advice evening at Avid’s West Coast offices in Burbank, MPSE members and industry colleagues were treated to an exclusive look at two new control surfaces for editorial suites and film/TV post stages.

The S1 and S4 controllers join the current S3 and larger S6 control surfaces. Session files from all S Series surfaces are fully compatible with one another, enabling edit and mix session data to move freely from facility to facility. All surfaces provide comprehensive control of Eucon-enabled software, including Pro Tools, Cubase, Nuendo, Logic Pro, Media Composer and other apps to create and record tracks, write automation, control plugins, set up routing and a host of other essential operations via assignable faders, buttons and rotary controls.

S1

Jeff Komar, one of Avid’s pro audio solutions specialists, served as our guide during the evening’s demo sessions of the new surfaces for fully integrated sample-accurate editing and immersive mixing. Expected to ship toward the end of the year, the S1 is said to offer full software integration with Avid’s high-end consoles in a portable, slim-line surface, while the S4 — which reportedly begins shipping in September — is said to bring workstation control to small- to mid-sized post facilities in an ergonomic and compact package.

Pro-user prices start at $24,000 for a three-foot S4 with eight faders; a five-foot configuration with 24 on-surface faders and post-control sections should retail for around $50,000. The S1’s expected end-user price will be approximately $1,200.

The S4 provides extensive visual feedback, including displays switchable among channel meters, groups, EQ curves and automation data, in addition to scrolling Pro Tools waveforms that can be edited from the surface. The semi-modular architecture accommodates between eight and 24 assignable faders in eight-fader blocks, with add-on displays, joysticks, PEC/direct paddles and all-knob attention modules. The S4 also features assignable talkback, listen back and speaker sources/levels for Foley/ADR recording, plus Dolby Atmos and other formats of immersive audio monitoring. The unit can command two connected playback/record workstations. In essence, the S4 replaces the current S6 M10 system.

Avid’s Jeff Komar

From recording and editing tracks to mixing and monitoring in stereo or surround, the smaller S1 surface provides comprehensive control and visual feedback with full-on Eucon compatibility for Pro Tools and Media Composer. There is also native support for third-party applications, such as Apple Logic Pro, Steinberg Cubase, Adobe Premiere Pro and others. Users can connect up to four units — and also add a Pro Tools|Dock — to create an extended controller. Each S1 has an upper shelf designed to hold an iOS- or Android-compatible tablet running the Pro Tools|Control app. With assignable motorized faders and knobs, as well as fast-access touchscreen workflows and programmable Soft Keys, the S1 is said to offer the speed and versatility needed to accelerate post and video projects.

Reaching deeper into the S4’s semi-modular topology, the surface can be configured with up to three Channel Strip Modules (offering a maximum of 24 faders), four Display Modules to provide visual feedback of each session, and up to three optional modules. The Display Module features a high-resolution TFT screen to show channel names, channel meters, routing, groups, automation data and DAW settings, as well as scrolling waveforms and master meters.

Eucon connectivity can be used to control two different software applications simultaneously, with single-keypress access to editing plugins, writing session automation and other complex tasks. Adding joysticks, PEC/direct paddles and attention panels enables more functions to be controlled simultaneously from the modular control surface to handle various editing and mixing workflows.

S4

The Master Touch Module (MTM) provides fast access to mix and control parameters through a tilting 12.1-inch multipoint touchscreen, with eight programmable rotary encoders and dedicated knobs and keys. The Master Automation Module (MAM) streamlines session navigation plus project automation and features a comprehensive transport control section with shuttle/jog wheel, a Focus Fader, automation controls and a numeric keypad. The Channel Strip Module (CSM) controls track levels, plugins and other parameters through eight channel faders and 32 top-lit knobs (four per channel), plus other programmable keys and switches.

For mixing and panning surround and immersive audio projects, including Atmos and Ambisonics, the Joystick Module features a pair of controllers with TFT and OLED displays. The Post Module enables switching between live and recorded tracks/stems through two rows of 10 PEC/direct paddles, while the Attention Knob Module features 32 top-lit knobs — or up to 64 via two modules — to provide extra assignable controls and feedback for plugins, EQ, dynamics, panning and more.

Depending on the number of Channel Strip Modules and other options, a customized S4 surface can be housed in a three-, four- or five-foot pre-assembled frame. As a serving suggestion, the S4-3_CB_Top includes one CSM, one MTM, one MAM and filler panels/plates in a three-foot frame; the range tops out with a 24-fader, five-foot base system that includes three CSMs, one MTM, one MAM and filler panels/plates.

My sincere thanks to members of Avid’s Burbank crew, including pro audio solutions specialists Tony Joy and Gil Gowing, together with Richard McKernan, professional console sales manager for the western region, for their hospitality and patience with my probing questions.


LA-based Mel Lambert is principal of Content Creators. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Skywalker Sound’s audio post mix for Toy Story 4

By Jennifer Walden

Pixar’s first feature-length film, 1995’s Toy Story, was a game-changer for animated movies. There was no going back after that blasted onto screens and into the hearts of millions. Fast-forward 24 years to the franchise’s fourth installment — Toy Story 4 — and it’s plain to see that Pixar’s approach to animated fare hasn’t changed.

Visually, Toy Story 4 brings so much to the screen, with its near-photorealistic imagery, interesting camera angles and variations in depth of field. “It’s a cartoon, but not really. It’s a film,” says Skywalker Sound’s Oscar-winning re-recording mixer Michael Semanick, who handled the effects/music alongside re-recording mixer Nathan Nance on dialogue/Foley.

Nathan Nance

Here, Semanick and Nance talk about their approach to mixing Toy Story 4, how they use reverb and Foley to bring the characters to life, and how they used the Dolby Atmos surround field to make the animated world feel immersive. They also talk about mixing the stunning rain scene, the challenges of mixing the emotional carnival scenes near the end and mixing the Bo Peep and Woody reunion scene.

Is your approach to mixing an animated film different from how you’d approach the mix on a live-action film? Mix-wise, what are some things you do to make an animated world feel like a real place?
Nathan Nance: The approach to the mix isn’t different. No matter if it’s an animated movie or a live-action movie, we are interested in trying to complement the story and direct the viewer’s attention to whatever the director wants their attention to be on.

With animation, you’re starting with just the ADR, and the approach to the whole sound job is different because you have to pick and choose every single sound and really create those environments. Even with the dialogue, we’re creating spaces with reverb (or lack of reverb) and helping the emotions of the story in the mix. You might not have the same options in a live-action movie.

Michael Semanick

Michael Semanick: I don’t approach a film differently. Live action or animated, it comes down to storytelling. In today’s world, some of these live-action movies are like animated films. And the animated films are like live-action. I’m not sure which is which anymore.

Whether it’s live action or animation, the sound team is creating the environments. For live-action, they’re often shooting on a soundstage or they’re shooting on greenscreen, and the sound team creates those environments. For live-action films, they try to get the location to be as quiet as it can be to get the dialogue as clean as possible. So, the sound team is only working with dialogue and ADR.

It’s like an animation in that they need to recreate the entire environment. The production sound mixer is trying to capture the dialogue and not the extraneous sounds. The production sound mixer is there to capture the performance from the actors on that day at that time. Sometimes there are production effects, but the post sound team still preps the scene with sound effects, Foley and loop group. Then on the dub stage, we choose how much of that to put in.

For an animated film, they do the same thing. They prep a whole bunch of sounds and then on the dub stage we decide how busy we want the scene to be.

How do you use reverb to help define the spaces and make the animated world feel believable?
Semanick: Nathan really sets the tone when he’s doing the dialogue, defining how the environments and different spaces are going to sound. That works in combination with the background ambiences. It’s really the voice bouncing off objects that gives you the sense of largeness and depth of field. So reverb is really important in establishing the size of the room and also outdoors — how your voice slaps off a building versus how it slaps off of trees or mountains. Reverb is a really essential tool for creating the environments and spaces that you want to put your actors or characters in.

Nance: You can use reverb to try and make the spaces sound “real” — whatever that means for cinema. Or, you can use it to create something that’s more emotional or has a certain vibe. Reverb is really important for making the dry dialogue sound believable, especially in these Pixar films. They are all in on the environments they’ve created. They want it to sound real and really put the viewer there. But then, there are moments when we use reverb creatively to push the moment further and add to the emotional experience.

What are some other things you do mix-wise to help make this animated world feel believable?
Semanick: The addition of Foley helps ground a lot of the animation. Those natural sounds, like footsteps and movements, we take for granted — just walking down the street or sitting in a restaurant. Those become a huge part of these films. The Foley helps to ground the animation. It gives it life, something to hold onto.

Foley is a big part of making the animated world feel believable. You have Foley artists performing to the actual picture, and the way they put a cup down or how they come to a stop adds character to the sound. It can make it sound more human, more real. Really good Foley artists can become the character. They pick up on the nuances — like how the character drags their feet or puts down a cup. All those little things we take for granted but they are all part of our character. Maybe the way you hold a wine glass and set it down is different from how I would do it. So good Foley artists tune into that right away, and they’ll match it with their performance. They’ll put one edge of the cup down and then the other if that’s how the character does it. So Foley helps to ground a lot of the animation and the VFX to reality. It adds realism. Give it up for the Foley artists!

Nance: So many times the sounds that are in Foley are the ones we recognize and take for granted. You hear those little sounds and think, yeah, that’s exactly what that sounds like. It’s because the Foley artists perform it and these are sounds that you recognize from everyday life. That adds to the realism, like Michael said.

Mix-wise, it must have been pretty difficult to push the subtle sounds through a full mix, like the sounds of the little spork named Forky. What are some techniques and sound tools that help you to get these character sounds to cut through?
Semanick: Director Josh Cooley was very particular about the sounds Forky was going to make. Supervising sound editors Ren Klyce and Coya Elliott and their team went out and got a big palette of sounds for different things.

We weeded through them here with Josh and narrowed it down. Josh then kind of left it up to me. He said he just wanted to hear Forky when he needed to hear him and then not ever have to think about it. The problem with Forky is that if there’s too much sound for him then you’re constantly watching what he’s doing as opposed to listening to what he’s saying. I was very diligent about weeding things out a lot of the time and adding sounds in for the eye movements and other tiny, specific sounds. But there’s not much sound in there for him. It’s just the voice because often his sounds were getting in the way of the dialogue and being distracting. We were very diligent about choosing what to hear and not to hear. Josh was very particular about what those sounds should be. He had been working with Ren on those for a couple months.

In balancing a film (and particularly Toy Story 4 with so many characters and so much going on), you have to really pick and choose sounds. You don’t want to pull the audience away in a direction you don’t want. That was one of the main things for Forky — getting his sounds right.

The opening rain scene was stunning! What was your approach to mixing that scene? How did you use the Dolby Atmos surround field to enhance it?
Semanick: That was a tough scene to mix. There is a lot of rain coming down and the challenge was how to get clarity out of the scene and make sure the audience can follow what was happening. So the scene starts out with rain sounds, but during the action sequence there’s actually no rain in the track.

Amazingly, your human ears and your brain fill in that information. I establish the rain and then when the action starts I literally pull all of the rain out. But your mind puts the rain there still. You think you hear it but it’s actually not there. When the track gets quiet all of a sudden, I bring the rain back up so you never miss the rain. No one has ever said anything about not hearing the rain.

I love the sound of rain; don’t get me wrong. I love the sound of rain on windows, rain on cars, rain on metals… Ren and his team did such an amazing job with that. We had a huge palette of rain. But there’s a certain point in the scene where we need the audience to focus on all of the action that’s happening, what’s really going on.

There’s Woody and Slinky Dog being stretched and RC in the gutter, and all this. So when I put all of the sounds up there you couldn’t make out anything. It was confusing. So I pulled all of the rain out. Then we put in all of the specific sounds. We made sure all of the dialogue, music and sounds worked together so the audience could follow the action. Then I went back through and added the rain back in. When we didn’t need it, I drifted it out. And when we needed it, I brought it back in. It took a lot of time to do that and some careful balancing to make it work.

That was a fun thing to do, but it took time. We’re working on a movie that kids and adults are going to see. We didn’t want to make it too loud. We wanted to make it comfortable. But it’s an action scene, so you want it to be exciting. And it had to work with the music. We were very careful about how loud we made things. When things started to hurt, we pulled it all back. We were diligent about keeping control of the volume and getting those balances was very difficult. We don’t want to make it too quiet, but it’s exciting. If we make it too loud then that pushes you away and you don’t pay attention.

That scene was fun in Dolby Atmos. I had the rain all around the theater, in the ceiling. But it does go away and comes back in when needed. It was a fun thing to do.

Did you have a favorite scene for mixing in Atmos?
Semanick: One of my favorite scenes for Atmos was when Bo Peep takes Woody to the top of the carousel and she asks why Woody would ever want to stay with one kid when you can have all of this. I do a subtle thing with the music — there are a few times in the film where I do this — where I pull the music forward as they’re climbing to the top of the carousel. There’s no music in the surrounds or the tops. I pull it so far forward that it’s almost mono.

Then, as they pop up from atop the carousel and the camera sweeps around, I let the music open up. I bloom it into the surrounds and into the overheads. I bloom it really hard with the camera moves. If you’re paying attention, you will feel the music sweep around you. You’re just supposed to feel it, not to really know that it happened. That’s one of the mixing techniques that I learned over the years. The picture editor, Axel Geddes, would ask me to make it “magical” and put more “magic” into it. I started to interpret that as: fill up the surrounds more.

One of the best parts of Atmos is that you have surrounds that are the same as the front speakers so the sound doesn’t fall off. It’s more full-range because it has bass management toward the back. That helps me, mix-wise, to really bring the sound into the room and fill the room out when I need to do that. There are a few scenes like that and Nathan would look at me funny and say, “Wow, I really hear it.”

We’re so concentrated on the sound. I’m just hoping that the audience will feel it wrap around them and give them a good sense of warmth. I’m trying to help push the emotional content. The music was so good. Randy Newman did a great job on a lot of the music. It really helped the story and I wanted to help that be the best it could be emotionally. It was already there, but I just wanted to give that little extra. Pulling the music into the front and then pushing out into the whole theater gave the music an emotional edge.

Nance: There are a couple of fun Atmos moments for effects. When they’re in the dark closet and the sound is happening all around. Also, when Woody wakes up from his voice box removal surgery. Michael was bringing the sewing machine right up into the overheads. We have the pull string floating around the room and into the ceiling. Those two moments were a pretty cool use of the point-source and the enveloping capability of Atmos.

What was the most challenging scene to mix? Why?
Nance: The whole scene with the lost girl and Gabby all the way through the toys’ goodbyes. That was two full sections, but we get so quiet even though there’s a huge carnival happening. It was a huge cheat. It took a lot of work to get into these quiet, delicate moments where we take everything out, all the backgrounds, and it’s very simple. Michael pulled the music forward in some of those spots and the whole mix becomes very simple and quiet. You’re almost holding your breath in these different moments with the goodbyes. Sometimes we think of the really loud, bombastic scenes as being tough. And they were! The escape from the antique store took quite a lot of work to balance and shape. But I think the quiet, delicate scenes take more work because they take more shaping.

Semanick: I agree. Those areas were very difficult. There was a whole carnival going on and I had to strip it all down. I had my moments. When they’re together above the carnival, it looks beautiful up there. The carnival rides behind them are blurry and we didn’t need to hear the sounds. We heard them before. We know what they sound like. Plus, that moment was with the toys. We were just with them. The whole world has dissolved, and the sound of the world too. You see the carnival back there, but you’re not really paying attention to it. You’re paying attention to Woody and Bo Peep or Gabby and the lost girl.

Another interesting scene was when Woody and Forky first walk through the antique store. It was interesting how the tones in each place change and the reverbs on the voices change in every single room. Those scenes were interesting. The challenge was how to establish the antique store. It’s very quiet, so we were very specific on each cut. Where are they? What’s around them? How high is the camera sitting? You start looking closely at the scene. I was able to do things with Atmos, put things in the ceiling.

What scene went through the most evolution mix-wise? What were some of the different ways you tried mixing it? Ultimately, why did you go with the way it’s mixed in the final?
Semanick: There’s a scene when Woody and Bo Peep reunite on the playground. A little girl picks up Woody and she has Bo Peep in her hands. They meet again for the first time. That scene went through changes musically and dialogue-wise. What do we hear? How much of the girl do we hear before we see Bo Peep and Woody looking at each other? We tried several different ways. There were many opinions that came in on that. When does the music bloom? When does it fill the room out? Is the score quite right? They recut the score. They had a different version.

That scene went through quite a bit of ups and downs. We weren’t sure which way to go. Ultimately, Josh was happy with it, and it plays well.

There was another version of Randy’s score that I liked. But, it’s not about what I like. It’s about how the overall room feels — if everybody feels like it’s the best that we can do. If that’s yes, then that’s the way it goes. I’ll always speak up if I have ideas. I’ll say, “Think about this. Think about that.”

That scene went through some changes, and I’m still on the fence. It works great, but I know there’s another version of the music that I preferred. I’ll just have to live with that.

Nance: We just kept trying things out on that scene until we had it feeling good, like it was hitting the right beats. We had to figure out what the timing was, what would have the most emotional impact. That’s why we tried out so many different versions.

Semanick: That’s a big moment in the film. It’s what starts the back half of the film. Woody gets reacquainted with Bo Peep and then we’re off to the races.

What console did you mix Toy Story 4 on and why?
Semanick: We both mixed on the Neve DFC. It’s my console of choice. I love the console; I love the way it sounds. I love that it has separate automation. There’s the editor’s automation that they did. I can change my automation and that doesn’t affect their automation. It’s the best of both worlds. It runs really smoothly. It’s one of the best sounding consoles around.

Nance: I really enjoy working on the Neve DFC. It’s my console of choice when there’s the option.

Semanick: There are a lot of different consoles and control surfaces you can use now, but I’m used to the DFC. I can really play the console as a musical instrument. It’s like a performance. I can perform these balances. I can grab knobs and change EQ or add reverb and pull things back. It’s like a performance and that console seems the most reliable one for me. I know it really well. It helps when you know your instrument.

Any final thoughts you’d like to share on mixing Toy Story 4?
Semanick: With these Pixar films, I get to benefit from the great storytelling and what they’ve done visually. All the aspects of these films Pixar does — the cinematography down to the lighting down to the character development, the costumes and set design — they spent so many hours debating how things are going to look and the design.

So, on the sound side, it's about matching what they've done. How can I help support it? It's amazing to me how much time they spend on these films. It's hardcore filmmaking. It's a cartoon, but not really. It's a film, and it's a really good film. You look at all the aspects of it, like how the camera moves. It's not a real camera but you're watching through the lens, seeing the camera angles, where and how they place the camera. They have to debate all that.

One of the hardest scenes for them must have been when Bo Peep and Woody are in the antique store and they turn and look at all the chandeliers. It was gorgeous, a beautiful shot. I bloom the music out there, around the theater. That was a delicate scene. When you look at the filmmaking they’re doing there and the reflections of the lights, you know they’re good. They’re really good.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Audio houses Squeak E. Clean and Nylon Studios have merged

Music and sound studios Squeak E. Clean and Nylon Studios have merged to form Squeak E. Clean Studios. This union brings together a diverse roster of artists offering musical talent and exceptional audio production to agencies and brands. The company combines leadership from both former houses, with Nylon’s Hamish Macdonald serving as managing director and Nylon’s Simon Lister and Squeak E. Clean’s Sam Spiegel overseeing the company’s creative vision as co-executive creative directors. Nylon’s founding partner, David Gaddie, will become strategy partner.

The new Squeak E. Clean Studios has absorbed and operates all the existing studios of the former companies in Los Angeles, New York, Chicago, Austin, Sydney and Melbourne. Clients can now access a full range of services in every studio, including original composition, sound design and mix, music licensing, artist partnerships, experiential and spatial sound and sonic branding. Clients will also be able to license tracks from a vast, consolidated music catalog.

New York-based EP Christina Carlo is transferring to the West Coast to lead the Los Angeles studio alongside Amanda Patterson as senior producer. Deb Oh is executive producer of the New York studio, with Cindy Chao as head of sales. Squeak E. Clean Studios' Sydney studio is led by executive creative producer Karla Henwood, Ceri Davies is EP of the Melbourne studio, and Jocelyn Brown is leading the Chicago location. The company is deeply committed to strong support of the Free the Bid initiative, with three full-time female staff composers already on the roster.

“I always admired the ‘culture changing’ work that Squeak E. Clean Productions crafted, like the Adidas Hello Tomorrow spot with Karen O and Spike Jonze’s Kenzo World with Ape Drums (featuring Assassin),” says Lister. “These are truly the kind of jobs that are not just famous in advertising, but are part of our popular culture.”

“It’s exciting to be able to combine the revolutionary creativity of Squeak E. Clean with the outstanding post, creative music and exceptional client service that Nylon Studios has always offered at the highest level. We love what we do, and this collaboration is going to be an amazing opportunity for all of our artists and clients,” adds Spiegel. “As a combined force, we will make music and sound that people love.”

Main Image: (L-R) Hamish Macdonald, Simon Lister, Sam Spiegel
Image Credit: Shruti Ashok

 

KRK intros audio tools app to help Rokit G4 monitor setup

KRK Systems has introduced the KRK Audio Tools App for iOS and Android. This free suite of professional studio tools includes five professional analysis-based components that work with any monitor setup, and one tool (EQ Recommendation) that helps acclimate the new KRK Rokit G4 monitors to their individual acoustic environment.

In addition to the EQ Recommendation tool, the app also includes a Spectrum Real Time Analyzer (RTA), Level Meter, Delay and Polarity Analyzers, as well as a Monitor Align tool that helps users set their monitor positioning more accurately to their listening area. Within the app is a sound generator giving the user sound analysis options of sine, continuous sine sweep, white noise and pink noise—all of which can help the analysis process in different conditions.

“We wanted to build something game-changing for the new Rokit G4 line that enables our users to achieve better final mixes overall,” explains Rich Renken, product manager for the pro audio division of Gibson Brands, which owns KRK. “In terms of critical listening, the G4 monitors are completely different and a major upgrade from the previous G3 line. Our intentions with the EQ Recommendation tool are to suggest a flatter condition and help get the user to a better starting point. Ultimately, it still comes down to preference and using your musical ear, but it’s certainly great to have this feature available along with the others in the app.”

Five of the app tools work with any monitor setup. This includes the Level Meter, which assists with monitor level calibration to ensure all monitors are at the same dB level, as well as the Delay Analysis feature, which helps calculate the travel time from each monitor to the user's ears. Additionally, the app's Polarity function verifies the correct wiring of monitors, minimizing the bass loss and incorrect stereo imaging that result from monitors being out of phase. The Spectrum RTA and Sound Generator, meanwhile, are made for finding nuances in any environment.
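
The arithmetic behind delay analysis is straightforward: sound travels at roughly 343 meters per second at room temperature, so each monitor's distance maps directly to an arrival time, and nearer monitors can be delayed until every arrival matches the farthest one. A sketch of that calculation, not KRK's algorithm:

```python
SPEED_OF_SOUND = 343.0  # meters per second at roughly 20°C

def arrival_ms(distance_m):
    """Time for sound to reach the listening position from a monitor."""
    return distance_m / SPEED_OF_SOUND * 1000.0

def align_delays(distances_m):
    """Delay each monitor so all arrivals match the farthest one."""
    arrivals = {name: arrival_ms(d) for name, d in distances_m.items()}
    latest = max(arrivals.values())
    return {name: round(latest - t, 2) for name, t in arrivals.items()}

# Left monitor 1.20 m from the ears, right monitor 1.38 m
print(align_delays({"left": 1.20, "right": 1.38}))
# {'left': 0.52, 'right': 0.0}  (delays in milliseconds)
```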

Also included is a Monitor Alignment feature, which is used to determine the best placement of multiple monitors within proximity. This is accomplished by placing a smart device on each monitor separately and then rotating it to the correct angle. A sixth tool, exclusive to Rokit G4 users, is the EQ Recommendation tool that helps acclimate monitors to an environment by analyzing the app-generated pink noise and subsequently suggesting the best EQ preset, which is set manually on the back of the G4 monitors.

Creating and mixing authentic sounds for HBO’s Deadwood movie

By Jennifer Walden

HBO’s award-winning series Deadwood might have aired its final episode 13 years ago, but it’s recently found new life as a movie. Set in 1889 — a decade after the series finale — Deadwood: The Movie picks up the threads of many of the main characters’ stories and weaves them together as the town of Deadwood celebrates the statehood of South Dakota.

Deadwood: The Movie

The Deadwood: The Movie sound team.

The film, which aired on HBO and is available on Amazon, picked up eight 2019 Emmy nominations, including for sound editing, sound mixing and best television movie.

Series creator David Milch has returned as writer on the film. So has director Daniel Minahan, who helmed several episodes of the series. The film’s cast is populated by returning members, as is much of the crew. On the sound side, there are freelance production sound mixer Geoffrey Patterson; 424 Post’s sound designer, Benjamin Cook; NBCUniversal StudioPost’s re-recording mixer, William Freesh; and Mind Meld Arts’ music editor, Micha Liberman. “Series composers Reinhold Heil and Johnny Klimek — who haven’t been a composing team in many years — have reunited just to do this film. A lot of people came back for this opportunity. Who wouldn’t want to go back to Deadwood?” says Liberman.

Freelance supervising sound editor Mandell Winter adds, “The loop group used on the series was also used on the film. It was like a reunion. People came out of retirement to do this. The richness of voices they brought to the stage was amazing. We shot two days of group for the film, covering a lot of material in that limited time to populate Deadwood.”

Deadwood (the film and series) was shot on a dedicated film ranch called Melody Ranch Motion Picture Studio in Newhall, California. The streets, buildings and “districts” are consistently laid out the same way. This allowed the sound team to use a map of the town to orient sounds to match each specific location and direction that the camera is facing.

For example, there’s a scene in which the town bell is ringing. As the picture cuts to different locations, the ringing sound is panned to show where the bell is in relation to that location on screen. “We did that for everything,” says co-supervising sound editor Daniel Colman, who along with Freesh and re-recording mixer John Cook, works at NBCUniversal StudioPost. “You hear the sounds of the blacksmith’s place coming from where it would be.”

“Or, if you’re close to the Chinese section of the town, then you hear that. If you were near the saloons, that’s what you hear. They all had different sounds that were pulled forward from the series into the film,” adds re-recording mixer Freesh.
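
Orienting a sound to a fixed town map is, at bottom, geometry: given the camera's map position and the direction it faces, the pan angle for a source such as the town bell is its bearing relative to the camera. A hypothetical Python sketch of that bookkeeping, with invented coordinates:

```python
import math

def pan_angle(camera_xy, camera_facing_deg, source_xy):
    """Bearing of a source relative to the camera's facing, in degrees:
    0 is straight ahead, positive to the right, negative to the left."""
    dx = source_xy[0] - camera_xy[0]
    dy = source_xy[1] - camera_xy[1]
    bearing = math.degrees(math.atan2(dx, dy))  # 0 degrees = map north
    return (bearing - camera_facing_deg + 180) % 360 - 180

bell_tower = (40.0, 120.0)  # fixed spot on the town map
camera = (10.0, 100.0)

print(round(pan_angle(camera, 0.0, bell_tower), 1))    # 56.3 (front right)
print(round(pan_angle(camera, 180.0, bell_tower), 1))  # -123.7 (rear left)
```

The same source lands in a different speaker whenever the picture cuts to a new camera position, which is exactly the consistency the team describes.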

Many of the exterior and interior sounds on set were captured by Benjamin Cook, who was sound effects editor on the original Deadwood series. Since it’s a practical location, they had real horses and carriages that Cook recorded. He captured every door and many of the props. Colman says, “We weren’t guessing at what something sounded like; we were putting in the actual sounds.”

The street sounds were an active part of the ambience in the series, both day and night. There were numerous extras playing vendors plying their wares and practicing their crafts. Inside the saloons and out in front of them, patrons talked and laughed. Their voices — performed by the loop group in post — helped to bring Deadwood alive. “The loop group we had was more than just sound effects. We had to populate the town with people,” says Winter, who scripted lines for the loopers because they were played more prominently in the mix than what you’d typically hear. “Having the group play so far forward in a show is very rare. It had to make sense and feel timely and not modern.”

In the movie, the street ambience isn’t as strong a sonic component. “The town had calmed down a little bit as it’s going about its business. It’s not quite as bustling as it was in the series. So that left room for a different approach,” says Freesh.

The attenuation of street ambience was conducive to the cinematic approach that director Minahan wanted to take on Deadwood: The Movie. He used music to help the film feel bigger and more dramatic than the series, notes Liberman. Re-recording mixer John Cook adds, “We experimented a lot with music cues. We saw scenes take on different qualities, depending on whether the music was in or out. We worked hard with Dan [Minahan] to end up with the appropriate amount of music in the film.”

Minahan even introduced music on set by way of a piano player inside the Gem Saloon. Production sound mixer Patterson says, "Dan was very active on the set in creating a mood with that music for everyone who was there. It was part and parcel of the place at that time."

Authenticity was a major driving force behind Deadwood's aesthetics. Each location on set was carefully dressed with era-specific props, and the characters were dressed with equal care, right down to their accessories, tools and weapons. "The sound of Seth Bullock's gun is an actual 1889 Remington revolver, and Calamity Jane's gun is an 1860s Colt Army cavalry gun. We've made every detail as real and authentic as possible, including the train whistle that opens the film. I wasn't going to just put in any train whistle. It's the 1880s Black Hills steam engine that actually went through Deadwood," reports Colman.

The set’s wooden structures and elevated boardwalk that runs in front of the establishments in the heart of town lent an authentic character to the production sound. The creaky wooden doors and thumpiness of footsteps across the raised wooden floors are natural sounds the audience would expect to hear from that environment. “The set for Deadwood was practical and beautiful and amazing. You want to make sure that you preserve that realness and let the 1800s noises come through. You don’t want to over sterilize the tracks. You want them to feel organic,” says Patterson.

Freesh adds, “These places were creaky and noisy. Wind whistled through the windows. You just embrace it. You enhance it. That was part of the original series sound, and it followed through in the movie as well.”

The location was challenging due to its proximity to real-world civilization and all of our modern-day sonic intrusions, like traffic, airplanes and landscaping equipment from a nearby neighborhood. Those sounds have no place in the 1880s world of Deadwood, but “if we always waited for the moment to be perfect, we would never make a day’s work,” says Patterson. “My mantra was always to protect every precious word of David Milch’s script and to preserve the performances of that incredible cast.”

In the end, the modern-day noises at the location weren’t enough to require excessive ADR. John Cook says, “Geoffrey [Patterson] did a great job of capturing the dialogue. Then, between the choices the picture editors made for different takes and the work that Mandell [Winter] did, there were only one or two scenes in the whole movie that required extra attention for dialogue.”

Winter adds, “Even denoising the tracks, I didn’t take much out. The tracks sounded really good when they got to us. I just used iZotope RX 7 and did our normal pass with it.”

Any fan of Deadwood knows just how important dialogue clarity is since the show’s writing is like Shakespeare for the American West — with prolific profanity, of course. The word choices and their flow aren’t standard TV script fare. To help each word come through clearly, Winter notes they often cut in both the boom and lav mic tracks. This created nice, rich dialogue for John Cook to mix.

On the stage, John Cook used the FabFilter Pro-Q 2 to work each syllable, making sure the dialogue sounded bright and punchy and not too muddy or tubby. “I wanted the audience to hear every word without losing the dynamics of a given monologue or delivery. I wanted to maintain the dynamics, but make sure that the quieter moments were just as intelligible as the louder moments,” he says.
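
The move Cook describes, pulling a few dB out of the low mids where dialogue turns muddy, is what a single peaking band of any parametric EQ does. The sketch below builds one from the standard Robert Bristow-Johnson audio EQ cookbook formulas using SciPy; the frequency, gain and Q values are illustrative guesses, not Cook's settings.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(f0, gain_db, q, fs):
    """Peaking-filter coefficients from the RBJ audio EQ cookbook."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 48000
dialogue = np.random.randn(fs)  # stand-in for a second of dialogue
b, a = peaking_eq(f0=300.0, gain_db=-3.0, q=1.4, fs=fs)  # tame the mud
cleaner = lfilter(b, a, dialogue)
```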

In the film, several main characters experience flashback moments in which they remember events from the series. For example, Al Swearengen (Ian McShane) recalls the death of Jen (Jennifer Lutheran) from the Season 3 finale. These flashbacks — or hauntings, as the post team refers to them — went through several iterations before the team decided on the most effective way to play each one. “We experimented with how to treat them. Do we go into the actor’s head and become completely immersed in the past? Or, do we stay in the present — wherever we are — and give it a slight treatment? Or, should there not be any sounds in the haunting? In the end, we decided they weren’t all going to be handled the same,” says Freesh.

Before coming together for the final mix on Mix 6 at NBCUniversal StudioPost on the Universal Studios Lot in Los Angeles, John Cook and Freesh pre-dubbed Deadwood: The Movie in separate rooms as they’d do on a typical film — with Freesh pre-dubbing the backgrounds, effects, and Foley while Cook pre-dubbed the dialogue and music.

The pre-dubbing process gave Freesh and John Cook time to get the tracks into great shape before meeting up for the final mix. Freesh concludes, “We were able to, with all the people involved, listen to the film in real good condition from the first pass down and make intelligent decisions based on what we were hearing. It really made a big difference in making this feel like Deadwood.”


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Creating Foley for FX’s Fosse/Verdon

Alchemy Post Sound created Foley for Fosse/Verdon, FX’s miniseries about choreographer Bob Fosse (Sam Rockwell) and his collaborator and wife, the singer/dancer Gwen Verdon (Michelle Williams). Working under the direction of supervising sound editors Daniel Timmons and Tony Volante, Foley artist Leslie Bloome and his team performed and recorded hundreds of custom sound effects to support the show’s dance sequences and add realistic ambience to its historic settings.

Spanning five decades, Fosse/Verdon focuses on the romantic and creative partnership between Bob Fosse and Gwen Verdon. The former was a visionary filmmaker and one of the theater’s most influential choreographers and directors, while the latter was one of the greatest Broadway dancers of all time.

Given the subject matter, it’s hardly surprising that post production sound was a crucial element in the series. For its many musical scenes, Timmons and Volante were tasked with conjuring intricate sound beds to match the choreography and meld seamlessly with the score. They also created dense soundscapes to back the very distinctive environments of film sets and Broadway stages, as well as a myriad of other exterior and interior locations.

For Timmons, the project’s mix of music and drama posed significant creative challenges but also a unique opportunity. “I grew up in upstate New York and originally hoped to work in live sound, potentially on Broadway,” he recalls. “With this show, I got to work with artists who perform in that world at the highest level. It was not so much a television show as a blend of Broadway music, Broadway acting and television. It was fun to collaborate with people who were working at the top of their game.”

The crew drew on an incredible mix of sources in assembling the sound. Timmons notes that to recreate Fosse's hacking cough (a symptom of his overuse of prescription medicine), they pored through audio stems from the classic 1979 film All That Jazz. "Roy Scheider, who played Bob Fosse's alter ego in the film, was unable to cough like him, so Bob went into a recording studio and did some of the coughing himself," Timmons says. "We ended up using those old recordings along with ADR of Sam Rockwell. When Bob's health starts to go south, some of the coughing you hear is actually him. Maybe I'm superstitious, but for me it helped to capture his identity. I felt like the spirit of Bob Fosse was there on the set."

A large portion of the post sound effects were created by Alchemy Post Sound. Most notably, Foley artists meticulously reproduced the footsteps of dancers. Foley tap dancing can be heard throughout the series, not only in musical sequences, but also in certain transitions. "Bob Fosse got his start as a tap dancer, so we used tap sounds as a motif," explains Timmons. "You hear them when we go into and out of flashbacks and interior monologues." Along with Bloome, Alchemy's team included Foley artist Joanna Fang, Foley mixers Ryan Collison and Nick Seaman, and Foley assistant Laura Heinzinger.

Ironically, Alchemy had to avoid delivering sounds that were "too perfect." Fang points out that scenes depicting musical performances from films were meant to represent the production of those scenes rather than the final product. "We were careful to include natural background sounds that would have been edited out before the film was delivered to theaters," she explains, adding that those scenes also required Foley to match the dancers' body motion and costuming. "We spent a lot of time watching old footage of Bob Fosse talking about his work, and how conscious he was not just of the dancers' footwork, but their shuffling and body language. That's part of what made his art unique."

Foley production was unusually collaborative. Alchemy’s team maintained a regular dialogue with the sound editors and were continually exchanging and refining sound elements. “We knew going into the series that we needed to bring out the magic in the dance sequences,” recalls production Foley editor Jonathan Fuhrer. “I spoke with Alchemy every day. I talked with Ryan and Nick about the tonalities we were aiming for and how they would play in the mix. Leslie and Joanna had so many interesting ideas and approaches; I was ceaselessly amazed by the thought they put into performances, props, shoes and surfaces.”

Alchemy also worked hard to achieve realism in creating sounds for non-musical scenes. That included tracking down props to match the series’ different time periods. For a scene set in a film editing room in the 1950s, the crew located a 70-year-old Steenbeck flatbed editor to capture its unique sounds. As musical sequences involved more than tap dancing, the crew assembled a collection of hundreds of pairs of shoes to match the footwear worn by individual performers in specific scenes.

Some sounds undergo subtle changes over the course of the series relative to the passage of time. “Bob Fosse struggled with addictions and he is often seen taking anti-depression medication,” notes Seaman. “In early scenes, we recorded pills in a glass vial, but for scenes in later decades, we switched to plastic.”

Such subtleties add richness to the soundtrack and help cement the character of the era, says Timmons. “Alchemy fulfilled every request we made, no matter how far-fetched,” he recalls. “The number of shoes that they used was incredible. Broadway performers tend to wear shoes with softer soles during rehearsals and shoes with harder soles when they get close to the show. The harder soles are more strenuous. So the Foley team was always careful to choose the right shoes depending on the point in rehearsal depicted in the scene. That’s accuracy.”

The extra effort also resulted in Foley that blended easily with other sound elements, dialogue and music. “I like Alchemy’s work because it has a real, natural and open sound; nothing sounds augmented,” concludes Timmons. “It sounds like the room. It enhances the story even if the audience doesn’t realize it’s there. That’s good Foley.”

Alchemy used Neumann KMR 81 and U 87 mics, Millennia mic preamps and Apogee converters, with a C24 mixer feeding Avid Pro Tools.

Steinberg’s SpectraLayers Pro 6: visual audio editing with ARA support

Steinberg's SpectraLayers Pro 6 audio editing software is now available. The software was first distributed by Sony Creative Software and then by Magix Software; now its developers have joined forces with Steinberg to release the sixth iteration.

Unlike most audio editing tools, SpectraLayers offers a visual approach to audio editing, allowing users to visualize audio in the spectral domain (in 2D and 3D) and to manipulate its spectral data in many different ways. While many dedicated audio pros typically edit with their ears, this offering targets those who are more comfortable with visuals leading their editing decisions.

With its 25 advanced tools, SpectraLayers Pro 6 provides precision editing within the spectral domain, comparable to the capabilities of high-performance photo editing software: modification, selection, measurement and drawing. Think Adobe Photoshop for audio editing.
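
Under the hood, this style of editing typically operates on a short-time Fourier transform: the audio becomes a time-frequency image, a region of that image is attenuated or redrawn, and the spectrogram is inverted back to audio. A generic SciPy sketch of the principle follows; it is not SpectraLayers' engine.

```python
import numpy as np
from scipy.signal import stft, istft

fs = 44100
t = np.arange(fs) / fs
# A 440 Hz tone plus an unwanted 4 kHz whistle
audio = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 4000 * t)

freqs, times, Z = stft(audio, fs=fs, nperseg=2048)

# "Paint out" a rectangle of the spectrogram: 3.8-4.2 kHz, 0.2-0.8 s
band = (freqs > 3800) & (freqs < 4200)
span = (times > 0.2) & (times < 0.8)
Z[np.ix_(band, span)] *= 0.05  # about -26 dB, a soft spectral eraser

_, cleaned = istft(Z, fs=fs, nperseg=2048)
```

SpectraLayers' selection fades can be thought of as softening the edges of such a rectangle rather than applying one hard gain step.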

The features newly introduced in SpectraLayers Pro 6 include ARA 2 support: in addition to the standalone application, Version 6 offers an ARA plugin that integrates seamlessly into any ARA 2-compatible DAW, such as Nuendo and Cubase, where it can be used as a native editor. Fades along the selection border are one of SpectraLayers' innovative features, and Pro 6 now includes visible fade masks and lets users select from the many available fade types.

SpectraLayers’ advanced selection engine now features nine revamped selection tools — including the new Transient Selector — making selections more flexible. The new Move tool helps users transform audio intuitively: grab layers to activate and move or scale them. SpectraLayers Pro 6 also provides external editor integration, allowing users to include other editor software so that any selection can be processed by them as well.

“This new version of SpectraLayers offers a refined and more intuitive user interface inspired by picture editors and a new selection system combining multiple fade masks, bringing spectral editing and remixing to a whole new level. We’re also excited by the possibilities unlocked by the new ARA connection between SpectraLayers, Cubase and Nuendo, bringing spectral mixing and editing right within your DAW,” says Robin Lobel, creator of SpectraLayers.

The user interface of SpectraLayers Pro 6 has been completely redesigned, building on the conventions of image editing software. The menus have been redesigned and the panels are collapsible; the Layers panel is customizable; and users can now refer to comprehensive tool tip documentation and a new user manual.

The full retail version of SpectraLayers Pro 6 is available as a download through the Steinberg Online Shop at a suggested retail price of $399.99, together with various downloadable updates from previous versions.

Behind the Title: Cinematic Media head of sound Martin Hernández

This audio post pro’s favorite part of the job is the start of a project — having a conversation with the producer and the director. “It’s exciting, like any new relationship,” he says.

Name: Martin Hernández

Job Title: Supervising Sound Editor

Company: Mexico City’s Cinematic Media

Can you describe Cinematic Media and your role there?
I lead a new sound post department at Cinematic Media, Mexico’s largest post facility focused on television and cinema. We take production sound through the full post process: effects, backgrounds, music editing… the whole thing. We finish the sound on our mix stages.

What would surprise people most about what you do?
We want the sound to go unnoticed. The viewer shouldn’t be aware that something has been added or is unnatural. If the viewer is distracted from the story by the sound, it’s a lousy job. It’s like an actor whose performance draws attention to himself. That’s bad acting. The same applies to every aspect of filmmaking, including sound. Sound needs to help the narrative in a subjective and quiet way. The sound should be unnoticed… but still eloquent. When done properly, it’s magical.

Hernández has been working on Easy for Netflix.

What’s your favorite part of the job?
Entering the project for the first time and having a conversation with the team: the producer and the director. It’s exciting, like any new relationship. It’s beautiful. Even if you’re working with people you’ve worked with before, the project is newborn.

My second favorite part is the start of sound production, when I have a picture but the sound is a blank page. We must consider what to add. What will work? What won’t? How much is enough or too much? It’s a lot like cooking. The dish might need more of this spice and a little less of that. You work with your ingredients, apply your personal taste and find the right flavor. I enjoy cooking sound.

What’s your least favorite part of the job?
Me.

What do you mean?
I am very hard on myself. I only see my shortcomings, which are, to tell you the truth, many. I see my limitations very clearly. In my perception of things, it is very hard to get where I want to go. Often you fail, but every once in a while, a few things actually work. That’s why I’m so stubborn. I know I am going to have a lot of misses, so I do more than expected. I will shoot three or four times, hoping to hit the mark once or twice. It’s very difficult for me to work with me.

What is your most productive time of the day?
In the morning. I’m a morning person. I work from my own place, very early, like 5:30am. I wake up thinking about things that I left behind in the session. It’s useless to remain in bed, so I go to my studio and start working on these ideas. It’s amazing how much you can accomplish between 6am and 9am. You have no distractions. No one’s calling. No emails. Nothing. I am very happy working in the mornings.

If you didn’t have this job, what would you be doing?
That’s a tough question! I don’t know anything else. Probably, I would cook. I’d go to a restaurant and offer myself as an intern in the kitchen.

For most people I know, their career is not something they’ve chosen; it was embedded in them when they were born. It’s a matter of realizing what’s there inside you and embracing it. I never, in my wildest dreams, expected to be doing this work.

When I was young, I enjoyed watching films, going to the movies, listening to music. My earliest childhood memories are sound memories, but I never thought that would be my work. It happened by accident. Actually, it was one accident after another. I found myself working with sound as a hobby. I really liked it, so I embraced it. My hobby then became my job.

So you knew early on that audio would be your path?
I started working in radio when I was 20. It happened by chance. A neighbor told me about a radio station that was starting up from scratch. I told my friend from school, Alejandro Gonzalez Iñárritu, the director. Suddenly, we’re working at a radio station. We’re writing radio pieces and doing production sound. It was beautiful. We had our own on-air, live shows. I was on in the mornings. He did the noon show. Then he decided to make films and I followed him.

Easy

What are some of your recent projects?
I just finished a series for Joe Swanberg, the third season of Easy. It’s on Netflix. It’s the fourth project I’ve done with Joe. I’ve also done two shows here in Mexico. The first one is my first full-time job as supervisor/designer for Argos, the company led by Epigmenio Ibarra. Yankee is our first series together for Netflix, and we’re cutting another one to be aired later in the year. It’s very exciting for me.

Is there a project that you’re most proud of?
I am very proud of the results that we’ve been getting on the first two series here in Mexico. We built the sound crew from scratch. Some are editors I’ve worked with before, but we’ve also brought in new talent. That’s a very joyful process. Finding talent is not easy, but once you do, it’s very gratifying. I’m also proud of this work because the quality is very good. Our clients are happy, and when they’re happy, I’m happy.

What pieces of technology can you not live without?
Avid Pro Tools. It’s the universal language for sound. It allows me to share sound elements and sessions from all over the world, just like we do locally, between editing and mixing stages. The second is my converter. We are using the Red system from Focusrite. It’s a beautiful machine.

This is a high-stress job with deadlines and client expectations. What do you do to de-stress from it all?
Keep working.

Mixing sounds of fantasy and reality for Rocketman

By Jennifer Walden

Paramount Pictures’ Rocketman is a musical fantasy about the early years of Elton John. The story is told through flashbacks, giving director Dexter Fletcher the freedom to bend reality. He blended memories and music to tell an emotional truth as opposed to delivering hard facts.

Mike Prestwood Smith

The story begins with Elton John (Taron Egerton) attending a group therapy session with other recovering addicts. Even as he’s sharing details of his life, he’s stretching the truth. “His recollection of the past is not reliable. He often fantasizes. He’ll say a truth that isn’t really the case, because when you flash back to his memory, it is not what he’s saying,” says BAFTA-winning re-recording mixer Mike Prestwood Smith, who handled the film’s dialogue and music. “So we’re constantly crossing the line of fantasy even in the reality sections.”

For Smith, finding the balance between fantasy and reality was what made Rocketman unique. There’s a sequence in which pre-teen Elton (Kit Connor) evolves into grown-up Elton to the tune of “Saturday Night’s Alright for Fighting.” It was a continuous shot, so the camera tracks pre-teen Elton at the piano, who then gets into a bar fight that spills into an alleyway, which leads to a fairground where a huge choreographed dance number happens. Egerton (whose actual voice is featured) is singing the whole way, and there’s a full-on band under him, but specific effects from his surrounding environment poke through the mix. “We have to believe in this layer of reality that is gluing the whole thing together, but we never let that reality get in the way of enjoying the music.”

Smith helped the pre-recorded singing feel in situ by adding different reverbs — like Audio Ease’s Altiverb, Exponential Audio’s PhoenixVerb and Avid’s ReVibe. He created custom reverbs from impulse responses taken from the rooms on set to ground the vocal in that space and help sell the reality of it.

For instance, when Elton is in the alleyway, Smith added a slap verb to Egerton’s voice to make it feel like it’s bouncing off the walls. “But once he gets into the main verses, we slowly move away from reality. There’s this flux between making the audience believe that this is happening and then suspending that belief for a bit so they can enjoy the song. It was a fine line and very subjective,” he says.
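To make the technique concrete: a convolution reverb such as Altiverb places a dry recording in a space by convolving it with an impulse response captured in that space. Below is a minimal Python sketch of the idea, an illustration only rather than Smith’s actual session; the file names are hypothetical and mono WAVs are assumed.

import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

# Load a dry studio vocal and an impulse response (IR) captured in the
# room the voice should appear to be in. File names are hypothetical.
dry, sr = sf.read("dry_vocal.wav")
ir, ir_sr = sf.read("alley_ir.wav")
assert sr == ir_sr, "resample the IR to the vocal's sample rate first"

# The reverb itself is a single convolution of signal and IR.
wet = fftconvolve(dry, ir)
wet /= max(np.max(np.abs(wet)), 1e-9)  # normalize to avoid clipping

# Blend dry and wet to taste; pad the dry signal to the wet length.
mix = 0.7
dry_padded = np.pad(dry, (0, len(wet) - len(dry)))
sf.write("vocal_in_room.wav", (1 - mix) * dry_padded + mix * wet, sr)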

He and re-recording mixer/supervising sound editor Matthew Collinge spent a lot of time getting it to play just right. “We had to be very selective about the sound of reality,” says Smith. “The balance of that whole sequence was very complex. You can never do those scenes in one take.”

Another way Smith helped the pre-recorded vocals to sound realistic was by creating movement using subtle shifts in EQ. When Elton moves his head, Smith slightly EQ’d Egerton’s vocals to match. These EQ shifts “seem little, but collectively they have a big impact on selling that reality and making it feel like he’s actually performing live,” says Smith. “It’s one of those things that if you don’t know about it, then you just accept it as real. But getting it to sound that real is quite complicated.”

For example, there’s a scene in which Egerton is working out “Your Song,” and the camera cuts from upstairs to downstairs. “We are playing very real perspectives using reverb and EQ,” says Smith. Then, once Elton gets the song, he gives Bernie Taupin (Jamie Bell) a knowing look. The music gets fleshed out with a more complicated score, with strings and guitar. Next, Elton is recording the song in a studio. As he’s singing, he’s looking down and playing piano. Smith EQ’d all of that to add movement, so “it feels like that performance is happening at that time. But not one single sound of it is from that moment on set. There is a laugh from Bernie, a little giggle that he does, and that’s the only thing from the on-set performance. Everything else is manufactured.”

In addition to EQ and reverb, Smith used plugins from Helsinki-based sound company Oeksound to help the studio recordings to sound like production recordings. In particular, Oeksound’s Spiff plugin was useful for controlling transients “to get rid of that close-mic’d sound and make it feel more like it was captured on set,” Smith says. “Combining EQ and compression and adding reverb helped the vocals to sound like sync, but at the same time, I was careful not to take away too much from the quality of the recording. It’s always a fine line between those things.”
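The head-movement trick Smith describes can be roughed out in a few lines. The sketch below is an illustration rather than his actual processing chain: it dips the high frequencies of a hypothetical studio vocal by roughly 8 dB, the kind of subtle shift that suggests a singer turning away from the microphone or camera.

import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

vocal, sr = sf.read("studio_vocal.wav")  # hypothetical mono dry take

# Split the signal around 4 kHz; off-axis voices lose highs faster than lows.
sos = butter(2, 4000, btype="lowpass", fs=sr, output="sos")
lows = sosfiltfilt(sos, vocal)
highs = vocal - lows

# Facing camera: leave the take alone. Turned away: dip the highs by ~8 dB.
turned_away = lows + 0.4 * highs
sf.write("vocal_turned_away.wav", turned_away, sr)

In a real mix, shifts like this would be automated moment by moment against the picture rather than applied to a whole file.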

The most challenging transitions were going from dialogue into singing. Such was the case with quiet moments like “Your Song” and “Goodbye Yellow Brick Road.” In the latter, Elton quietly sings to his reflection in a mirror backstage. The music slowly builds up under his voice as he takes off down the hallway, and by the time he hops into a cab outside, it’s a full-on song. Part of what makes the fantasy feel real is that his singing feels like sync. The vocals had to sound impactful and engage the audience emotionally, but at the same time they had to sound believable — at least initially. “Once you’re into the track, you have the audience there. But getting in and out is hard. The filmmakers want the audience to believe what they’re seeing, that Taron was actually in the situations surrounded by a certain level of reality at any given point, even though it’s a fantasy,” says Smith.

The “Rocketman” song sequence is different though. Reality is secondary and the fantasy takes control, says Smith. “Elton happens to be having a drug overdose at that time, so his reality becomes incredibly subjective, and that gives us license to play it much more through the song and his vocal.”

During “Rocketman,” Elton is sinking to the bottom of a swimming pool, watching a younger version of himself play piano underwater. On the music side, Smith was able to spread the instruments around the Dolby Atmos surround field, placing guitar parts and effect-like orchestrations into speakers discretely and moving those elements into the ceiling and walls. The bubble sound effects and underwater atmosphere also add to the illusion of being submerged. “Atmos works really well when you have quiet, and you can place sounds in the sound field and really hear them. There’s a lot of movement musically in Rocketman and it’s wonderful to have that space to put all of these great elements into,” says Smith.

That sequence ends with Elton coming on stage at Dodger Stadium and hitting a baseball into the massive crowd. The whole audience — 100,000 people — sing the chorus with him. “The moment the crowd comes in is spine-tingling. You’re just so with him at that point, and the sound and the music are doing all of that work,” he explains.

The Music
The music was a key ingredient to the success of Rocketman. According to Smith, they were changing performances from Egerton and also orchestrations right through the post sound mix, making sure that each piece was the best it could be. “Taron [Egerton] was very involved; he was on the dub stage a lot. Once everything was up on the screen, he’d want to do certain lines again to get a better performance. So, he did pre-records, on-set performances and post recording as well,” notes Smith.

Smith needed to keep those tracks live through the mix to accommodate the changes, so he and Collinge chose Avid S6 control surfaces and mixed in-the-box as opposed to printing the tracks for a mix on a traditional large-format console. “To have locked down the music and vocals in any way would have been a disaster. I’ve always been a proponent of mixing inside Pro Tools mainly because workflow-wise, it’s very collaborative. On Rocketman, having the tracks constantly addressable — not just by me but for the music editors Cecile Tournesac and Andy Patterson as well — was vital. We were able to constantly tweak bits and pieces as we went along. I love the collaborative nature of making and mixing sound for film, and this workflow allows for that much more so than any other. I couldn’t imagine doing this any other way,” says Smith.

Smith and Collinge mixed in native Dolby Atmos at Goldcrest London in Theatre 1 and Theatre 2, and also at Warner Bros. De Lane Lea. “It was such a tight schedule that we had all three mixing stages going for the very end of it, because it got a bit crazy as these things do,” says Smith. “All the stages we mixed at had S6s, and I just brought the drives with me. At one point we were print mastering and creating M&Es on one stage and doing some fold-downs on a different stage, all with the same session. That made it so much more straightforward and foolproof.”

As for the fold-down from Atmos to 5.1, Smith says it was nearly seamless. The pre-recorded music tracks were mixed by music producer Giles Martin at Abbey Road. Smith pulled those tracks apart, spread them into the Atmos surround field and then folded them down to 5.1. “Ultimately, the mixing that Giles Martin did at Abbey Road was a great thing because it meant the fold-downs really had the best backbone possible. Also, the way that Dolby has been tweaking their fold-down processing, it’s become something special. The fold-downs were a lot easier than I thought they’d be,” concludes Smith.
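As a worked illustration of what any fold-down does, here is the common ITU-R BS.775-style 5.1-to-stereo downmix in Python. Dolby’s Atmos fold-down processing is proprietary and considerably more sophisticated; this sketch only shows the basic principle of weighting the wider format’s channels into the narrower bus.

import numpy as np

def fold_down_5_1_to_stereo(l, r, c, lfe, ls, rs):
    """Inputs are equal-length numpy arrays, one per 5.1 channel."""
    g = 1.0 / np.sqrt(2.0)  # the conventional -3 dB weight for center/surrounds
    lo = l + g * c + g * ls
    ro = r + g * c + g * rs
    # The LFE channel is conventionally dropped from a stereo downmix.
    peak = max(np.max(np.abs(lo)), np.max(np.abs(ro)), 1.0)
    return lo / peak, ro / peak  # normalize so the summed bus cannot clip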


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Accusonus intros plugin bundles for sound and video editors

Accusonus is bringing its single-knob audio cleaning and noise reduction technology to its new ERA 4 Bundles for video editors, audio engineers and podcasters.

The ERA 4 Bundles (Enhancement and Repair of Audio) are a collection of single-knob audio cleaning plugins designed to reduce the complexity of the sound design and audio workflow without compromising sound quality or fidelity.

Accusonus says that its patented single-knob design appeals to professional editors, filmmakers and podcasters because it reduces the time-consuming audio repair workflow to a twist of a dial. Additionally, the ERA 4 Standard family of plugins enables aspiring content creators, YouTubers and film and audio students to quickly master audio workflows with minimal effort or expertise.
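To illustrate what a single-knob design means in practice, here is a hypothetical macro mapping in Python: one dial position drives several underlying parameters at once. The parameter names and curves below are invented for illustration and are not Accusonus’ actual mappings.

def noise_remover_knob(amount: float) -> dict:
    """amount is the single dial's position, from 0.0 (off) to 1.0 (max)."""
    amount = min(max(amount, 0.0), 1.0)
    return {
        "threshold_db": -60.0 + 30.0 * amount,  # gate opens higher as you turn up
        "reduction_db": -24.0 * amount,         # deeper attenuation of noise
        "attack_ms": 10.0 - 8.0 * amount,       # snappier response when pushed
        "release_ms": 200.0 + 300.0 * amount,   # longer, smoother tails
    }

# One twist of the dial moves all four underlying parameters together.
print(noise_remover_knob(0.5))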

ERA 4 Bundles are available in two collections: The Standard Bundle and the Pro Bundle.

The ERA 4 Standard Bundle features audio cleaning plugins designed for speed and fidelity with minimal effort, even if users have never edited audio before. The Standard Bundle offers professional sound design and includes: Noise Remover, Reverb Remover, De-esser, Plosive Remover, Voice Leveler and De-clipper.

The ERA 4 Pro Bundle targets professional editors, audio engineers and podcasters in advanced post and music production environments. It includes all of the plugins from the Standard Bundle and adds the sophisticated ERA De-Esser Pro plugin. In addition to the large main knob, ERA De-Esser Pro offers extra controls for greater granularity and fine-tuning when fixing an especially rough recording.

The Accusonus ERA Bundle is fully supported by Avid Pro Tools 12.6 (or higher), Audacity 2.2.2, Apple Logic Pro 10.4.3 (or higher), Ableton Live 9 (or higher), Cockos Reaper v5.9, Image Line FL Studio 12, Presonus Studio One 3 (or higher), Steinberg Cubase 8 (or higher), Adobe Audition CC 2017 (or higher) and Apple GarageBand 10.3.2.

The ERA Bundle supports Adobe Premiere CC 2017 (or higher), Apple Final Cut Pro X 10.4 (or higher), Blackmagic DaVinci Resolve 14 (or higher), Avid Media Composer 2018.12 and Magix Vegas Pro 15 (or higher).

The ERA 4 Standard Bundle is available at a special introductory price of $119 until July 31. After that, the price will be $149. The ERA 4 Pro Bundle is available at a special introductory price of $349 until July 31. After that, the price will be $499.

Picture Shop buys The Farm Group

Burbank’s Picture Shop has acquired UK-based The Farm Group. The Farm Group was founded in 1998 and currently has four locations in London, as well as facilities in Manchester, Bristol and Los Angeles.

The Farm, London

The Farm also operates the in-house post production teams for BBC Sport in Salford, England; UKTV; and Fremantle Media. This deal marks Picture Shop’s second international acquisition, following the deal it made for Vancouver’s Finalé Post earlier this year.

The founders of The Farm, Nicky Sargent and Vikki Dunn, will stay involved in The Farm Group. In a joint statement, Sargent and Dunn said, “We are delighted that after 20 successful years, we have a new partner. Picture Shop is poised to expand in the international post market and provide the combination of technical, creative and professional excellence to the world’s content creators.”

The duo will also re-invest in the expanded Picture Head Group, which includes Picture Head and audio post company Formosa Group, in addition to Picture Shop.

L-R: The Farm Group’s Nicky Sargent and Vikki Dunn.

Bill Romeo, president of Picture Shop, says, “Based on the amount of content being created internationally, we felt it was important to have a presence worldwide and support our clients’ needs. The Farm, based on its reputation and creative talent, will be able to maintain the philosophy of Picture Shop. It is a perfect fit. Our clients will benefit from our collaborative efforts internationally, as well as benefit from our technology and experience. We will continue to partner and support our clients while maintaining our boutique feel.”

Recent work from The Farm Group includes BBC Two’s Summer of Rockets, Sky One’s Jamestown and Britain’s Got Talent.

 

Andy Greenberg on One Union Recording’s fire and rebuild

San Francisco’s One Union Recording Studios has been serving the sound needs of ad agencies, game companies, TV and film producers, and corporate media departments in the Bay Area and beyond for nearly 25 years.

In the summer of 2017, the facility was hit by a terrible fire that affected all six of its recording studios. The company, led by president John McGleenan, immediately began an ambitious rebuilding effort, which it completed earlier this year. One Union Recording is now back up to full operation and its five recording studios, outfitted with the latest sound technologies including Dolby Atmos capability, are better than ever.

Andy Greenberg is One Union Recording’s facility engineer and senior mix engineer, working alongside engineers Joaby Deal, Eben Carr, Matt Wood and Isaac Olsen. We recently spoke with Greenberg about the company’s rebuild and plans for the future.

Rebuilding the facility after the fire must have been an enormous task.
You’re not kidding. I’ve worked at One Union for 22 years, and I’ve been through every growth phase and upgrade. I was very proud of the technology we had in place in 2017. We had six rooms, all cutting-edge. The software was fully up to date. We had few if any technical problems and zero downtime. So, when the fire hit, we were devastated. But John took a very business-oriented approach to it, and within a few days he was formulating a plan. He took it as an opportunity to implement new technology, like Dolby Atmos, and to grow. He turned sadness into enthusiasm.

How did the facility change?
Ironically, the timing was good. A lot of new technology had just come out that I was very excited about. We were able to consolidate what were large systems into smaller units while increasing quality 10-fold. We moved leaps and bounds beyond where we had been.

Prior to the fire, we were running Avid Pro Tools 12.1. Now we’re on Pro Tools Ultimate. We had just purchased four Avid/Euphonix System 5 digital audio consoles with extra DSP in March of 2017 but had not had time to install them before the fire due to bookings. These new consoles are super powerful. Our number of inputs and outputs quadrupled. The routing power and the bus power are vastly improved. It’s phenomenal.

We also installed Avid MTRX, an expandable interface designed in Denmark and very popular now, especially for Atmos. The box feels right at home with the Avid S5 because it’s MADI and takes the physical outputs of our Pro Tools systems up to 64 or 128 channels.

That’s a substantial increase.
A lot of delivered projects use from two to six channels. Complex projects might go to 20. Being able to go far beyond that increases the power and flexibility of the studio tremendously. And then, of course, our new Atmos room requires that kind of channel count to work in immersive surround sound.

What do you do for data storage?
Even before the fire, we had moved to a shared storage network solution. We had a very strong infrastructure and workflow in terms of data storage, archiving and the ability to recall sessions. Our new infrastructure includes 40TB of active storage of client data. Forty terabytes is not much for video, but for audio, it’s a lot. We also have 90TB of instantly recallable data.

We have client data archived back 25 years, and we can have anything online in any room in just a few minutes. It’s literally drag and drop. We pride ourselves on maintaining triple redundancy in backups. Even during the fire, we didn’t lose any client data because it was all backed up on tape and off site. We take backup and data security very seriously. Backups happen automatically every day…  actually every three hours.

What are some of the other technical features of the rebuilt studios?
There’s actually a lot. For example, our rooms — including the two Dolby-certified Atmos rooms — have new Genelec SAM studio monitors. They are “smart” speakers that are self-tuning. We can run some test tones and in five minutes the rooms are perfectly tuned. We have custom tunings set up for 5.1 and Atmos. We can adjust the tuning via computer and the speakers have built-in DSP, so we don’t have to rely on external systems.

Another cool technology that we are using is Dante, which is part of the Avid MTRX interface. Dante is basically audio-over-IP or audio-over-Cat6. It essentially replaced our AES router. We were one of the first facilities in San Francisco to have a full audio AES router, and it was very strong for us at the time. It was a 64×64 stereo-paired AES router. It has been replaced by the MTRX interface box that has, believe it or not, a three-inch by two-inch card that handles 64×64 routing per room. So, each room now has its own 64×64 of routing, capacity that the entire facility used to share.

We use Dante to route secondary audio, like our ISDN and web-based IP communication devices. We can route signals from room to room and over the web securely. It’s seamless, and it comes up literally into your computer. It’s amazing technology. The other day, I did a music session and used a 96K sample rate, which is very high. The quality of the headphone mix was astounding. Everyone was happy and it took just one, quick setting and we were off and running. The sound is fantastic and there is no noise and no latency problems. It’s super-clean, super-fast and easy to use.

What about video monitoring?
We have 4K monitors and 4K projection in all the rooms via Sony XBR 55A1E Bravia OLED monitors, Sony VPL-VW885ES True 4K Laser Projectors and a DLP 4K550 projector. Our clients appreciate the high-quality images and the huge projection screens.

London’s Media Production Show: technology for content creation

By Mel Lambert

The fourth annual Media Production Show, held June 11-12 at Olympia West, London, once again attracted a wide cross section of European production, broadcast, post and media-distribution pros. According to its organizers, the two-day confab drew 5,300 attendees and “showcased the technology and creativity behind content creation,” focusing on state-of-the-art products and services. The full program of standing-room-only discussion seminars covered a number of contemporary topics, while 150-plus exhibitors presented wares from the media industry’s leading brands.

The State of the Nation: Post Production panel.

During a session called “The State of the Nation: Post Production,” Rowan Bray, managing director of Clear Cut Pictures, said that “while [wage and infrastructure] costs are rising, our income is not keeping up.” And with salaries, facility rent and equipment amortization representing 85% of fixed costs, “it leaves little over for investment in new technology and services. In other words, increasing costs are preventing us from embracing new technologies.”

Focusing on the long-term economic health of the UK post industry, Bray pointed out that few post facilities in London’s Soho area are changing hands, which she says “indicates that this is not a healthy sector [for investment].”

“Several years ago, a number of US companies [including Technicolor and Deluxe] invested £100 million [$130 million] in Soho; they are now gone,” stated Ian Dodd, head of post at Dock10.

Some 25 years ago, there were at least 20 leading post facilities in London. “Now we have a handful of high-end shops, a few medium-sized ones and a handful of boutiques,” Dodd concluded. Also on the panel was Cara Kotschy, managing director of Fifty Fifty Post Production.

The Women in Sound panel

During his keynote presentation called “How we made Bohemian Rhapsody,” leading production designer Aaron Haye explained how the film’s large stadium concert scenes were staged and supplemented with high-resolution CGI; he is currently working on Charlie’s Angels (2019) with director/actress Elizabeth Banks.

The panel discussion “Women in Sound” brought together a trio of re-recording mixers with divergent secondary capabilities and experience. Participants were Emma Butt, a freelance mixer who also handles sound editorial and ADR recordings; Lucy Mitchell, a freelance sound editor and mixer; plus Kate Davis, head of sound at Directors Cut Films. As the audience discovered, their roles in professional sound differ. While exploring these differences, the panel revealed helpful tips and tricks for succeeding in the post world.


LA-based Mel Lambert is principal of Content Creators. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

iZotope’s Neutron 3 streamlines mix workflows with machine learning

iZotope, makers of the RX audio tools, has introduced Neutron 3, a plug-in that — thanks to advances in machine learning — listens to the entire session and communicates with every track in the mix. Mixers can use Neutron 3’s new Mix Assistant to create a balanced starting point for an initial-level mix built around their chosen focus, saving time and energy when making creative mix decisions. Once a focal point is defined, Neutron 3 automatically sets levels before the mixer ever has to touch a fader.
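For a sense of what an automatic level-balancing pass involves, the sketch below measures each track’s RMS level and computes gains that sit every other track a fixed offset under a chosen focus. iZotope’s machine-learning approach is far more involved (it classifies instruments and listens across the whole session), so treat this as the naive version of the idea; the 6 dB bed offset is picked arbitrarily.

import numpy as np

def initial_balance(tracks: dict, focus: str, bed_offset_db: float = -6.0) -> dict:
    """tracks maps name -> numpy array of samples; returns per-track gain in dB."""
    rms_db = {name: 20.0 * np.log10(np.sqrt(np.mean(t ** 2)) + 1e-12)
              for name, t in tracks.items()}
    gains = {}
    for name, level in rms_db.items():
        # Non-focus tracks are pulled toward a bed sitting bed_offset_db
        # under the focus element; the focus track itself is untouched.
        target = rms_db[focus] + (0.0 if name == focus else bed_offset_db)
        gains[name] = target - level
    return gains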

Neutron 3 also has a new module called Sculptor (available in Neutron 3 Standard and Advanced) for sweetening, fixing and creative applications. Using never-before-seen signal processing, Sculptor works like a per-band army of compressors and EQs to shape any track. It also communicates with Track Assistant to understand each instrument and gives realtime feedback to help mixers shape tracks to a target EQ curve or experiment with new sounds.

In addition, Neutron 3 includes many new improvements and enhancements based on feedback from the community, such as the redesigned Masking Meter that automatically flags masking issues and allows them to be fixed from a convenient one-window display. This improvement prevents tracks from stepping on each other and muddying the mix.

Neutron 3 has also had a major overhaul in performance for faster processing and load times and smooth metering. Sessions with multiple Neutrons open much quicker, and refresh rates for visualizations have doubled.

Other Neutron 3 Features
• Visual Mixer and iZotope Relay: Users can launch Mix Assistant directly from Visual Mixer and move tracks in a virtual space, tapping into iZotope-enabled inter-plug-in communication
• Improved interface: Smooth visualizations and a resizable interface
• Improved Track Assistant listens to audio and creates a custom preset based on what it hears
• Eight plug-ins in one: Users can build a signal chain directly within one highly connected, intelligent interface with Sculptor, EQ with Soft Saturation mode, Transient Shaper, 2 Compressors, Gate, Exciter, and Limiter
• Component plug-ins: Users can control Neutron’s eight modules as a single plug-in or as eight individual plug-ins
• Tonal Balance Control: Updated to support Neutron 3
• 7.1 Surround sound support and zero-latency mode in all eight modules for professional, lightweight processing for audio post or surround music mixes

Visual Mixer and iZotope Relay will be included free with all Neutron 3 Advanced demo downloads. In addition, Music Production Suite 2.1 will now include Neutron 3 Advanced, and iZotope Elements Suite will be updated to include Neutron Elements (v3).

Neutron 3 will be available in three different options — Neutron Elements, Neutron 3 Standard and Neutron 3 Advanced. See the comparison chart for more information on what features are included in each version.

Neutron will be available June 30. Check out the iZotope site for pricing.

Sound Lounge ups Becca Falborn to EP 

New York’s Sound Lounge, an audio post house that provides sound services for advertising, television and feature films, has promoted Becca Falborn to executive producer.

In her new role, Falborn will manage the studio’s advertising division and supervise its team of producers. She will also lead client relations and sales. Additionally, she will manage Sound Lounge Everywhere, the company’s remote sound services offering, which currently operates in Boston and Boulder, Colorado.

“Becca is a smart, savvy and passionate producer, qualities that are critical to success in her new role,” said Sound Lounge COO and partner Marshall Grupp. “She has developed an excellent rapport with our team of mixers and clients and has consistently delivered projects on time and on budget, even under the most challenging circumstances.”

Falborn joined Sound Lounge in 2017 as a producer and was elevated to senior producer last year. She has produced voiceover recordings, sound design, and mixing for many advertising projects, including seven out of the nine spots produced by Sound Lounge that debuted during this year’s Super Bowl telecast.

A graduate of Manhattan College, Falborn has a background in business affairs, client services and marketing, including past positions with the post house Nice Shoes and the marketing agency Hogarth Worldwide.

Sugar Studios LA gets social for celebrity-owned Ladder supplement

Sugar Studios LA completed a social media campaign for Ladder perfect protein powder and clean energy booster supplements starring celebrity founders Arnold Schwarzenegger, LeBron James, DJ Khaled, Cindy Crawford and Lindsey Vonn. The playful ad campaign focuses on social media, foregoing the usual TV commercial push and pitching the protein powder directly to consumers.

One spot shows Arnold in the gym annoyed by a noisy dude on the phone, prompting him to turn up his workout soundtrack. Then DJ Khaled is scratching encouragement for LeBron’s workout until Arnold drowns them out with his own personal live oompah band.

The ads were produced and directed by longtime Schwarzenegger collaborator Peter Grigsby, while Sugar Studios’ editor Nico Alba (Chevrolet, Ferrari, Morongo Casino, Mattel) cut the project using Adobe Premiere. When asked about using random spot lengths, as opposed to traditional :15s, :30s, and :60s, Alba explains, “Because it’s social media, we’re not always bound to those segments of time anymore. Basically, it’s ‘find the story,’ and because there are no rules, it makes the storytelling more fun. It’s a process of honing everything down without losing the rhythm or the message and maintaining a nice flow.”

Nico Alba and Jijo Reed. Credit: David Goggin

“Peter Grigsby requested a skilled big-brand commercial editor on this campaign,” Reed says. “Nico was the perfect fit to create that rhythm and flow that only a seasoned commercial editor could bring to the table.”

“We needed a heavy-weight gym ambience to set the stage,” says Alba, who worked closely with sound design/mixers Bret Mazur and Troy Ambroff to complement his editing. “It starts out with a barrage of noisy talking and sounds that really irritate Arnold, setting up the dueling music playlists and the sonic payoff.”

The audio team mixed and created sound design with Avid Pro Tools Ultimate. Audio plugins called on include the Waves Mercury bundle, DTS Surround tools and iZotope RX7 Advanced.

The Sugar team also created a cinematic look to the spots, thanks to colorist Bruce Bolden, who called on Blackmagic DaVinci Resolve and a Sony BVM OLED monitor. “He’s a veteran feature film colorist,” says Reed, “so he often brings that sensibility to advertising spots as well, meaning rich blacks and nice, even color palettes.”

Storage used at the studio is Avid Nexis and Facilis Terrablock.

Human opens new Chicago studio

Human, an audio and music company with offices in New York, Los Angeles and Paris, has opened a Chicago studio headed up by veteran composer/producer Justin Hori.

As a composer, Hori’s work has appeared in advertising, film and digital projects. “Justin’s artistic output in the commercial space is prolific,” says Human partner Gareth Williams. “There’s equal parts poise and fun behind his vision for Human Chicago. He’s got a strong kinship and connection to the area, and we couldn’t be happier to have him carve out our footprint there.”

From learning to DJ at age 13 to working at Gramaphone Records to studying music theory and composition at Columbia College, Hori’s immersion in the Chicago music scene has always influenced his work. He began his career at com/track and Comma Music, before moving to open Comma’s Los Angeles office. From there, Hori joined Squeak E Clean, where he served as creative director for the past five years. He returned to Chicago in 2016.

Hori is known for producing unexpected yet perfectly spot-on pieces of music for advertising, including his track “Da Diddy Da,” which was used in the four-spot summer 2018 Apple iPad campaign. His work has won top industry honors including D&AD Pencils, The One Show, Clio and AICP Awards and the Cannes Gold Lion for Best Use of Original Music.

Meanwhile, Post Human, the audio post sister company run by award-winning sound designer and engineer Sloan Alexander, continues to build momentum with the addition of a second 5.1 mixing suite in NYC. Plans for similar build-outs in both LA and Chicago are currently underway.

With services spanning composition, sound design and mixing, Human works in advertising, broadcast, digital and film.

NAB 2019: postPerspective Impact Award winners

postPerspective has announced the winners of our Impact Awards from NAB 2019. Seeking to recognize debut products with real-world applications, the postPerspective Impact Awards are voted on by an anonymous judging body made up of respected industry artists and pros (to whom we are very grateful). It’s working pros who are going to be using these new tools — so we let them make the call.

It was fun watching the user ballots come in and discovering which products most impressed our panel of post and production pros. There are no entrance fees for our awards. All that is needed is the ability to impress our voters with products that have the potential to make their workdays easier and their turnarounds faster.

We are grateful for our panel of judges, which grew even larger this year. NAB is exhausting for all, so their willingness to share their product picks and takeaways from the show isn’t taken for granted. These men and women truly care about our industry and sharing information that helps their fellow pros succeed.

To be successful, you can’t operate in a vacuum. We have found that companies who listen to their users, and make changes/additions accordingly, are the ones who get the respect and business of working pros. They aren’t providing tools they think are needed; they are actively asking for feedback. So, congratulations to our winners and keep listening to what your users are telling you — good or bad — because it makes a difference.

The Impact Award winners from NAB 2019 are:

• Adobe for Creative Cloud and After Effects
• Arraiy for DeepTrack with The Future Group’s Pixotope
• ARRI for the Alexa Mini LF
• Avid for Media Composer
• Blackmagic Design for DaVinci Resolve 16
• Frame.io
• HP for the Z6/Z8 workstations
• OpenDrives for Apex, Summit, Ridgeview and Atlas

(All winning products reflect the latest version of the product, as shown at NAB.)

Our judges also provided quotes on specific projects and trends that they expect will have an impact on their workflows.

Said one, “I was struck by the predicted impact of 5G. Verizon is planning to have 5G in 30 cities by end of year. The improved performance could reach 20x speeds. This will enable more leverage using cloud technology.

“Also, AI/ML is said to be the single most transformative technology in our lifetime. Impact will be felt across the board, from personal assistants, medical technology, eliminating repetitive tasks, etc. We already employ AI technology in our post production workflow, which has saved tens of thousands of dollars in the last six months alone.”

Another echoed those thoughts on AI and the cloud as well: “AI is growing up faster than anyone can reasonably productize. It will likely be able to do more than first thought. Post in the cloud may actually start to take hold this year.”

We hope that postPerspective’s Impact Awards give those who weren’t at the show, or who were unable to see it all, a starting point for their research into new gear that might be right for their workflows. Another way to catch up? Watch our extensive video coverage of NAB.

Creating audio for the cinematic VR series Delusion: Lies Within

By Jennifer Walden

Delusion: Lies Within is a cinematic VR series from writer/director Jon Braver. It is available on the Samsung Gear VR, Oculus Go and Oculus Rift platforms. The story follows a reclusive writer named Elena Fitzgerald, who penned a series of popular fantasy novels, but before the final book in the series was released, the author disappeared. Rumors circulated about the author’s insanity and supposed murder, so two avid fans decide to break into her mansion to search for answers. What they find are Elena’s nightmares come to life.

Delusion: Lies Within is based on an interactive play written by Braver and Peter Cameron. Interactive theater isn’t your traditional butts-in-the-seat passive viewing-type theater. Instead, the audience is incorporated into the story. They interact with the actors, search for objects, solve mysteries, choose paths and make decisions that move the story forward.

Like a film, the theater production is meticulously planned out, from the creature effects and stunts to the score and sound design. With all these components already in place, Delusion seemed like the ideal candidate to become a cinematic VR series. “In terms of the visuals and sound, the VR experience is very similar to the theatrical experience. With Delusion, we are doing 360° theater, and that’s what VR is too. It’s a 360° format,” explains Braver.

While the intent was to make the VR series match the theatrical experience as much as possible, there are some important differences. First, immersive theater allows the audience to interact with the actors and objects in the environment, but that’s not the case with the VR series. Second, the live theater show has branching story narratives and an audience member can choose which path he/she would like to follow. But in the VR series there’s one set storyline that follows a group who is exploring the author’s house together. The viewer feels immersed in the environment but can’t manipulate it.

L-R: Hamed Hokamzadeh and Thomas Ouziel

According to supervising sound editor Thomas Ouziel from Hollywood’s MelodyGun Group, “Unlike many VR experiences where you’re kind of on rails in the midst of the action, this was much more cinematic and nuanced. You’re just sitting in the space with the characters, so it was crucial to bring the characters to life and to design full sonic spaces that felt alive.”

In terms of workflow, MelodyGun sound supervisor/studio manager Hamed Hokamzadeh chose to use the Oculus Developers Kit 2 headset with Facebook 360 Spatial Workstation on Avid Pro Tools. “Post supervisor Eric Martin and I decided to keep everything within FB360 because the distribution was to be on a mobile VR platform (although it wasn’t yet clear which platform), and FB360 had worked for us marvelously in the past for mobile and Facebook/YouTube,” says Hokamzadeh. “We initially concentrated on delivering B-format (2nd Order AmbiX) playing back on Gear VR with a Samsung S8. We tried both the Audio-Technica ATH-M50 and Shure SRH840 headphones to make sure it translated. Then we created other deliverables: quad-binaurals, .tbe, 8-channel and a stereo static mix. The non-diegetic music and voiceover were head-locked and delivered in stereo.”
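For the curious, here is what ambisonic encoding looks like at its simplest. The series delivered second-order AmbiX (nine channels), but the first-order case (four channels) shows the principle: a mono source is weighted into W, Y, Z and X components according to its direction. A minimal Python sketch, not tied to any of MelodyGun’s actual tooling:

import numpy as np

def encode_foa(mono: np.ndarray, azimuth_deg: float, elevation_deg: float) -> np.ndarray:
    """Return a (4, n) array in AmbiX channel order W, Y, Z, X (SN3D)."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    w = mono * 1.0                      # omnidirectional component
    y = mono * np.sin(az) * np.cos(el)  # left(+)/right(-)
    z = mono * np.sin(el)               # up(+)/down(-)
    x = mono * np.cos(az) * np.cos(el)  # front(+)/back(-)
    return np.stack([w, y, z, x])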

From an aesthetic perspective, the MelodyGun team wanted to have a solid understanding of the audience’s live theater experience and the characters themselves “to make the VR series follow suit with the world Jon had already built. It was also exciting to cross our sound over into more of a cinematic ‘film world’ than was possible in the live theatrical experience,” says Hokamzadeh.

Hokamzadeh and Ouziel assigned specific tasks to their sound team — Xiaodan Li was focused on sound editorial for the hard effects and Foley, and Kennedy Phillips was asked to design specific sound elements, including the fire monster and the alchemist freezing.

Ouziel, meanwhile, had his own challenges of both creating the soundscape and integrating the sounds into the mix. He had to figure out how to make the series sound natural yet cinematic, and how to use sound to draw the viewer’s attention while keeping the surrounding world feeling alive. “You have to cover every movement in VR, so when the characters split up, for example, you want to hear all their footsteps, but we also had to get the audience to focus on a specific character to guide them through. That was one of the biggest challenges we had while mixing it,” says Ouziel.

The Puppets
“Chapter Three: Trial By Fire” provides the best example of how Ouziel tackled those challenges. In the episode, Virginia (Britt Adams) finds herself stuck in Marion’s chamber. Marion (Michael J. Sielaff) is a nefarious puppet master who is clandestinely controlling a room full of people on puppet strings; some are seated at a long dining table and others are suspended from the ceiling. They’re all moving their arms as if dancing to the scratchy song that’s coming from the gramophone.

The sound for the puppet people needed to have a wiry, uncomfortable feel and the space itself needed to feel eerily quiet but also alive with movement. “We used a grating metallic-type texture for the strings so they’d be subconsciously unnerving, and mixed that with wooden creaks to make it feel like you’re surrounded by constant danger,” says Ouziel.

The slow wooden creaks in the ambience reinforce the idea that an unseen Marion is controlling everything that’s happening. Braver says, “Those creaks in Marion’s room make it feel like the space is alive. The house itself is a character in the story. The sound team at MelodyGun did an excellent job of capturing that.”

Once the sound elements were created for that scene, Ouziel then had to space each puppet’s sound appropriately around the room. He also had to fill the room with music while making sure it still felt like it was coming from the gramophone. Ouziel says, “One of the main sound tools that really saved us on this one was Audio Ease’s 360pan suite, specifically the 360reverb function. We used it on the gramophone in Marion’s chamber so that it sounded like the music was coming from across the room. We had to make sure that the reflections felt appropriate for the room, so that we felt surrounded by the music but could clearly hear the directionality of its source. The 360pan suite helped us to create all the environmental spaces in the series. We pretty much ran every element through that reverb.”

L-R: Thomas Ouziel and Jon Braver.

Hokamzadeh adds, “The session got big quickly! Imagine over 200 AmbiX tracks, each with its own 360 spatializer and reverb sends, plus all the other plug-ins and automation you’d normally have on a regular mix. Because things never go out of frame, you have to group stuff to simplify the session. It’s typical to make groups for different layers like footsteps, cloth, etc., but we also made groups for all the sounds coming from a specific direction.”

The 360pan suite reverb was also helpful on the fire monster’s sounds. The monster, called Ember, was sound designed by Phillips. His organic approach was akin to the one taken for the bear monster in Annihilation, in that it felt half human/half creature. Phillips edited together various bellowing fire elements that sounded like breathing and then manipulated those to match Ember’s tormented movements. Her screams also came from a variety of natural screams mixed with different fire elements so that it felt like there was a scared young girl hidden deep in this walking heap of fire. Ouziel explains, “We gave Ember some loud sounds but we were able to play those in the space using the 360pan suite reverb. That made her feel even bigger and more real.”

The Forest
The opening forest scene was another key moment for sound. The series is set in South Carolina in 1947, and the author’s estate needed to feel like it was in a remote area surrounded by lush, dense forest. “With this location comes so many different sonic elements. We had to communicate that right from the beginning and pull the audience in,” says Braver.

Genevieve Jones, former director of operations at Skybound Entertainment and producer on Delusion: Lies Within, says, “I love the bed of sound that MelodyGun created for the intro. It felt rich. Jon really wanted to go to the south and shoot that sequence but we weren’t able to give that to him. Knowing that I could go to MelodyGun and they could bring that richness was awesome.”

Since the viewer can turn his/her head, the sound of the forest needed to change with those movements. A mix of six different winds spaced into different areas created a bed of textures that shifts with the viewer’s changing perspective. It makes the forest feel real and alive. Ouziel says, “The creative and technical aspects of this series went hand in hand. The spacing of the VR environment really affects the way that you approach ambiences and world-building. The house interior, too, was done with a similar approach, with low winds and tones for the corners of the rooms and the different spaces. It gives you a sense of a three-dimensional experience while also feeling natural and in accordance with the world that Jon made.”
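That head-tracked shifting comes down to rotating the ambisonic sound field against the viewer’s head movement, which the playback side handles continuously. For first-order AmbiX, a yaw (turn-your-head) rotation is a plain rotation of the two horizontal components; the sketch below shows the idea and is not MelodyGun’s actual tooling.

import numpy as np

def rotate_yaw(wyzx: np.ndarray, yaw_deg: float) -> np.ndarray:
    """wyzx has shape (4, n) in AmbiX channel order W, Y, Z, X."""
    w, y, z, x = wyzx
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    # W (omni) and Z (height) are unchanged by a rotation about the
    # vertical axis; only the horizontal X/Y pair rotates.
    return np.stack([w, x * s + y * c, z, x * c - y * s])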

Bringing Live Theater to VR
The sound of the VR series isn’t a direct translation of the live theater experience. Instead, it captures the spirit of the live show in a way that feels natural and immersive, but also cinematic. Ouziel points to the sounds that bring puppet master Marion to life. Here, they had the opportunity to go beyond what was possible with the live theater performance. Ouziel says, “I pitched to Jon the idea that Marion should sound like a big, worn wooden ship, so we built various layers from these huge wooden creaks to match all his movements and really give him the size and gravitas that he deserved. His vocalizations were made from a couple elements including a slowed and pitched version of a raccoon chittering that ended up feeling perfectly like a huge creature chuckling from deep within. There was a lot of creative opportunity here and it was a blast to bring to life.”


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Butter Music and Sound adds new ECDs in NYC and LA

Music shop Butter Music and Sound has expanded its in-house creative offerings with the addition of two new executive creative directors (ECDs): Tim Kvasnosky takes the helm in Los Angeles and Aaron Kotler in New York.

The newly appointed ECDs will maintain creative oversight on all projects going through the Los Angeles and New York offices, managing workflow across staff and freelance talent, composing on a wide range of projects and supporting and mentoring in-house talent and staff.

Kvasnosky and Kotler both have extensive experience as composers and musicians, with backgrounds crafting original music for commercials, film and television. They also maintain active careers in the entertainment and performance spaces. Kvasnosky recently scored the feature film JT LeRoy, starring Kristen Stewart and Laura Dern. Kotler performs and records regularly.

Kvasnosky is a composer and music producer with extensive experience across film, TV, advertising and recording. A Seattle native who studied at NYU, he worked as a jazz pianist and studio musician before composing for television and film. His tracks have been licensed in many TV shows and films. He has scored commercial campaigns for Nike, Google, McDonald’s, Amazon, Target and VW. Along with Detroit-based music producer Waajeed and singer Dede Reynolds, Kvasnosky formed the electronic group Tiny Hearts.

Native New Yorker Kotler holds a Bachelor of Music from Northwestern University School of Music and a Master of Music from Manhattan School of Music, both in jazz piano performance. He began his career as a performer and studio musician, playing in a variety of bands and across genres including neo-soul, avant-garde jazz, funk, rock and more. He also music directed Jihad! The Musical to a month of sold-out performances at the Edinburgh Festival Fringe. Since then, he has composed commercials, themes and sonic branding campaigns for AT&T, Coca-Cola, Nike, Verizon, PlayStation, Samsung and Honda. He has also arranged music for American Idol and The Emmys, scored films that were screened at a variety of film festivals, and co-produced Nadje Noordhuis’ debut record. In 2013, he teamed up with Michael MacAllister to co-design and build Creekside Sound, a recording and production studio in Brooklyn.

Main Image: (L-R) Tim Kvasnosky and Aaron Kotler

Review: Sonarworks Reference 4 Studio Edition for audio calibration

By David Hurd

What is a flat monitoring system, and how does it benefit those mixing audio? Well, this is something I’ll be addressing in this review of Sonarworks Reference 4 Studio Edition, but first some background…

Having a flat audio system simply means that whatever signal goes into the speakers comes out sonically pure, exactly as it was meant to. On a graph, it would look like a straight line from 20 cycles on the left to 20,000 cycles on the right.

Peaks or valleys in that line would indicate unwanted boosts or cuts at certain frequencies, and there is a reason you don’t want those in your monitoring system. If there are peaks in your speakers from the hundred-cycle mark on down, you get boominess. At 250 to 350 cycles you get mud. At around a thousand cycles you get a honkiness, as if you were holding your nose when you talked, and too much high end sounds brittle. You get the idea.

Before

After

If your system is not flat, your monitors are lying to your ears and you can’t trust what you are hearing while you mix.

The problem arises when you try to play your audio on another system and hear the opposite of what you mixed. It works like this: If your speakers have too much bass then you cut some of the bass out of your mix to make it sound good to your ears. But remember, your monitors are lying, so when you play your mix on another system, the bass is missing.

To avoid this problem, professional recording studios calibrate their studio monitors so that they can mix in a flat-sounding environment. They know that what they hear is what they will get in their mixes, so they can happily mix with confidence.

Every room affects what you hear coming out of your speakers. The problem is that the studio monitors that were close to being flat at the factory are not flat once they get put into your room and start bouncing sound off of your desk and walls.

Sonarworks
This is where Sonarworks’ calibration mic and software come in. They give you a way to sonically flatten out your room by taking a speaker measurement. This gives you a response chart based upon the acoustics of your room. You apply this correction using the plugin in your favorite DAW, like Avid Pro Tools. You can also use the system-wide app to correct sound from any source on your computer.
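The arithmetic behind that correction is simple, even though Sonarworks’ measurement and filter design are far more refined than this. A toy Python sketch with a made-up measured response:

# Hypothetical measured response of a room, in dB relative to flat,
# at a handful of center frequencies.
freqs_hz = [63, 125, 250, 500, 1000, 2000, 4000, 8000, 16000]
measured_db = [4.0, 2.5, 3.0, 0.5, 0.0, -1.0, -2.0, 1.5, -3.0]

# The correction is the inverse: cut the peaks and boost the dips so
# that speaker plus room plus correction comes out flat.
for f, m in zip(freqs_hz, measured_db):
    print(f"{f:>5} Hz: room {m:+.1f} dB -> apply {-m:+.1f} dB")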

So let’s imagine that you have installed the Sonarworks software, calibrated your speakers and mixed a music project. Since there are over 30,000 locations that use Sonarworks, you can send out your finished mix without the Sonarworks plugin printed into it, since the receiving room has different acoustics and will use its own calibration setting. Now, the mastering lab you use will be hearing your mix on their Sonarworks acoustically flat system… just as you mixed it.

I use a pair of Genelec studio monitors for both audio projects and audio-for-video work. They were expensive, but I have been using them for over 15 years with great results. If you don’t have studio monitors and just choose to mix on headphones, Sonarworks has you covered.

The software will calibrate your headphones.

There is an online product demo at sonarworks.com that lets you select which headphones you use. You can switch between bypass and the Sonarworks effect. Since they have already done the calibration process for your headphones, you can get a good idea of the advantages of mixing on a flat system. The headphone option is great for those who mix on a laptop or small home studio. It’s less money as well. I used my Sennheiser HD300 Pro series headphones.

I installed Sonarworks on my “Review” system, which is what I use to review audio and video production products. I then tested Sonarworks on both Pro Tools 12 music projects and video editing work, like sound design using a sound FX library and audio from my Blackmagic Ursa 4.6K camera footage. I was impressed by the difference that the Sonarworks software made. It opened up my mixes and made it easy to find any problems.

The Sonarworks Reference 4 Studio Edition takes your projects to a whole new level, and finally lets you hear your work in a sonically pure and flat listening environment.

My Review System
The Sonarworks Reference 4 Studio Edition was tested on my Mac Pro 6-core trash can running High Sierra OSX, 64GB RAM, 12GB of RAM on the D700 video cards; a Blackmagic UltraStudio 4K box; four G-Tech G-Speed 8TB RAID boxes with HighPoint RAID controllers; Lexar SD and Cfast card readers; video output viewed on a Boland 32-inch broadcast monitor; a Mackie mixer; a Complete Control S25 keyboard; and a Focusrite Clarett 4 Pre.

Software includes Apple FCPX, Blackmagic Resolve 15 and Pro Tools 12. Cameras used for testing are a Blackmagic 4K Production camera and the Ursa Mini 4.6K Pro, both powered by Blueshape batteries.


David Hurd is a production and post veteran who owns David Hurd Productions in Tampa. You can reach him at david@dhpvideo.com.

Adobe’s new Content-Aware fill in AE is magic, plus other CC updates

By Brady Betzel

NAB is just under a week away, and we are here to share some of Adobe’s latest Creative Cloud offerings. And there are a few updates worth mentioning, such as a freeform project panel in Premiere Pro, AI-driven Auto Ducking for Ambience in Audition and the addition of a Twitch extension for Character Animator. But, in my opinion, the Adobe After Effects updates are what this year’s release will be remembered for.


Content Aware: Here is the before and after. Our main image is the mask.

There is a new expression editor in After Effects, so we old pseudo-website designers can now feel at home with highlighting, line numbers and more. There are also performance improvements, such as faster project loading times and new deBayering support for Metal on macOS. But the first prize ribbon goes to the Content-Aware fill for video powered by Adobe Sensei, the company’s AI technology. It’s one of those voodoo features that blows you away the first time you use it. If you have ever used Mocha Pro by BorisFX, then you’ve used a similar tool known as the “Object Removal” tool. Essentially, you draw around the object you want to remove, such as a camera shadow or boom mic, hit the magic button and your object will be removed with a new background in its place. This will save users hours of manual work.
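For a feel of the draw-a-mask, fill-the-hole workflow, here is the simplest possible single-frame version using OpenCV’s classic (non-AI) inpainting. This is not Adobe’s Sensei-powered method, which also uses temporal information across frames, and the file name and mask coordinates are hypothetical.

import cv2
import numpy as np

frame = cv2.imread("frame_0001.png")  # hypothetical frame grab
mask = np.zeros(frame.shape[:2], dtype=np.uint8)

# "Draw around the object you want to remove": here, a rectangle where
# a boom mic might intrude. White (255) marks pixels to be filled.
cv2.rectangle(mask, (400, 100), (520, 260), 255, thickness=-1)

filled = cv2.inpaint(frame, mask, 5, cv2.INPAINT_TELEA)
cv2.imwrite("frame_0001_filled.png", filled)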

Freeform Project panel in Premiere.

Here are some details on other new features:

● Freeform Project panel in Premiere Pro — Arrange assets visually and save layouts for shot selects, production tasks, brainstorming story ideas and assembly edits.
● Rulers and Guides — Work with familiar Adobe design tools inside Premiere Pro, making it easier to align titling, animate effects and ensure consistency across deliverables.
● Punch and Roll in Audition — The new feature provides efficient production workflows in both Waveform and Multitrack for longform recording, including voiceover and audiobook creators.
● Twitch live-streaming triggers with the Character Animator extension — Livestreamed performances can now engage audiences in real time with on-the-fly costume changes, impromptu dance moves and signature gestures and poses — a new way to interact, and even monetize, using Bits to trigger actions.
● Auto Ducking for ambient sound in Audition and Premiere Pro — Also powered by Adobe Sensei, Auto Ducking now allows for dynamic adjustments to ambient sounds against spoken dialog (see the sketch after this list). Keyframed adjustments can be manually fine-tuned to retain creative control over a mix.
● Adobe Stock now offers 10 million professional-quality, curated, royalty-free HD and 4K video footage and Motion Graphics templates from leading agencies and independent editors to use for editorial content, establishing shots or filling gaps in a project.
● Premiere Rush, introduced late last year, offers a mobile-to-desktop workflow integrated with Premiere Pro for on-the-go editing and video assembly. Built-in camera functionality in Premiere Rush helps you take pro-quality video on your mobile devices.
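As promised above, here is the core idea of ducking in a short Python sketch: follow the dialog’s level and pull the ambience down while dialog is present. It assumes equal-length mono numpy arrays and a hand-picked threshold, and it stands in for, rather than reproduces, Adobe’s Sensei-driven implementation.

import numpy as np

def duck(ambience: np.ndarray, dialog: np.ndarray, sr: int,
         duck_db: float = -12.0, win_s: float = 0.05) -> np.ndarray:
    """Lower the ambience wherever the dialog track is active."""
    win = max(1, int(sr * win_s))
    kernel = np.ones(win) / win
    env = np.sqrt(np.convolve(dialog ** 2, kernel, mode="same"))  # RMS envelope
    gain = np.where(env > 0.01, 10.0 ** (duck_db / 20.0), 1.0)    # crude speech gate
    gain = np.convolve(gain, kernel, mode="same")  # smooth so it fades, not clicks
    return ambience * gain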

The new features for Adobe Creative Cloud are now available with the latest version of Creative Cloud.

After fire, SF audio house One Union is completely rebuilt

San Francisco-based audio post house One Union Recording Studios has completed a total rebuild of its facility. It features five all-new, state-of-the-art studios designed for mixing, sound design, ADR, voice recording and other sound work.

Each studio offers Avid/Euphonix digital mixing consoles, Avid MTRX interface systems, the latest Pro Tools Ultimate software and robust monitoring and signal-processing gear. All studios have dedicated, large voice recording booths. One is certified for Dolby Atmos sound production. The facility’s infrastructure and central machine room are also all new.

One Union began its reconstruction in September 2017 in the aftermath of a fire that affected the entire facility. “Where needed, we took the building back to the studs,” says One Union president/owner John McGleenan. “We pulled out, removed and de-installed absolutely everything and started fresh. We then rebuilt the studios and rewired the whole facility. Each studio now has new consoles, speakers, furniture and wiring, and all are connected to new machine rooms. Every detail has been addressed and everything is in its proper place.”

During the 18 months of reconstruction, One Union carried on operations on a limited basis while maintaining its full staff. That included its team of engineers, Joaby Deal, Eben Carr, Andy Greenberg, Matt Wood and Isaac Olsen, who worked continuously and remain in place.

Reconstruction was managed by LA-based Yanchar Design & Consulting Group. All five studios feature Avid/Euphonix System 5 digital audio consoles, Pro Tools 2018 and Avid MTRX with Dante interface systems. Studio 4 adds Dolby Atmos capability with a full Atmos Production Suite as well as an Atmos RMU. Studio 5, the facility’s largest recording space, has two MTRX systems, with a total of more than 240 analog, MADI and Dante outputs (256 inputs), integrated with a nine-foot Avid/Euphonix console. It also features a 110-inch, retractable projection screen in the control room and a 61-inch playback monitor in its dedicated voice booth. Among other things, the central machine room includes a 300TB LTO archiving system.

John McGleenan

The facility was also rebuilt with an eye toward avoiding production delays. “All of the equipment is enterprise-grade and everything is redundant,” McGleenan notes. “The studios are fed by a dual power supply and each is equipped with dual devices. If some piece of gear goes down, we have a redundant system in place to keep going. Additionally, all our critical equipment is hot-swappable. Should any component experience a catastrophic failure, it will be replaced by the manufacturer within 24 hours.”

McGleenan adds that redundancy extends to broadband connectivity. To avoid outages, the facility is served by two 1Gig fiber optic connections provided by different suppliers. WiFi is similarly available through duplicate services.

One Union Recording was founded by McGleenan, a former advertising agency executive, in 1994 and originally had just one sound studio. More studios were soon added as the company became a mainstay sound services provider to the region’s advertising industry.

In recent years, the company has extended its scope to include corporate and branded media, television, film and games, and built a client base that extends across the country and around the world.

Recent work includes commercials for Mountain Dew and carsharing company Turo, the television series Law & Order: SVU and Grand Hotel, and the game The Grand Tour.

Wonder Park’s whimsical sound

By Jennifer Walden

The imagination of a young girl comes to life in the animated feature Wonder Park. A Paramount Animation and Nickelodeon Movies film, the story follows June (Brianna Denski) and her mother (Jennifer Garner) as they build a pretend amusement park in June’s bedroom. There are rides that defy the laws of physics — like a merry-go-round with flying fish that can leave the carousel and travel all over the park; a Zero-G-Land where there’s no gravity; a waterfall made of firework sparks; a super tube slide made from bendy straws; and other wild creations.

But when her mom gets sick and leaves for treatment, June’s creative spark fizzles out. She disassembles the park and packs it away. Then one day as June heads home through the woods, she stumbles onto a real-life Wonderland that mirrors her make-believe one. Only this Wonderland is falling apart and being consumed by the mysterious Darkness. June and the park’s mascots work together to restore Wonderland by stopping the Darkness.

Even in its more tense moments — like June and her friend Banky (Oev Michael Urbas) riding a homemade rollercoaster cart down their suburban street and narrowly missing an oncoming truck — the sound isn’t intense. The cart doesn’t feel rickety or squeaky, like it’s about to fly apart (even though the brake handle breaks off). There’s a sense of danger, but the kind that could result in non-serious injury, never death. And that’s perfect for the target audience of this film — young children. Wonder Park is meant to be sweet and fun, and supervising sound editor John Marquis captures that masterfully.

Marquis and his core team — sound effects editor Diego Perez, sound assistant Emma Present, dialogue/ADR editor Michele Perrone and Foley supervisor Jonathan Klein — handled sound design, sound editorial and pre-mixing at E² Sound on the Warner Bros. lot in Burbank.

Marquis was first introduced to Wonder Park back in 2013, but the team’s real work began in January 2017. The animated sequences steadily poured in for 17 months. “We had a really long time to work the track, to get some of the conceptual sounds nailed down before going into the first preview. We had two previews with temp score and then two more with mockups of composer Steven Price’s score. It was a real luxury to spend that much time massaging and nitpicking the track before getting to the dub stage. This made the final mix fun; we were having fun mixing and not making editorial choices at that point.”

The final mix was done at Technicolor’s Stage 1, with re-recording mixers Anna Behlmer (effects) and Terry Porter (dialogue/music).

Here, Marquis shares insight on how he created the whimsical sound of Wonder Park, from the adorable yet naughty chimpanzombies to the tonally pleasing, rhythmic and resonant bendy-straw slide.

The film’s sound never felt intense even in tense situations. That approach felt perfectly in-tune with the sensibilities of the intended audience. Was that the initial overall goal for this soundtrack?
When something was intense, we didn’t want it to be painful. We were always in search of having a nice round sound that had the power to communicate the energy and intensity we wanted without having the pointy, sharp edges that hurt. This film is geared toward a younger audience and we were supersensitive about that right out of the gate, even without having that direction from anyone outside of ourselves.

I have two kids — one 10 and one five. Often, they will pop by the studio and listen to what we’re doing. I can get a pretty good gauge right off the bat if we’re doing something that is not resonating with them. Then, we can redirect more toward the intended audience. I pretty much previewed every scene for my kids, and they were having a blast. I bounced ideas off of them so the soundtrack evolved easily toward their demographic. They were at the forefront of our thoughts when designing these sequences.

John Marquis recording the bendy straw sound.

There were numerous opportunities to create fun, unique palettes of sound for this park and these rides that stem from this little girl’s imagination. If I’m a little kid and I’m playing with a toy fish and I’m zipping it around the room, what kind of sound am I making? What kind of sounds am I imagining it making?

This film reminded me of being a kid and playing with toys. So, for the merry-go-round sequence with the flying fish, I asked my kids, “What do you think that would sound like?” And they’d make some sound with their mouths and start playing, and I’d just riff off of that.

I loved the sound of the bendy-straw slide — from the sound of it being built, to the characters traveling through it, and even the reverb on their voices while inside of it. How did you create those sounds?
Before that scene came to us, before we talked about it or saw it, I had the perfect sound for it. We had been having a lot of rain, so I needed to get an expandable gutter for my house. It starts at about one foot long but can be pulled out to three feet if needed. It works exactly like a bendy straw, but it’s huge. So when I saw the scene in the film, I knew I had the exact, perfect sound for it.

We mic’d it with a Sanken CO-100k, inside and out. We pulled the tube apart and closed it, and got this great, ribbed, rippling, zuzzy sound. We also captured impulse responses inside the tube so we could create custom reverbs. It was one of those magical things that I didn’t even have to think about or go hunting for. This one just fell in my lap. It’s a really fun and tonal sound. It’s musical and has a rhythm to it. You can really play with the Doppler effect to create interesting pass-bys for the building sequences.
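
Capturing impulse responses and convolving dry signals with them is the standard way to build a custom reverb of a real space, whether that space is a room or the inside of a gutter tube. As a rough illustration of the core operation (not the team’s actual toolchain, and the mix value is an invented example):

```python
import numpy as np
from scipy.signal import fftconvolve

def custom_reverb(dry, ir, wet_mix=0.35):
    """Place a dry recording 'inside' a captured space by convolving it
    with that space's impulse response (IR). Sketch only."""
    wet = fftconvolve(dry, ir)[: len(dry)]     # truncate the tail for simplicity
    wet /= np.max(np.abs(wet)) + 1e-12         # tame the convolution gain
    return (1.0 - wet_mix) * dry + wet_mix * wet
```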

Another fun sequence for sound was inside Zero-G-Land. How did you come up with those sounds?
That’s a huge, open space. Our first instinct was to go with a very reverberant sound to showcase the size of the space and the fact that June is in there alone. But as we discussed it further, we came to the conclusion that since this is a zero-gravity environment, there would be no air for the sound waves to travel through. So, we decided to treat it like space. That approach really worked out because in the scene preceding Zero-G-Land, June is walking through a chasm and there are huge echoes. So the contrast between that and the airless Zero-G-Land worked out perfectly.

Inside Zero-G-Land’s tight, quiet environment we have the sound of these giant balls that June is bouncing off of. They look like balloons, so we had balloon bounce sounds, but that wasn’t whimsical enough. It was too predictable. This is a land of imagination, so we were looking for another sound to use.

John Marquis with the Wind Wand.

My friend has an instrument called a Wind Wand, which combines the sound of a didgeridoo with a bullroarer. The Wind Wand is about three feet long and has a gigantic rubber band that goes around it. When you swing the instrument around in the air, the rubber band vibrates. It sounds almost like an organic lightsaber. I had been playing around with that for another film and thought the rubbery, resonant quality of its vibration could work for these gigantic ball bounces. So we recorded it and applied mild processing to get some shape and movement. It was just a bit of pitching and Doppler effect; we didn’t have to do much to it because the actual sound itself was so expressive and rich and it just fell into place. Once we heard it in the cut, we knew it was the right sound.

How did you approach the sound of the chimpanzombies? Again, this could have been an intense sound, but it was cute! How did you create their sounds?
The key was to make them sound exciting and mischievous instead of scary. It can’t ever feel like June is going to die. There is danger. There is confusion. But there is never a fear of death.

The chimpanzombies are actually these Wonder Chimp dolls gone crazy. So they were all supposed to have the same voice — this pre-recorded voice that is in every Wonder Chimp doll. So, you see this horde of chimpanzombies coming toward you and you think something really threatening is happening, but then you start to hear them, and all they are saying is, “Welcome to Wonderland!” or something sweet like that. It’s all in a big cacophony of high-pitched voices, and they have these little squeaky dog-toy feet. So there’s this contrast between what you anticipate will be scary and what turns out to be super-cute.

The big challenge was that they were all supposed to sound the same, just this one pre-recorded voice that’s in each one of these dolls. I was afraid it was going to sound like a wall of noise that was indecipherable, and a big, looping mess. There’s a software program that I ended up using a lot on this film. It’s called Sound Particles. It’s really cool, and I’ve been finding a reason to use it on every movie now. So, I loaded this pre-recorded snippet from the Wonder Chimp doll into Sound Particles and then changed different parameters — I wanted a crowd of 20 dolls that could vary in pitch by 10%, and they’re going to walk by at a medium pace.

Changing the parameters will change the results, and I was able to make a mass of different voices based off of this one, individual audio file. It worked perfectly once I came up with a recipe for it. What would have taken me a day or more — to individually pitch a copy of a file numerous times to create a crowd of unique voices — only took me a few minutes. I just did a bunch of varieties of that, with smaller groups and bigger groups, and I did that with their feet as well. The key was that the chimpanzombies were all one thing, but in the context of music and dialogue, you had to be able to discern the individuality of each little one.
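
The recipe Marquis describes maps neatly onto a few lines of DSP: take one source file, spawn a batch of copies, randomize each copy’s pitch within a range, then stagger their entrances and pan positions. The sketch below is a generic approximation of that idea, not Sound Particles itself; note that the naive resampling pitch shift also changes each copy’s duration, which only helps the crowd feel less uniform.

```python
import numpy as np
from scipy.signal import resample

def doll_crowd(voice, sr, n=20, pitch_var=0.10, spread_s=2.0, seed=7):
    """Turn one recorded doll voice into a stereo crowd of n dolls.
    Generic sketch -- not the Sound Particles algorithm."""
    rng = np.random.default_rng(seed)
    out_len = int(len(voice) / (1.0 - pitch_var)) + int(spread_s * sr)
    left, right = np.zeros(out_len), np.zeros(out_len)
    for _ in range(n):
        ratio = 1.0 + rng.uniform(-pitch_var, pitch_var)  # +/-10% pitch
        copy = resample(voice, int(len(voice) / ratio))   # naive pitch shift
        start = rng.integers(0, out_len - len(copy))      # staggered entrances
        pan = rng.uniform(0.0, 1.0)                       # constant-power pan
        g = rng.uniform(0.5, 1.0) / n                     # per-doll level
        left[start:start + len(copy)] += copy * g * np.cos(pan * np.pi / 2)
        right[start:start + len(copy)] += copy * g * np.sin(pan * np.pi / 2)
    return np.stack([left, right])
```

Running the same function with different n, pitch_var and spread_s values gives the “smaller groups and bigger groups” Marquis mentions, each built from the same single source file.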

There’s a fun scene where the chimpanzombies are using little pickaxes and hitting the underside of the glass walkway that June and the Wonderland mascots are traversing. How did you make that?
That was for Fireworks Falls; one of the big scenes that we had waited a long time for. We weren’t really sure how that was going to look — if the waterfall would be more fiery or more sparkly.

The little pickaxes were a blacksmith’s hammer beating an iron bar on an anvil. Those “tink” sounds were pitched up and resonated just a little bit to give it a glass feel. The key with that, again, was to try to make it cute. You have these mischievous chimpanzombies all pecking away at the glass. It had to sound like they were being naughty, not malicious.

When the glass shatters and they all fall down, we had these little pinball bell sounds that would pop in from time to time. It kept the scene feeling mildly whimsical as the debris is falling and hitting the patio umbrellas and tables in the background.

Here again, it could have sounded intense as June makes her escape using the patio umbrella, but it didn’t. It sounded fun!
I grew up in the Midwest and every July 4th we would shoot off fireworks on the front lawn and on the sidewalk. I was thinking about the fun fireworks that I remembered, like sparklers, and these whistling spinning fireworks that had a fun acceleration sound. Then there were bottle rockets. When I hear those sounds now I remember the fun time of being a kid on July 4th.

So, for the Fireworks Falls, I wanted to use those sounds as the fun details, the top notes that poke through. There are rocket crackles and whistles that support the low-end, powerful portion of the rapids. As June is escaping, she’s saying, “This is so amazing! This is so cool!” She’s a kid exploring something really amazing and realizing that this is all of the stuff that she was imagining and is now experiencing for real. We didn’t want her to feel scared, but rather to be overtaken by the joy and awesomeness of what she’s experiencing.

The most ominous element in the park is the Darkness. What was your approach to the sound in there?
It needed to be something that was more mysterious than ominous. It’s only scary because of the unknown factor. At first, we played around with storm elements, but that wasn’t right. So I played around with a recording of my son as a baby; he’s cooing. I pitched that sound down a ton, so it has this natural, organic, undulating, human spine to it. I mixed in some dissonant windchimes. I have a nice set of windchimes at home and I arranged them so they wouldn’t hit in a pleasing way. I pitched those way down, and it added a magical/mystical feel to the sound. It’s almost enticing June to come and check it out.

The Darkness is the thing that is eating up June’s creativity and imagination. It’s eating up all of the joy. It’s never entirely clear what it is though. When June gets inside the Darkness, everything is silent. The things in there get picked up and rearranged and dropped. As with the Zero-G-Land moment, we bring everything to a head. We go from a full-spectrum sound, with the score and June yelling and the sound design, to a quiet moment where we only hear her breathing. From there, it opens up and blossoms with the pulse of her creativity returning and her memories returning. It’s a very subjective moment that’s hard to put into words.

When June whispers into Peanut’s ear, his marker comes alive again. How did you make the sound of Peanut’s marker? And how did you give it movement?
The sound was primarily this ceramic, water-based bird whistle, which gave it a whimsical element. It reminded me of a show I watched when I was little where the host would draw with his marker and it would make a little whistling, musical sound. So anytime the marker was moving, it would make this really fun sound. This marker needed to feel like something you would pick up and wave around. It had to feel like something that would inspire you to draw and create with it.

To get the movement, it was partially performance based and partially done by adding in a Doppler effect. I used variations in the Waves Doppler plug-in. This was another sound that I also used Sound Particles for, but I didn’t use it to generate particles. I used it to generate varied movement for a single source, to give it shape and speed.
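
A physically motivated Doppler pass-by is straightforward to sketch: as a source flies past, its radial velocity sets a time-varying pitch ratio, while its position sets level and pan. The code below is a generic illustration of the effect, not the Waves Doppler or Sound Particles processing, and the speed and distance values are arbitrary.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def doppler_passby(mono, sr, speed=15.0, dist=2.0):
    """Fly a mono sound past the listener with pitch, level and pan movement.
    Generic textbook Doppler, for illustration only."""
    n = len(mono)
    t = np.arange(n) / sr
    x = speed * (t - t[-1] / 2.0)           # source crosses the listener mid-file
    r = np.hypot(x, dist)                   # distance to the listener
    v_radial = speed * x / r                # negative approaching, positive receding
    ratio = SPEED_OF_SOUND / (SPEED_OF_SOUND + v_radial)
    read = np.cumsum(ratio)                 # warp the read position so the
    read *= (n - 1) / read[-1]              # playback rate follows the ratio
    shifted = np.interp(read, np.arange(n), mono)
    amp = (1.0 / r) / (1.0 / r).max()       # simple distance attenuation
    pan = 0.5 + 0.5 * x / r                 # sweep left to right
    left = shifted * amp * np.cos(pan * np.pi / 2)
    right = shifted * amp * np.sin(pan * np.pi / 2)
    return np.stack([left, right])
```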

Did you use Sound Particles on the paper flying sound too? That one also had a lot of movement, with lots of twists and turns.
No, that one was an old-fashioned fader move. What gave that sound its interesting quality — this soft, almost ethereal and inviting feel — was the practical element we used to create the sound. It was a piece of paper bag that was super-crumpled up, so it felt fluttery and soft. Then, every time it moved, it had a vocal whoosh element that gave it personality. So once we got that practical element nailed down, the key was to accentuate it with a little wispy whoosh to make it feel like the paper was whispering to June, saying, “Come follow me!”

Wonder Park is in theaters now. Go see it!


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Providing audio post for Three Identical Strangers documentary

By Randi Altman

It is a story that those of us who grew up in the New York area know well. Back in the ‘80s, triplet brothers separated at birth were reunited, after two of them attended the same college within a year of each other — with one being confused for the other. A classmate figured it out and their story was made public. Enter brother number three.

It’s an unbelievable story that at the time was considered to be a heart-warming tale of lost brothers — David Kellman, Bobby Shafran and Eddy Galland — who found each other again at the age of 19. But heart-warming turned heart-breaking when it was discovered that the triplets were part of a calculated, psychological research project. Each brother was intentionally placed in a household of a different economic level, where they were “checked in on” over the years.

L-R: Chad Orororo, Nas Parkash and Kim Tae Hak

Last year, British director Tim Wardle told the story in his BAFTA-nominated documentary, Three Identical Strangers, produced by Raw TV. For audio post production, Wardle called on dialogue editor and re-recording mixer Nas Parkash, sound effects editor Kim Tae Hak and Foley and archive FX editor Chad Orororo, all from London-based post house Molinare. The trio was nominated for an MPSE Award earlier this year for their work on the film.

We recently reached out to the team to ask about workflow on this compelling work.

When you first started on Three Identical Strangers, did you realize then how powerful a film it was going to be?
Nas Parkash: It was after watching the film for the first time that we realized it was going to be a seminal film. It’s an outrageous story — the likes of which we hadn’t come across before. We as a team have been fortunate to work on a broad range of documentary features, but this one has stuck out, probably because of its unpredictability and sheer number of plot twists.

Chad Orororo: I agree. It was quite an exciting moment to watch an offline cut and instantly know that it was going to be a phenomenal project. The great thing about having this reaction was that the pressure was fused with excitement, which is always a win-win, especially as the storytelling had so much charisma.

Kim Tae Hak: When the doc was first mentioned, I had no idea about their story, but soon after viewing the first cut I realized that this would be a great film. The documentary is based on an unbelievable true story — it evokes a lot of mixed feelings, and I wanted to ensure that every single sound effect element reflected those emotions and actions.

How early did you get involved in the project?
Tae Hak: I got to start working on the SFX as soon as the picture was locked and available.

Parkash: We had a spotting session a week before we started, with director Tim Wardle and editor Michael Harte, where we watched the film in sections and made notes. This helped us determine what the emotion in each scene should be, which is important when you’ve come to a film cold. They had been living with the edit, evolving it over months, so it was important to get up to speed with their vision as quickly as possible.

Courtesy of Newsday

Documentary audio often comes from many different sources and in varying types of quality. Can you talk about that and the challenges related to that?
Parkash: The audio quality was pretty good. The interview recordings were clean and on mic. We had two mics for every interview, but I went with the boom every time, as it sounded nicer, albeit more ambient, but with atmospheres that bedded in nicely.

Even the archive clips, such as those from the Phil Donahue Show, were good. Funnily enough, you tend to get worse-sounding archive material the more recent it is. 1970s stuff on the whole seems to have been preserved quite well, whereas stuff from the 1990s can be terrible.

Any technical challenges on the project?
Parkash: The biggest challenge for me was mixing in commercial music with vocals underneath interview dialogue. It had to be kept at a loud enough level to retain impact in the cinema, but low enough that it didn’t fight with the interview dialogue. The biggest deliberation was to what degree should we use sound effects in the drama recon — do we fully fill or just go with dialogue and music? In the end it was judged on a case-by-case basis.

How was Foley used within the doc?
Orororo: The Foley covered everything that you see on screen — all of the footsteps, clothing movement, shaving and breathing. You name it. It’s in there somewhere. My job was to add a level of subtle actuality, especially during the drama reconstruction scenes.

These scenes took quite a bit of work to get right because they had to match the mood of the narration. For example, the coin spillage during the telephone box scene required a specific number of coins on the right surface. It took numerous takes to get right because you can’t exactly control how objects fall, and the texture also changes depending on the height from which you drop an object. So generally, there’s a lot more to consider when recording Foley than people may assume.

Unfortunately, there were a few scenes where Foley was completely dropped (mainly on the archive material), but this is something that usually happens. The shape of the overall mix always takes precedence over the individual elements that contribute to it. Teamwork makes the dream work, as they say, and I really think that showed in the final result.

Parkash: We did have sync sound recorded on location, but we decided it would be better to re-record at a higher fidelity. Some of it was noisy or didn’t sound cinematic enough. When it’s cleaner sound, you can make more of it.

What about the sound effects? Did you use a library or your own?
Parkash: Kim has his own extensive sound effects library. We also have our own personal ones, plus Molinare’s. Anything we can’t find, we’ll go out and record. Kim has a Zoom recorder, and his breathing has been featured on many films now (laughs).

Tae Hak: I mainly used my own SFX library. I am always building up my FX library, which I can apply instantly to any type of motion picture. I then tweak the sounds with various software plugins, such as Pitch ’n Time Pro, Altiverb and many more.

As a brief example of how I completed the sound design for the opening title: the first thing I did was look for realistic heartbeats of six-month-old infants. After collecting some natural heartbeats, I blended them with other synthetic elements, varying the pitch slightly between them (for the three babies) and applying effects, such as chorus and reverb, so each heartbeat has a slightly different texture. It was a bit tricky to make them distinct but still the same (like identical triplets).

The three heartbeats were panned across the front three speakers in order to create as much separation and clarity as possible. Once I was happy with the heartbeats as a foundation, I added other sound elements, such as underwater and ambiguous liquid textures, and other sound design elements. It was important for this sequence to build in a dramatic way, starting as mono and gradually filling the 5.1 space before a hard cut into the interview room.
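
The detune-and-pan step at the heart of that design is small enough to sketch: three copies of one heartbeat, each shifted a few cents, assigned to the left, center and right speakers. The snippet below shows only that step, with invented cent offsets; the chorus, reverb and underwater layers Kim describes would be built on top.

```python
import numpy as np
from scipy.signal import resample

def triplet_heartbeats(beat, cents=(-15.0, 0.0, 15.0)):
    """Three slightly detuned copies of one heartbeat, one per front
    speaker (rows are L, C, R of a 5.1 front stage). Sketch only."""
    copies = []
    for offset in cents:
        ratio = 2.0 ** (offset / 1200.0)              # cents -> pitch ratio
        copies.append(resample(beat, int(len(beat) / ratio)))
    n = min(len(c) for c in copies)                   # align the lengths
    return np.stack([c[:n] for c in copies])
```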

Can you talk about working with director Tim Wardle?
Tae Hak: Tim was fantastic and very supportive throughout the project. As an FX editor, I had less face-to-face time with him than Nas did, but we had a spotting session together before the first day of work, and we also talked through our sound design approach over the phone, especially for the opening title and the triplets’ heartbeats.

Orororo: Tim was great to work with! He’s a very open-minded director who also trusts in the talent that he’s working with, which can be hard to come by especially on a project as important as Three Identical Strangers.

Parkash: Tim and editor Michael Harte were wonderful to work with. The best aspect of working in this industry is the people you meet and the friendships you make. They are both cinephiles, who cited numerous other films and directors in order to guide us through the process — “this scene should feel like this scene from such and such movie.” But they were also open to our suggestions and willing to experiment with different approaches. It felt like a collaboration, and I remember having fun in those intense few weeks.

How much stock footage versus new footage was shot?
Parkash: It was all pretty much new — the sit-down interviews, drama recon and the GVs (b-roll). The archive material was obviously cleared from various sources. The home movie footage came mute, so we rebuilt the sound, but upon review we decided it was better left mute. It tends to change the audience’s perspective of the material depending on whether you hear the sound or not. Without sound, it feels more like you’re looking upon the subjects, as opposed to being with them.

What kind of work went into the new interviews?
Parkash: EQ, volume automation, de-essing, noise reduction, de-reverb, mouth de-click — iZotope RX 6 software, basically. We’ve become quite reliant on this software for unifying our source material into something consistent and achieving a quality good enough to stand up in the cinema, at theatrical level.
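
iZotope’s algorithms are proprietary, but the textbook starting point for the noise-reduction part of such a chain is spectral gating: measure a noise floor from a quiet stretch of the recording, then attenuate any time-frequency bins that never rise meaningfully above it. The sketch below is a crude stand-in for illustration only, nowhere near RX’s sophistication.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_gate(audio, sr, noise_clip, reduction_db=-20.0, thresh=2.0):
    """Attenuate time-frequency bins that sit near a measured noise floor.
    Crude illustration -- not iZotope RX."""
    _, _, spec = stft(audio, fs=sr, nperseg=1024)
    _, _, noise = stft(noise_clip, fs=sr, nperseg=1024)
    floor = np.mean(np.abs(noise), axis=1, keepdims=True)  # per-bin noise floor
    keep = np.abs(spec) > thresh * floor                   # bins well above it
    gain = np.where(keep, 1.0, 10 ** (reduction_db / 20.0))
    _, cleaned = istft(spec * gain, fs=sr, nperseg=1024)
    return cleaned[: len(audio)]
```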

What are you all working on now at Molinare?
Tae Hak: I am working on a project about football (soccer for Americans) as the FX editor. I can’t name it yet, but it’s a six-episode series for Amazon Prime. I’m thoroughly enjoying the project, as I am a football fan myself. It’s filmed across the world, including Russia where the World Cup was held last year. The story really captures the beautiful game, how it’s more than just a game, and its impact on so much of the global culture.

Parkash: We’ve just finished a series for Discovery ID, about spouses who kill each other. I’m also working on the football series that Kim mentioned for Amazon Prime. So, murder and footy! We are lucky to work on such varied, high-quality films, one after another.

Orororo: Surprisingly, I’m also working on this football series (smiles). I work with Nas fairly often and we’ve just finished up on an evocative, feature-length TV documentary that follows personal accounts of people who have survived massacre attacks in the US.

Molinare has revered creatives everywhere you look, and I’m lucky enough to be working with one of the sound greats, Greg Gettens, on a new HBO/Channel 4 documentary. However, it’s quite secret, so I can’t say much more, but keep your eyes peeled.

Main Image: Courtesy of Neon


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Hulu’s PEN15: Helping middle school sound funny

By Jennifer Walden

Being 13 years old once was hard enough, but the creators of the Hulu series PEN15 have relived that uncomfortable age — braces and all — a second time for the sake of comedy.

James Parnell

Maya Erskine and Anna Konkle might be in their 30s, but they convincingly play two 13-year-old BFFs journeying through the perils of 7th grade. And although they’re acting alongside actual teenagers, it’s not Strangers With Candy grown-up-interfacing-with-kids kind of weird — not even during the “first kiss” scene. The awkwardness comes from just being 13 and having those first-time experiences of drinking, boyfriends, awkward school dances and even masturbation (the topic of focus in Episode 3). Erskine, Konkle and co-showrunner Sam Zvibleman hilariously capture all of that cringe-worthy coming-of-age content in their writing on PEN15.

The show is set in the early 2000s, a time when dial-up Internet and the Sony Discman were prevailing technology. The location is a nondescript American suburb that is relatable in many ways to many people, and that is one way the show transports the audience back to their early teenage years.

At Monkeyland Audio in Glendale, California, supervising sound editor/re-recording mixer James Parnell and his team worked hard to capture that almost indescribable nostalgic essence that the showrunners were seeking. Monkeyland was responsible for all post sound editorial, including Foley, ADR, final 5.1 surround mixing and stereo fold-downs for each episode. Let’s find out more from Parnell.

I happened to watch Episode 3, “Ojichan,” with my mom, and it was completely awkward. It epitomized the growing pains of the teenage years, which is what this series captures so well.
Well, that was an awkward one to mix as well. Maya (Erskine) and Anna (Konkle) were in the room with me while I was mixing that scene! Obviously, the show is an adult comedy that targets adults. We all ended up joking about it during the mix — especially about the added Foley sound that was recorded.

The beauty of this show is that it has the power to take something that might otherwise be thought of as, perhaps, inappropriate for some, and humanize it. All of us went through that period in our lives and I would agree that the show captures that awkwardness in a perfect and humorous way.

The writers/showrunners also star. I’m sure they were equally involved with post as well as other aspects of the show. How were they planning to use sound to help tell their story?
Parnell: In terms of the post schedule, I was brought on very early. We were doing spotting sessions to pre-locked picture, for Episode 1 and Episode 3. From the get-go, they were very specific about how they wanted the show to sound. I got the vibe that they were going for that Degrassi/Afterschool Special feeling but kept in the year 2000 — not the original Degrassi of the early ‘90s.

For example, they had a very specific goal for what they wanted the school to sound like. The first episode takes place on the first day of 7th grade and they asked if we could pitch down the school bell so it sounds clunky and have the hallways sound sparse. When class lets out, the hallway should sound almost like a relief.

Their direction was more complex than “see a school hallway, hear a school hallway.” They were really specific about what the school should sound like and specific about what the girls’ neighborhoods should sound like — Anna’s family in the show is a bit better off than Maya’s family so the neighborhood ambiences reflect that.

What were some specific sounds you used to capture the feel of middle school?
The show is set in 2000, and they had some great visual cues as throwbacks. In Episode 4 “Solo,” Maya is getting ready for the school band recital and she and her dad (a musician who’s on tour) are sending faxes back and forth about it. So we have the sound of the fax machine.

We tried to support the amazing recordings captured by the production sound team on-set by adding in sounds that lent a non-specific feeling to the school. This doesn’t feel like a California middle school; it could be anywhere in America. The same goes for the ambiences. We weren’t using California-specific birds. We wanted it to sound like Any Town, USA so the audience could connect with the location and the story. Our backgrounds editor G.W. Pope did a great job of crafting those.

For Episode 7, “AIM,” the whole thing revolves around Maya and Anna’s AOL instant messenger experience. The creatives on the show were dreading that episode because all they were working with was temp sound. They had sourced recordings of the AOL sound pack to drop into the video edit. The concern was how some of the Hulu execs would take it because the episode mostly takes place in front of a computer, while they’re on AOL chatting with boys and with each other. Adding that final layer of sound and then processing on the mix stage helped what might otherwise feel like a slow edit and a lagging episode.

The dial-up sounds, AOL sign-on sounds and instant messenger sounds we pulled from library. This series had a limited budget, so we didn’t do any field recordings. I’ve done custom recordings for higher-budget shows, but on this one we were supplementing the production sound. Our sound designer on PEN15 was Xiang Li, and she did a great job of building these scenes. We had discussions with the showrunners about how exactly the fax and dial-up should sound. This sound design is a mixture of Xiang Li’s sound effects editorial with composer Leo Birenberg’s score. The song is a needle drop called “Computer Dunk.” Pretty cool, eh?

For Episode 4, “Solo,” was the middle school band captured on-set? Or was that recorded in the studio?
There was production sound recorded but, ultimately, the music was recorded by the composer Leo Birenberg. In the production recording, the middle school kids were actually playing their parts but it was poorer than you’d expect. The song wasn’t rehearsed so it was like they were playing random notes. That sounded a bit too bad. We had to hit that right level of “bad” to sell the scene. So Leo played individual instruments to make it sound like a class orchestra.

In terms of sound design, that was one of the more challenging episodes. I got a day to mix the show before the execs came in for playback. When I mixed it initially, I mixed in all of Leo’s stems — the brass, percussion, woodwinds, etc.

Anna pointed out that the band needed to sound worse than how Leo played it, more detuned and discordant. We ended up stripping out instruments and pitching down parts, like the flute part, so that it was in the wrong key. It made the whole scene feel much more like an awkward band recital.

During the performance, Maya improvises a timpani solo. In real life, Maya’s father is a professional percussionist here in LA, and he hooked us up with a timpani player who re-recorded the part note-for-note as Maya played it on-screen. It sounded really good, but we ended up sticking with production sound because it was Maya’s unique performance that made that scene work. So even though we went to the extreme of hiring a professional percussionist to re-perform the part, we ultimately decided to stick with production sound.

What were some of the unique challenges you had in terms of sound on PEN15?
On Episode 3, “Ojichan,” Maya is going through this process of “self-discovery” and she’s disconnecting her friendship from Anna. There’s a scene where they’re watching a video in class and Anna asks Maya why she missed the carpool that morning. That scene was like mixing a movie inside a show. I had to mix the movie, then futz that, and then mix that into the scene. On the close-ups of the 4:3 old-school television the movie would be less futzed and more like you’re in the movie, and then we’d cut back to the girls and I’d have to futz it. Leo composed 20 different stems of music for that wild life video. Mixing that scene was challenging.
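
For readers who haven’t met the term, “futzing” dialogue so it reads as a small TV speaker mostly comes down to band-limiting plus a touch of saturation. Here is a minimal sketch of that kind of futz chain; the corner frequencies and drive amount are illustrative guesses, not Parnell’s settings.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def futz(audio, sr, lo_hz=300.0, hi_hz=3500.0, drive=2.5):
    """Make full-range audio sound like it's playing from a small TV
    speaker. Illustrative settings only."""
    sos = butter(4, [lo_hz, hi_hz], btype="bandpass", fs=sr, output="sos")
    band = sosfilt(sos, audio)                      # strip the lows and highs
    return np.tanh(drive * band) / np.tanh(drive)   # gentle speaker-ish grit
```

Close-ups of the TV would get a lighter treatment (wider band, less drive) and cuts back to the classroom a heavier one, which is the back-and-forth Parnell describes.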

Then there was the Wild Things film in Episode 8, “Wild Things.” A group of kids go over to Anna’s boyfriend’s house to watch Wild Things on VHS. That movie was risqué, so if you had an older brother or older cousin, then you might have watched it in middle school. That was a challenging scene because everyone had a different idea of how the den should sound, how futzed the movie dialogue should be, how much of the actual film sound we could use, etc. There was a specific feel to the “movie night” that the producers were looking for. The key was mixing the movie into the background and bringing the awkward flirting/conversation between the kids forward.

Did you have a favorite scene for sound?
The season finale is one of the bigger episodes. There’s a middle school dance and so there’s a huge amount of needle-drop songs. Mixing the music was a lot of fun because it was a throwback to my youth.

Also, the “AIM” episode ended up being really fun to work on — even though everyone was initially worried about it. I think the sound really brought that episode to life. More than any other element, sound is what carried that episode.

The first episode was fun too. It was the first day of school and we see the girls getting ready at their own houses, getting into the carpool and then taking their first step, literally, together toward the school. There we dropped out all the sound and just played the Lit song “My Own Worst Enemy,” which gets cut off abruptly when someone on rollerblades hops in front of the girls. Then they talk about one of their classmates who grew boobs over the summer, and we have a big sound design moment when that girl turns around and then there’s another needle-drop track “Get the Job Done.” It’s all specifically choreographed with sound.

The series music supervisor Tiffany Anders did an amazing job of picking out the big needle-drops. We have a Nelly song for the middle school dance, we have songs from The Cranberries, and Lit and a whole bunch more that fit the era and age group. Tiffany did fantastic work and was great to work with.

What were some helpful sound tools that you used on PEN15?
Our dialogue editor’s a huge fan of iZotope’s RX 7, as am I. Here at Monkeyland, we’re on the beta-testing team for iZotope. The products they make are amazing. It’s kind of like voodoo. You can take a noisy recording and, with a click of a button, pretty much erase the issues and save the dialogue. Within that tool palette, there are a lot of ways to fix a whole host of problems.

I’m a huge fan of Audio Ease’s Altiverb, which came in handy on the season finale. In order to create the feeling of being in a middle school gymnasium, I ran the needle-drop songs through Altiverb. There are some amazing reverb settings that let you adjust the levels going specifically to the surround speakers. You can also literally EQ the reverb, taking out the 200Hz that would otherwise make the music sound boomier than desired.
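
That “EQ the reverb, not the music” move is easy to picture in code: convolve with the captured room IR, high-pass only the wet return around 200Hz, then mix it back against the dry song. This is a generic sketch, not Altiverb’s processing.

```python
import numpy as np
from scipy.signal import fftconvolve, butter, sosfilt

def gym_reverb(dry, ir, sr, cut_hz=200.0, wet_mix=0.4):
    """Convolution reverb with a high-passed return so the room doesn't
    get boomy. Generic sketch only."""
    wet = fftconvolve(dry, ir)[: len(dry)]          # truncate the tail
    sos = butter(2, cut_hz, btype="highpass", fs=sr, output="sos")
    wet = sosfilt(sos, wet)                         # EQ the reverb, not the dry
    wet /= np.max(np.abs(wet)) + 1e-12
    return (1.0 - wet_mix) * dry + wet_mix * wet
```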

The lobby at Monkeyland is a large cinder-block room with super-high ceilings. It has acoustics similar to a middle school gymnasium. So, we captured a few impulse responses (IRs), and I used those in Altiverb on a few lines of dialogue during the school dance in the season finale. I used that on a few of the songs as well. Like, when Anna’s boyfriend walks into the gym, there was supposed to be a Limp Bizkit needle-drop, but that ended up getting scrapped at the last minute. So, instead there’s a heavy-metal song, and the IR of our lobby really lent itself to that song.

The show was a simple single-card Pro Tools HD mix — 256 tracks max. I’m a huge fan of Avid and the new Pro Tools 2018. My dialogue chain features Avid’s Channel Strip; McDSP SA-2; Waves De-Esser (typically bypassed unless being used); McDSP 6030 Leveling Amplifier, which does a great job at handling extremely loud dialogue and preventing it from distorting, as well as Waves WNS.

On staff, we have a fabulous ADR mixer named Jacob Ortiz. The showrunners were really hesitant to record ADR, and whenever we could salvage the production dialogue we did. But when we needed ADR, Jacob did a great job of cueing that, and he uses the Sound In Sync toolkit, including EdiCue, EdiLoad and EdiMarker.

Any final thoughts you’d like to share on PEN15?
Yes! Watch the show. I think it’s awesome, but again, I’m biased. It’s unique and really funny. The showrunners Maya, Anna and Sam Zvibleman — who also directed four episodes — are three incredibly talented people. I was honored to be able to work with them and hope to be a part of anything they work on next.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney

Spider-Man Into the Spider-Verse: sound editors talk ‘magical realism’

By Randi Altman

Sony Pictures’ Spider-Man: Into the Spider-Verse isn’t your ordinary Spider-Man movie, from its story to its look to its sound. The filmmakers took a familiar story and turned it on its head a bit, letting audiences know that Spider-Man isn’t just one guy wearing that mask… or even a guy, or even from this dimension.

The film focuses on Miles Morales, a teenager from Brooklyn, struggling with all things teenager while also dealing with the added stress of being Spider-Man.

Geoff Rubay

Audio played a huge role in this story, and we recently reached out to Sony supervising sound editors Geoff Rubay and Curt Schulkey to dig in a bit deeper. The duo recently won an MPSE Award for Outstanding Achievement in Sound Editing — Feature Animation… industry peers recognizing the work that went into creating the sound for this stylized world.

Let’s find out more about the sound process on Spider-Man: Into the Spider-Verse, which won the Academy Award for Best Animated Feature.

What do you think is the most important element of this film’s sound?
Curt Schulkey: It is fun, it is bold, it has style and it has attitude. It has energy. We did everything we could to make the sound as stylistic and surprising as the imagery. We did that while supporting the story and the characters, which are the real stars of the movie. We had the opportunity to work with some incredibly creative filmmakers, and we did our best to surprise and delight them. We hope that audiences like it too.

Geoff Rubay: For me, it’s the fusion of the real and the fantastic. Right from the beginning, the filmmakers made it clear that it should feel believable — grounded — while staying true to the fantastic nature of the visuals. We did not hold back on the fantastic side, but we paid close attention to the story and made sure we were supporting that and not just making things sound awesome.

Curt Schulkey

How early did your team get involved in the film?
Rubay: We started on an SFX pre-design phase in late February for about a month. The goal was to create sounds for the picture editors and animators to work with. We ended up doing what amounted to a temp mix of some key sequences. The “Super Collider” was explored. We only worked on the first sequence for the collider, but the idea was that material could be recycled by the picture department and used in the early temp mixes until the final visuals arrived.

Justin Thompson, the production designer, was very generous with his time and resources early on. He spent several hours showing us work-in-progress visuals and concept art so that we would know where visuals would eventually wind up. This was invaluable. We were able to work on sounds long before we saw them as part of the movie. In the temp mix phase, we had to hold back or de-emphasize some of those elements because they were not relevant yet. In some cases, the sounds would not work at all with the storyboards or un-lit animation that was in the cut. Only when the final lit animation showed up would those sounds make sense.

Schulkey: I came onto the film in May, about 9.5 months before completion. We were neck-deep in following changes throughout our work. We were involved in the creation of sounds from the very first studio screening, through previews and temp mixes, right on to the end of the final mix. This sometimes gave us the opportunity to create sounds in advance of the images, or to influence the development of imagery and timing. Because they were so involved in building the movie, the directors did not always have time to discuss their needs with us, so we would speculate on what kinds of sounds they might need or want for events that they were molding visually. As Geoff said, the time that Justin Thompson spent with us was invaluable. The temp-mix process often gave us the opportunity to audition creations for the directors/producers.

What sort of direction did you receive from the directors?
Schulkey: Luckily, because of our previous experiences with producers Chris Miller and Phil Lord and editor Bob Fisher, we had a pretty good idea of their tastes and sensitivities, so our first attempts were usually pointed in the right direction. The three directors — Bob Persichetti, Peter Ramsey and Rodney Rothman — also provided input, so we were rich with direction.

As with all movies, we had hundreds of side discussions with the directors along the way about details, nuances, timing and so on. I think that the most important overall direction we got from the filmmakers was related to the dynamic arc of the movie. They wanted the soundtrack to be forceful but not so much that it hurt. They wanted it to breathe — quiet in some spots, loud in others, and they wanted it to be fun. So, we had to figure out what “fun” sounds like.

Rubay: This will sound strange, but we never did a spotting session for the movie. We just started our work and got feedback when we showed sequences or did temp mixes. Phil called when we started the pre-design phase and gave us general notes about tone and direction. He made it clear he did not want us to hold back, but he wanted to keep the film grounded. He explained the importance of the various levels of technology of different characters.

Peni Parker is from the 31st century, so her robot sidekick needed to sound futuristic. Scorpion is a pile of rusty metal. Prowler’s tech is appropriated from his surroundings and possibly with some help from Kingpin. We discussed the sound of previous Spider-Man movies and asked how much we needed to stay true to established sounds from those films. The direction was “not at all unless it makes sense.” We endeavored to make Peter Parker’s web-slings sound like the previous films. After that, we just “went for it.”

How was working on a film like this different than working on something live-action? Did it allow you more leeway?
Schulkey: In a live-action film, most or all of the imagery is shot before we begin working. Many aspects of the sound are already stamped in. On this film, we had a lot more creative involvement. At the start, a good percentage of the movie was still in storyboards, so if we expanded or contracted the timing of an event, the animators might adjust their work to fit the sounds. As the visual elements developed, we began creating layers of sound to support them.

For me, one of the best parts of an animated film’s soundtrack is that no sounds are imposed by the real world, as is often the case in live-action productions. In live-action, if a dialogue scene is shot on a city street in Brooklyn, there is a lot of uninteresting traffic noise built into the dialogue recordings.

Very few directors (or actors) want to lose the spontaneity of the original performance by re-recording dialogue in a studio, so we tweak, clean and process the dialogue to lessen unwanted noise, sometimes diminishing the quality of the recording. We sometimes make compromises with sound effects and music to support a not-so-ideal dialogue track. In an animated film, we don’t have that problem. Sound effects and ambiences can shine without getting in the way. This film has very quiet moments, which feel very natural and organic. That’s a pleasure to have in the movie.

Rubay: Everything Curt said! You have quite a bit of freedom because there is no “production track.” On the flip side, every sound that is added is just that — added. You have to be aware of that; more is not always better.

Spider-Man: Into the Spider-Verse is an animated film with a unique visual style. At times, we played the effects straight, as we might in a live-action picture, to ground it. Other times, we stripped away any notion of “reality.” Sometimes we would do both in the same scene as we cut from one angle to the next. Chris and Phil have always welcomed hard right angle turns, snapping sounds off on a cut or mixing and matching styles in close proximity. They like to do whatever supports the story and directs the audience. Often, we use sound to make your eye notice one thing or look away from another. Other times, we expand the frame, adding sounds outside of what you can see to further enhance the image.

There are many characters in the film. Can you talk about helping to create personality for each?
Rubay: There was a lot of effort made to differentiate the various “spider people” from each other. Whether it was through their web-slings or inherent technology, we were directed to give as much individual personality as possible to each character. Since that directive was baked in from the beginning, every department had it in mind. We paid attention to every visual cue. For example, Miles wears a particular pair of shoes — Nike Air Jordan 1s. My son, Alec Rubay, who was the Foley supervisor, is a real sneakerhead. He tracked down those shoes — very rare — and we recorded them, capturing every sound we could. When you hear Miles’s shoes squeak, you are hearing the correct shoes. Those shoes sound very specific. We applied that mentality wherever possible.

Schulkey: We took the opportunity to exploit the fact that some characters are from different universes in making their sound signatures different from one another. Spider-Ham is from a cartoon universe, so many of the sounds he makes are cartoon sounds. Sniffles, punches, swishes and other movements have a cartoon sensibility. Peni Parker, the anime character, is in a different sync than the rest of the cast, and her voice is somewhat more dynamic. We experimented with making Spider-Man Noir sound like he was coming from an old movie soundtrack, but that became obnoxious, so we abandoned the idea. Nicolas Cage was quite capable of conveying that aspect of the character without our help.

Because we wanted to ground characters in the real world, a lot of effort was put into attaching their voices to their images. Sync, of course, is essential, as is breathing. Characters in most animated films don’t do much breathing, but we added a lot of breaths, efforts and little stutters to add realism. That had to be done carefully. We had a very special, stellar cast and we wanted to maintain the integrity of their performances. I think that effort shows up nicely in some of the more intimate, personal scenes.

To create the unique look of this movie, the production sometimes chose to animate sections of the film “on twos.” That means that mouth movements change every other frame rather than every frame, so sync can be harder than usual to pinpoint. I worked closely with director Bob Persichetti to get dialogue to look in its best sync, doing careful reviews and special adjustments, as needed, on all dialogue in the film.

The main character in this Spider-Man thread is Miles Morales, a brilliant African-American/Puerto Rican Brooklyn teenager trying to find his way in his multi-cultural world. We took special care to show his Puerto Rican background with added Spanish-language dialogue from Miles and his friends. That required dialect coaches, special record sessions and thorough review.

The group ADR required a different level of care than most films. We created voices for crowds, onlookers and the normal “general” wash of voices for New York City. Our group voices covered many very specific characters and were cast in detail by our group leader, Caitlin McKenna. We took a very realistic approach to crowd activity. It had to be subtler than most live-action films to capture the dry nonchalance of Miles Morales’s New York.

Would you describe the sounds as realistic? Fantastical? Both?
Schulkey: The sounds are fantastically realistic. For my money, I don’t want the sounds in my movie to seem fantastical. I see our job as creating an illusion for the audience — the illusion that they are hearing what they are seeing, and that what they are seeing is real. This is an animated film, where nothing is actually real, but has its own reality. The sounds need to live in the world we are watching. When something fantastical happens in the movie’s reality, we had to support that illusion, and we sometimes got to do fun stuff. I don’t mean to say that all sounds had to be realistic.

For example, we surmised that an actual supercollider firing up below the streets of Brooklyn would sound like 10,000 computer fans. Instead, we put together sounds that supported the story we were telling. The ambiences were as authentic as possible, including subway tunnels, Brooklyn streets and school hallways. Foley here was a great tool for giving reality to animated images. When Miles walks into the cemetery at night, you hear his footsteps on snow and sidewalk, gentle cloth movements and other subtle touches. This adds to a sense that he’s a real kid in a real city. Other times, we were in the Spider-Verse and our imagination drove the work.

Rubay: The visuals led the way, and we did whatever they required. There are some crazy things in this movie. The supercollider is based on a real thing so we started there. But supercolliders don’t act as they are depicted in the movie. In reality, they sound like a giant industrial site, fans and motors, but nothing so distinct or dramatic, so we followed the visuals.

Spider-sense is a kind of magical realism that supports, informs, warns, communicates, etc. There is no realistic basis for any of that, so we went with directions about feelings. Some early words of direction were “warm,” “organic,” “internal” and “magical.” Because there are no real sounds for those words, we created sounds that conveyed the emotional feelings of those ideas to the audience.

The portals that allow spider-people to move between dimensions are another example. Again, there was no real-world event to link to. We saw the visuals and assumed it should be a pretty big deal, real “force of nature” stuff. However, it couldn’t simply be big. We took big, energetic sounds and glued them onto what we were seeing. Of course, sometimes people are talking at the same time, so we shifted the frequency center of the moment to clear for the dialog. As music is almost always playing, we had to look for opportunities within the spaces it left.
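
One common way to “shift the frequency center” away from dialogue is a dialogue-keyed dip in the speech-intelligibility band (roughly 1-4kHz) of the effects bed. The sketch below illustrates that general idea; the band split is approximate (the filtered subtraction is not phase-perfect), and none of this is the film’s actual chain.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def clear_for_dialog(fx, dialog, sr, lo_hz=1000.0, hi_hz=4000.0, dip_db=-6.0):
    """Duck only the speech-critical band of an FX bed while dialogue
    plays. Illustrative sketch of frequency slotting."""
    sos = butter(2, [lo_hz, hi_hz], btype="bandpass", fs=sr, output="sos")
    in_band = sosfilt(sos, fx)       # the part of the FX that masks speech
    rest = fx - in_band              # rough complement (not phase-perfect)
    k = max(1, int(0.1 * sr))        # 100ms moving-average dialogue envelope
    env = np.convolve(np.abs(dialog), np.ones(k) / k, mode="same")
    ctrl = env / (env.max() + 1e-12)
    dip = 10 ** (dip_db / 20.0)
    gain = 1.0 - (1.0 - dip) * ctrl  # up to -6dB under loud dialogue
    return rest + in_band * gain
```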

 

Can you talk about working on the action scenes?
Rubay: For me, when the action starts, the sound had to be really specific. There is dialogue for sure. The music is often active. The guiding philosophy for me at that point is not “Keep adding until there is nothing left to add,” rather, it’s, “We’re done when there is nothing left to strip out.” Busy action scene? Broom the backgrounds away. Usually, we don’t even cut BG’s in a busy action scene, but, if we do, we do so with a skeptical eye. How can we make it more specific? Also, I keep a keen eye on “scale.” One wrong, small detail sound, no matter how cool or interesting, will get the broom if it throws off the scale. Sometimes everything might be sounding nice and big; impressive but not loud, just big, and then some small detail creeps in and spoils it. I am constantly looking out for that.

The “Prowler Chase” scene was a fun exploration. There are times where the music takes over and runs; we pull out every sound we can. Other times, the sound effects blow over everything. It is a matter of give and take. There is a truck/car/prowler motorcycle crash that turns into a suspended slo-mo moment. We had to decide which sounds to play where and when. Its stripped-down nature made it among my favorite moments in the picture.

Can you talk about the multiple universes?
Rubay: The multiverse presented many challenges. It usually manifested itself as a portal or something we move between. The portals were energetic and powerful. The multiverse “place” was something that we used as a quiet place. We used it to provide contrast because, usually, there was big action on either side.

A side effect of the multiple universes interacting was a buildup or collision/overlap. When universes collide or overlap, matter from each tries to occupy the same space. Visually, this created some very interesting moments. We referred to the multi-colored prismatic-looking stuff as “Picasso” moments. The supporting sound needed to convey “force of nature” and “hard edges,” but couldn’t be explosive, loud or gritty. Ultimately, it was a very multi-layered sound event: some “real” sounds teamed with extreme synthesis. I think it worked.

Schulkey: Some of the characters in the movie are transported from another dimension into the dimension of the movie, but their bodies rebel, and from time to time their molecules try to jump back to their native dimension, causing “glitching.” We developed, with a combination of plug-ins, blending, editing and panning, a signature sound that served to signal glitching throughout the movie, and was individually applied for each iteration.
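
The plug-in chain behind that glitch signature isn’t specified, but the combination Schulkey describes (blending, editing, panning) can be approximated with granular chopping: slice the source into short grains, then randomly drop, stutter or pitch-wobble each one. The Python sketch below is a speculative illustration of that general idea, not the film’s actual recipe; every name and probability in it is invented.

```python
import numpy as np

def glitch(mono, sr, grain_ms=40, drop_prob=0.15, repeat_prob=0.3, seed=0):
    """Chop a mono signal into grains and randomly mangle each one:
    silence it (dropout), stutter it (repeat) or resample it for a
    crude pitch wobble. A speculative sketch, not a known plug-in.
    """
    rng = np.random.default_rng(seed)
    n = max(1, int(sr * grain_ms / 1000))
    out = []
    for start in range(0, len(mono), n):
        g = mono[start:start + n]
        roll = rng.random()
        if roll < drop_prob:
            out.append(np.zeros_like(g))        # momentary dropout
        elif roll < drop_prob + repeat_prob:
            out.append(np.tile(g, 2)[: 2 * n])  # stutter repeat
        else:
            ratio = rng.uniform(0.7, 1.4)       # roughly +/- 6 semitones
            idx = (np.arange(max(1, int(len(g) / ratio))) * ratio).astype(int)
            out.append(g[np.clip(idx, 0, len(g) - 1)])
    return np.concatenate(out)
```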

What stands out in your mind as the most challenging scenes audio wise?
Rubay: There is a very quiet moment between Miles and his dad, when dad is on one side of the door and Miles is on the other. It’s a very quiet, tender one-way conversation. When a movie gets that quiet, every sound counts. Every detail has to be perfect.

What about the Dolby Atmos mix? How did that enhance the film? Can you give a scene or two as an example?
Schulkey: This film was a native Atmos mix, meaning that the primary final mix was done directly in the Atmos format, as opposed to making a 7.1 mix and then going back to remix sections in Atmos.

The native Atmos mix allowed us a lot more sonic room in the theater. This is an extremely complex and busy mix, heavily driven by dialogue. By moving the score out into the side and surround speakers — away from the center speaker — we were able to make the dialogue clearer and still have a very rich and exciting score. Sonic movement is much more effective in this format. When we panned sounds around the room, it felt more natural than in other formats.

Rubay: Atmos is fantastic. Being able to move sounds vertically creates so much space, so much interest, that might otherwise not be there. Also, the level and frequency response of the surround channels makes a huge difference.
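
The Atmos renderer handles the actual speaker feeds, so the following is a toy illustration only: moving a sound vertically, as Rubay describes, amounts to cross-fading a signal between an ear-level bed and a height layer with a constant-power law, which is why the movement reads as smooth rather than stepped. A minimal sketch, with invented names:

```python
import numpy as np

def height_pan(mono, elevation):
    """Split a mono signal between an ear-level bed and a height layer.

    elevation runs 0.0 (fully at ear level) to 1.0 (fully overhead);
    the cos/sin weighting keeps total power constant during the move.
    """
    theta = np.clip(elevation, 0.0, 1.0) * np.pi / 2.0
    return mono * np.cos(theta), mono * np.sin(theta)

# A slow "rise" over a two-second sound at 48 kHz: elevation is an
# envelope, one value per sample, so the object drifts into the ceiling.
# sr = 48000
# env = np.linspace(0.0, 1.0, 2 * sr)
# bed, top = height_pan(np.random.randn(2 * sr), env)
```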

You guys used Avid Pro Tools for editing. Can you mention some other favorite tools you employed on this film?
Schulkey: The Delete key and the Undo key.

Rubay: Pitch ’n’ Time, Envy, reverbs by Exponential Audio, and recording rigs and microphones of all sorts.

What haven’t I asked that’s important?
Our crew! Just in case anyone thinks this can be done by two people, it can’t. The team included:
– re-recording mixers Michael Semanick and Tony Lamberti
– sound designer John Pospisil
– dialogue editors James Morioka and Matthew Taylor
– sound effects editors David Werntz, Kip Smedley, Andy Sisul, Chris Aud, Donald Flick, Benjamin Cook, Mike Reagan and Ando Johnson
– Foley mixer Randy Singer
– Foley artists Gary Hecker, Michael Broomberg and Rick Owens

Warner Bros. Studio Facilities ups Kim Waugh, hires Duke Lim

Warner Bros. Studio Facilities in Burbank has promoted long-time post exec Kim Waugh to executive VP, worldwide post production services. The studio has also hired Duke Lim to serve as VP, post production sound.

In his new role, Waugh reports to Jon Gilbert, president, worldwide studio facilities, Warner Bros., and will continue to lead the post creative services senior management team, overseeing all marketing, sales, talent management, facilities and technical operations across all locations. Waugh has been instrumental in expanding the business beyond the studio’s Burbank-based headquarters, first to Soho, London, in 2012 with the acquisition of Warner Bros. De Lane Lea and then to New York with the 2015 acquisition of WB Sound in Manhattan.

The group supports all creative post production elements, ranging from sound mixing, editing and ADR to color correction and restoration, for Warner Bros.’ clients worldwide. Waugh’s creative services group features a vast array of award-winning artists, including the Oscar-nominated sound mixing team behind Warner Bros. Pictures’ A Star is Born.

Reporting to Waugh, Lim is responsible for overseeing the post sound creative services supporting Warner Bros.’ film and television clients on a day-to-day basis across the studio’s three facilities.

Duke Lim

Says Gilbert, “At all three of our locations, Kim has attracted award-winning creative talent who are sought out for Warner Bros. and third-party projects alike. Bringing in seasoned post executive Duke Lim will create an even stronger senior management team under Kim.”

Waugh most recently served as SVP, worldwide post production services, Warner Bros. Studio Facilities, a post he had held since 2007. In this position, he managed the post services senior management team, overseeing all talent, sales, facilities and operations on a day-to-day basis, with a primary focus on servicing all Warner Bros. Studios’ post sound clients. Prior to joining Warner Bros. as VP, post production services in 2004, Waugh worked at Ascent Media Creative Sound Services, where he served as SVP of sales and marketing, managing sales and marketing for the company’s worldwide divisional facilities. Prior to that, he spent more than 10 years at Soundelux, holding posts as president of Soundelux Vine Street Studios and Signet Soundelux Studios.

Lim has worked in the post production industry for more than 25 years, most recently at the Sony Sound Department, which he joined in 2014 to help expand the creative team and the number of mix stages. He began his career at Skywalker Sound South, serving in various positions until its acquisition by Todd-AO in 1995, when Lim moved into operations and began managing the mixing facilities for both its Hollywood location and the Todd-AO West studio in Santa Monica.

CAS and MPSE honor audio post pros and their work

By Mel Lambert

With a BAFTA win and high promise for the upcoming Oscar Awards, the sound team behind Bohemian Rhapsody secured a clean sweep at both the Cinema Audio Society (CAS) and Motion Picture Sound Editors (MPSE) ceremonies here in Los Angeles last weekend.

Paul Massey

The 55th CAS Awards also honored sound mixer Lee Orloff with a Cinema Audio Society Career Achievement Award, while director Steven Spielberg received its Cinema Audio Society Filmmaker Award. And at the MPSE Awards, director Antoine Fuqua accepted the 2019 Filmmaker Award, while supervising sound editor Stephen H. Flick secured the MPSE Career Achievement honor.

Re-recording mixer Paul Massey — accepting the CAS Award for Outstanding Sound Mixing Motion Picture-Live Action on behalf of his fellow dubbing mixers Tim Cavagin and Niv Adiri, together with production mixer John Casali — thanked Bohemian Rhapsody’s co-executive producer and band members Roger Taylor and Brian May for “trusting me to mix the music of Queen.”

The film topped a nominee field that also included A Quiet Place, A Star is Born, Black Panther and First Man; for several years running, the CAS winner in the feature-film category has also gone on to win the Oscar for sound mixing.

Isle of Dogs secured a CAS Award in the animation category, which also included Incredibles 2, Ralph Breaks the Internet, Spider-Man: Into the Spider-Verse and The Grinch. The sound-mixing team included original dialogue mixer Darrin Moore and re-recording mixers Christopher Scarabosio and Wayne Lemmer, together with scoring mixers Xavier Forcioli and Simon Rhodes and Foley mixer Peter Persaud.

Free Solo won the documentary award, which honored production mixer Jim Hurst, re-recording mixers Tom Fleischman and Ric Schnupp, scoring mixer Tyson Lozensky, ADR mixer David Boulton and Foley mixer Joana Niza Braga.

Finally, CAS Awards in the broadcast sound categories went to American Crime Story: The Assassination of Gianni Versace (“The Man Who Would Be Vogue”), The Marvelous Mrs. Maisel (“Vote For Kennedy, Vote For Kennedy”) and Anthony Bourdain: Parts Unknown (“Bhutan”).

Steven Spielberg and Bradley Cooper

The CAS Filmmaker Award was presented to Steven Spielberg by fellow director Bradley Cooper. This followed tributes from regular members of Spielberg’s sound team, including production sound mixer Ron Judkins plus re-recording mixers Andy Nelson and Gary Rydstrom, who quipped: “We spent so much money on Jurassic Park that [Steven] had to shoot Schindler’s List in black & white!”

“Through your talent, [sound editors and mixers] allow the audience to see with their ears,” Spielberg acknowledged, while stressing the full sonic and visual impact of a theatrical experience. “There’s nothing like a big, dark theater,” he stated. He added that he still believes that movie theaters are the best environment in which to fully enjoy his cinematic creations.

Upon receiving his Career Achievement Award from sound mixer Chris Noyes and director Dean Parisot, production sound mixer Lee Orloff acknowledged the close collaboration that needs to exist between members of the filmmaking team. “It is so much more powerful than the strongest wall you could build,” he stated, recalling a 35-year career that spans nearly 80 films.

Lee Orloff

Outgoing CAS president Mark Ulano presented the President’s Award to leading Foley mixer MaryJo Lang, while the CAS Student Award went to Anna Wozniewicz of Chapman University. Finalists included Maria Cecilia Ayalde Angel of Pontificia Universidad Javeriana, Bogota; Allison Ng of USC; Bo Pang of Chapman University; and Kaylee Yacono of Savannah College of Art and Design.

Finally, the CAS Outstanding Product Awards went to Dan Dugan Sound Design for its Dugan Automixing in the Sound Devices 633 compact mixer, and to iZotope for its RX 7 audio repair software.

The CAS Awards ceremony was hosted by comedian Michael Kosta.


Motion Picture Sound Editors Awards

During the 66th annual Golden Reel Awards, honors for outstanding achievement in sound editing were presented in 23 categories, encompassing feature films, long- and short-form television, animation, documentaries, games, special venue and other media.

The Americans, Atlanta, The Marvelous Mrs. Maisel and Westworld figured prominently among the honored TV series.

Following introductions by re-recording mixer Steve Pederson and supervising sound editor Mandell Winter, director/producer Michael Mann presented the 2019 MPSE Filmmaker Award to Antoine Fuqua. Academy Award-winning supervising sound editor Ben Wilkins presented the MPSE Career Achievement Award to fellow supervising sound editor Stephen H. Flick, who also serves as a professor of cinematic arts at the University of Southern California.

Antoine Fuqua

“We celebrate the creation of entertainment content that people will enjoy for generations to come,” MPSE president Tom McCarthy stated in his opening address. “As new formats appear and new ways to distribute content are developed, we need to continue to excel at our craft and provide exceptional soundtracks that heighten the audience experience.”

As Pederson stressed during his introduction to the MPSE Filmmaker Award, Fuqua “counts on sound to complete his vision [as a filmmaker].” “His films are stylish and visceral,” added Winter, who along with Pederson has worked on a dozen films for the director during the past two decades.

“He is a director who trusts his own vision,” Winter confirmed. “Antoine loves a layered soundtrack. And ADR has to be authentic and true to his artistic intentions. He is a bona fide storyteller.”

Four-time Oscar-nominee Mann stated that the honored director “always elevates everything he touches; he uses sound design and music to its fullest extent. [He is] a director who always pushes the limits, while evolving his art.”

Pre-recorded tributes to Fuqua came from actor Chris Pratt, who starred in The Magnificent Seven (2016). “Nobody deserves [this award] more,” he stated. Actor Mark Wahlberg, who starred in Shooter (2007), and producer Jerry Bruckheimer were also featured.

Stephen Hunter Flick

During his 40-year career in the motion picture industry, working on some 150 films, Stephen H. Flick has garnered two Oscar wins, for Speed (1994) and RoboCop (1987), together with nominations for Total Recall (1990), Die Hard (1988) and Poltergeist (1982).

The award for Outstanding Achievement in Sound Editing — Animation Short Form went to Overwatch – Reunion from Blizzard Entertainment, headed by supervising sound editor Paul Menichini. The Non-Theatrical Animation Long Form award went to NextGen from Netflix, headed by supervising sound editors David Acord and Steve Slanec.

The Feature Animation award went to the Oscar-nominated Spider-Man: Into the Spider-Verse from Sony Pictures Entertainment/Marvel, headed by supervising sound editors Geoffrey Rubay and Curt Schulkey. The Non-Theatrical Documentary award went to Searching for Sound — Islandman and Veyasin from Karga Seven Pictures/Red Bull TV, headed by supervising sound editor Suat Ayas. Finally, the Feature Documentary award was a tie between Free Solo from National Geographic Documentary Films, headed by supervising sound editor Deborah Wallach, and They Shall Not Grow Old from Wingnut Films/Fathom Events/Warner Bros., headed by supervising sound editors Martin Kwok, Brent Burge, Melanie Graham and Justin Webster.

The Outstanding Achievement in Sound Editing — Music Score award also went to Spider-Man: Into the Spider-Verse, with music editors Katie Greathouse and Catherine Wilson, while the Musical award went to Bohemian Rhapsody from GK Films/Fox Studios, with supervising music editor John Warhurst and music editor Neil Stemp. The Dialogue/ADR award also went to Bohemian Rhapsody, with supervising ADR/dialogue editors Nina Hartston and Jens Petersen, while the Effects/Foley award went to A Quiet Place from Paramount Pictures, with supervising sound editors Ethan Van der Ryn and Erik Aadahl.

The Student Film/Verna Fields Award went to Facing It from National Film and Television School, with supervising sound designer/editor Adam Woodhams.


LA-based Mel Lambert is principal of Content Creators. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Sound designer Ash Knowlton joins Silver Sound

Emmy Award-winning NYC sound studio Silver Sound has added sound engineer Ash Knowlton to its roster. Knowlton is both a location sound recordist and a sound designer, and on rare and glorious occasions she is DJ Hazyl. Knowlton has worked on film, television and branded content for clients such as NBC, Cosmopolitan and Vice.

“I know it might sound weird but for me, remixing music and designing sound occupy the same part of my brain. I love music, I love sound design — they are what make me happy. I guess that’s why I’m here,” she says.

Knowlton moved to Brooklyn from Albany when she was 18 years old. To this day, she considers making the move to NYC and surviving as one of her biggest accomplishments. One day, by chance, she ran into filmmaker John Zhao on the street and was cast on the spot as the lead for his feature film Alexandria Leaving. The experience opened Knowlton’s eyes to the wonders and complexity of the filmmaking process. She particularly fell in love with sound mixing and design.

Ten years later, with over seven independent feature films now under her belt, Knowlton is ready for the next 10 years as an industry professional.

Her tools of choice at Silver Sound are Reaper, Reason and Kontakt.

Main Photo Credit: David Choy

Karol Urban is president of CAS, others named to board

As a result of the Cinema Audio Society Board of Directors election, Karol Urban will replace CAS president Mark Ulano, whose term has come to an end. Steve Venezia will replace treasurer Peter Damski, who opted not to run for re-election.

“I am so incredibly honored to have garnered the confidence of our esteemed members,” says Urban. “After years of serving under different presidents and managing the content for the CAS Quarterly, I have learned so much about the achievements, interests, talents and concerns of our membership. I am excited to be given this new platform to celebrate the achievements and herald new opportunities to serve this incredibly dynamic and talented community.”

For 2019, the executive committee will include newly elected Urban and Venezia as well as VP Phillip W. Palmer, CAS, and secretary David J. Bondelevitch, CAS, who were not up for election.

The incumbent CAS board members (production) who were re-elected are Peter J. Devlin, CAS; Lee Orloff, CAS; and Jeffrey W. Wexler, CAS. They will be joined by newly elected Amanda Beggs, CAS, and Mary H. Ellis, CAS, who take the seats of outgoing board members Chris Newman, CAS, and Lisa Pinero, CAS.

Incumbent board members (post production) who were re-elected are Bob Bronow, CAS, and Mathew Waters, CAS. They will be joined by newly elected board members Onnalee Blank, CAS, and Mike Minkler, CAS, who take the seats of Urban and Steve Venezia, CAS, now that both are officers.

Continuing to serve, as their terms were not up for re-election, are Willie Burton, CAS, and Glen Trew, CAS, for production, and Tom Fleischman, CAS; Doc Kane, CAS; Sherry Klein, CAS; and Marti Humphrey, CAS, for post production.

The new board will be installed at the 55th Annual CAS Awards on Saturday, February 16.

Sundance: Audio post for Honey Boy and The Death of Dick Long

By Jennifer Walden

Brent Kiser, an Emmy Award-winning supervising sound editor/sound designer/re-recording mixer at LA’s Unbridled Sound, is no stranger to the Sundance Film Festival. His resume includes such Sundance premieres as Wild Wild Country, Swiss Army Man and An Evening with Beverly Luff Linn.

He’s the only sound supervisor to work on two films that earned Dolby fellowships: Swiss Army Man back in 2016 and this year’s Honey Boy, which premiered in the US Dramatic Competition. Honey Boy is a biopic of actor Shia LaBeouf’s damaging Hollywood upbringing.

Brent Kiser (in hat) and Will Files mixing Honey Boy.

Also showing this year, in the Next category, was The Death of Dick Long. Kiser and his sound team once again collaborated with director Daniel Scheinert. For this dark comedy, the filmmakers used sound to help build tension as a group of friends tries to hide the truth of how their buddy Dick Long died.

We reached out to Kiser to find out more.

Honey Boy was part of the Sundance Institute’s Feature Film Program, which is supported by several foundations including the Ray and Dagmar Dolby Family Fund. You mentioned that this film earned a grant from Dolby. How did that grant impact your approach to the soundtrack?
For Honey Boy, Dolby gave us the funds to finish in Atmos. It allowed us to bring MPSE award-winning re-recording mixer Will Files on to mix the effects while I mixed the dialogue and music. We mixed at Sony Pictures Post Production on the Kim Novak stage. We got time and money to be on a big stage for 11 days — a five-day pre-dub and six-day final mix.

That was huge because the film opens up with these massive-robot action/sci-fi sound sequences and it throws the audience off the idea of this being a character study. That’s the juxtaposition, especially in the first 15 to 20 minutes. It’s blurring the reality between the film world and real life for Shia because the film is about Shia’s upbringing. Shia LaBeouf wrote the film and plays his father. The story focuses on the relationship of young actor Otis Lort (Lucas Hedges) and his alcoholic father James.

The story goes through Shia’s time on Disney Channel’s Even Stevens series and then on Transformers, and looks at how this lifestyle had an effect on him. His father was an ex-junkie, a sex offender and an ex-rodeo clown who would just push his son. By age 12, Shia was drinking, smoking weed and smoking cigarettes — all supplied to him by his dad. Shia is isolated and doesn’t have too many friends. He’s not around his mother that much.

This year is the first year that Shia has been sober since age 12. So this film is one big therapeutic movie for him. The director Alma Har’el comes from an alcoholic family, so she’s able to understand where Shia is coming from. Working with Alma is great. She wants to be in every part of the process — pick each sound and go over every bit to make sure it’s exactly what she wants.

Honey Boy director Alma Har’el.

What were director Alma Har’el’s initial ideas for the role of sound in Honey Boy?
They were editing this film for six months or more, and I came on board around mid-edit. I saw three different edits of the film, and they were all very different.

Finally, they settled on a cut that felt really nice. We had spotting sessions before they locked, and we were working on creating the environment of the motel where Otis and James were staying. We were also working on creating the sound of Otis being on-set. It had to feel like we were watching a film, and when someone screams, “Cut!” it had to feel like we go back into reality. Being able to play with those juxtapositions in a sonic way really helped out. We would give it a cinematic sound and then pull back into a cinéma vérité-type sound. That was the big sound motif in the movie.

We worked really closely with the composer Alex Somers. He developed this little crank sound that helped to signify Otis’ dreams and the turning of events. It makes it feel like Otis is a puppet with all his acting jobs.

There’s also a harness motif. In the very beginning you see adult Otis (Lucas Hedges) standing in front of a plane that has crashed and then you hear things coming up behind him. They are shooting missiles at him and they blow up and he gets yanked back from the explosions. You hear someone say, “Cut!” and he’s just dangling in a body harness about 20 feet up in the air. They reset, pull him down and walk him back. We go through a montage of his career, the drunkenness and how crazy he was, and then him going to therapy.

In the session, he’s told he has PTSD caused by his upbringing and he says, “No, I don’t.” It kicks to the title and then we see young Otis (Noah Jupe) sitting there waiting, and he gets hit by a pie. He then gets yanked back by that same harness, and he dangles for a little while before they bring him down. That is how the harness motif works.

There’s also a chicken motif. Growing up, Otis has a chicken named Henrietta La Fowl, and during the dream sequences the chicken leads Otis to his father. So we had to make a voice for the chicken. We had to give the chicken a dreamy feel. And we used the old-school Yellow Sky wind to give it a Western feel and add a dreaminess to it.

On the dub stage with director Alma Har’el and her team, plus Will Files (front left) and Andrew Twite (front right).

Andrew Twite was my sound designer. He was also with me on Swiss Army Man. He was able to make some rich and lush backgrounds for that. We did a lot of recording in our neighborhood of Highland Park, which is much like Echo Park, where Shia grew up and where the film is set. So it’s Latin-heavy communities with taco trucks and that fun stuff. We gave it that gritty sound to show that, even though Otis is making $8,000 a week, they’re still living on the other side of the tracks.

When Otis is in therapy, it feels like Malibu. It’s nicer, quieter, and not as stressful versus the motel when Otis was younger, which is more pumped up.

My dialogue editor was Elliot Thompson, and he always does a great job for me. Production sound mixer Oscar Grau did a phenomenal job of capturing everything at all moments; there was no MOS (picture shot without sound). He recorded everything, and he gave us a lot of great production effects. The production dialogue was tricky because in many of the scenes young Otis isn’t wearing a shirt, so there are no lav mics on him. Oscar used plant mics and booms and captured it all.

What was the most challenging scene for sound design on Honey Boy?
The opening, the intro and the montage right up front were the most challenging. We recut the sound for Alma several different ways. She was great and always had moments of inspiration. We’d try different approaches and the sound would always get better, but we were on a time crunch and it was difficult to get all of those elements in place in the way she was looking for.

Honey Boy on the mix stage at Sony’s Kim Novak Theater.

In the opening, you hear the sound of this mega-massive robot (an homage to a certain film franchise that Shia has been part of in the past, wink, wink). You hear those sounds coming up over the production cards on a black screen. Then it cuts to adult Otis standing there as we hear this giant laser gun charging up. Otis goes, “No, no, no, no, no…” in that quintessential Shia LaBeouf way.

Then, there’s a montage over Missy Elliott’s “My Struggles,” and the footage goes through his career. It’s a music video montage with sound effects, and you see Otis on set and off set. He’s getting sick, and then he’s stuck in a harness, getting arrested in the movie and then getting arrested in real life. The whole thing shows how his life is a blur of film and reality.

What was the biggest challenge in regards to the mix?
The most challenging aspect of the mix, on Will [Files]’s side of the board, was getting those monsters in the pocket. Will had just come off of Venom and Halloween so he can mix these big, huge, polished sounds. He can make these big sound effects scenes sound awesome. But for this film, we had to find that balance between making it sound polished and “Hollywood” while also keeping it in the realm of indie film.

There was a lot of back and forth to dial in the effects, to make them sound polished but still with an indie storytelling feel. Reel one took us two days on stage to get through, and we even spent some time on it on the last mix day. That was the biggest challenge to mix.

The rest of the film is more straightforward. The challenge on dialogue was to keep it sounding dynamic instead of smoothed out. A lot of Shia’s performance plays in the realm of vocal dynamics. We didn’t want to make the dialogue lifeless. We wanted to have the dynamics in there, to keep the performance alive.

We mixed in Atmos and panned sounds into the ceiling. I took a lot of the composer’s stems and remixed those in Atmos, spreading all the cues out in a pleasant way and using reverb to help glue it together in the environment.


The Death of Dick Long

Let’s look at another Sundance film you’ve worked on this year. The Death of Dick Long is part of the Next category. What were director Daniel Scheinert’s initial ideas for the role of sound on this film?
Daniel Scheinert always shows up with a lot of sound ideas, and most of those were already in place because of picture editor Paul Rogers from Parallax Post (which is right down the hall from our studio Unbridled Sound). Paul and all the editors at Parallax are sound designers in their own right. They’ll give me an AAF of their Adobe Premiere session and it’ll be 80 tracks deep. They’re constantly running down to our studio like, “Hey, I don’t have this sound. Can you design something for me?” So, we feed them a lot of sounds.

The Death of Dick Long

We played with the bug sounds the most. They shot in Alabama, where both Paul and Daniel are from, so there were a lot of cicadas and bugs. It was important to make the distinction of what the bugs sounded like in the daytime versus what they sounded like in the afternoon and at night. Paul did a lot of work to make sure that the balance was right, so we didn’t want to mess with that too much. We just wanted to support it. The backgrounds in this film are rich and full.

This film is crazy. It opens with a Creed song and ends with a Nickelback song, as a sort of joke. They wanted to show a group of guys who never really made much of themselves. These guys are in a band called Pink Freud, and they have band practice.

The film starts with them doing dumb stuff, like setting off fireworks and catching each other on fire — just messing around. Then it cuts to Dick (Daniel Scheinert) in the back of a vehicle and he’s bleeding out. His friends just dump him at the hospital and leave. The whole mystery of how Dick dies unfolds throughout the course of the film. The two main guys are Earl (Andre Hyland) and Zeke (Michael Abbott, Jr.).

The Foley on this film — provided by Foley artist John Sievert of JRS Productions — plays a big role. Often, Foley is used to help us get in and out of the scene. For instance, the police are constantly showing up to ask more questions and you hear them sneaking in from another room to listen to what’s being said. There’s a conversation between Zeke and his wife Lydia (Virginia Newcomb) and he’s asking her to help him keep information from the police. They’re in another room but you hear their conversation as the police are questioning Dick Long’s wife, Jane (Jess Weixler).

We used sound effects to help increase the tension when needed. For example, there’s a scene where Zeke is doing the laundry and his wife calls saying she’s scared because there are murderers out there, and he has to come and pick her up. He knows it’s him but he’s trying to play it off. As he is talking to her, Earl is in the background telling Zeke what to say to his wife. As they’re having this conversation, the washing machine out in the garage keeps getting louder and it makes that scene feel more intense.

Director Daniel Scheinert (left) and Puddle relaxing during the mix.

“The Dans” — Scheinert and Daniel Kwan — are known for Swiss Army Man. That film used sound in a really funny way, but it was also relevant to the plot. Did Scheinert have the same open mind about sound on The Death of Dick Long? Also, were there any interesting recording sessions you’d like to talk about?
There were no farts this time, and it was a little more straightforward. Manchester Orchestra did the score on this one too, but it’s also more laid back.

For this film, we really wanted to depict a rural Alabama small-town feel. We did have some fun with a few PA announcements, but you don’t hear those clearly. They’re washed out. Earl lives in a trailer park, so there are trailer park fights happening in the background to make it feel more like Jerry Springer. We had a lot of fun doing that stuff. Sound effects editor Danielle Price cut that scene, and she did a really great job.

What was the most challenging aspect of the sound design on The Death of Dick Long?
I’d say the biggest things were the backgrounds, engulfing the audience in this area and making sure the bugs feel right. We wanted to make sure there was off-screen movement in the police station and other locations to give them all a sense of life.

The whole movie was about creating a sense of intensity. I remember showing it to my wife during one of our initial sound passes, and she pulled the blanket over her face while she was watching it. By the end, only her eyes were showing. These guys keep messing up and it’s stressful. You think they’re going to get caught. So the suspense that the director builds in — not being serious but still coming across in a serious manner — is amazing. We were helping them to build that tension through backgrounds, music and dropouts, and pushing certain everyday elements (like the washing machine) to create tension in scenes.

What scene in this film best represents the use of sound?
I’d say the laundry scene. Also, in the opening scene you hear the band playing in the garage and the perspective slowly gets closer and closer.

During the film’s climax, when you find out how Dick dies, we’re pulling down the backgrounds that we created. For instance, when you’re in the bedroom you hear their crappy fan. When you’re in the kitchen, you hear the crappy compressor on the refrigerator. It’s all about playing up these “bad” sounds to communicate the hopelessness of the situation they are living in.

I want to shout out all of my sound editors for their exceptional work on The Death of Dick Long. They were Jacob “Young Thor” Flack, Elliot Thompson and Danielle Price, who did amazing backgrounds. Also, a shout-out to Ian Chase for help on the mix. I want to make sure they share the credit.

I think there needs to be more recognition of the contribution of sound and the sound departments on a film. It’s a subject that needs to be discussed, particularly in these somber days following the death of Oscar-winning re-recording mixer Gregg Rudloff. He was the nicest guy ever. I remember being an intern on the sound stage and he always took the time to talk to us and give us advice. He was one of the good ones.

When post sound gets a credit after the on-set caterers, it doesn’t do us justice. On Swiss Army Man, I initially had my own title card because The Dans wanted to give me a title card that said, “Supervising Sound Editor Brent Kiser,” but the Directors Guild took it away. They said it wasn’t appropriate. Their reasoning is that if they give it to one person, then they’ll have to give it to everybody. I get it — the visual effects department is the new kid on the block. They wrote their contract knowing what was going on, so they get a title card. But try watching a film on mute and then talk to me about the importance of sound. That needs to start changing, for the sheer fact of burnout and legacy.

At the end of the day, you worked so hard to get these projects done. You’re taking care of someone else’s baby and helping it to grow up to be this great thing, but then we’re only seen as the hired help. Or, we never even get a mention. There is so much pressure and stress on the sound department, and I feel we deserve more recognition for what we give to a film.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.