
Category Archives: Audio Mixing

iZotope’s Neutron 3 streamlines mix workflows with machine learning

iZotope, maker of the RX audio tools, has introduced Neutron 3, a plug-in that — thanks to advances in machine learning — listens to the entire session and communicates with every track in the mix. Mixers can use Neutron 3’s new Mix Assistant to create a balanced starting point for an initial-level mix built around their chosen focus, saving time and energy for creative mix decisions. Once a focal point is defined, Neutron 3 automatically sets levels before the mixer ever has to touch a fader.

Neutron 3 also has a new module called Sculptor (available in Neutron 3 Standard and Advanced) for sweetening, fixing and creative applications. Using never-before-seen signal processing, Sculptor works like a per-band army of compressors and EQs to shape any track. It also communicates with Track Assistant to understand each instrument and gives realtime feedback to help mixers shape tracks to a target EQ curve or experiment with new sounds.

In addition, Neutron 3 includes many new improvements and enhancements based on feedback from the community, such as the redesigned Masking Meter that automatically flags masking issues and allows them to be fixed from a convenient one-window display. This improvement prevents tracks from stepping on each other and muddying the mix.
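For readers curious about what a masking meter is looking for, the underlying idea is to compare per-band energy between two tracks and flag the bands where both are loud at once. The sketch below illustrates only that rough concept in Python/NumPy; it is not iZotope’s algorithm, and the band split and threshold are arbitrary assumptions.

```python
import numpy as np

def band_energies(signal, sr, n_bands=24):
    """Per-band energy in dB from an FFT magnitude spectrum,
    using logarithmically spaced bands from 20 Hz to Nyquist."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    edges = np.geomspace(20.0, sr / 2.0, n_bands + 1)
    energies = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        power = np.sum(spectrum[(freqs >= lo) & (freqs < hi)] ** 2)
        energies.append(10.0 * np.log10(power + 1e-12))
    return np.array(energies), edges

def masking_bands(track_a, track_b, sr, threshold_db=-30.0):
    """Flag bands where both tracks carry significant energy --
    a crude stand-in for 'these two tracks are fighting here'."""
    e_a, edges = band_energies(track_a, sr)
    e_b, _ = band_energies(track_b, sr)
    ref = max(e_a.max(), e_b.max())
    hot = (e_a > ref + threshold_db) & (e_b > ref + threshold_db)
    return [(int(edges[i]), int(edges[i + 1])) for i in np.where(hot)[0]]

# Toy example: a bass line and a kick drum crowding the low end
sr = 48000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
bass = 0.5 * np.sin(2 * np.pi * 80 * t)
kick = 0.5 * np.sin(2 * np.pi * 60 * t) * np.exp(-8 * t)
print(masking_bands(bass, kick, sr))  # expect a flagged band in the low end
```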

Neutron 3 has also had a major performance overhaul, with faster processing and load times and smoother metering. Sessions with multiple instances of Neutron open much more quickly, and refresh rates for visualizations have doubled.

Other Neutron 3 Features
• Visual Mixer and iZotope Relay: Users can launch Mix Assistant directly from Visual Mixer and move tracks in a virtual space, tapping into iZotope-enabled inter-plug-in communication
• Improved interface: Smooth visualizations and a resizable interface
• Improved Track Assistant listens to audio and creates a custom preset based on what it hears
• Eight plug-ins in one: Users can build a signal chain directly within one highly connected, intelligent interface with Sculptor, EQ with Soft Saturation mode, Transient Shaper, 2 Compressors, Gate, Exciter, and Limiter
• Component plug-ins: Users can control Neutron’s eight modules as a single plug-in or as eight individual plug-ins
• Tonal Balance Control: Updated to support Neutron 3
• 7.1 Surround sound support and zero-latency mode in all eight modules for professional, lightweight processing for audio post or surround music mixes

Visual Mixer and iZotope Relay will be included free with all Neutron 3 Advanced demo downloads. In addition, Music Production Suite 2.1 will now include Neutron 3 Advanced, and iZotope Elements Suite will be updated to include Neutron Elements (v3).

Neutron 3 will be available in three different options — Neutron Elements, Neutron 3 Standard and Neutron 3 Advanced. See the comparison chart for more information on what features are included in each version.

Neutron 3 will be available June 30. Check out the iZotope site for pricing.

Sound Lounge ups Becca Falborn to EP 

New York’s Sound Lounge, an audio post house that provides sound services for advertising, television and feature films, has promoted Becca Falborn to executive producer.

In her new role, Falborn will manage the studio’s advertising division and supervise its team of producers. She will also lead client relations and sales. Additionally, she will manage Sound Lounge Everywhere, the company’s remote sound services offering, which currently operates in Boston and Boulder, Colorado.

“Becca is a smart, savvy and passionate producer, qualities that are critical to success in her new role,” said Sound Lounge COO and partner Marshall Grupp. “She has developed an excellent rapport with our team of mixers and clients and has consistently delivered projects on time and on budget, even under the most challenging circumstances.”

Falborn joined Sound Lounge in 2017 as a producer and was elevated to senior producer last year. She has produced voiceover recordings, sound design, and mixing for many advertising projects, including seven out of the nine spots produced by Sound Lounge that debuted during this year’s Super Bowl telecast.

A graduate of Manhattan College, Falborn has a background in business affairs, client services and marketing, including past positions with the post house Nice Shoes and the marketing agency Hogarth Worldwide.


Sugar Studios LA gets social for celebrity-owned Ladder supplement

Sugar Studios LA completed a social media campaign for Ladder perfect protein powder and clean energy booster supplements starring celebrity founders Arnold Schwarzenegger, LeBron James, DJ Khaled, Cindy Crawford and Lindsey Vonn. The playful ad campaign focuses on social media, foregoing the usual TV commercial push and pitching the protein powder directly to consumers.

One spot shows Arnold in the gym annoyed by a noisy dude on the phone, prompting him to turn up his workout soundtrack. Then DJ Khaled is scratching encouragement for LeBron’s workout until Arnold drowns them out with his own personal live oompah band.

The ads were produced and directed by longtime Schwarzenegger collaborator Peter Grigsby, while Sugar Studios’ editor Nico Alba (Chevrolet, Ferrari, Morongo Casino, Mattel) cut the project using Adobe Premiere. When asked about using random spot lengths, as opposed to traditional :15s, :30s, and :60s, Alba explains, “Because it’s social media, we’re not always bound to those segments of time anymore. Basically, it’s ‘find the story,’ and because there are no rules, it makes the storytelling more fun. It’s a process of honing everything down without losing the rhythm or the message and maintaining a nice flow.”

Nico Alba and Jijo Reed. Credit: David Goggin

“Peter Grigsby requested a skilled big-brand commercial editor on this campaign,” Reed says. “Nico was the perfect fit to create that rhythm and flow that only a seasoned commercial editor could bring to the table.”

“We needed a heavy-weight gym ambience to set the stage,” says Alba, who worked closely with sound design/mixers Bret Mazur and Troy Ambroff to complement his editing. “It starts out with a barrage of noisy talking and sounds that really irritate Arnold, setting up the dueling music playlists and the sonic payoff.”

The audio team mixed and created sound design with Avid Pro Tools Ultimate. Audio plugins called on include the Waves Mercury bundle, DTS Surround tools and iZotope RX 7 Advanced.

The Sugar team also created a cinematic look to the spots, thanks to colorist Bruce Bolden, who called on Blackmagic DaVinci Resolve and a Sony BVM OLED monitor. “He’s a veteran feature film colorist,” says Reed, “so he often brings that sensibility to advertising spots as well, meaning rich blacks and nice, even color palettes.”

Storage used at the studio is Avid NEXIS and Facilis TerraBlock.


Human opens new Chicago studio

Human, an audio and music company with offices in New York, Los Angeles and Paris, has opened a Chicago studio headed up by veteran composer/producer Justin Hori.

As a composer, Hori’s work has appeared in advertising, film and digital projects. “Justin’s artistic output in the commercial space is prolific,” says Human partner Gareth Williams. “There’s equal parts poise and fun behind his vision for Human Chicago. He’s got a strong kinship and connection to the area, and we couldn’t be happier to have him carve out our footprint there.”

From learning to DJ at age 13 to working at Gramaphone Records to studying music theory and composition at Columbia College, Hori’s immersion in the Chicago music scene has always influenced his work. He began his career at com/track and Comma Music before moving to open Comma’s Los Angeles office. From there, Hori joined Squeak E Clean, where he served as creative director for the past five years. He returned to Chicago in 2016.

Hori is known for producing unexpected yet perfectly spot-on pieces of music for advertising, including his track “Da Diddy Da,” which was used in the four-spot summer 2018 Apple iPad campaign. His work has won top industry honors including D&AD Pencils, The One Show, Clio and AICP Awards and the Cannes Gold Lion for Best Use of Original Music.

Meanwhile, Post Human, the audio post sister company run by award-winning sound designer and engineer Sloan Alexander, continues to build momentum with the addition of a second 5.1 mixing suite in NYC. Plans for similar build-outs in both LA and Chicago are currently underway.

With services spanning composition, sound design and mixing, Human works in advertising, broadcast, digital and film.


NAB 2019: postPerspective Impact Award winners

postPerspective has announced the winners of our Impact Awards from NAB 2019. Seeking to recognize debut products with real-world applications, the postPerspective Impact Awards are voted on by an anonymous judging body made up of respected industry artists and pros (to whom we are very grateful). It’s working pros who are going to be using these new tools — so we let them make the call.

It was fun watching the user ballots come in and discovering which products most impressed our panel of post and production pros. There are no entrance fees for our awards. All that is needed is the ability to impress our voters with products that have the potential to make their workdays easier and their turnarounds faster.

We are grateful for our panel of judges, which grew even larger this year. NAB is exhausting for all, so their willingness to share their product picks and takeaways from the show isn’t taken for granted. These men and women truly care about our industry and sharing information that helps their fellow pros succeed.

To be successful, you can’t operate in a vacuum. We have found that companies who listen to their users, and make changes/additions accordingly, are the ones who get the respect and business of working pros. They aren’t providing tools they think are needed; they are actively asking for feedback. So, congratulations to our winners and keep listening to what your users are telling you — good or bad — because it makes a difference.

The Impact Award winners from NAB 2019 are:

• Adobe for Creative Cloud and After Effects
• Arraiy for DeepTrack with The Future Group’s Pixotope
• ARRI for the Alexa Mini LF
• Avid for Media Composer
• Blackmagic Design for DaVinci Resolve 16
• Frame.io
• HP for the Z6/Z8 workstations
• OpenDrives for Apex, Summit, Ridgeview and Atlas

(All winning products reflect the latest version of the product, as shown at NAB.)

Our judges also provided quotes on specific projects and trends that they expect will have an impact on their workflows.

Said one, “I was struck by the predicted impact of 5G. Verizon is planning to have 5G in 30 cities by end of year. The improved performance could reach 20x speeds. This will enable more leverage using cloud technology.

“Also, AI/ML is said to be the single most transformative technology in our lifetime. Impact will be felt across the board, from personal assistants, medical technology, eliminating repetitive tasks, etc. We already employ AI technology in our post production workflow, which has saved tens of thousands of dollars in the last six months alone.”

Another echoed those thoughts on AI and the cloud as well: “AI is growing up faster than anyone can reasonably productize. It will likely be able to do more than first thought. Post in the cloud may actually start to take hold this year.”

We hope that postPerspective’s Impact Awards give those who weren’t at the show, or who were unable to see it all, a starting point for their research into new gear that might be right for their workflows. Another way to catch up? Watch our extensive video coverage of NAB.


Creating audio for the cinematic VR series Delusion: Lies Within

By Jennifer Walden

Delusion: Lies Within is a cinematic VR series from writer/director Jon Braver. It is available on the Samsung Gear VR and Oculus Go and Rift platforms. The story follows a reclusive writer named Elena Fitzgerald who penned a series of popular fantasy novels, but before the final book in the series was released, the author disappeared. Rumors circulated about the author’s insanity and supposed murder, so two avid fans decide to break into her mansion to search for answers. What they find are Elena’s nightmares come to life.

Delusion: Lies Within is based on an interactive play written by Braver and Peter Cameron. Interactive theater isn’t your traditional butts-in-the-seat passive viewing-type theater. Instead, the audience is incorporated into the story. They interact with the actors, search for objects, solve mysteries, choose paths and make decisions that move the story forward.

Like a film, the theater production is meticulously planned out, from the creature effects and stunts to the score and sound design. With all these components already in place, Delusion seemed like the ideal candidate to become a cinematic VR series. “In terms of the visuals and sound, the VR experience is very similar to the theatrical experience. With Delusion, we are doing 360° theater, and that’s what VR is too. It’s a 360° format,” explains Braver.

While the intent was to make the VR series match the theatrical experience as much as possible, there are some important differences. First, immersive theater allows the audience to interact with the actors and objects in the environment, but that’s not the case with the VR series. Second, the live theater show has branching story narratives and an audience member can choose which path he/she would like to follow. But in the VR series there’s one set storyline that follows a group who is exploring the author’s house together. The viewer feels immersed in the environment but can’t manipulate it.

L-R: Hamed Hokamzadeh and Thomas Ouziel

According to supervising sound editor Thomas Ouziel from Hollywood’s MelodyGun Group, “Unlike many VR experiences where you’re kind of on rails in the midst of the action, this was much more cinematic and nuanced. You’re just sitting in the space with the characters, so it was crucial to bring the characters to life and to design full sonic spaces that felt alive.”

In terms of workflow, MelodyGun sound supervisor/studio manager Hamed Hokamzadeh chose to use the Oculus Development Kit 2 headset with Facebook 360 Spatial Workstation on Avid Pro Tools. “Post supervisor Eric Martin and I decided to keep everything within FB360 because the distribution was to be on a mobile VR platform (although it wasn’t yet clear which platform), and FB360 had worked for us marvelously in the past for mobile and Facebook/YouTube,” says Hokamzadeh. “We initially concentrated on delivering B-format (2nd Order AmbiX) playing back on Gear VR with a Samsung S8. We tried both the Audio-Technica ATH-M50 and Shure SRH840 headphones to make sure it translated. Then we created other deliverables: quad-binaurals, .tbe, 8-channel and a stereo static mix. The non-diegetic music and voiceover were head-locked and delivered in stereo.”
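For anyone trying to keep those deliverable formats straight, the ambisonic channel counts fall directly out of the order: an order-N AmbiX stream carries (N + 1)² channels. A quick Python sketch (the comments are for orientation only; the quad-binaural and .tbe layouts are Facebook 360-specific formats that do not follow this formula):

```python
def ambisonic_channels(order: int) -> int:
    """An order-N ambisonic (AmbiX) stream carries (N + 1)**2 channels."""
    return (order + 1) ** 2

for order in (1, 2, 3):
    print(f"order {order}: {ambisonic_channels(order)} channels")
# order 1: 4 channels   (W, Y, Z, X)
# order 2: 9 channels   -- the "2nd Order AmbiX" B-format master mentioned above
# order 3: 16 channels

# The head-locked, non-diegetic music and voiceover sit outside the
# ambisonic bed entirely and are delivered as a plain 2-channel stereo stem.
```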

From an aesthetic perspective, the MelodyGun team wanted to have a solid understanding of the audience’s live theater experience and the characters themselves “to make the VR series follow suit with the world Jon had already built. It was also exciting to cross our sound over into more of a cinematic ‘film world’ than was possible in the live theatrical experience,” says Hokamzadeh.

Hokamzadeh and Ouziel assigned specific tasks to their sound team — Xiaodan Li was focused on sound editorial for the hard effects and Foley, and Kennedy Phillips was asked to design specific sound elements, including the fire monster and the alchemist freezing.

Ouziel, meanwhile, had his own challenges of both creating the soundscape and integrating the sounds into the mix. He had to figure out how to make the series sound natural yet cinematic, and how to use sound to draw the viewer’s attention while keeping the surrounding world feeling alive. “You have to cover every movement in VR, so when the characters split up, for example, you want to hear all their footsteps, but we also had to get the audience to focus on a specific character to guide them through. That was one of the biggest challenges we had while mixing it,” says Ouziel.

The Puppets
“Chapter Three: Trial By Fire” provides the best example of how Ouziel tackled those challenges. In the episode, Virginia (Britt Adams) finds herself stuck in Marion’s chamber. Marion (Michael J. Sielaff) is a nefarious puppet master who is clandestinely controlling a room full of people on puppet strings; some are seated at a long dining table and others are suspended from the ceiling. They’re all moving their arms as if dancing to the scratchy song that’s coming from the gramophone.

The sound for the puppet people needed to have a wiry, uncomfortable feel and the space itself needed to feel eerily quiet but also alive with movement. “We used a grating metallic-type texture for the strings so they’d be subconsciously unnerving, and mixed that with wooden creaks to make it feel like you’re surrounded by constant danger,” says Ouziel.

The slow wooden creaks in the ambience reinforce the idea that an unseen Marion is controlling everything that’s happening. Braver says, “Those creaks in Marion’s room make it feel like the space is alive. The house itself is a character in the story. The sound team at MelodyGun did an excellent job of capturing that.”

Once the sound elements were created for that scene, Ouziel then had to space each puppet’s sound appropriately around the room. He also had to fill the room with music while making sure it still felt like it was coming from the gramophone. Ouziel says, “One of the main sound tools that really saved us on this one was Audio Ease’s 360pan suite, specifically the 360reverb function. We used it on the gramophone in Marion’s chamber so that it sounded like the music was coming from across the room. We had to make sure that the reflections felt appropriate for the room, so that we felt surrounded by the music but could clearly hear the directionality of its source. The 360pan suite helped us to create all the environmental spaces in the series. We pretty much ran every element through that reverb.”
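The series was mixed in second-order ambisonics, but the idea behind a 360 panner is easiest to see at first order: a mono source is weighted into the W/Y/Z/X channels according to its direction of arrival, and a decoder later turns that field into binaural audio for the headset. Below is a simplified first-order sketch assuming AmbiX conventions (ACN ordering, SN3D normalization); it illustrates the general technique, not the 360pan suite’s implementation.

```python
import numpy as np

def encode_foa(mono, azimuth_deg, elevation_deg):
    """Encode a mono signal into first-order AmbiX channels (ACN order
    W, Y, Z, X; SN3D normalization) for a source at the given direction."""
    az = np.radians(azimuth_deg)   # 0 = straight ahead, positive = to the left
    el = np.radians(elevation_deg)
    gains = np.array([
        1.0,                        # W: omnidirectional
        np.sin(az) * np.cos(el),    # Y: left/right
        np.sin(el),                 # Z: up/down
        np.cos(az) * np.cos(el),    # X: front/back
    ])
    return gains[:, np.newaxis] * mono   # shape (4, num_samples)

# Toy example: place a gramophone-like tone 90 degrees to the listener's left
sr = 48000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
gramophone = 0.3 * np.sin(2 * np.pi * 440 * t)
bed = encode_foa(gramophone, azimuth_deg=90, elevation_deg=0)
print(bed.shape)  # (4, 48000)
```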

L-R: Thomas Ouziel and Jon Braver.

Hokamzadeh adds, “The session got big quickly! Imagine over 200 AmbiX tracks, each with its own 360 spatializer and reverb sends, plus all the other plug-ins and automation you’d normally have on a regular mix. Because things never go out of frame, you have to group stuff to simplify the session. It’s typical to make groups for different layers like footsteps, cloth, etc., but we also made groups for all the sounds coming from a specific direction.”

The 360pan suite reverb was also helpful on the fire monster’s sounds. The monster, called Ember, was sound designed by Phillips, whose organic approach produced something akin to the bear monster in Annihilation, in that it felt half human/half creature. Phillips edited together various bellowing fire elements that sounded like breathing and then manipulated those to match Ember’s tormented movements. Her screams also came from a variety of natural screams mixed with different fire elements so that it felt like there was a scared young girl hidden deep in this walking heap of fire. Ouziel explains, “We gave Ember some loud sounds, but we were able to play those in the space using the 360pan suite reverb. That made her feel even bigger and more real.”

The Forest
The opening forest scene was another key moment for sound. The series is set in South Carolina in 1947, and the author’s estate needed to feel like it was in a remote area surrounded by lush, dense forest. “With this location comes so many different sonic elements. We had to communicate that right from the beginning and pull the audience in,” says Braver.

Genevieve Jones, former director of operations at Skybound Entertainment and producer on Delusion: Lies Within, says, “I love the bed of sound that MelodyGun created for the intro. It felt rich. Jon really wanted to go to the south and shoot that sequence but we weren’t able to give that to him. Knowing that I could go to MelodyGun and they could bring that richness was awesome.”

Since the viewer can turn his/her head, the sound of the forest needed to change with those movements. A mix of six different winds spaced into different areas created a bed of textures that shifts with the viewer’s changing perspective. It makes the forest feel real and alive. Ouziel says, “The creative and technical aspects of this series went hand in hand. The spacing of the VR environment really affects the way that you approach ambiences and world-building. The house interior, too, was done in a similar approach, with low winds and tones for the corners of the rooms and the different spaces. It gives you a sense of a three-dimensional experience while also feeling natural and in accordance with the world that Jon made.”
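Head tracking is the other half of that equation: when the viewer turns, the ambisonic bed is counter-rotated before decoding so the winds stay anchored to the world rather than to the head. At first order, a yaw rotation only mixes the X and Y channels. The sketch below uses the same AmbiX assumptions as the earlier example; rotation sign conventions differ between tools, so treat it as illustrative.

```python
import numpy as np

def rotate_foa_yaw(foa, yaw_deg):
    """Counter-rotate a first-order AmbiX signal (rows W, Y, Z, X) about
    the vertical axis for a listener who has turned yaw_deg to the left."""
    theta = np.radians(yaw_deg)
    w, y, z, x = foa
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    y_rot = y * np.cos(theta) - x * np.sin(theta)
    return np.stack([w, y_rot, z, x_rot])

# Usage: if the viewer looks 30 degrees to the left, the forest bed is
# rotated so each wind source keeps its place in the world.
# world_anchored = rotate_foa_yaw(bed, yaw_deg=30)
```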

Bringing Live Theater to VR
The sound of the VR series isn’t a direct translation of the live theater experience. Instead, it captures the spirit of the live show in a way that feels natural and immersive, but also cinematic. Ouziel points to the sounds that bring puppet master Marion to life. Here, they had the opportunity to go beyond what was possible with the live theater performance. Ouziel says, “I pitched to Jon the idea that Marion should sound like a big, worn wooden ship, so we built various layers from these huge wooden creaks to match all his movements and really give him the size and gravitas that he deserved. His vocalizations were made from a couple elements including a slowed and pitched version of a raccoon chittering that ended up feeling perfectly like a huge creature chuckling from deep within. There was a lot of creative opportunity here and it was a blast to bring to life.”


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.


Butter Music and Sound adds new ECDs in NYC and LA

Music shop Butter Music and Sound has expanded its in-house creative offerings with the addition of two new executive creative directors (ECDs): Tim Kvasnosky takes the helm in Los Angeles and Aaron Kotler in New York.

The newly appointed ECDs will maintain creative oversight on all projects going through the Los Angeles and New York offices, managing workflow across staff and freelance talent, composing on a wide range of projects and supporting and mentoring in-house talent and staff.

Kvasnosky and Kotler both have extensive experience as composers and musicians, with backgrounds crafting original music for commercials, film and television. They also maintain active careers in the entertainment and performance spaces. Kvasnosky recently scored the feature film JT LeRoy, starring Kristen Stewart and Laura Dern. Kotler performs and records regularly.

Kvasnosky is a composer and music producer with extensive experience across film, TV, advertising and recording. A Seattle native who studied at NYU, he worked as a jazz pianist and studio musician before composing for television and film. His tracks have been licensed in many TV shows and films. He has scored commercial campaigns for Nike, Google, McDonald’s, Amazon, Target and VW. Along with Detroit-based music producer Waajeed and singer Dede Reynolds, Kvasnosky formed the electronic group Tiny Hearts.

Native New Yorker Kotler holds a Bachelor of Music from Northwestern University School of Music and a Master of Music from Manhattan School of Music, both in jazz piano performance. He began his career as a performer and studio musician, playing in a variety of bands and across genres including neo-soul, avant-garde jazz, funk, rock and more. He also music directed Jihad! The Musical to a month of sold-out performances at the Edinburgh Festival Fringe. Since then, he has composed commercials, themes and sonic branding campaigns for AT&T, Coca-Cola, Nike, Verizon, PlayStation, Samsung and Honda. He has also arranged music for American Idol and The Emmys, scored films that were screened at a variety of film festivals, and co-produced Nadje Noordhuis’ debut record. In 2013, he teamed up with Michael MacAllister to co-design and build Creekside Sound, a recording and production studio in Brooklyn.

Main Image: (L-R) Tim Kvasnosky and Aaron Kotler


Review: Sonarworks Reference 4 Studio Edition for audio calibration

By David Hurd

What is a flat monitoring system, and how does it benefit those mixing audio? Well, this is something I’ll be addressing in this review of Sonarworks Reference 4 Studio Edition, but first some background…

Having a flat audio system simply means that whatever signal goes into the speakers comes out sonically pure, exactly as it was meant to. On a graph, it would look like a straight line from 20 cycles on the left to 20,000 cycles on the right.

Peaks or valleys in that line would indicate unwanted boosts or cuts at certain frequencies, and there is a reason you don’t want those in your monitoring system. Peaks around the hundred-cycle mark on down give you boominess. At 250 to 350 cycles you get mud. At around a thousand cycles you get a honkiness, as if you were holding your nose when you talked, and too much high end sounds brittle. You get the idea.

Before

After

If your system is not flat, your monitors are lying to your ears and you can’t trust what you are hearing while you mix.

The problem arises when you try to play your audio on another system and hear the opposite of what you mixed. It works like this: If your speakers have too much bass then you cut some of the bass out of your mix to make it sound good to your ears. But remember, your monitors are lying, so when you play your mix on another system, the bass is missing.

To avoid this problem, professional recording studios calibrate their studio monitors so that they can mix in a flat-sounding environment. They know that what they hear is what they will get in their mixes, so they can happily mix with confidence.

Every room affects what you hear coming out of your speakers. The problem is that the studio monitors that were close to being flat at the factory are not flat once they get put into your room and start bouncing sound off of your desk and walls.

Sonarworks
This is where Sonarworks’ calibration mic and software come in. They give you a way to sonically flatten out your room by taking a speaker measurement. That measurement produces a response chart based upon the acoustics of your room, which the software then corrects for. You apply this correction using the plugin in your favorite DAW, like Avid Pro Tools. You can also use the system-wide app to correct sound from any source on your computer.
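The principle behind that correction is simple to state: measure how far the in-room response deviates from flat in each band, then apply the inverse as an EQ curve. Below is a deliberately simplified sketch of the idea with made-up measurement numbers; Sonarworks’ actual processing also deals with phase, smoothing and boost limits, and the ±12 dB clamp here is just an assumed safety margin.

```python
import numpy as np

# Hypothetical in-room measurement: deviation from flat (dB) at octave centers
band_centers_hz = np.array([63, 125, 250, 500, 1000, 2000, 4000, 8000, 16000])
measured_db = np.array([4.0, 2.5, 3.0, 0.5, -1.0, 0.0, -2.0, -3.5, -1.5])

target_db = np.zeros_like(measured_db)                          # "flat" target
correction_db = np.clip(target_db - measured_db, -12.0, 12.0)   # invert, clamp

for f, c in zip(band_centers_hz, correction_db):
    print(f"{f:>6} Hz: {c:+.1f} dB")
# The 4 dB bump at 63 Hz (boominess) becomes a 4 dB cut, the 250 Hz mud
# gets pulled down, and the slightly dull top end is gently lifted.
```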

So let’s imagine that you have installed the Sonarworks software, calibrated your speakers and mixed a music project. Since there are over 30,000 locations that use Sonarworks, you can send out your finished mix with the Sonarworks plugin removed (the receiving room has different acoustics and its own calibration settings), and the mastering lab you use will hear your mix on its Sonarworks acoustically flat system… just as you mixed it.

I use a pair of Genelec studio monitors for both audio projects and audio-for-video work. They were expensive, but I have been using them for over 15 years with great results. If you don’t have studio monitors and just choose to mix on headphones, Sonarworks has you covered.

The software will calibrate your headphones.

There is an online product demo at sonarworks.com that lets you select which headphones you use. You can switch between bypass and the Sonarworks effect. Since they have already done the calibration process for your headphones, you can get a good idea of the advantages of mixing on a flat system. The headphone option is great for those who mix on a laptop or small home studio. It’s less money as well. I used my Sennheiser HD300 Pro series headphones.

I installed Sonarworks on my “Review” system, which is what I use to review audio and video production products. I then tested Sonarworks on both Pro Tools 12 music projects and video editing work, like sound design using a sound FX library and audio from my Blackmagic Ursa 4.6K camera footage. I was impressed by the difference the Sonarworks software made. It opened up my mixes and made it easy to find any problems.

The Sonarworks Reference 4 Studio Edition takes your projects to a whole new level, and finally lets you hear your work in a sonically pure and flat listening environment.

My Review System
The Sonarworks Reference 4 Studio Edition was tested on my six-core Mac Pro “trash can” running macOS High Sierra with 64GB of RAM and 12GB of VRAM across its dual D700 video cards; a Blackmagic UltraStudio 4K box; four G-Tech G-Speed 8TB RAID boxes with HighPoint RAID controllers; Lexar SD and CFast card readers; video output viewed on a Boland 32-inch broadcast monitor; a Mackie mixer; a Komplete Kontrol S25 keyboard; and a Focusrite Clarett 4Pre.

Software includes Apple FCPX, Blackmagic Resolve 15 and Pro Tools 12. Cameras used for testing are a Blackmagic 4K Production camera and the Ursa Mini 4.6K Pro, both powered by Blueshape batteries.


David Hurd is a production and post veteran who owns David Hurd Productions in Tampa. You can reach him at david@dhpvideo.com.


Adobe’s new Content-Aware fill in AE is magic, plus other CC updates

By Brady Betzel

NAB is just under a week away, and we are here to share some of Adobe’s latest Creative Cloud offerings. There are a few updates worth mentioning, such as the Freeform Project panel in Premiere Pro, AI-driven Auto Ducking for ambience in Audition and the addition of a Twitch extension for Character Animator. But, in my opinion, the After Effects updates are what this year’s release will be remembered for.


Content Aware: Here is the before and after. Our main image is the mask.

There is a new expression editor in After Effects, so us old pseudo-website designers can now feel at home with highlighting, line numbers and more. There are also performance improvements, such as faster project loading times and new deBayering support for Metal on macOS. But the first prize ribbon goes to the Content-Aware fill for video powered by Adobe Sensei, the company’s AI technology. It’s one of those voodoo features that when you use it, you will be blown away. If you have ever used Mocha Pro by BorisFX then you have had a similar tool known as the “Object Removal” tool. Essentially, you draw around the object you want to remove, such as a camera shadow or boom mic, hit the magic button and your object will be removed with a new background in its place. This will save users hours of manual work.

Freeform Project panel in Premiere.

Here are some details on other new features:

● Freeform Project panel in Premiere Pro— Arrange assets visually and save layouts for shot selects, production tasks, brainstorming story ideas, and assembly edits.
● Rulers and Guides—Work with familiar Adobe design tools inside Premiere Pro, making it easier to align titling, animate effects, and ensure consistency across deliverables.
● Punch and Roll in Audition—The new feature provides efficient production workflows in both Waveform and Multitrack for longform recording, including voiceover and audiobook creators.
● Twitch Live-Streaming Triggers with the Character Animator extension—Surprise viewers during livestream performances as audiences engage with characters in real time, with on-the-fly costume changes, impromptu dance moves, and signature gestures and poses—a new way to interact, and even monetize, using Bits to trigger actions.
● Auto Ducking for ambient sound in Audition and Premiere Pro — Also powered by Adobe Sensei, Auto Ducking now allows for dynamic adjustments to ambient sounds against spoken dialog. Keyframed adjustments can be manually fine-tuned to retain creative control over a mix. (A conceptual sketch of ducking follows this list.)
● Adobe Stock now offers 10 million professional-quality, curated, royalty-free HD and 4K video clips and Motion Graphics templates from leading agencies and independent editors to use for editorial content, establishing shots or filling gaps in a project.
● Premiere Rush, introduced late last year, offers a mobile-to-desktop workflow integrated with Premiere Pro for on-the-go editing and video assembly. Built-in camera functionality in Premiere Rush helps you take pro-quality video on your mobile devices.
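Adobe’s Auto Ducking is driven by Sensei, but the behavior it automates is the familiar sidechain idea: follow the level of the dialog and ease the ambience down whenever dialog is present. Here is a minimal conceptual sketch of ducking (not Adobe’s implementation; the 12 dB depth and 200 ms smoothing are assumed values):

```python
import numpy as np

def duck_ambience(ambience, dialog, sr, duck_db=-12.0, smooth_ms=200.0):
    """Attenuate an ambience bed while the dialog envelope is hot, with a
    smoothed gain curve so the change does not pump audibly."""
    # Crude dialog envelope: rectified signal smoothed by a moving average
    win = max(1, int(sr * smooth_ms / 1000.0))
    envelope = np.convolve(np.abs(dialog), np.ones(win) / win, mode="same")
    presence = envelope / (envelope.max() + 1e-12)      # 0..1 "dialog is here"

    # Crossfade between unity gain and the ducked gain in linear terms
    duck_gain = 10.0 ** (duck_db / 20.0)
    gain = 1.0 + presence * (duck_gain - 1.0)
    return ambience * gain

# Toy example: a noise bed ducked under a one-second burst of "dialog"
sr = 48000
t = np.linspace(0.0, 2.0, 2 * sr, endpoint=False)
ambience = 0.2 * np.random.randn(len(t))
dialog = np.where((t > 0.5) & (t < 1.5), 0.5 * np.sin(2 * np.pi * 220 * t), 0.0)
ducked = duck_ambience(ambience, dialog, sr)
```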

The new features are available now in the latest version of Creative Cloud.

After fire, SF audio house One Union is completely rebuilt

San Francisco-based audio post house One Union Recording Studios has completed a total rebuild of its facility. It features five all-new, state-of-the-art studios designed for mixing, sound design, ADR, voice recording and other sound work.

Each studio offers Avid/Euphonix digital mixing consoles, Avid MTRX interface systems, the latest Pro Tools Ultimate software and robust monitoring and signal processing gear. All studios have dedicated, large voice recording booths. One is certified for Dolby Atmos sound production. The facility’s infrastructure and central machine room are also all new.

One Union began its reconstruction in September 2017 in the aftermath of a fire that affected the entire facility. “Where needed, we took the building back to the studs,” says One Union president/owner John McGleenan. “We pulled out, removed and de-installed absolutely everything and started fresh. We then rebuilt the studios and rewired the whole facility. Each studio now has new consoles, speakers, furniture and wiring, and all are connected to new machine rooms. Every detail has been addressed and everything is in its proper place.”

During the 18 months of reconstruction, One Union carried on operations on a limited basis while maintaining its full staff. That included its team of engineers (Joaby Deal, Eben Carr, Andy Greenberg, Matt Wood and Isaac Olsen), who worked continuously and remain in place.

Reconstruction was managed by LA-based Yanchar Design & Consulting Group. All five studios feature Avid/Euphonix System 5 digital audio consoles, Pro Tools 2018 and Avid MTRX with Dante interface systems. Studio 4 adds Dolby Atmos capability with a full Atmos Production Suite as well as an Atmos RMU. Studio 5, the facility’s largest recording space, has two MTRX systems, with a total of more than 240 analog, MADI and Dante outputs (256 inputs), integrated with a nine-foot Avid/Euphonix console. It also features a 110-inch retractable projection screen in the control room and a 61-inch playback monitor in its dedicated voice booth. Among other things, the central machine room includes a 300TB LTO archiving system.

John McGleenan

The facility was also rebuilt with an eye toward avoiding production delays. “All of the equipment is enterprise-grade and everything is redundant,” McGleenan notes. “The studios are fed by a dual power supply and each is equipped with dual devices. If some piece of gear goes down, we have a redundant system in place to keep going. Additionally, all our critical equipment is hot-swappable. Should any component experience a catastrophic failure, it will be replaced by the manufacturer within 24 hours.”

McGleenan adds that redundancy extends to broadband connectivity. To avoid outages, the facility is served by two 1Gig fiber optic connections provided by different suppliers. WiFi is similarly available through duplicate services.

One Union Recording was founded by McGleenan, a former advertising agency executive, in 1994 and originally had just one sound studio. More studios were soon added as the company became a mainstay sound services provider to the region’s advertising industry.

In recent years, the company has extended its scope to include corporate and branded media, television, film and games, and built a client base that extends across the country and around the world.

Recent work includes commercials for Mountain Dew and carsharing company Turo, the television series Law & Order: SVU and Grand Hotel, and the game The Grand Tour.