Tag Archives: Robin Shore

Review: Krotos Reformer Pro for customizing sounds

By Robin Shore

Krotos has got to be one of the most innovative developers of sound design tools in the industry right now. That is a strong statement, but I stand by it. This Scottish company has become well known over the past few years for its Dehumaniser line of products, which bring a fresh approach to the creation of creature vocals and monster sounds. Recently, the company released a new DAW plugin, Reformer Pro, which aims to give sound editors creative new ways of accessing and manipulating their sound effects.

Reformer Pro brings a procedural approach to working with sound effects libraries. According to their manual, “Reformer Pro uses an input to control and select segments of prerecorded audio automatically, and recompiles them in realtime, based on the characteristics of the incoming signal.” In layman’s terms this means you can “perform” sound effects from a library in realtime, using only a microphone and your voice.
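Krotos doesn't publish the algorithm behind Reformer Pro, but the behavior described above belongs to the family of concatenative synthesis, sometimes called audio mosaicing. Purely as an illustration of the idea, and not as the product's actual method, here is a minimal Python sketch that rebuilds an input recording out of the library frames whose loudness and brightness match it best; the frame size, the two features and every function name are assumptions made for this example.

```python
# Toy audio-mosaicing sketch -- an illustration of the general idea only,
# not Krotos' implementation. Expects mono float NumPy arrays at 48kHz.
import numpy as np

FRAME = 2048   # selection granularity in samples
SR = 48000

def features(frame):
    """Crude per-frame descriptor: RMS level and spectral centroid."""
    rms = np.sqrt(np.mean(frame ** 2) + 1e-12)
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / SR)
    centroid = np.sum(freqs * mag) / (np.sum(mag) + 1e-12)
    return np.array([rms, centroid])

def split(audio):
    """Chop a signal into non-overlapping analysis frames."""
    n = len(audio) // FRAME
    return audio[: n * FRAME].reshape(n, FRAME)

def reform(input_audio, library_audio):
    """Rebuild the input out of the library frames that match its features best."""
    lib = split(library_audio)
    lib_feats = np.array([features(f) for f in lib])
    mean, std = lib_feats.mean(0), lib_feats.std(0) + 1e-12
    lib_norm = (lib_feats - mean) / std

    out = []
    for frame in split(input_audio):
        f = (features(frame) - mean) / std
        best = int(np.argmin(np.linalg.norm(lib_norm - f, axis=1)))
        # Match the input's level so the output follows the "performance."
        gain = features(frame)[0] / (lib_feats[best][0] + 1e-12)
        out.append(lib[best] * gain)
    return np.concatenate(out) if out else np.zeros(0)
```

A real-time tool has to do all of this with overlapping windows, crossfades and far richer analysis, which is exactly the part Reformer Pro hides behind a simple interface.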

It’s dead simple to use. A menu inside the plugin lets you choose from a list of libraries that have been pre-analyzed for use with Reformer Pro. Once you’ve loaded up the library you want, all that’s left to do is provide some sort of sonic input and let the magic happen. Whatever sound you put in will be instantly “reformed” into a new sound effect of your choosing. A number of libraries come bundled in when you buy Reformer Pro, and additional libraries can be purchased from the Krotos website. The choice to include the Black Leopard library as a default when you first open the plugin was a very good one. There is just something so gratifying about breathing and grunting into a microphone and hearing a deep, menacing growl come out of the speakers instead of your own voice. It made me an immediate fan.

There are a few knobs and switches that let you tweak the response characteristics of Reformer Pro’s output, but for the most part you’ll be using sound to control things, and the amount of control you can get over the dynamics and rhythm of Reformer Pro’s output is impressive. While my immediate instinct was to drive Reformer Pro by vocalizing through a mic, any sound source can work well as an input. I also got great results by rubbing and tapping my fingers directly against the grill of a microphone and by dragging the mic across the surface of my desk.

Things get even more interesting if you start feeding pre-recorded audio into Reformer Pro. Using a Foley footstep track as the input for a library of cloth and leather sounds creates a realistic and perfectly synced rustle track. A howling wind used as the input for a library of creaks and rattles can add a nice layer of texture to a scene’s ambience tracks. Pumping music through Reformer Pro can generate some really wacky sounds and is a great way to find inspiration and test out abstract sound design ideas.

If the only libraries you could use with Reformer Pro were the 100 or so available on the Krotos website, it would still be a fun and innovative tool, but its utility would be pretty limited. What makes Reformer Pro truly powerful is its analysis tool, which lets you create custom libraries out of sounds from your own collection. The possibilities here are virtually endless: any sound you have can be turned into a unique new library. To be sure, some sounds are better suited to this than others, but it doesn’t take long at all to figure out what kinds of sounds will work best, and I was pleasantly surprised by how well most of the custom libraries I created turned out. This is a great way to breathe new life into an old sound effects collection.

Summing Up
Reformer Pro adds a sense of liveliness, creativity and, most importantly, fun to the often tedious task of syncing sound effects to picture. Anyone who spends their days working with sound effects would be doing themselves a disservice by not taking Reformer Pro for a test drive. I imagine most will be both impressed and excited by its novel approach to sound effects editing and design.


Robin Shore is an audio engineer at NYC’s Silver Sound Studios

Silver Sound opens audio-focused virtual reality division

By Randi Altman

New York City’s Silver Sound has been specializing in audio post and production recording since 2003, but that’s not all they are. Through the years, along with some Emmy wins, they have added services that include animation and color grading.

When they see something that interests them, they investigate and decide whether or not to dive in. Well, virtual reality interests them, and they recently dove in by opening a VR division specializing in audio for 360 video, called SilVR. Recent clients include Google, 8112 Studios/National Geographic and AT&T.


Stories From The Network: 360° Race Car Experience for AT&T

I reached out to Silver Sound sound editor/re-recording mixer Claudio Santos to find out why now was the time to invest in VR.

Why did you open a VR division? Is it an audio-for-VR entity or are you guys shooting VR as well?
The truth is we are all a bunch of curious tinkerers. We just love to try different things and to be part of different projects. So as soon as 360 videos started appearing on different platforms, we found ourselves individually researching and testing how sound could be used in the medium. It really all comes down to being passionate about sound and wanting to be part of this exciting moment in which the standards and rules are yet to be discovered.

We primarily work with sound recording and post production audio for VR projects, but we can also produce VR projects that are brought to us by creators. We have been making small in-house shoots, so we are familiar with the logistics and technologies involved in a VR production and are more than happy to assist our clients with the knowledge we have gained.

What types of VR projects do you expect to be working on?
Right now we want to work on every kind of project. The industry as a whole is still learning what kind of content works best in VR and every project is a chance to try a new facet of the technology. With time we imagine producers and post production houses will naturally specialize in whichever genre fits them best, but for us at least this is something we are not hurrying to do.

What tools do you call on?
For recording we make use of a variety of ambisonic microphones that allow us to record true 360 sound on location. We set up our rig wirelessly so it can be untethered from cables, which are a big problem in a VR shoot where you can see in every direction. Besides the ambisonics we also record every character ISO with wireless lavs so that we have as much control as possible over the dialogue during post production.

Robin Shore uses a phone to control the 360 video on screen; the tracker on his head simulates the effect of moving around without a full headset.

For editing and mixing we do most of our work in Reaper, a DAW that has very flexible channel routing and non-standard multichannel processing. This allows us to comfortably work with ambisonics as well as mix formats and source material with different channel layouts.

To design and mix our sounds we use a variety of specialized plug-ins that give us control over the positioning, focus and movement of sources in the 360 sound field. Reverberation is also extremely important for believable spatialization, and traditional fixed-channel reverbs are usually unconvincing once you are in a 360 field. Because of that, we usually make use of convolution reverbs with ambisonic impulse responses.
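Santos doesn’t name the specific reverbs they use, but the basic mechanics of a convolution reverb driven by an ambisonic impulse response are compact enough to sketch. The following is only an illustration, assuming NumPy/SciPy and a first-order (W, X, Y, Z) B-format IR:

```python
import numpy as np
from scipy.signal import fftconvolve

def ambisonic_reverb(dry_mono, bformat_ir):
    """Convolve a mono source with a first-order ambisonic (W, X, Y, Z) IR.

    dry_mono:   1-D array, the dry signal.
    bformat_ir: (n, 4) array, a B-format impulse response captured in a real space.
    Returns an (m, 4) B-format reverb bed that keeps the room's directional
    character, so reflections still come from plausible directions as the
    viewer turns their head.
    """
    wet = [fftconvolve(dry_mono, bformat_ir[:, ch]) for ch in range(4)]
    return np.stack(wet, axis=1)
```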

When it comes to monitoring the video, especially with multiple clients in the room, everyone wears headphones. At first this seemed very weird, but it’s important since that’s the best way to reproduce what the end viewer will be experiencing. We have also devised a way for clients to use a separate controller to move the view around in the video during playback and editing. This gives a lot more freedom and makes the reviewing process much quicker and more dynamic.

How different is working in VR from traditional work? Do you wear different hats for different jobs?
That depends. While technically it is very different, with a whole different set of tools, technologies and limitations, the craft of designing good sound that aids in the storytelling and that immerses the audience in the experience is not very different from traditional media.

The goal is to affect the viewer emotionally and to transmit pieces of the story without making the craft itself apparent, but the approaches necessary to achieve this in each medium are very different because the final product is experienced differently. When watching a flat screen, you don’t need any cues to know where the next piece of essential action is going to happen because it is all contained by a frame that is completely in your field of view. That is absolutely not true in VR.

The user can be looking in any direction at any given time, so the sound often fills in the role of guiding the viewer to the next area of interest, and this reflects on how we manipulate the sounds in the mix. There is also a bigger expectation that sounds will be more realistic in a VR environment because the viewer is immersed in an experience that is trying to fool them into believing it is actually real. Because of that, many exaggerations and shorthands that are appropriate in traditional media become too apparent in VR projects.

So instead of saying we need to put on different hats when tackling traditional media or VR, I would say we just need a bigger hat that carries all we know about sound, traditional and VR, because neither exists in isolation anymore.

I am assuming that getting involved in VR projects as early as possible is hugely helpful to the audio. Can you explain?
VR shoots are still in their infancy. There’s a whole new set of rules and standards, and a whole lot of experimentation, that we are all still figuring out as an industry. Often a particular VR filming challenge is not only new to the crew but completely new in the sense that it might not have ever been done before.

In order to figure out the best creative and technical approaches to all these different situations it is extremely helpful to have someone on the team thinking about sound, otherwise it risks being forgotten and then the project is doomed to a quick fix in post, which might not explore the full potential of the medium.

This doesn’t even take into consideration that the tools still often need to be adapted and tailored to fit the needs of a particular project, simply because new use cases are being discovered daily. This tailoring and exploration takes time and knowledge, so only by bringing a sound team early on into the project can they fully prepare to record and mix the sound without cutting corners.

Another important point to take into consideration is that the delivery requirements are still largely dependent on the specific platform selected for distribution. Technical standards are only now starting to be created and every project’s workflows must be adapted slightly to match these specific delivery requirements. It is much easier and more effective to plan the whole workflow with these specific requirements in mind than it is to change formats when the project is already in an advanced state.

What do clients need to know about VR that they might take for granted?
If we had to choose one thing to mention it would be that placing and localizing sounds in post takes a lot of time and care because each sound needs to be placed individually. It is easy to forget how much longer this takes than the traditional stereo or even surround panning because every single diegetic sound added needs to be panned. The difference might be negligible when dealing with a few sound effects, but depending on the action and the number of moving elements in the experience, it can add up very quickly.
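To make that concrete: in an ambisonic workflow, panning a diegetic sound means encoding it into the 360 sound field at its on-screen direction, and re-automating that direction every time the source or the camera moves. Here is a minimal first-order encoder sketch (FuMa-style B-format, NumPy assumed); it stands in for what a spatial panner plug-in does, not for any specific product:

```python
import numpy as np

def encode_first_order(mono, azimuth_deg, elevation_deg):
    """Pan a mono sound into first-order B-format (FuMa W, X, Y, Z convention).

    azimuth_deg:   0 = straight ahead, positive to the left.
    elevation_deg: 0 = ear level, positive upward.
    """
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    w = mono / np.sqrt(2.0)              # omnidirectional component
    x = mono * np.cos(az) * np.cos(el)   # front/back
    y = mono * np.sin(az) * np.cos(el)   # left/right
    z = mono * np.sin(el)                # up/down
    return np.stack([w, x, y, z], axis=1)

# Every diegetic sound gets its own call like this (and its own automation
# over time), which is why placement adds up so quickly on busy scenes, e.g.:
# footsteps_bed = encode_first_order(footsteps, azimuth_deg=-30, elevation_deg=0)
```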

Working with sound for VR is still largely an area of experimentation and discovery, and we like to collaborate with our clients to ensure that we all push the limits of the medium. We are very open about our techniques and are always happy to explain what we do to our clients because we believe that communication is the best way to ensure all elements of a project work together to deliver a memorable experience.

Our main image is from Red Velvet for production company Station Film.

Review: Nugen Audio’s Halo Upmixer

By Robin Shore

Upmixing is nothing new. The basic goal is to take stereo audio and convert it to higher channel count formats (5.1, 7.1, etc.) that can meet surround sound delivery requirements. The most common use case for this is when needing to use stereo music tracks in a surround sound mix for film or television.
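For context, the crudest upmixers simply redistribute the mid and side components of the stereo signal across the extra channels. The sketch below shows that baseline idea only; it is deliberately naive, and it is not how any commercial upmixer, Halo included, actually works (Nugen doesn’t publish its algorithm):

```python
import numpy as np
from scipy.signal import butter, lfilter

def naive_upmix_to_5_1(left, right, sr=48000, lfe_cut_hz=120.0):
    """Spread a stereo pair across L, R, C, LFE, Ls, Rs.

    Mid (L+R) feeds the center, side (L-R) feeds the surrounds, and a
    low-passed copy of the mid feeds the LFE. Real upmixers do far more:
    frequency-dependent steering, transient handling, downmix protection.
    """
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)

    c = 0.707 * mid                    # -3 dB of the mid into the center
    ls, rs = 0.5 * side, -0.5 * side   # out-of-phase side content to the rears
    b, a = butter(2, lfe_cut_hz / (sr / 2.0), btype="low")
    lfe = lfilter(b, a, mid)
    return left, right, c, lfe, ls, rs
```

The audible problems with this kind of approach (hollow centers, phasey surrounds, poor fold-down) are exactly the things a well-designed upmixer has to avoid.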

Various plug-ins exist for this task, and the results run the gamut from excellent to lackluster. In terms of sonic quality Nugen Audio’s new Halo Upmixer plug-in falls firmly on the excellent side of this range. It creates a nice enveloping surround field, while staying true to the original stereo mix, and it doesn’t seem to rely on any weird reverb or delay effects that you sometimes find in other upmix plug-ins.

NUGEN Audio Halo Upmix - IO panel

What really sets Halo apart is its well-thought-out design, and the high level of control it offers in sculpting the surround environment.

Digging In
At the top of the plug-in window is a dropdown for selecting the channel configuration of the upmix — you can select any standard format from LCR up to 7.1. The centerpiece of Halo is a large circular scope that gives a visual representation of the location and intensity of the upmixed sound. Icons representing each speaker surround the scope, and can be clicked on to solo and mute individual channels.

Several arcs around the perimeter of the scope provide controls for steering the upmix. The Fade arcs adjust how much signal is sent to the rear surround channels, while the Divergence arc at the top of the scope adjusts the spread between the mono center and the front stereo speakers. On the left side of the scope is a grid representing diffusion. Increasing the amount of diffusion spreads the sound more evenly throughout the surround field, creating a less directionally focused upmix. Lower diffusion values give a more detailed sound, with greater definition between the front and rear.

The LFE channel in the upmix can be handled in two ways. The “normal” LFE mode in Halo adds content to the LFE channel based on low frequencies in the original source. This is nice for adding a little extra oomph to the mix, and it also preserves the LFE information when downmixing back to stereo.

For those who are worried about adding too much bass into the upmix, the “Split” LFE mode works more like a traditional crossover, siphoning off low frequencies into the LFE without leaving them in the full-range channels.
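Nugen doesn’t document the exact filters behind these two modes, but the distinction itself is easy to sketch; the crossover frequency and slopes below are assumptions made for the example, not Halo’s actual values:

```python
from scipy.signal import butter, sosfilt

def derive_lfe(channels, sr=48000, xover_hz=80.0, split=False):
    """Illustrate the "normal" vs. "split" LFE idea on a list of full-range channels.

    normal: the LFE gets a low-passed sum of the mains; the mains stay full range.
    split:  the LFE gets the lows and the mains are high-passed, crossover style.
    """
    low = butter(4, xover_hz / (sr / 2.0), btype="low", output="sos")
    lfe = sosfilt(low, sum(channels))
    if split:
        high = butter(4, xover_hz / (sr / 2.0), btype="high", output="sos")
        channels = [sosfilt(high, ch) for ch in channels]
    return channels, lfe
```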

NUGEN Audio Halo Upmix - 5_1 main view - using colour to determine energy source

An Easy And Nuanced UI
The layout and controls in Halo are probably the best I’ve ever seen in this sort of plug-in. Moving the Fade and Divergence arcs around the circle feels very smooth and intuitive, almost like gesturing on a touchscreen, and the position of the arcs along the edge of the scope seems to correspond really well with what I hear through the speakers.

New users should have no problem quickly wrapping their heads around the basic controls. The diffusion is an especially nice touch as it allows you to very quickly alter the character of the upmix without drastically changing the overall balance between front, rear and center. Typically, I’ve found that leaving the diffusion somewhere on the higher end gives a nice even feel, but for times when I want the upmix to have a little more punch, dragging the diffusion down can really add a lot.

Of course, digging a little deeper reveals some more nuanced controls that may take some more time to master. Below the scope are controls for a shelf filter which, combined with higher levels of diffusion, can be used to dull the surround speakers without decreasing their overall level. This ensures that sharp transients in the rear don’t pop out too much and distract the audience’s attention from the screen in front of them.

The Center window focuses only on the front speakers and gives you some fine control over how the mono center channel is derived and played back. An I/O window acts like a mixer, allowing you to adjust the input level of the stereo source, as well as levels for each individual channel in the upmix. The settings window provides a high level of customization for the appearance and behavior of the plug-in. One of my favorite things here is the ability to assign different colors to each channel in the surround scope, which aside from creating a really neat looking display, gives a nice clear visual representation of what’s happening with the upmix.

NUGEN Audio Halo Upmix - IO panel
NUGEN Audio Halo Upmix - 7_1 main view

Playback
One of the most important considerations in an upmix tool is how it will all sound once everything is folded down for playback from televisions and portable devices, and Halo really shines here. Less savvy upmixing can cause phasing and other issues when converted back to stereo, so it’s important to be able to compare as you are working.

A monitoring section at the bottom of the plug-in allows you to switch between listening to the original source audio, the upmixed version and a stereo downmix, so you can be certain that your mix is folding down correctly. If that’s not enough, hitting the “Exact” button will guarantee that the downmixed version matches the original stereo source completely, by disabling certain parameters that might affect the downmix. All of this can be done as you’re listening in realtime, allowing for fast and easy A-B comparisons.
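The fold-down itself is standard practice rather than anything unique to Halo. A check along the lines of what the monitoring section automates might look like the sketch below, using common ITU-style coefficients; the exact gains depend on the delivery spec:

```python
import numpy as np

def fold_down_5_1(l, r, c, lfe, ls, rs, center_gain=0.707, surround_gain=0.707):
    """ITU-style stereo fold-down (the LFE is typically discarded on downmix)."""
    lo = l + center_gain * c + surround_gain * ls
    ro = r + center_gain * c + surround_gain * rs
    return lo, ro

def phase_check(original_lr, downmixed_lr):
    """Correlate the source stereo with the folded-down upmix.

    Values near 1.0 mean the downmix folds back cleanly; low or negative
    values point to the phase trouble a sloppy upmix can introduce.
    """
    o = np.concatenate(original_lr)
    d = np.concatenate(downmixed_lr)
    return float(np.corrcoef(o, d)[0, 1])
```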

Summing Up
Nugen has really put out a fine, well-thought-out product with the Halo upmixer. It’s at once simple to operate and incredibly tweakable, and it gives plenty of attention to important technical considerations. Above all, it sounds great. Mixers who often find themselves having to fit two-channel music into a multichannel mix will be hard pressed to find a nicer solution than this.

Halo Upmixer retails for $499 and is available in AAX, AU, VST2 and VST3 formats.

Robin Shore is a co-owner and audio post pro at Silver Sound Studios in New York City.

Dolby bringing Atmos to homes… are small post houses next?

By Robin Shore

Last month Dolby announced that its groundbreaking Atmos surround sound format will soon be available outside of commercial cinemas. By sometime early next year, consumers will be able to buy special Atmos-enabled A/V receivers and speakers for their home theater systems.

I recently had the chance to demo a prototype of an Atmos home system at an event hosted at Dolby’s New York offices.

A brief overview for those who might not be totally familiar with this technology: Atmos is Dolby’s latest surround sound format. It includes overhead speakers, which allow sounds to be panned above the audience. Rather than using a traditional track-based paradigm, Atmos mixes are object-oriented. An Atmos mix contains up to 128 audio objects, each with its own metadata describing where in the room the sound should be placed.
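Dolby’s actual object format is proprietary, but conceptually an object is just an audio stream plus time-varying positional metadata that a renderer maps onto whatever speakers the playback room has. A purely illustrative sketch:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

import numpy as np

@dataclass
class AudioObject:
    """Conceptual stand-in for an object in an object-based mix (not Dolby's data format)."""
    name: str
    audio: np.ndarray                 # the object's mono audio
    # (time_s, x, y, z) keyframes in a normalized room; a renderer interpolates
    # between them and maps the object onto the speakers that are actually present.
    position_keyframes: List[Tuple[float, float, float, float]] = field(default_factory=list)
```

That renderer step is what lets the same mix scale from a large cinema down to a living-room speaker layout.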

Review: Krotos Limited’s Dehumaniser

By Robin Shore

Dehumaniser’s aim is to turn the usually time-consuming and esoteric process of designing creature vocals into something as simple as twiddling a few knobs and flicking some switches. If you’ve ever found yourself struggling to create the voice of a dinosaur, alien or troll, then this software is for you.

I first used Dehumaniser about a year ago when it was available as a free-by-request beta version from creator Orfeas Boteas’ personal website. Even with this early version I was impressed by how easy it was to transform a fairly dopey sounding recording of myself groaning into a massive King Kong-sized growl.
