
Behind the Title: Jogger Studios’ CD Andy Brown

This veteran creative director can also often be found at the controls of his Flame working on a new spot.

NAME: Andy Brown

COMPANY: Jogger Studios (@joggerstudios)

CAN YOU DESCRIBE YOUR COMPANY?
We are a boutique post house with offices in the US and UK providing visual effects, motion graphics, color grading and finishing. We are partnered with Cut + Run for editorial and get to work with their editors from around the world. I am based in our Jogger Los Angeles office, after having helped found the company in London.

WHAT’S YOUR JOB TITLE?
Creative Director

WHAT DOES THAT ENTAIL?
Overseeing compositing, visual effects and finishing. Looking after staff and clients. Juggling all of these things and anticipating the unexpected.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I’m still working “on the box” every day. Even though my title is creative director, it is the hands-on work that is my first love as far as project collaborations go. Also I get to re-program the phones and crawl under the desks to get the wires looking neater when viewed from the client couch.

WHAT’S YOUR FAVORITE PART OF THE JOB?
The variety, the people and the challenges. Just getting to work on a huge range of creative projects is such a privilege. How many people get to go to work each day looking forward to it?

WHAT’S YOUR LEAST FAVORITE?
The hours, occasionally. It’s more common to have to work without clients nowadays. That definitely makes for more work sometimes, as you might need to create two or three versions of a spot to get approval. If everyone were in the room together, you’d reach a consensus more quickly.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I like the start of the day best, when everyone is coming into the office and we are getting set up for whatever project we are working on. Could be the first coffee of the day that does it.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I want to say classic car dealer, but given my actual career path the most likely alternative would be editor.

WHY DID YOU CHOOSE THIS PROFESSION?
There were lots of reasons, when I look at it. It was either the Blue Peter Book of Television (the longest running TV program for kids, courtesy of the BBC) or my visit to the HTV Wales TV station with my dad when I was about 12. We walked around the studios and they were playing out a film to air, grading it live through a telecine. I was really struck by the influence that the colorist was having on what was seen.

I went on to do critical work on photography, film and television at the Centre for Contemporary Cultural Studies at Birmingham University. Part of that course involved being shown around the Pebble Mill BBC Studios. They were editing a sequence covering a public enquiry into the Handsworth riots in 1985. It just struck me how powerful the editing process was. The story could be told so many different ways, and the editor was playing a really big part in the process.

Those experiences (and an interest in writing) led me to think that television might be a good place to work. I got my first job as a runner at MPC after a friend had advised me how to get a start in the business.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We worked on a couple of spots for Bai recently with Justin Timberlake creating the “brasberry.” We had to make up some graphic animations for the newsroom studio backdrop for the shoot and then animate opening title graphics to look just enough like it was a real news report, but not too much like a real news report.

We do quite a bit of food work, so there’s always some burgers, chicken or sliced veggies that need a bit of love to make them pop.

There’s a nice set extension job starting next week, and we recently finished a job with around 400 final versions, which made for a big old deliverables spreadsheet. There’s so much that we do that no one sees, which is the point if we do it right.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Sometimes the job that you are most proud of isn’t necessarily the most amazing thing to look at. I used to work on newspaper commercials back in the UK, and it was all so “last minute.” A story broke, and all of a sudden you had to have a spot ready to go on air with no edit, no footage and only the bare bones of a script. It could be really challenging, but we had to get it done somehow.

But the best thing is seeing something on TV that you’ve worked on. At Jogger Studios, it is primarily commercials, so you get that excitement over and over again. It’s on air for a few weeks and then it’s gone. I like that. I saw two of our spots in a row recently on TV, which I got a kick out of. Still looking for that elusive hat-trick.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
The Flame, the Land Rover Series III and, sadly, my glasses.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Just friends and family on Instagram, mainly. Although like most Flame operators, I look at the Flame Learning Channel on YouTube pretty regularly. YouTube also thinks I’m really interested in the Best Fails of 2018 for some reason.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
More often than not it is podcasts. West Wing Weekly, The Adam Buxton Podcast, Short Cuts and Song Exploder. Plus some of the shows on BBC 6 Music, which I really miss.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I go to work every day feeling incredibly lucky to be doing the job that I do, and it’s good to remember that. The 15-minute walk to and from work in Santa Monica usually does it.

Living so close to the beach is fantastic. We can get down to the sand, get the super-brella set up and get in the sea with the bodyboards in about 15 minutes. Then there’s the Malibu Cars & Coffee, which is a great place to start your Sunday.

Showrunner/EP Robert Carlock talks Netflix’s Unbreakable Kimmy Schmidt

By Iain Blair

When Unbreakable Kimmy Schmidt first premiered back in 2015, the sitcom seemed quite shocking — and not just because NBC sold it off to Netflix so quickly. At the streaming service, it has been a big hit with audiences and critics alike, racking up dozens of industry awards and nominations, including 18 Primetime Emmy nominations.

Robert Carlock

Created by Tina Fey and Robert Carlock, the sunny comedy with a dark premise stars Ellie Kemper as the title character. She moves to New York City after being rescued from an underground bunker where she and three other women were held captive for 15 years by a doomsday cult leader (Jon Hamm).

Alone in the Big Apple, and armed only with her unbreakable sense of optimism, Kimmy soon forges a new life that includes her colorful landlady Lillian Kaushtupper (Carol Kane), her struggling actor roommate (Tituss Burgess) and her socialite employer (Jane Krakowski). The strong cast also boasts recurring talent and A-list guests, such as Tina Fey, Martin Short, Fred Armisen, Jeff Goldblum, Amy Sedaris and Lisa Kudrow.

Last year Netflix renewed the show for a final season, with the first six episodes premiering in May 2018.

I recently spoke with Carlock about making the show, the Emmys and the planned movie version.

When Kimmy Schmidt first came out, its premise seemed bizarre and shocking — a young woman who was kidnapped, abused and held captive in an underground bunker. But looking back today, it seems ahead of its time.
Unfortunately, I think you’re right. At the time we felt strongly it was a way to get people talking about things and issues they didn’t necessarily want to talk about, such as how women are really treated in this society. And with the #MeToo movement it’s more timely than ever. Tina would say, “It keeps happening, it’s in the news all the time, and at this level,” and it’s really sad that it’s true. The last two seasons we’ve been dealing more and more with issues like this, and now people really are talking about sexual harassment in the workplace. But we have the added burden of also trying to make it funny.

Is Season 4 definitely the final one?
I think so, and the second half will stream sometime early next year. In the meantime, we’re talking about the movie deal that Netflix wants and what that will entail. We kind of thought about it as, “Let’s give our characters endings since there’s still so much to talk about,” but you also have to bear in mind the topicality of it all in a year or so. So it gave us the luxury of being able to finish the show in a way that felt right, and Season 5 — the second half of Season 4 — will satisfy fans, I think. We’re also very happy that Netflix is so enthusiastic about doing it this way.

Do you like being a showrunner?
I do, and I like it much better than not being in charge. The beauty of TV is that, unlike in movies, and for a variety of reasons, writers get to be in charge. I love the fact that when you’re a showrunner, you get to learn so much about everything, including all the post production. You work with all these really skilled artisans and get to oversee the entire process from start to finish, including picking out what shade of blue the dress should be (laughs). It’s much better than watching other people make all the key decisions.

What are the big challenges of showrunning?
The big one is trying to think outside of the writers’ room. You have all that ambition on the page, but then you have to deal with the reality of actually shooting it and making it work. It’s a lot easier to type it than execute it. Then you have to be really objective about what’s working and what isn’t, because you fall in love with what you write. So you have to realize, “Maybe this needs a little insert, or more jokes here to get the point across,” and you have to put that producer hat on — and that can be really tricky. It’s a challenge for sure, but we’ve also been fortunate in having a great crew that’s been with us a while, so there’s that shorthand, and things move quickly on the set and we get a lot done.

Where do you shoot and post?
We do the shooting at Broadway Stages in Brooklyn and have all the editing set up there as well. Then we have Tina’s production offices at Columbus Circle, and we do all the sound at Sync Sound in midtown Manhattan.

Do you like the post process?
I love post and the whole process of seeing a script come alive as you edit.  You find ways of telling the story that you maybe didn’t expect.

You have a big cast and a lot of stuff going on in each episode. What are the big editing challenges?
One of the big creative challenges of a single-camera show — which ultimately also gives you so many more tools in writing, shooting and editing — is that you don’t get to see rehearsals. So one of the reasons our episodes are going into post and often coming out of post so stuffed with story and jokes is that we don’t get so many opportunities to see exactly what’s making the scene tick. We’re hitting the story, hitting the jokes and hitting the characters too many times, and  a lot of the challenge is scraping all those away. Our episodes come in around the mid-30s often, and we think they live and play best around 26 or 27 minutes. That’s where I think the sweet spot is. So you can feel, “Oh, I love that joke,” but the hard reality is that the scene plays so much better without it.

Talk about the importance of sound and music.
I think it’s so important in comedy, and it can totally change the feel of a scene. Jeff Richmond — Tina’s husband and one of our producers — does all the music. He’s also fantastic in the edit. So if I’m not available or Tina isn’t, then he or Sam Means, another producer, can take our edit notes and interpret them. We’ll type up 15 pages on a Director’s Cut, and then we hone the show until it’s a lock for the network, and we go through it all frame by frame.

How important are the Emmys to you and a show like this?
Increasingly now, with all the noise and static out there, and so many other good shows, it’s really important. I think it helps cut through the clutter. When you’re working hard on a show like this, with your head down all the time, you don’t really know where you stand sometimes. So to be nominated by your peers means a lot. (Laughs) I wish it didn’t, but we’re small-minded people who only really care about other people’s opinions.

What’s the latest on talk about a movie? Will it be a theatrical release or just Netflix, or both?
That’s a great question. Who knows? We’re in the middle of trying to figure out the budget. I imagined it would be just streaming, but maybe it will be theatrical as well. One thing’s for sure. We won’t be one of those TV shows that gets a whole new cast for the movie version. Lightning struck with our first cast, and we’re not looking to replace anyone.

There’s been a lot of talk about lack of opportunity for women in movies. Are things better in TV?
I can only speak for us, but we like shows where there’s a lot of diversity and different voices, and sometimes we step in a bear trap we didn’t even know was there because we’re trying to write for so many different voices. For us, it just makes sense to embrace diversity, but it’s such a complicated and thorny issue. I’m just glad we’re talking about it more now. It’s what interests us. When Tina and I first sat down to write this, we didn’t want to do something salacious and exploitive. We were thinking about a really startling way to get people talking about gender and class. It’s been a fun challenge.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Sound Lounge, Mad Hat team on Sound Lounge Everywhere Atlanta

Sound Lounge has partnered with Atlanta’s Mad Hat Creative to bring its Sound Lounge Everywhere remote collaboration service to the Southeast. Sound Lounge Everywhere will allow advertising, broadcast and corporate clients in Atlanta and neighboring states to work with Sound Lounge sound editors, designers and mixers in New York in realtime and share high-quality audio and video.

This will allow clients access to top sound talent while saving time, travel and production costs. Sound Lounge has already launched Sound Lounge Everywhere at sites in Boston and Boulder, Colorado.

At Mad Hat’s Atlanta offices, a suite dedicated to sound work is equipped with Bowers & Wilkins speakers and other leading-edge gear to ensure accurate playback of music and sound. Proprietary Sound Lounge Everywhere hardware and software facilitate realtime streaming of high-quality video and uncompressed, multichannel audio between the Mad Hat and Sound Lounge locations with virtually no latency. Web cameras and talkback modules support two-way communication.

For Mad Hat Creative, Sound Lounge Everywhere helps the company round out an offering that includes video production, editorial, visual effects, motion graphics, color correction and post services.

To help manage the new service, Sound Lounge has promoted Becca Falborn to senior producer. Falborn, who joined the studio as a producer last year, will coordinate sound sessions between the two sites, assist Sound Lounge head of production Liana Rosenberg in overseeing local sound production and serve as the studio’s social coordinator.

A graduate of Manhattan College, Falborn has a background in business affairs, client services and marketing, including posts with the post house Nice Shoes and the marketing agency Hogarth Worldwide.

The Darkest Minds director Jennifer Yuh Nelson

By Iain Blair

Jennifer Yuh Nelson has been an acclaimed — and highly bankable — director in animation for years, thanks to her work on the billion-dollar-grossing Kung Fu Panda franchise.

Now she’s taken on her first live-action film with Fox’s The Darkest Minds. Adapted from the best-selling book by Alexandra Bracken, the first in a YA trilogy, the film stars Amandla Stenberg in the lead as Ruby, along with Harris Dickinson, Miya Cech and Skylan Brooks.

The Darkest Minds also features adults, including Mandy Moore and Bradley Whitford, and revolves around a group of teens who mysteriously develop powerful new abilities and who are then declared a threat by the government and detained. It’s essentially a genre mash-up — a road movie with some sci-fi elements and lots of kinetic action. It was written by Chad Hodge, best known for his work as the creator and executive producer of TNT’s Good Behavior and Fox’s Wayward Pines.

Nelson’s creative team included DP Kramer Morgenthau (Terminator Genisys, Thor: The Dark World), editors Maryann Brandon (Star Wars: The Force Awakens) and Dean Zimmerman (Stranger Things), and visual effects supervisor Björn Mayer (Oblivion). Fox-based 21 Laps’ (Stranger Things, Arrival) Shawn Levy and Dan Levine produced.

I recently spoke with Nelson about making the film.

What sort of film did you set out to make?
To start off with, I wanted a great emotional core, and as this was based on a book, it already had that built in… even in early versions of the script. It had great characters with strong relationships, and I wanted to do some action stuff.

Any big surprises making the move to a major live-action film, or were you pretty prepared in terms of prep thanks to your background in animation?
I was pretty prepared, and the prep’s essentially the same as in animation. But, of course, production is utterly different, along with the experience of being on location. I had a really great crew and a fantastic DP, which helped me a lot. The big difference is suddenly you have the luxury of coverage, which you don’t get in animation. There you need to know exactly what you want, as it’s so expensive to create. Being outside all day on location, and dealing with the elements and crew and cast all at once — that was a big learning curve, but I really loved it. I had a fantastic time!

What were the main technical challenges in pulling it all together?
There were a lot of moving parts, and the main one was probably all the VFX involved. It’s a very reality-based book. It’s not set in outer space, and it’s supposed to look very grounded and seamless with reality. So you have these characters with superpowers that are meant to be very believable, but then we had fire, flamethrowers, 300 extras running around, wind machines and so on. Then there was all the fire stuff we had to add later in post.

How early on did you start integrating post and all the VFX?
Right at the start, and my VFX supervisor, Björn Mayer, was so smart about figuring out ways to get really cool looks. We tried out a ton of visual approaches. Some were done in camera, most were done in post or augmented in post – especially all the fire effects. It was intense reality, not complete reality, that we aimed for, so we had some flexibility.

I assume you did a lot of previs?
Quite a lot, and that was also a big help. We did full-3D previs, like we do in animation, so I was pretty used to that. We also storyboarded a big chunk of the movie, including scenes that normally you wouldn’t have to storyboard because I wanted to make completely sure we were covered on everything.

Didn’t you start off as a storyboard artist?
I did, and my husband’s one too, so I roped him in and we did the whole thing ourselves. It’s just an invaluable tool for showing people what’s going on in a director’s head, and when they’ve seen the pictures they can then offer creative ideas as everyone knows what you’re trying to achieve.

How tough was the shoot?
We shot in Atlanta, and it went smoothly considering there are always unexpected things. We had freak thunderstorms and a lot of rain that made some sets sink and so on, but it’s how you respond to all that that counts. Everyone was calm and organized.

Where did you post?
Here in LA. We rented some offices near my home and just set up editorial and all our VFX there. It was very convenient.

In a sense, animation is all post, so you must love the post process?
You’re right – animation is like a long-running post for the whole production. I love post because it’s so transformative, and it’s beautiful to see all the VFX get layered in and see the movie suddenly come to life.

Talk about editing this with two editors. How did that work?
Maryann was on the set with us, working as we shot, and then Dean came on later in post, so we had a great team.

What were the big editing challenges?
I think the big one was making all the relationships believable over the course of the film, and so much of it is very subtle. It can come down to just a look or a moment, and we had to carefully plot the gradations and work hard to make it all feel real.

All the VFX play a big role. How many were there?
Well over 2,000 I think, and MPC and Crafty Apes did most of them. I loved working on them with my VFX supervisor. It’s very similar to working with them in animation, which is essentially one big VFX show. So I was very familiar with the process, although integrating them into live action instead of a virtual world is quite different. I loved seeing how it all got integrated so seamlessly.

Can you talk about the importance of sound and music?
It was so important to me, and we had quite a few songs in the film because it’s partly a road trip. There’s the big dance scene where we found a great song and then were able to shoot to the track. We mixed all the sound on the Fox lot.

Where did you do the DI and how important is it to you?
At Technicolor, and I’m pretty involved, although I don’t micro-manage. I’d give notes, and we’d make some stuff pop a bit more and play around with the palette, but basically it went pretty quickly as what we shot already looked really sweet.

Did the film turn out the way you hoped?
It did, and I can’t wait to do another live-action film. I adore animation, but live action’s like this new shiny toy.

You’re that Hollywood rarity — a successful female director. What advice would you give to young women who want to direct?
Do what makes you happy. Don’t do it just because someone says “you can” or “you can’t.” You’ve got to have that personal desire to do this job, and it’s not easy and I don’t expect change to come very quickly to Hollywood. But it is coming.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Dell EMC’s ‘Ready Solutions for AI’ now available

Dell EMC has made available its new Ready Solutions for AI, with specialized designs for Machine Learning with Hadoop and Deep Learning with Nvidia.

Dell EMC Ready Solutions for AI eliminate the need for organizations to individually source and piece together their own solutions. They offer a Dell EMC-designed and validated set of best-of-breed technologies for software — including AI frameworks and libraries — with compute, networking and storage. Dell EMC’s portfolio of services includes consulting, deployment, support and education.

Dell EMC’s Data Science Provisioning Portal offers an intuitive GUI that provides self-service access to hardware resources and a comprehensive set of AI libraries and frameworks, such as Caffe and TensorFlow. This reduces the steps it takes to configure a data scientist’s workspace to five clicks. Ready Solutions for AI’s distributed, scalable architecture offers the capacity and throughput of Dell EMC Isilon’s All-Flash scale-out design, which can improve model accuracy with fast access to larger data sets.
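The portal itself is a GUI, but the point of it is to hand data scientists a ready framework environment. Purely as an illustration of the kind of TensorFlow job such a provisioned workspace would run (a generic sketch with synthetic data, not anything from Dell EMC’s stack):

```python
# Generic TensorFlow/Keras training job -- illustrative only, with synthetic
# data; not part of Dell EMC's Ready Solutions.
import numpy as np
import tensorflow as tf

# Fake dataset: 10,000 samples, 32 features, binary labels.
x = np.random.rand(10000, 32).astype("float32")
y = (x.sum(axis=1) > 16.0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=256, validation_split=0.1)
```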

Dell EMC Ready Solutions for AI: Deep Learning with Nvidia solutions are built around Dell EMC PowerEdge servers with Nvidia Tesla V100 Tensor Core GPUs. Key features include Dell EMC PowerEdge R740xd and C4140 servers with four Nvidia Tesla V100 SXM2 Tensor Core GPUs; Dell EMC Isilon F800 All-Flash Scale-out NAS storage; and Bright Cluster Manager for Data Science in combination with the Dell EMC Data Science Provisioning Portal.

Dell EMC Ready Solutions for AI: Machine Learning with Hadoop includes an optimized solution stack, along with data science and framework optimization to get up and running quickly, and it allows expansion of existing Hadoop environments for machine learning.

Key features include Dell EMC PowerEdge R640 and R740xd servers; Cloudera Data Science Workbench for self-service data science for the enterprise; the Apache Spark open source unified data analytics engine; and the Dell EMC Data Science Provisioning Engine, which provides preconfigured containers that give data scientists access to the Intel BigDL distributed deep learning library on the Spark framework.
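For context on what machine learning on an existing Hadoop environment typically looks like, here is a minimal PySpark sketch that reads training data from HDFS and fits a model with Spark MLlib. The path and column names are hypothetical, and this uses plain Spark rather than the bundled BigDL library:

```python
# Minimal PySpark ML job -- hypothetical HDFS path and column names; uses
# Spark MLlib rather than the BigDL library bundled with the Ready Solution.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("hadoop-ml-sketch").getOrCreate()

df = spark.read.parquet("hdfs:///data/training_set")  # hypothetical dataset
assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
train = assembler.transform(df).select("features", "label")

model = LogisticRegression(maxIter=20).fit(train)
print("Training AUC:", model.summary.areaUnderROC)

spark.stop()
```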

New Dell EMC Consulting services are available to help customers implement and operationalize the Ready Solution technologies and AI libraries, and scale their data engineering and data science capabilities. Dell EMC Education Services offers courses and certifications on data science and advanced analytics and workshops on machine learning in collaboration with Nvidia.

Ziva VFX 1.4 adds real-world physics to character creation

Ziva Dynamics has launched Ziva VFX 1.4, a major update that gives the company’s character-creation technology five new tools for production artists. With this update, creators can apply real-world physics to even more of the character creation process — muscle growth, tissue tension and the effects of natural elements, such as heavy winds and water pressure — while removing difficult steps from the rigging process.

Ziva VFX 1.4 combines the effects of real-world physics with the rapid creation of soft-tissue materials like muscles, fat and skin. By mirroring the fundamental properties of nature, users can produce CG characters that move, flex and jiggle just as they would in real life.

With External Forces, users are able to accurately simulate how natural elements like wind and water interact with their characters. Making a character’s tissue flap or wrinkle in the wind, ripple and wave underwater, or even stretch toward or repel away from a magnetic field can all be done quickly, in a physically accurate way.
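Ziva’s solver is proprietary, but the underlying idea of tissue driven by physical forces can be sketched with a toy damped spring: an elastic point restored toward rest while an oscillating external “wind” force pushes on it. This is a generic physics illustration only, not Ziva’s simulation code:

```python
# Toy soft-tissue point on a damped spring, pushed by an oscillating external
# "wind" force. Generic physics sketch only -- not Ziva VFX's solver or API.
import math

mass, stiffness, damping = 1.0, 40.0, 2.0   # arbitrary material values
dt, steps = 0.01, 300
x, v = 0.0, 0.0                             # displacement and velocity

for i in range(steps):
    t = i * dt
    wind = 3.0 * math.sin(2.0 * math.pi * 1.5 * t)   # external force
    spring = -stiffness * x                           # elastic restoring force
    drag = -damping * v                               # energy loss
    a = (spring + drag + wind) / mass
    v += a * dt                                       # semi-implicit Euler step
    x += v * dt
    if i % 50 == 0:
        print(f"t={t:.2f}s  displacement={x:+.3f}")
```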

New Pressure and Surface Tension properties can be used to “fit” fat tissues around muscles, augmenting the standard Ziva VFX anatomy tools. These settings allow users to remove fascia from a Ziva simulation while still achieving the detailed wrinkling and sliding effects that make humans and creatures look real.

Muscle growth can rapidly increase the overall muscle definition of a character or body part without requiring the user to remodel the geometry. A new Rest Scale for Tissue feature lets users grow or shrink a tissue object equally in all directions. Together, these tools improve collaboration between modelers and riggers while increasing creative control for independent artists.

Ziva VFX 1.4 also now features Ziva Scene Panel, which allows artists working on complex builds to visualize their work more simply. Ziva Scene Panel’s tree-like structure shows all connections and relationships between an asset’s objects, functions and layers, making it easier to find specific items and nodes within an Autodesk Maya scene file.

Ziva VFX 1.4 is available now as a Maya plug-in for Windows and Linux users.

Review: Blackmagic’s Resolve 15

By David Cox

DaVinci Resolve 15 from Blackmagic Design has now been released. The big news is that Blackmagic’s compositing software Fusion has been incorporated into Resolve, joining the editing and audio mixing capabilities added to color grading in recent years. However, to focus just on this would hide a wide array of updates to Resolve, large and small, across the entire platform. I’ve picked out some of my favorite updates in each area.

For Colorists
Each time Blackmagic adds a new discipline to Resolve, colorists fear that the color features take a back seat. After all, Resolve was a color grading system long before anything else. But I’m happy to say there’s nothing to fear in Version 15, as there are several very nice color tweaks and new features to keep everyone happy.

I particularly like the new “stills store” functionality, which allows the colorist to find and apply a grade from any shot in any timeline in any project. Rather than just having access to manually saved grades in the gallery area, thumbnails of any graded shot can be viewed and copied, no matter which timeline or project they are in, even those not explicitly saved as stills. This is great for multi-version work, which is every project these days.

Grades saved as stills (and LUTs) can also be previewed on the current shot using the “Live Preview” feature. Hovering the mouse cursor over a still and scrubbing left and right will show the current shot with the selected grade temporarily applied. It makes quick work of finding the most appropriate look from an existing library.
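The stills store and Live Preview are driven from the gallery rather than from code, but it is worth noting that Resolve also ships a Python scripting API that can walk every timeline in the current project, which helps with the same kind of multi-version housekeeping. A minimal sketch, assuming Resolve is running and its bundled DaVinciResolveScript module is on the Python path:

```python
# Minimal sketch using Resolve's scripting API to list the timelines in the
# current project. Assumes Resolve is running and its bundled
# DaVinciResolveScript module is importable.
import DaVinciResolveScript as dvr

resolve = dvr.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()

for i in range(1, project.GetTimelineCount() + 1):
    timeline = project.GetTimelineByIndex(i)   # timeline indices are 1-based
    print(f"{i}: {timeline.GetName()}")
```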

Another new feature I like is called “Shared Nodes.” A color grading node can be set as “shared,” which creates a common grading node that can be inserted into multiple shots. Changing one instance changes all instances of that shared node. This approach is more flexible and visible than using Groups, as the node can be seen in each node layout and can sit at any point in the process flow.

As well as the addition of multiple playheads, a popular feature in other grading systems, there is a plethora of minor improvements. For example, you can now drag the qualifier graphics to adjust settings, as opposed to just the numeric values below them. There are new features to finesse the mattes generated from the keying functions, as well as improvements to the denoise and face refinement features. Nodes can be selected with a single click instead of a double click. In fact, there are 34 color improvements or new features listed in the release notes.

For Editors
As with color, there are a wide range of minor tweaks all aimed at improving feel and ergonomics, particularly around dynamic trim modes, numeric timecode entry and the like. I really like one of the major new features, which is the ability to open multiple timelines on the screen at the same time. This is perfect for grabbing shots, sequences and settings from other timelines.

As someone who works a lot with VFX projects, I also like the new “Replace Edit” function, which is aimed at those of us who start our timelines with early drafts of VFX and then update them as improved versions come along. The new function allows updated shots to be dragged over their predecessors, replacing them but inheriting all modifications made, such as the color grade.

An additional feature to the existing markers and notes functions is called “Drawn Annotations.” An editor can point out issues in a shot with lines and arrows, then detail them with notes and highlight them with timeline markers. This is great as a “note to self” to fix later, or in collaborative workflows where notes can be left for other editors, colorists or compositors.

Previous versions of Resolve had very basic text titling. Thanks to the incorporation of Fusion, the edit page of Resolve now has a feature called Text+, a significant upgrade on the incumbent offering. It allows more detailed text control, animation, gradient fills, dotted outlines, circular typing and so on. Within Fusion there is a modifier called “Follower,” which enables letter-by-letter animation, allowing Text+ to compete with After Effects for type animation. On my beta test version of Resolve 15, this wasn’t available in the Edit page, which could be down to the beta status or an intent to keep the Text+ controls in the Edit page more streamlined.
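Because Text+ is a Fusion tool, it can also be created and driven from Fusion’s script console rather than the Inspector. A minimal sketch, with the tool ID and input names based on my recollection of Fusion’s scripting interface (treat the exact names as assumptions):

```python
# Minimal sketch from Fusion's script console, where `comp` (the current
# composition) is already in scope. Tool ID and input names are from memory
# of Fusion's scripting interface -- treat them as assumptions.
text = comp.AddTool("TextPlus", -32768, -32768)   # -32768 = auto-place in the flow
text.SetInput("StyledText", "Hello from a script", 0)
text.SetInput("Size", 0.12, 0)                    # character size relative to frame
```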

For Audio
I’m not an audio guy, so my usefulness in reviewing these parts is distinctly limited. There are 25 listed improvements or new features, according to the release notes. One is the incorporation of Fairlight’s Automated Dialog Replacement process, which creates a workflow for replacing unsalvageable production dialog.

There are also 13 new built-in audio effects plugins, such as Chorus, Echo and Flanger, as well as de-esser and de-hummer clean-up tools.

Another useful addition both for audio mixers and editors is the ability to import entire audio effects libraries, which can then be searched and star-rated from within the Edit and Fairlight pages.

Now With Added Fusion
So to the headline act — the incorporation of Fusion into Resolve. Fusion is a highly regarded node-based 2D and 3D compositing software package. I reviewed Version 9 in postPerspective last year [https://postperspective.com/review-blackmagics-fusion-9/]. Bringing it into Resolve links it directly to editing, color grading and audio mixing to create arguably the most agile post production suite available.

Combining Resolve and Fusion will create some interesting challenges for Blackmagic, who say that the integration of the two will be ongoing for some time. Their challenge isn’t just linking two software packages, each with its own long heritage, but making a coherent system that makes sense to all users.

The issue is this: editors and colorists need to work at a fast pace, and want the minimum number of controls clearly presented. A compositor needs infinite flexibility and wants a button and value for every function, with a graph and ideally the ability to drive it with a mathematical expression or script. Creating an interface that suits both is near impossible. Dumbing down a compositing environment limits its ability, whereas complicating an editing or color environment destroys its flow.

Fusion occupies its own “page” within Resolve, alongside pages for “Color,” “Fairlight” (audio) and “Edit.” This is a good solution insofar as each interface can be tuned for its dedicated purpose. The integration itself is seamless: a user can move from Edit to Fusion to Color and back again without delays, rendering or importing. For a user already familiar with Resolve and Fusion, it works very well indeed. If the user is not accustomed to high-end node-based compositing, then the Fusion page can be daunting.

I think the challenge going forward will be how to make the creative possibilities of Fusion more accessible to colorists and editors without compromising the flexibility a compositor needs. Certainly, there are areas in Fusion that can be made more obvious. As with many mature software packages, Fusion has the occasional hidden right click or alt-click function that is hard for new users to discover. But beyond that, the answer is probably to let a subset of Fusion’s ability creep into the Edit and Color pages, where more common tasks can be accommodated with simplified control sets and interfaces. This is actually already the case with Text+, a Fusion “effect” that is directly accessible within the Edit section.

Another possible area to help is Fusion Macros. This is an inbuilt feature within Fusion that allows a designer to create an effect and then condense it down to a single node, including just the specific controls needed for that combined effect. Currently, Macros that integrate the Text+ effect can be loaded directly in the Edit page’s “Title Templates” section.

I would encourage Blackmagic to open this up further to allow any sort of Macro to be added for video transitions, graphics generators and the like. This could encourage a vibrant exchange of user-created effects, which would arm editors and colorists with a vast array of immediate, community-sourced creative options.

Overall, the incorporation of Fusion is a definite success in my view, whether used to empower multi-skilled post creatives or to provide a common environment for specialized creatives to collaborate. The volume of updates, and the speed at which the Resolve developers address issues exposed during the public beta, remain nothing short of impressive.


David Cox is a VFX compositor and colorist with 20-plus years of experience. He started his career with MPC and The Mill before forming his own London-based post facility. Cox recently created interactive projects with full body motion sensors and 4D/AR experiences.

Allegorithmic’s Substance Painter adds subsurface scattering

Allegorithmic has released the latest additions to its Substance Painter tool, targeted at VFX and game studios, as well as pros looking for ways to create realistic lighting effects. Substance Painter enhancements include subsurface scattering (SSS), new projection and fill tools, improvements to the UX and support for a range of new meshes.

Using Substance Painter’s newly updated shaders, artists will be able to add subsurface scattering as a default option. Artists can add a Scattering map to a texture set and activate the new SSS post-effect. Skin, organic surfaces, wax, jade and any other translucent materials that require extra care will now look more realistic, with redistributed light shining through from under the surface.
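The core idea behind subsurface scattering is simple to illustrate: light entering a translucent material is attenuated roughly exponentially with the distance it travels inside, so thin regions glow and thick regions stay dark. A generic back-of-the-envelope sketch (not Allegorithmic’s shader):

```python
# Generic illustration of subsurface scattering falloff (Beer-Lambert style
# attenuation) -- not Allegorithmic's shader code.
import math

mean_free_path = 2.0   # mm; larger values read as more translucent (wax, jade)
for thickness_mm in (0.5, 1.0, 2.0, 4.0, 8.0):
    transmittance = math.exp(-thickness_mm / mean_free_path)
    print(f"{thickness_mm:4.1f} mm thick -> {transmittance:6.1%} of light makes it through")
```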

The release also includes updates to projection and fill tools, beginning with the user-requested addition of non-square projection. Images can be loaded in both the projection and stencil tool without altering the ratio or resolution. Those projection and stencil tools can also disable tiling in one or both axes. Fill layers can be manipulated directly in the viewport using new manipulator controls. Standard UV projections feature a 2D manipulator in the UV viewport. Triplanar Projection received a full 3D manipulator in the 3D viewport, and both can be translated, scaled and rotated directly in-scene.
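Triplanar projection itself is a standard technique: the texture is projected along each world axis, and the three projections are blended according to how closely the surface normal faces each axis. A generic sketch of those blend weights (not Substance Painter’s implementation):

```python
# Generic triplanar blend weights -- not Substance Painter's implementation.
# The texture is projected along X, Y and Z; each projection is weighted by
# how much the surface normal faces that axis.
import numpy as np

def triplanar_weights(normal, sharpness=4.0):
    n = np.abs(np.asarray(normal, dtype=float))
    w = n ** sharpness          # higher sharpness narrows the blend zones
    return w / w.sum()          # normalize so the three weights sum to 1

print(triplanar_weights([0.0, 1.0, 0.0]))    # top-facing surface: all Y projection
print(triplanar_weights([0.7, 0.7, 0.14]))   # edge: mostly a blend of X and Y
```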

Along with the improvements to the artist tools, Substance Painter includes several updates designed to improve the overall experience for users of all skill levels. Consistency between tools has been improved, and additions like exposed presets in Substance Designer and a revamped, universal UI guide make it easier for users to jump between tools.

Additional updates include:
• Alembic support — The Alembic file format is now supported by Substance Painter, starting with mesh and camera data. Full animation support will be added in a future update.
• Camera import and selection — Multiple cameras can be imported with a mesh, allowing users to switch between angles in the viewport; previews of the framed camera angle now appear as an overlay in the 3D viewport.
• Full glTF support — Substance Painter now automatically imports and applies textures when loading glTF meshes, removing the need to import or adapt mesh downloads from Sketchfab.
• ID map drag-and-drop — Both materials and smart materials can be taken from the shelf and dropped directly onto ID colors, automatically creating an ID mask.
• Improved Substance format support — Improved tweaking of Substance-made materials and effects thanks to visible-if and embedded presets.

Telestream intros ScreenFlow V.8 for editing, screen recording

Telestream’s ScreenFlow video editing and screen recording software for the Mac is now in Version 8. ScreenFlow V.8 adds features such as styles and templates to help streamline editing workflows, along with a new integrated stock media library option.

The new templates allow users to pre-create ScreenFlow projects with placeholder clips in the timeline for important recorded media, as well as external media. Once a template is saved, future ScreenFlow recordings are opened directly in the template project, reducing the amount of editing required to complete jobs. For users creating software tutorials, serialized videos or even videos with similar formats, the new Templates in V.8 allow for quicker video production and less tedious editing, resulting in more time spent on the creative aspects of video production.

The new styles feature offers customized media configurations that streamline individual asset editing, saving time in the editing process. With styles, ScreenFlow users can easily copy/paste video parameters (like scale, positioning, filters, axis rotation and more) and apply them to individual pieces of media. For example, should users want to create a style for their webcam recordings, they can now apply their “webcam-style,” positioning it within their project exactly where they want it, without additional editing.

The new stock media library offers users unlimited access to more than 500,000 pieces of media. It costs $60 a year.

ScreenFlow 8.0 is available for $129. Customers who have purchased previous versions of ScreenFlow on telestream.net can upgrade for $39 (pricing will vary according to the version previously purchased).

Composer and sound mixer Rob Ballingall joins Sonic Union

NYC-based audio studio Sonic Union has added composer/experiential sound designer/mixer Rob Ballingall to its team. He will be working out of both Sonic Union’s Bryant Park and Union Square locations. Ballingall brings with him experience in music and audio post, with an emphasis on the creation of audio for emerging technology projects, including experiential and VR.

Ballingall recently created audio for an experiential in-theatre commercial for Mercedes-Benz Canada, using Dolby Atmos, D-Box and 4DX technologies. In addition, for National Geographic’s One Strange Rock VR experience, directed by Darren Aronofsky, Ballingall created audio for custom VR headsets designed in the style of astronaut helmets, which contained a pinhole projector to display visuals on the inside of the helmet’s visor.

Formerly at Nylon Studios, Ballingall also composed music for brand campaigns for clients such as Ford, Kellogg’s and Walmart, and provided sound design/engineering on projects for the Ad Council and on Resistance Radio for Amazon Studios’ The Man in the High Castle, which collectively won multiple Cannes Lions, Clio and One Show awards, as well as garnering two Emmy nominations.

Born in London, Ballingall immigrated to the US eight years ago to seek a job as a mixer, assisting numerous Grammy Award-winning engineers at NYC’s Magic Shop recording studio. Having studied music composition and engineering from high school to college in England, he soon found his niche offering compositional and arranging counterpoints to sound design, mix and audio post for the commercial world. Following stints at other studios, including Nylon Studios in NYC, he transitioned to Sonic Union to service agencies, brands and production companies.