
Blending Ursa Mini and Red footage for Aston Martin spec spot

By Daniel Restuccio

When producer/director Jacob Steagall set out to make a spec commercial for Aston Martin, he chose to lens it on the Blackmagic Ursa Mini 4.6K and the Red Scarlet. He says the camera combo worked so seamlessly he dares anyone to tell which shots are Blackmagic and which are Red.

(L-R) Blackmagic's Moritz Fortmann and Shawn Carlson with Jacob Steagall and Scott Stevens.

“I had the idea of filming a spec commercial to generate new business,” says Steagall. He convinced the high-end car maker to lend him an Aston Martin 2016 V12 Vanquish for a weekend. “The intent was to make a nice product that could be on their website and also be a good-looking piece on the demo reel for my production company.”

Steagall immediately pulled together his production team, which consisted of co-director Jonathan Swecker and cinematographers Scott Stevens and Adam Pacheco. "The team and I collaborated on the vision for the spot, which was to be quick, clean and to the point, but we also wanted to accentuate the luxury and sexiness of the car."

"We had access to the new Blackmagic Ursa Mini 4.6K and an older Red Scarlet with the MX chip," says Stevens. "I was really interested in seeing how both cameras performed."

He set up the Ursa Mini to shoot ProRes HQ at Ultra HD (3840×2160) and the Scarlet at 8:1 compression at 4K (4096×2160). He used both Canon still camera primes and a 24-105mm zoom, switching them from camera to camera depending on the shot. “For some wide shots we set them up side by side,” explains Stevens. “We also would have one camera shooting the back of the car and the other camera shooting a close-up on the side.”

In addition to his shooting duties, Stevens also edited the spot, using Adobe Premiere, and exported an XML into Blackmagic's DaVinci Resolve Studio 12. Stevens notes that, in addition to loving cinematography, he's also "really into" color correction. "Jacob (Steagall) and I liked the way the Red footage looked straight out of the camera in the RedGamma4 color space. I matched the Blackmagic footage to the Red footage to get a basic look."

Blackmagic colorist Moritz Fortmann took Stevens' base color correction and finessed the grade even more. "The first step was to talk to Jacob and Scott and find out what they were envisioning, what feel and look they were going for. They had already established a look, so we saved a few stills as reference images to work off. The spot was shot on two different types of cameras, and in different formats. Step two was to analyze the characteristics of each camera and establish a color correction to match the two. Step three was to tweak and refine the look. We did what I would describe as a simple color grade, only relying on primaries, without using any Power Windows or keys."
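Fortmann doesn't detail the math behind that primaries-only pass, but as a rough illustration, a primary correction can be thought of as an ASC CDL-style slope/offset/power adjustment applied per channel to the whole frame. The Python sketch below is hypothetical — the function, the stand-in frame and the correction values are ours for illustration, not anything from the actual grade — but it shows the kind of global adjustment a colorist leans on before reaching for Power Windows or keys.

```python
# A minimal sketch of a primaries-only correction expressed as ASC CDL-style
# slope/offset/power per channel. Values below are illustrative only.
import numpy as np

def apply_primaries(img, slope, offset, power):
    """img: float RGB array in the 0-1 range; slope/offset/power: per-channel triples."""
    out = img * np.asarray(slope) + np.asarray(offset)   # gain (slope) plus lift (offset)
    out = np.clip(out, 0.0, 1.0)                          # keep values in legal range
    return out ** np.asarray(power)                       # gamma-style tone shift

# Hypothetical numbers: nudge a stand-in Ursa Mini plate slightly warmer and
# darker so it sits closer to a Red reference still.
ursa_plate = np.random.rand(1080, 1920, 3).astype(np.float32)  # stand-in frame
matched = apply_primaries(ursa_plate,
                          slope=(1.04, 1.0, 0.97),
                          offset=(0.01, 0.0, -0.01),
                          power=(1.02, 1.0, 1.0))
```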

If you’re planning to shoot mixed footage, Fortmann suggests you use cameras with similar characteristics, matching resolution, dynamic range and format. “Shooting RAW and/or Log provides for the highest dynamic range,” he says. “The more ‘room’ a colorist has to make adjustments, the easier it will be to match mixed footage. When color correcting, the key is to make mixed footage look consistent. One camera may perform well in low light while another one does not. You’ll need to find that sweet spot that works for all of your footage, not just one camera.”

Daniel Restuccio is a writer and chair of the multimedia department at California Lutheran University.

Sony at NAB with new 4K OLED monitor, 4K 8X ultra HFR camera

At last year's NAB, Sony introduced its first 4K OLED reference monitor for critical viewing — the BVM-X300. This year, Sony added a new monitor, the PVM-X550, a 55-inch OLED panel with 12-bit signal processing, perfect for client viewing. The Trimaster EL PVM-X550 supports HDR through various Electro-Optical Transfer Functions (EOTF), such as S-Log3, SMPTE ST.2084 and Hybrid Log-Gamma, covering applications for both cinematography and broadcast. The PVM-X550 is a quad-view OLED monitor, which allows customized individual display settings across four distinct views in HD. It is equipped with the same signal-processing engine as the BVM-X300, providing a 12-bit output signal for picture accuracy and consistency. It also supports industry-standard color spaces, including the wider ITU-R BT.2020 for Ultra High Definition.

HFR Camera
At NAB 2016, Sony displayed their newest camera system: the HDC-4800 combines 4K resolution with enhanced high frame rate capabilities, capturing up to 8X at 4K, and 16X in full HD. “This camera system can do a lot of everything — very high frame rate, very high resolution,” said Rob Willox, marketing manager for content creation, Sony Electronics.

The HDC-4800 uses a new Super 35mm 4K CMOS sensor, supporting a wide color space (both BT.2020 and BT.709), and provides an industry-standard PL lens mount, giving the system the capability of using the highest-quality cinematic lenses for clear and crisp high-resolution images.

The new sensor brings the system into the cinematic family of Red and Alexa, making it well suited as a competitor to today's modern, high-end cinematic digital solutions.

The HDC-4800 is also specifically designed to integrate with Sony's companion system, the HDC-4300, a 2/3-inch image sensor 4K/HD camera. Using matching colorimetry and deep toolset camera adjustments, and with the ability to take advantage of existing build-up kits, remote control panels and master setup units, the two cameras can blend seamlessly.

Archive
Sony also showed the second generation of its Optical Disc Archive System, which adopts new, high-capacity optical media rated for a 100-year shelf life, with double the transfer rate and double the capacity of a single cartridge at 3.3TB. The Generation 2 Optical Disc Archive System also adds an 8-channel optical drive unit, doubling the read/write speeds of the previous generation and helping to meet the data needs of realtime 4K production.


Making our dialogue-free indie feature ‘Driftwood’

By Paul Taylor and Alex Megaro

Driftwood is a dialogue-free feature film that focuses on a woman and her captor in an isolated cabin. We chose to shoot entirely MOS… because we are insane. Or perhaps we were insane to shoot a dialogue-free feature in the first place, but our choice to remove sound recording from the set was both freeing and nerve-wracking due to the potential post production nightmare that lay ahead.

Our decision was based on how, without speech to carry along the narrative, every sound would need to be enhanced to fill in the isolated world of our characters. We wanted draconian control over the soundscape, from every footstep to every door creak, but we also knew the sheer volume of work involved would put off all but the bravest post studios.

The film was shot in a week with a cast of three and a crew of three in a small cabin in Upstate New York. Our camera of choice was a Canon 5D Mark II with an array of Canon L-series lenses. We chose the 5D because we already owned it — so more bang for our buck — and also because it gave us a high-quality image, even with such a small body. Its ease of use allowed us to set up extremely quickly, which was important considering our truncated shooting schedule. Having no sound team on set allowed us to move around freely, without the concerns of planes passing overhead or cars rumbling in the distance delaying a shot.

The Audio Post
The editing was a wonderfully liberating experience in which we cut purely to image, never once needing to worry about speech continuity or a host of other factors that often come into play with dialogue-driven films. Driftwood was edited on Apple’s Final Cut Pro X, a program that can sometimes be a bit difficult for audio editing, but for this film it was a non-issue. The Magnetic Timeline was actually quite perfect for the way we constructed this film and made the entire process smooth and simple.

Once picture was locked, we brought the project to New York City's Silver Sound Studios, who jumped at the chance to design the atmosphere for an entire feature from the ground up. We sat with the engineers at Silver Sound and went through Driftwood shot by shot, creating a master list of all the sounds we thought necessary to include. Some were obvious, such as footsteps, breathing and clocks ticking; others less so, such as the humming of an old refrigerator or the creaking of a wooden chair.

Once the initial list was set, we discussed whether to use stock audio or rerecord everything at the original location. Again, because we wanted complete control to create something wholly unique, we concluded it was important to return to the cabin and capture its particular character. Over the course of a few days, the Silver Sound gang rerecorded nearly every sound in the film, leaving only some basic Foley work to complete in their studio.

Once their library was complete, one of the last steps before mixing was to ADR all of the breathing. We had the actors come into the studio over a one-week period during which they breathed, moaned and sighed inside Silver Sound's recording booth. These subtle sounds are taken for granted in most films, but for Driftwood they were of utter importance. The way the actors would sigh or breathe could change the meaning behind that sound and change the subtext of the scene. If the characters cannot talk, then their expressions must be conveyed in other ways, and in this case we chose a more physiological track.

By the time we completed the film we had spent over a year recording and mixing the audio. The finished product is a world unto itself, a testament to the laborious yet incredibly exciting work performed by Silver Sound.

Driftwood was written, directed and photographed by Paul Taylor. It was produced and edited by Alex Megaro.


Raytracing today and in the future

By Jon Peddie

More papers, patents and PhDs have been written and awarded on ray tracing than on any other computer graphics technique.

Ray tracing is a subset of the rendering market. The rendering market, in turn, is a subset of software for larger markets, including media and entertainment (M&E), architecture, engineering and construction (AEC), computer-aided design (CAD), scientific, entertainment content creation and simulation-visualization. Not all users who have rendering capabilities in their products use them. At the same time, some products have been developed solely as rendering tools, while others combine 3D modeling, animation and rendering capabilities and may be used primarily for rendering, primarily for modeling or primarily for animation.

Because ray tracing is so important, and at the same time computationally burdensome, individuals and organizations have spent years and millions of dollars trying to speed things up. A typical ray traced scene on an old-fashioned HD screen can tax a CPU so heavily that the image can only be updated maybe every second or two — certainly not the 33ms needed for realtime rendering.

GPUs can't help much because one of the characteristics of ray tracing is that it has no memory — every frame is a new frame — so the computational load can't be amortized across frames. Also, the branching that occurs in ray tracing defeats the power of a GPU's SIMD architecture.
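As a rough illustration of both points — every frame recomputed from scratch, and a data-dependent branch at the heart of every ray — here is a toy Python ray caster. It is a sketch only; the scene, resolution and structure are made up and bear no relation to any production renderer.

```python
# Toy illustration: every pixel fires its own ray and branches on whether it
# hits anything, which is exactly the per-ray divergence that maps poorly onto
# SIMD hardware, and nothing computed here can be reused for the next frame.
import numpy as np

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance to the nearest hit, or None if the ray misses."""
    oc = origin - center
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c                      # direction is unit length, so a == 1
    if disc < 0.0:                              # miss: this branch diverges per ray
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

width, height = 64, 36                          # tiny "frame"; a real HD frame is ~2M rays
eye = np.array([0.0, 0.0, 0.0])
sphere_c, sphere_r = np.array([0.0, 0.0, -3.0]), 1.0
image = np.zeros((height, width))

for y in range(height):                         # nothing carries over from the last frame:
    for x in range(width):                      # the whole loop runs again every frame
        px = (x + 0.5) / width * 2.0 - 1.0
        py = 1.0 - (y + 0.5) / height * 2.0
        d = np.array([px, py, -1.0])
        d /= np.linalg.norm(d)
        hit = ray_sphere_hit(eye, d, sphere_c, sphere_r)
        image[y, x] = 1.0 if hit is not None else 0.0
```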

Material Libraries Critical
Prior to 2015, all ray tracing engines came with their own materials libraries. Cataloging the characteristics of all the types of materials in the world is beyond the resources of any single company to develop and support, and the lack of standards has held back cooperative development in the industry. However, a few companies have agreed to work together and share their libraries.

I believe we will see an opening up of libraries and the ability of various ray tracing engines to avail themselves of a much larger library of materials. Nvidia is developing a standard-like capability it calls the Material Definition Language (MDL) and is using it to allow various libraries to work with a wide range of ray tracing engines.

Rendering Becomes a Function of Price
In the near future, I expect to see 3D rendering become a capability offered as an online service. While it's not altogether clear how this will affect the market, I think it will boost the use of ray tracing and lower the cost to an as-needed basis. It also offers the promise of being able to apply huge quantities of processing power, limited only by the amount of money the user is willing to pay. Ray tracing will come down to the time to render a scene weighed against what that time costs.

That will continue to bring down the time to generate a ray traced frame for an animation for example, but it probably won’t get us to realtime ray tracing at 4K or beyond.
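As a back-of-the-envelope illustration of that time-versus-cost tradeoff, the sketch below models a hypothetical cloud render of a 60-second spot: the total bill stays the same while the wall-clock time drops as more nodes are thrown at it. All of the numbers are assumptions, not published pricing.

```python
# A hypothetical cost model for rendering as an online service: the total cost
# is fixed by the work to be done, while wall-clock time scales with node count.
FRAMES = 1800                 # a 60-second spot at 30fps
MINUTES_PER_FRAME = 20        # assumed single-node ray trace time per frame
PRICE_PER_NODE_HOUR = 0.50    # assumed cloud price in dollars

node_hours = FRAMES * MINUTES_PER_FRAME / 60.0
total_cost = node_hours * PRICE_PER_NODE_HOUR

for nodes in (10, 100, 1000):
    wall_clock_hours = node_hours / nodes
    print(f"{nodes:>5} nodes: {wall_clock_hours:8.1f} hours, ${total_cost:,.2f}")
```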

Shortcuts and Semiconductors
Work continues on finding clever ways to short-circuit the computational load by using intelligent algorithms that look at the scene and determine which objects will be seen and which surfaces need to be considered.

Hybrid techniques, in which only certain portions of a scene are ray traced, are also being improved and evolved. Objects in the distance, for example, don't need to be ray traced, and neither do flat, dull-colored objects.

Chaos Group says the use of variance-based adaptive sampling on this model of Christmas cookies from Autodesk 3ds Max provided a better final image in record time. (Source: Chaos Group)

Semiconductors are being developed specifically to accelerate ray tracing. Imagination Technologies, the company that designs the GPU in Apple's iPhone and iPad, has a dedicated ray tracing engine that, when combined with the advanced techniques just described, can render an HD scene with partial ray traced elements several times a second. Siliconarts, a startup in Korea, has developed a ray tracing accelerator, and I have seen demonstrations of it generating images at 30fps. And Nvidia is working on ways to make a standard GPU more ray-tracing friendly.

All these ideas and developments will come together in the very near future and we will begin to realize realtime ray tracing.

Market Size
It is impossible to know how many users there are of ray tracing programs because the major 3D modeling and CAD programs, both commercial and free (Autodesk's products, Blender and others), have built-in ray tracing engines, as well as the ability to use plug-in ray tracing software.

The potentially available market vs. the totally available market (TAM).

Also, not all users make use of ray tracing on a regular basis — some use it every day, others only occasionally or once per project. Furthermore, some users will use multiple ray tracing programs on a single project, depending on their materials library, user interface, specific functional requirements or pipeline functionality.

Free vs. Commercial
A great deal of the ray tracing software available on the market is the result of university projects. Some of the developers of such programs have formed companies; others have chosen to stay in academia or work as independent programmers.

The number of new suppliers has not slowed down, indicating continued demand for ray tracing.

The non-commercial developers continue to offer their ray tracing rendering software as open source and for free — and continue to support it, either individually or as part of a group.

Raytracing Engine Suppliers
The market for ray tracing is entering into a new phase. This is partially due to improved and readily available low-cost processors (thank you, Moore’s law), but more importantly it is because of the demand and need for accurate virtual prototyping and improved workflows.

Rendering in the cloud using GPUs. (Source: OneRender)

As with any market, there is a 20/80 rule, where 20 percent of the suppliers represent 80 percent of the market. The ray tracing market may be even more unbalanced. There would appear to be too many suppliers in the market despite failures and merger and acquisition activities. At the same time many competing suppliers have been able to successfully coexist by offering features customized for their most important customers.

Conclusion
Ray tracing is to manufacturing what a storyboard is to film — the ability to visualize the product before it’s built. Movies couldn’t be made today with the quality they have without ray tracing. Think of how good the characters in Cars looked — that imagery made it possible for you to suspend disbelief and get into the story. It used to be: “Ray tracing — Who needs it?” Today it’s: “Ray tracing? Who doesn’t use it?”

Our main image: An example of different materials being applied to the same object. (Source: Nvidia)

Dr. Jon Peddie is president of Jon Peddie Research, which just completed an in-depth market study on the ray tracing market. He is the former president of the Siggraph Pioneers and serves on the advisory boards of several companies. In 2015, he was given the Lifetime Achievement award from the CAAD society. His most recent book is "The History of Visual Magic in Computers."


Quick Chat: Ian Stynes on mixing two Sundance films

By Kristine Pregot

A few years back, I had the pleasure of working with talented sound mixer Ian Stynes on a TV sketch comedy. It’s always nice working with someone you have collaborated with before. There is a comfort level and unspoken language that is hard to achieve any other way. This year we collaborated once again for So Yong Kim’s 2016 film Lovesong, which made its premiere at this year’s Sundance and had its grade at New York’s Nice Shoes via colorist Sal Malfitano.

Ian has been busy. In fact, another film he mixed recently had its premiere at Sundance as well — Other People, from director Chris Kelly.

Ian Stynes

Since we were both at the festival, I thought what better time to ask him how he approached mixing these two very different films.

Congrats on your two films at Sundance, Lovesong (which is our main image) and Other People. How did the screenings go?
Both screenings were great; it's a different experience to see the movie in front of an excited audience. After working on a film for a few months, it's easy to slip into only watching it from a technical standpoint — wondering if a certain section is loud enough, or if a particular sound effect works — but seeing it with an engaged crowd (especially as a world premiere at a place like Sundance) is like seeing it with fresh eyes again. You can't help but get caught up.

What was the process like to work with each director for the film?
I've been lucky enough to work with some wonderful directors, and these movies were no exception. Chris Kelly, the director of Other People, who is a writer on a bunch of TV shows including SNL and Broad City, is so down to earth and funny. The movie was based on the true story of his mother, who died from cancer, so he was emotionally attached to the film in a unique way. He was very focused on what he wanted but also knew when to sit back and let me do my thing. This was Chris's first movie, but you wouldn't know it.

For Lovesong, I worked with director So Yong Kim once again. She makes all her films with her husband, Bradley Rust Gray. They switch off directorial duties but are both extremely involved in each other's movies. This is my third time working on a film with the two of them — the other two were For Ellen with Paul Dano and Jon Heder, and The Exploding Girl with Zoe Kazan. So is an amazing director to work with; it feels like a real collaboration mixing with her. She is creative and extremely focused with her vision, but always inclusive and kind to everyone involved in the crew.

With both films a lot of work was done ahead of time. I try and get it to a very presentable place before the directors come in. This way we can focus on the creative tasks together. One of the fun parts of my job is that I get to sit in a room for a good while and work closely with creative and fun people on something that is very meaningful to them. It’s usually a bit of a bonding experience by the end of it.

How long did each film take you to mix?
I am also extremely lucky to work with some great people at Great City Post. I was the mixer, supervising sound editor and sound designer on both films, but I have an amazing team of people working with me.

Matt Schoenfeld did a huge amount of sound designing on both movies, as well as some of the mixing on Lovesong. Jay Culliton was the dialogue editor on Other People. Renne Bautista recorded Foley and dealt with various sound editing tasks. Shaun Brennan was the Foley artist, and additional editing was done by Daniel Heffernan and Houston Snyder. We are a small team but very efficient. We spent about eight to 10 weeks on each film.

Lovesong

How is mixing a comedy different from mixing a drama?
When you add sound to a film it’s important to think about how it is helping the story — how it augments or moves the story along. The first level of post sound work involves cleaning and removing anything that might take the viewer out of the world of the story (hearing mics, audio distortion, change in tone etc.).

Beyond that, different films need different things. Narrative features usually call for the sound to give energy to a film but not get in the way. Of course, there are always specific moments where the sound needs to stand out and take center stage. Most people aren't aware of what post sound specifically entails, but they certainly notice when it is missing or when a bad sound job was done. Dramas usually have more intensity to the story, and comedies can be a bit lighter. This often informs the sound design, edit and mix. That said, every movie is still different.

What is your favorite sound design on a film of all time?
I love Ben Burtt, who did all the Star Wars movies. He also did Wall-E, which is such a great sound design movie. The first 40 or so minutes have no direct dialogue — all the audio is sound design. You might not realize it, but it is very effective. As a DVD extra, Ben Burtt made a documentary about the sound for that movie, which ends up being about the history of sound design itself. It's so inspiring, even for non-sound people.

I urge anyone reading this to watch it. I guarantee it will get you thinking about sound for film in a way you never have before.

Kristine Pregot is a senior producer at New York City-based Nice Shoes.



Encore colorist Laura Jans Fazio goes dark with ‘Mr. Robot’

By Randi Altman

After watching Mr. Robot when it premiered on USA Network last year, I changed all of my computer passwords and added a degree of difficulty that I’m proud of. I’m also not 100 percent convinced that my laptop’s camera isn’t on even when there’s no green light. That’s right, I completely and gleefully bought into the paranoia, and I wasn’t alone. Mr. Robot won Best Television Series Drama at this year’s Golden Globes, and one of the show’s supporting actors, Christian Slater, took home a statue.

The show, about a genius New York-based computer hacker (Rami Malek) who believes corporations control, well, everything, has been getting its color grade from Laura Jans Fazio, lead colorist at Deluxe's Encore, since its second episode.

Laura Jans Fazio

If you watch any TV at all, you’ve very likely seen some of Jans Fazio’s work. Her resume lists House of Cards, Hawaii 5-0, Proof, Empire and The Lottery, and she’s currently gearing up to work on the updated Gilmore Girls and Lady Dynamite.

Jans Fazio was kind enough to take some time out from grading this upcoming season of House of Cards to chat about her work on Mr. Robot.

Were you on Mr. Robot from the very start?
Sam Esmail, the show’s creator, asked me to help out with one of the first scenes in the pilot — the one that took place in Ron’s Coffee Shop. We made some changes, Sam loved it and wanted me to hit the whole show, so I did!

What kind of direction were you given about the look of that scene?
For Ron’s Coffee Shop, the direction was, “just do your thing.” So I was fortunate enough to do my own thing on it, and make it what I felt it should be.

What about when you started the season?
That’s part of what coloring has been — at least in my career — trying to interpret what the client, or the creator, is saying to me, because everybody has a different way of describing things, whether they’re technically savvy or not. I have to take that description and interpret it, and apply that to the image through my tool set on the computer.

That’s the process for this show, like many others I’ve worked on… I’ve been lucky enough to be entrusted to just do what I think feels right, and then I wait for notes. And more often than not, my notes are pretty minimal.

So minimal notes on Mr. Robot?
It was either "go darker" or "let's change this room in its entirety — I want it to be colder, and I'm not feeling the emotion of the scene." In other instances, I'll take a scene that's lit completely warm and I'll go cool with it because I think it looks better. Then I'll send it out and be happily pleased that it's liked.

(Photo: David Giesbrecht/USA Network)

Can you describe a scene and give me an example?
The All Safe office, where Elliot worked, actually stayed similar to the pilot. The only difference was I took a lot of magenta out of it. So it had the feeling of a cold, sterile, distant corporate environment with a “working for the man” kind of feel. It’s not dark. It’s airy and lofty, but not airy in a good way. It basically allows the talent to come through — to see the emotion of what the characters are going through, and what they’re talking about. The rest just seems to melt behind them.

How do you build on what the DP Tod Campbell captures on set?
This is the way I approach all images — I take what I've got to work with, play with different styles of contrast, densities and color tones, and let the image take me where it wants to be. How it feels in the story, what it's cut against and where it's going.

Usually I’ll tap into it straight away, but it’s always that way on the first episode or two of a new show, because you don’t really know where it needs to be. It’s kind of like the first color of paint that you put on a canvas that has been prepped — that’s not always the color that’s going to come through. It’s going to start out one way, and evolve as you go.

Sometimes colorists talk about being given stills or told to emulate the look of a certain film. It’s pretty amazing that they’re just saying, “Go.”
But that’s not always the case. There are many times where people come in with a photography coffee table book, and say, “I want this, this or that.” Or they will reference a movie from 1972 or say, “Let’s make it look like this Japanese film shot in 1942,” and I reference those clips.

That’s a common practice. In this situation I was approached based on my work on House of Cards and entrusted with Mr. Robot.

Mr. Robot - Season 1     

How do you prefer to work? Or do you enjoy both?
I enjoy both. It's always good to get feedback, and I need an idea of what it is. When I saw the pilot for Mr. Robot, I kind of knew automatically what I would do with it.

Is there anything that stuck out from the season that you are most proud of?
The fact that the show is super dark. Dark is good. People are hesitant to do dark because they need to see what’s going on, but I look at it this way: if you’re in a dark forest and see an opening of light, that’s when you want to see more. And going dark was well received, both by the audience and my peers. That was cool.

Your tool of choice is FilmLight Baselight. Why do you like this particular system?
It just makes sense, from the way it allows you to layer colors and grade inside/outside, eliminating keystrokes. It allows me to be really fast, and it deals with different color spaces and gammas. Also, the development always seems to be on the cutting edge of the latest technology coming from the camera manufacturers. They are also great about keeping up with where our business is going, including paying attention to different color spaces, HDR and VR.

Mr. Robot - Pilot

Where do you find your inspiration?
It's everywhere. I notice everything. I notice what somebody is wearing, what the colors are, where the contrasts lie and how the light is hitting them. I notice the paint sheens in a room and where the light is falling onto objects, creating depth. I get lost online viewing design and color palettes and architecture and photography and gardens. The list goes on.

Growing up in New York, I was walking all the time and was just immersed in visual stimulation — from people, buildings, objects, architecture, art and design. I look to all of the man-made things, but I also look to nature, landscapes and skies… the color contrasts of it all.

What’s next for you, and how many shows do you work on at the same time?
Sometimes I’m on multiple shows within a week, and that overlaps. Right now, I’m doing Hawaii 5-0, House of Cards and Lady Dynamite. House of Cards will end soon, but Hawaii 5-0 will still be going on. Gilmore Girls will start up. Lady Dynamite will still be going, and then Robot will start. Then who knows what else is going to come in between those times.

That’s a lot.
The more the merrier!


The Molecule: VFX for ‘The Affair’ and so much more

By Randi Altman

Luke DiTommaso, co-founder of New York City's The Molecule, recalls "humble" beginnings when he thinks about the visual effects, motion graphics and VR studio's launch as a small compositing shop. When The Molecule opened in 2005, New York's production landscape was quite a bit different than the tax-incentive-driven hotbed that exists today.

"Rescue Me was our big break," explains DiTommaso. "That show was the very beginning of this wave of production that started happening in New York. Then we got Damages and Royal Pains, but we were still just starting to get our feet wet with real productions."

The Molecule partners (L-R) Andrew Bly, Chris Healer and Luke DiTommaso.

Then, thanks to a healthy boost from New York’s production and post tax incentives, things exploded, and The Molecule was at the right place at the right time. They had an established infrastructure, talent and experience providing VFX for television series.

Since then DiTommaso and his partners Chris Healer and Andrew Bly have seen the company grow considerably, doing everything from shooting and editing to creating VFX and animation, all under one roof. With 35 full-time employees spread between their New York and LA offices — oh, yeah, they opened an office in LA! — they also average 30 freelance artists a day, but can seat 65 if needed.

While some of these artists work on commercials, many are called on to create visual effects for an impressive list of shows, including Netflix’s Unbreakable Kimmy Schmidt, House of Cards and Bloodline, Showtime’s The Affair, HBO’s Ballers (pictured below), FX’s The Americans, CBS’ Elementary and Limitless, VH1’s The Breaks, Hulu’s The Path (for NBC and starring Aaron Paul) and the final season of USA’s Royal Pains. Also completed are the miniseries Madoff and Behind the Magic, a special on Snow White, for ABC.

Ballers: before and after

The Molecule’s reach goes beyond the small screen. In addition to having completed a few shots for Zoolander 2 and a big one involving a digital crowd for Barbershop 3, at the time of this interview the studio was gearing up for Jodie Foster’s Money Monster; they will be supplying titles, the trailer and a ton of visual effects.

There is so much for us to cover, but just not enough time, so for this article we are going to dig into The Molecule’s bread and butter: visual effects for TV series. In particular, the work they provided for Showtime’s The Affair, which had its season finale just a few weeks ago.

The Affair
Viewers of The Affair, a story of love, divorce and despair, might be surprised to know that each episode averages between 50 to 70 visual effects shots. The Molecule has provided shots that range from simple clean-ups to greenscreen driving and window shots — “We’ll shoot the plates and then composite a view of midtown Manhattan or Montauk Highway outside the car window scene,” says DiTommaso — to set extensions, location changes and digital fire and rain.

One big shot for this past season was burning down a cabin during a hurricane. "They had a burn stage so they could capture an amount of practical fire on a stage, but we enhanced that, adding more fire to increase the feeling of peril. The scene then cuts to a wide shot showing the location, which is meant to be on the beach in Montauk during a raging hurricane. We went out to the beach and shot the house day for night — we had flicker lighting on the location so the dunes and surrounding grass got a sort of flickering light effect. Later on, we shot the stage from a similar angle and inserted the burning stage footage into the exterior wide location footage, and then added a hurricane on top of all of that. That was a fun challenge."

During that same hurricane, the lead character Noah gets his car stuck in the mud but they weren’t able to get the tires to spin practically, so The Molecule got the call. “The tires are spinning in liquid so it’s supposed to kick up a bunch of mud and water and stuff while rain is coming down on top of it, so we had our CG department create that in the computer.”

Another scene that featured a good amount of VFX took place on the patio outside of the fictitious Lobster Roll restaurant. "It was shot in Montauk in October and it wasn't supposed to be cold in the scene, but it was about 30 degrees at 2:00am and Alison is in a dress. They just couldn't shoot it there because it was just too cold. We shot plates, basically, of the location, without actors. Later we recreated that patio area and lined up the lighting and the angle and basically took the stage footage and inserted it into the location footage. We were able to provide a solution so they could tell the story without having the actors' breath visible and their noses all red and shivering."

The Lobster Roll: before and after

Being on Set
While on-set VFX supervision is incredibly important, DiTommaso would argue “by the time you’re on set you’re managing decisions that have already been set into motion earlier in the process. The most important decisions are made on the tech scouts and in the production/VFX meetings.”

He offers up an example: "I was on a tech scout yesterday. They have a scene where a woman is supposed to walk onto a frozen lake and the ice starts to crack. They were going to build an elaborate catwalk into the water. I was like, 'Whoa, aren't we basically replacing the whole ground with ice? Then why does she need to be over water? Why don't we find a lake that has a flat grassy area leading up to it?' Now they're building a much simpler catwalk — imagine an eight-foot-wide little platform. She'll walk out on that with some blue screens and then we'll extend the ice and dress the rest of the location with snow."

According to DiTommaso, being there at the start saved a huge amount of time, money and effort. "By the time you're on set, they would have already built it into the water and all that stuff."

But, he says, being on set for the shoot is also very important because you never know what might happen. “A problem will arise and the whole crew kind of turns and looks at you like, ‘You can fix this, right?’ Then we have to say, ‘Yeah. We’re going to shoot this plate. We’re going to get a clean plate, get the actors out, then put them back in.’ Whatever it is; you have to improvise sometimes. Hopefully that’s a rare instance and that varies from crew to crew. Some crews are very meticulous and others are more freewheeling.”

Tools
The Molecule is shooting more and more of their own plates these days, so they recently invested in a Ricoh Theta S camera for shooting 360-degree HDR. "It has some limitations, but it's perfect for CG HDRs," explains DiTommaso. "It gives you a full 360-degree dome, instantly, and it's tiny like a cell phone or a remote. We also have a Blackmagic 4K Cinema camera that we'll shoot plates with. There are pros and cons to it, but I like the latitude and the simplicity of it. We use it for a quick run and gun to grab an element. If we need a blood spurt, we'll set that up in the conference room and we'll shoot a plate."

The Molecule added Jon Hamm's head to this scene for Unbreakable Kimmy Schmidt.

They call on a Canon 74 for stills. “We have a little VFX kit with little LED tracking points and charts that we bring with us on set. Then back at the shop we’re using Nuke to composite. Our CG department has been doing more and more stuff. We just submitted an airplane — a lot of vehicles, trains, planes and automobiles are created in Maya.”

They use Side Effects Houdini for simulations, like fire and rain; for rendering they call on Arnold, and crowds are created in Massive.

What’s Next?
Not ones to be sitting on the sidelines, The Molecule recently provided post on a few VR projects, but their interest doesn’t end there. Chris Healer is currently developing a single lens VR camera rig that DiTommaso describes as essentially “VR in a box.”


The pipeline experts behind Shotgun’s ‘Two Guys and a Toolkit’ blog

Jeff Beeland and Josh Tomlinson know pipelines, and we are not exaggerating. Beeland was a pipeline TD, lead pipeline TD and pipeline supervisor at Rhythm and Hues Studios for over nine years. After that, he was pipeline supervisor at Blur Studio for over two years. Tomlinson followed a similar path, working in the pipeline department at R&H starting in 2003. In 2010 he moved over to the software group at the studio and helped develop its proprietary toolset. In 2014 he took a job as senior pipeline engineer in the Digital Production Arts MFA program at Clemson University, where he worked with students to develop an open source production pipeline framework.

This fall the pair joined Shotgun Software's Pipeline Toolkit team, working on creating even more efficient — wait for it — pipelines! In the spirit of diving in head first, they decided to take on the complex challenge of deploying a working pipeline in 10 weeks — and blogging the good, the bad and the ugly of the process along the way. This was the genesis of their Two Guys and a Toolkit series of blogs, which ended last week.

Josh Tomlinson and Jeff Beeland.

Before we dig in to find out more, this is what you should know about the Pipeline Toolkit: The Shotgun Pipeline Toolkit (sgtk) is a suite of tools and building blocks designed to help users set up, customize and evolve their pipelines. Sgtk integrates with apps such as Maya, Photoshop and Nuke and makes it easy to access Shotgun data inside those environments. OK, let's talk to the guys…
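As a taste of the kind of Shotgun data Toolkit surfaces inside those apps, here is a minimal sketch using the plain shotgun_api3 Python client rather than a Toolkit app itself. The site URL, script credentials, project id and status value below are placeholders, not anything from the blog series.

```python
# A minimal sketch of querying Shotgun data with the shotgun_api3 client.
import shotgun_api3

sg = shotgun_api3.Shotgun(
    "https://yourstudio.shotgunstudio.com",   # placeholder site
    script_name="pipeline_script",            # placeholder script user
    api_key="0123456789abcdef",               # placeholder key
)

# Find all in-progress shots in a project, with a few useful fields.
shots = sg.find(
    "Shot",
    filters=[
        ["project", "is", {"type": "Project", "id": 123}],   # placeholder project
        ["sg_status_list", "is", "ip"],                       # "in progress"
    ],
    fields=["code", "sg_cut_in", "sg_cut_out", "sg_status_list"],
)

for shot in shots:
    print(shot["code"], shot["sg_cut_in"], shot["sg_cut_out"])
```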

What made you want to start the Two Guys and a Toolkit series?
Josh: Since we were both relatively new to Shotgun, this was originally just a four-week exercise for us to get up and running with Toolkit; there wasn’t really any discussion of a blog series. The goal of this exercise was to learn the ins and outs of Toolkit, identify what worked well, and point out things we thought could be improved.

After we got started, the word spread internally about what we were up to and the idea for the blog posts came up. It seemed like a really good way for us to meet and interact directly with the Shotgun community and try to get a discussion going about Toolkit and pipeline in general.

Did you guys feel exposed throughout this process? What if you couldn’t get it done in 10 weeks?
Jeff: The scope of the original exercise was fairly small in terms of the requirements for the pipeline. Coupled with the fact that Toolkit comes with a great set of tools out of the box, the 10-week window was plenty of time to get things up and running.

We had most of the functional bits working within a couple of weeks, and we were able to dive deep into that experience over the first five weeks of the blog series. Since then we’ve been able to riff a little bit in the posts and talk about some more sophisticated pipeline topics that we’re passionate about and that we thought might be interesting to the readers.


What would you consider the most important things you did to ensure success?
Josh: One of the most important ideas behind the blog series was that we couldn’t just talk about what worked well for us. The team really stressed the importance of being honest with the readers and letting them in on the good, the bad and the ugly bits of Toolkit. We’ve tried our best to be honest about our experience.

Jeff: Another important component of the series was the goal of starting up a dialogue with the readers. If we just talked about what we did each week, the readers would get bored quickly. In each post we made it a point to ask the readers how they’ve solved a particular problem or what they think of our ideas. After all, we’re new to Toolkit, so the readers are probably much more experienced than us. Getting their feedback and input has been critical to the success of the blog posts.

Josh: Now that the series is over, we’ll be putting together a tutorial that walks through the process of setting up a simple Toolkit pipeline from scratch. Hopefully users new to Toolkit will be able to take that and customize it to fit their needs. If we can use what we’ve learned over the 10 weeks and put together a tutorial that is helpful and gives people a good foundation with Toolkit, then the blog series will have been successful.

Do you feel like you actually produced a pipeline path that will be practical and realistic for applying in real-world production studios?
Jeff: The workflow designs that we model our simple pipeline off of are definitely applicable to a studio pipeline. While our implementations are often at a proof-of-concept level, the ideas behind how the system is designed are sound. Our hope has always been to present how certain workflows or features could be implemented using Toolkit, even if the code we’ve produced as part of that exercise might be too simplistic for a full-scale studio pipeline.

During the second half of the blog series we started covering some larger system designs that are outside of the scope of our simple pipeline. Those posts present some very interesting ideas that studios of any size — including the largest VFX and animation studios — could introduce into their pipelines. The purpose of the later posts was to evoke discussion and spread some possible solutions to very common challenges found in the industry. Because of that, we focused heavily on real-world scenarios that pipeline teams everywhere will have experienced.

What is the biggest mistake you made, what did you do to solve it and how much time did it set you back?
Josh: To be honest, we’ve probably made mistakes that we’ve not even caught yet. The fact that this started as an exercise to help us learn Toolkit means we didn’t know what we were doing when we dove in.

In addition, neither of us has a wealth of modern Maya experience, as R&H used mostly proprietary software and Blur's pipeline revolved primarily around 3ds Max. As a result, we made a complete mess out of Maya's namespaces on our first pass through getting the pipeline up and running. It took hours of time and frustration to unravel that mess and get a clean, manageable namespacing structure into place. In fact, we nearly eliminated Maya namespaces from the pipeline simply so we could move on to other things. In that regard, there would still be work to do if we wanted to make proper use of them in our workflow.
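For readers who have hit the same wall, here is a rough sketch (ours, not code from the blog series) of the kind of namespace hygiene that keeps Maya scenes manageable: reference each published asset under its own predictable namespace and keep an eye on what namespaces exist in the scene. It runs only inside Maya, and the paths and asset names are placeholders.

```python
# A rough sketch of predictable namespace handling when referencing assets in Maya.
import maya.cmds as cmds

def reference_asset(path, asset_name):
    """Reference a published asset file under a clean, predictable namespace."""
    # If the namespace already exists, Maya uniquifies it (assetA -> assetA1)
    # rather than silently merging nodes into an existing namespace.
    return cmds.file(path, reference=True, namespace=asset_name,
                     mergeNamespacesOnClash=False)

def list_scene_namespaces():
    """List top-level namespaces, ignoring Maya's built-in ones."""
    builtin = {"UI", "shared"}
    return [ns for ns in cmds.namespaceInfo(listOnlyNamespaces=True)
            if ns not in builtin]

# Usage (placeholder path):
# reference_asset("/projects/demo/assets/chair/publish/chair_v012.ma", "chair")
# print(list_scene_namespaces())
```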

You spent 10 weeks building a pipeline essentially in a vacuum… how much time realistically would this take in an operational facility where you would need to integrate pipeline into existing tech infrastructure?
Jeff: That all depends on the scope of the pipeline being developed. It's conceivable that a small team could get a Toolkit-driven pipeline up and running in weeks if it relies mostly on the out-of-the-box functionality provided.

This would require making use of well-supported DCC applications, like Maya and Nuke, as custom integrations with others would require some development time. This sort of timeframe would also limit the pipeline to supporting a single physical studio location, as multi-location or cloud-based workflows would require substantial development resources and time.

It’s worth noting that R&H’s pipeline was initially implemented in a very short period of time by a small team of TDs and engineers, and was then continually evolved by a larger group of developers over the course of 10-plus years. Blur’s pipeline evolved similarly. This goes to show that developing a pipeline involves hitting a constantly moving target, and shouldn’t be viewed as a one-time development project. The job of maintaining and evolving the pipeline will vary in scope and complexity depending on a number of factors, but is something that studios should keep in mind. The requirements laid out by production and artists often change with time, so continued development is not uncommon.

Any lessons learned, parting words of wisdom for others out there taking on pipeline build-out?
Jeff: This really goes for software engineering in general — iterate quickly and set yourself up to fail as fast as possible. Not all of your ideas are going to pan out, and even when they do, your implementation of the good ones will often let you down. You need to know whether the direction you’re going in will work as early as possible so that you can start over quickly if things go wrong.

Josh: A second piece of advice is to listen to the users. Too often, developers think they know how artists should work and fail to vet their ideas with the people that are actually going to use the tools they’re writing. In our experience, many of the artists know more about the software they use than we do. Use that to your advantage and get them involved as early in the process as possible. That way you can get a better idea of whether the direction you’re going in aligns with the expectations of the people that are going to have to live with your decisions.


Checking in with Tattersall Sound & Picture’s Jane Tattersall

By Randi Altman

Toronto-based audio post house Tattersall Sound & Picture has been a fixture in audio post production since 2003, even though the origins of the studio go back further than that. Tattersall Sound & Picture’s work spans films, documentaries, television series, spots, games and more.

Now part of the SIM Group of companies, the studio is run by president/supervising sound editor Jane Tattersall and her partners Lou Solakofski, Peter Gibson and David McCallum. Tattersall is an industry veteran who found her way to audio post in a very interesting way. Let’s find out more…

(back row, L-R) David McCallum, Rob Sim and Peter Gibson; (front row) Jane Tattersall and Lou Solakofski.

How did you get your start in this business?
My start was an accident, but serendipitous. I had just graduated from university with a degree in philosophy and had begun to think of what options I might have — law and journalism were the only fields that came to mind, but then I got a call from my boyfriend's sister, who was an art director. She had just met a producer at a party who was looking for a philosopher to do research on a documentary series. I got the job, did all the research and ended up working with the picture editor. I found his work using sound brought the scenes to life, so I decided I would try to learn that job. After that I apprenticed with an editor and learned on the job. I'm still learning!

When did you open Tattersall Sound & Picture?
I started the original Tattersall Sound, which was just sound editing, in 1992, but sold it in 1999 to run a larger full post facility. I opened Tattersall Sound & Picture in 2003, along with my partners.

Why did you open it?
After three years running a big post facility I missed the close involvement with projects that comes with being an editor. I was ready for a change and keen to be more hands on.

How has it evolved over the years?
When we started the company it was just sound editing. The first year we shared warehouse space with a framing factory. We had a big open workplace and we all worked with headphones. After a year we moved to where we are today. We had space for picture editing suites as well as sound editing. Over time we expanded our services and facilities. Now we have five mix stages including a Dolby Atmos stage, ADR, as well as offline and sound editorial.

How have you continued to be successful in what can be a tough business?
We focus simultaneously on good creative work and ensuring we have enough resources to continue to operate. Without good, detailed work we would lose our clients, but without earning enough money we couldn't pay people properly, pay the rent and upgrade the stages and edit rooms. I like to think we attract good talent and workers because we care about doing great work, and the great work keeps the clients coming to us.

Does working on diverse types of projects play a role in that success?
Yes, that’s true as well. We have a diversity of projects — TV series, documentaries, independent feature films, some animation and some children’s TV series. Some years ago we were doing mostly indie features and a small amount of television, but our clients moved into television and brought us along with them. Now we are doing some wonderful higher-end series like Vikings, Penny Dreadful and Fargo (pictured below). We continue to do features and love doing them, but it is a smaller part of the business.

Fargo

If you had one tip about keeping staff happy and having them stay for the long-term, what would it be?
Listen to them, and keep them involved and make them feel like an appreciated part of the business.

What is the biggest change in audio post that you’ve seen since your time in the business?
The biggest change would be the change in technology — from Moviolas to Pro Tools and all the digital plug-ins that have become the regular way of editing and mixing. Related to that would be the time allotted to post sound. Our schedules are shorter because we can and do work faster.

The other change is that we work in smaller teams or even alone. This means fewer opportunities for more junior people and assistants to learn by doing their job in the same room. This applies to picture editing as well, of course.

There is no denying that our industry is filled with more males than females, and having one own an audio post house like yours is rare. Can you talk about that?
I certainly didn't set out to own or run anything! Just to work on interesting projects for directors and producers who wanted to work with me. The company you see today has grown organically. I attracted like-minded co-workers and complementary team members and went after films and directors that I wanted to work with.

We would never have built any mix stages if we didn’t have re-recording mixer Lou Solakofski on board as partner. And he in turn would never have got involved if he didn’t trust us to keep the values of good work and respectful working environment that were essential to him. We all trusted one another to retain and respect our shared values.

It has not always been easy though! There were many projects that I just couldn’t get, which was immensely frustrating. Some of these projects were of the action/violent style. Possibly the producers thought a man might be able to provide the right sounds rather than a woman. No one ever said that, so there may have been other reasons.

However, not getting certain shows served to make me more determined to do great work for those producers and directors who did want me/us. So it seems that having customers with the same values is crucial. If there weren’t enough clients who wanted our quality and detail we wouldn’t have got to where we are today.

What type of gear do you have installed? How often to do you update the tech?
Our facility is all Avid and Pro Tools, including the mix stages. We have chosen an all-Pro Tools workflow because we feel it provides the most flexibility in terms of workflow and the easiest way to stay current with new service options. Staying current can be costly, but being up to date with equipment is advantageous for both our clients and creative team.

Hyena Road had a Dolby Atmos mix

We update frequently, usually driven by the requirements of a specific project. For example, in July 2015 we were scheduled to mix the Canadian war film Hyena Road, and the producer, distributor and director all wanted to work in Dolby Atmos. So our head tech engineer Ed Segeren and Lou investigated to see how feasible it would be to upgrade one of the stages to accommodate the Dolby requirements. It took some careful research and some time, but that stage was updated to facilitate that film.

Another example is when we began the Vikings series and knew the composer was going to deliver very wide — all separate stems as 5.0 — so we needed a dedicated Pro Tools system for music. This meant we had to expand the console.

As a rule, when we update one mix stage, we know we will soon update the others in order to be able to move sessions between rooms transparently. This is an expense, but it also provides us flexibility — essential in post production, as project schedules inevitably shift from their original bookings.

David McCallum, fellow sound supervisor and partner, has a special interest in acoustic listening spaces and providing editors with the best environment to make good decisions. His focus on editorial upgrades helps ensure we can send good tracks to the stage.

Our head tech engineer Ed Segeren attends NAB and AES every year to see new developments, and the staff is very interested in learning about what’s out there and how we might apply new technology. We try to be smart about our upgrades, and it’s always about improving workflow and work quality.

What are some recent projects completed at Tattersall?
We recently completed the series Fargo (mixing) and the feature films Beeba Boys (directed by Deepa Mehta) and Hyena Road (directed by Paul Gross), and we are in the midst of the TV series Sensitive Skin for HBO Canada. We are also doing Saving Hope and Vikings Season 4 (pictured below), and will start Season 3 of Penny Dreadful in early 2016.

Vikings, Season 4

Are you still directing?
I'm surprised you even know about that! I'm trying to! Last spring I directed a very short film, a three-minute thriller called Wildlife. This month I am co-directing a short film about a young woman indirectly involved in a police shooting and her investigation into what really happened. I have an advantage, which is that I know when a story point can be made using sound rather than needing a shot to convey something, and I have a good idea of how ADR can be employed, so there's no need to worry about the production recording.

The wonderful thing about these non-work film projects is that I learn a huge amount every time, including just how hard producers must work to get something made, and just how vulnerable a director is when putting something of themselves out there for anyone to watch.

Quick Chat: Rampant’s Sean Mullen on new mograph release

Rampant Design Tools, which has been prolific about introducing new offerings and updates to its motion graphics products, is at it again — this time with two new Style Effects volumes for motion graphics artists that offer 1,500 new 2K, 4K and 5K effects.

Rampant Motion Graphics for Editors v1 and v2 are QuickTime elements that users can drag and drop into the software of their choice; Rampant effects are not plug-ins and, therefore, not platform-dependent.

The newly launched Rampant Textured Overlays library features 230 effects for editors, also in ultra-high resolution 2K, 4K and 5K elements.

This volume provides a large amount of overlay effects for editors or anyone else looking to add a unique and modern look to video projects. Rampant Textured Overlays are suited for editing, motion graphics, photography and graphic design.

We reached out to Rampant’s Sean Mullen, who runs the company with his wife Stefanie. They create all of the effects themselves. Ok, let’s find out more.

What’s important about this new release?
The Motion Graphics for Editors series is a completely new direction for us. We've designed thousands of animated elements so that busy editors, or anyone who doesn't have time to design their own, can easily create great-looking motion graphics without having to start from scratch. These designs are the same kinds you see in current television and commercial trends.

Volume 1 is more of a base, something that you can use in just about any kind of situation. Volume 2 is more edgy and is similar to the kinds of designs that I've previously created for the X Games, MTV, Fuel and the National Guard. Motion Graphics for Editors is the beginning of a new trend at Rampant. You can expect to see a variety of different projects coming out of our shop in the near future, all vastly different from what people are used to seeing from Rampant. I'm super stoked about the next six to eight months.

Were the new offerings based on user feedback?
In part, yes. I have hundreds of project ideas on my whiteboards that I'd like to build out. We're only limited by time and resources. We don't have a studio full of artists cranking out our designs. Rampant is just Stefanie and myself. I'm always roughing out ideas and letting them percolate. The great thing about being a small company is that we get to travel and talk to editors and artists directly. We often visit with amazing groups like the Blue Collar Post Collective in NYC and talk with assistant editors, editors and colorists. This allows us to hear firsthand what people want and need in their respective workflows.

What do you expect to be the most used of the bunch?
Motion Graphics for Editors v1 was designed as a base. It’s got more of a universal appeal. It’s perfect for everything from corporate work to infographics and commercials. Volume 2 is a lot more edgy and has a specific feel.

Rampant Motion Graphics for Editors v2

What’s your process when developing a new release?
There are dozens of projects in various stages of development at any given time. When an idea pops into my head, I’ll start camera, compositing and animation tests right away. Everything starts at 5K resolution or higher. Typically, I’ll let a project sit for a while after the initial R&D. This allows the idea to mature and gives us time to attack the project from multiple angles. Once we decide that something is worth pursuing, I’ll shoot or animate every possible thing I can think of. This can take days or weeks, depending on the amount of post work and transcoding that is involved. From there we’ll have a vat of hundreds, or in some cases thousands, of elements.

We toss out the ones that don’t work or aren’t deemed as useful. Then we organize the elements and give them a proper naming structure. After the elements are named, we output 4K and 2K versions of our 5K master elements and begin the long process of zipping and uploading them to our servers for download delivery.  The final elements, camera masters and project files are then archived. Lastly, we cut a promo video showing our new products in use, build a new product page on our site and develop a newsletter to let our customers know about the latest release. Once that cycle is complete, it’s back to the whiteboard.
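Rampant hasn't published the tooling behind that delivery step, but as a hypothetical sketch, the 5K-master-to-4K/2K pass could be scripted along these lines with ffmpeg. The folder names, codec choice and target widths here are assumptions for illustration, not Rampant's actual pipeline.

```python
# A hypothetical batch downscale of 5K masters to 4K and 2K deliverables with ffmpeg.
import pathlib
import subprocess

MASTERS = pathlib.Path("masters_5k")      # placeholder folder of 5K QuickTime masters
DELIVERY_WIDTHS = {"4k": 4096, "2k": 2048}

for master in sorted(MASTERS.glob("*.mov")):
    for label, width in DELIVERY_WIDTHS.items():
        out_dir = pathlib.Path(f"delivery_{label}")
        out_dir.mkdir(exist_ok=True)
        out_file = out_dir / master.name
        # Scale to the target width, keep the aspect ratio, keep ProRes 4444 (alpha).
        subprocess.run([
            "ffmpeg", "-y", "-i", str(master),
            "-vf", f"scale={width}:-2",
            "-c:v", "prores_ks", "-profile:v", "4444",
            str(out_file),
        ], check=True)
```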

Anything you want to add that’s important?
It's our mission to save editors time and money. If something normally takes hours or days to complete and our effects can help reduce that time, we've achieved our goal. There are many editors out there who use our effects in a pre-visual manner. They use our effects to quickly design something in order to get a green light from their producer or client, and this saves a ton of time and money.

Others look at our effects as a starting off point. They start with our elements and combine them to make something new. We receive emails every day from editors who just don’t have time to make anything from scratch. Their budgets are too tight, turnaround time is insane or they simply aren’t mograph designers but still want good-looking motion graphics. These are our people, they are why we work every single day. We read each and every email and take every phone call, even at 3am.