By Randi Altman
Ted, the foul-mouthed but warm-hearted teddy bear, is back on the big screen, this time fighting for the right to be recognized as a person — he really wants to get married — in Ted 2 from director Seth MacFarlane. Once again, this digital character is seen out and about in Boston, in all sorts of environments, so previs, mocap and postvis played a huge role.
Cue Webster Colcord. He was previsualization and postvisualization supervisor on Ted 2, reporting to Culver City, California’s The Creative-Cartel. Colcord held similar titles on the first Ted, serving as the production’s previs/postvis artist and mocap integration artist. He also worked on Ted’s appearances between the two movies — The Jimmy Kimmel Show and the Oscars. He worked out of the production unit set up by Universal Pictures and studio MRC.
For Ted 2, Colcord and team used motion capture, via the Xsens MVN system, to record MacFarlane, who also voices Ted, as he acted out scenes. Because it’s an inertial system, MVN allowed the character (and director) to step out of the mocap volume and onto the streets, something that couldn’t be done with an optical offering.
Colcord has been working in CG since 1997. Prior to that he was a stop-motion animation artist. “I do all kinds of things,” he says. “Previs, animation, postvis and supervision. Mocap is not my usual gig, actually! Right now, I’m animation supervising at Atomic Fiction (Flight, Star Trek Into Darkness, Game of Thrones) in the Bay Area.”
We reached out to Colcord to find out more about his process and the workflow on Ted 2.
You worked with The Creative-Cartel and Jenny Fulle. What was that relationship like?
Creative-Cartel has been the VFX management unit on the Ted movies, so they oversee planning and the dissemination of assets among the different parties involved. They are involved every step of the way, from pre-production through to final delivery.
The on-set duties for all of us tend to be all-engrossing, but after principal photography, when I am in-house with the editorial department doing postvis, I’m supporting just the editorial department and the VFX teams. At a couple of points in the schedule, however, we were simultaneously prepping for re-shoots on stage with the main unit, doing mocap at the editorial office, delivering postvis for upcoming screenings and delivering synced mocap to the vendors. It could be overwhelming!
Whose decision was it to use Xsens? Do you know if that’s what they used on the first Ted?
During development on the first movie, producer Jason Clark and VFX producer Jenny Fulle researched and tested various mocap options and arrived at Xsens’ inertial mocap system, which was very new at the time. It was decided to go with the Xsens MVN system because of the ease of set-up on location. You don’t need to set up a volume, and it’s very portable — the set-up is minimal. Also, there are no marker occlusion issues. It has a few limitations that the optical systems do not have, but with every update those differences become less and less.
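The reason an inertial suit needs no volume and suffers no marker occlusion is that each sensor tracks its own motion from gyroscope and accelerometer readings rather than being triangulated by external cameras. As a rough illustration only (not Xsens’ actual sensor-fusion code, which also blends accelerometer and magnetometer data to fight drift), a single sensor’s orientation can be followed by integrating its gyroscope’s angular velocity over time:

```python
import math

def quat_mul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def integrate_gyro(q, omega, dt):
    """One Euler step of q_dot = 0.5 * q ⊗ (0, omega), renormalized."""
    dq = quat_mul(q, (0.0,) + tuple(omega))
    q = tuple(qi + 0.5 * dqi * dt for qi, dqi in zip(q, dq))
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

# Simulate 1 second of a sensor spinning about z at 90 degrees/second.
q = (1.0, 0.0, 0.0, 0.0)          # identity orientation
omega = (0.0, 0.0, math.pi / 2)   # rad/s about z
dt = 0.001
for _ in range(1000):
    q = integrate_gyro(q, omega, dt)

w, x, y, z = q
yaw = math.degrees(math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z)))
print(round(yaw, 1))  # ≈ 90.0 — recovered heading with no external cameras
```

Because the orientation comes purely from on-body sensing, nothing breaks when the performer walks out of a stage volume and onto a Boston street; the trade-off is slow positional drift, which is why optical systems still hold an accuracy edge in a controlled volume.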
There is a big dance sequence in the film. It must have been particularly challenging to capture the movements of a completely CG character?
It was a complex sequence, and it blends in from a previous scene with Ted dancing in a different environment, adding to the complexity. The credit for working out the choreography goes to Rob Ashford, Sara O’Gleby and Chris Bailey. Also, of course, VFX supervisor Blair Clark. It’s important to understand, though, that the mocap system just provides a core performance and the final Ted is a blending of keyframe animation (Iloura did the dance sequence) and mocap. My role was to facilitate the performance and get it over to the VFX team with a high degree of fidelity and in a pipeline-ready form.
We ended up capturing it in about four sessions, with five different dancers, each of whom acted out Ted’s motions for various parts of the choreography. During production on Ted 2, Xsens released an updated version of their system, which they call MVN Link. The sensors are smaller, the data has been improved and the wireless signal uses Wi-Fi rather than Bluetooth. So we used that version of the system for the dancers. For Seth’s performances we used a fully wired system with an umbilical cable attached to the computer, as Ted is usually not being very acrobatic in his motions.
What’s the workflow like?
We recorded a live feed of the mocap on the low-res Ted model in Autodesk MotionBuilder while we captured the data. In some cases editorial was able to comp this into shots to use as postvis, pretty much right out of the box.
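Dropping a rendered low-res Ted over a plate for postvis ultimately comes down to the standard premultiplied “over” operation. This is a minimal per-pixel sketch of that operation only, not the production comp pipeline, which would run in a compositing package:

```python
def over(fg_rgba, bg_rgb):
    """Composite a premultiplied foreground pixel over a background pixel.
    Standard 'over': result = fg + bg * (1 - fg_alpha)."""
    r, g, b, a = fg_rgba
    return tuple(c_fg + c_bg * (1.0 - a) for c_fg, c_bg in zip((r, g, b), bg_rgb))

# A half-transparent CG pixel over a white plate pixel:
print(over((0.4, 0.2, 0.1, 0.5), (1.0, 1.0, 1.0)))  # (0.9, 0.7, 0.6)
```

Applied per pixel across the frame, this is all a quick editorial comp needs when the live-feed render already carries an alpha channel.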
So you captured the data and sent it to Iloura and Tippett Studio?
Yes, I would retarget the data in MotionBuilder, then sync the data in Autodesk Maya with a minimal amount of clean-up and send it off to both houses. The data would also be used as the core performance for many of our postvis shots.
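Syncing a mocap take to a cut shot is, at its core, a timecode arithmetic problem: given the start timecodes of the plate and of the capture, find which mocap sample lands on each plate frame. The sketch below is hypothetical (the function names and the 24 fps plate / 240 Hz capture rates are illustrative assumptions, not the production’s actual pipeline code):

```python
def tc_to_seconds(tc, fps):
    """Convert an 'HH:MM:SS:FF' timecode string to seconds at the given rate."""
    h, m, s, f = (int(part) for part in tc.split(":"))
    return h * 3600 + m * 60 + s + f / fps

def mocap_sample_for_frame(plate_frame, plate_start_tc, mocap_start_tc,
                           plate_fps=24, mocap_fps=240):
    """Index of the mocap sample that coincides with a given plate frame.
    Assumes both devices were jam-synced to the same timecode source."""
    shot_time = tc_to_seconds(plate_start_tc, plate_fps) + plate_frame / plate_fps
    offset = shot_time - tc_to_seconds(mocap_start_tc, mocap_fps)
    return round(offset * mocap_fps)

# Plate starts 2 seconds into the capture: frame 0 maps to sample 480 at 240 Hz.
print(mocap_sample_for_frame(0, "01:00:10:00", "01:00:08:00"))  # 480
```

Once each shot knows its sample offset, the same aligned performance can be handed to every vendor and reused as the core of the postvis, which is what keeps the keyframe and mocap halves of Ted in lockstep.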
If Ted makes any more appearances on talk shows/awards shows will you be using Xsens for that too?
I assume that we will be using the MVN system since we have an established pipeline. It’s pretty much the same as what we do for the feature, but it depends on who is doing the editorial duties, since the first pass at deciding which part of a mocap performance is used is made by the editor.