
iPi Motion Capture V.4 software offers live preview

iPi Soft, makers of motion capture technology, has introduced iPi Motion Capture Version 4, the next version of its markerless motion capture software. Version 4 includes realtime preview capability for a single depth sensor. Other new features and enhancements include support for new depth sensors (Intel RealSense D415/D435, Asus Xtion2 and Orbbec Astra/Astra Pro); improved arm and body tracking; and support for action cameras such as GoPro and SJCAM. With Version 4, iPi Soft also introduces a perpetual license model.

The realtime tracking feature in Version 4 uses iPi Recorder, free software from iPi Soft for capturing, playing back and processing video from multiple cameras and depth sensors, to communicate with the iPi Mocap Studio software, which tracks in realtime and instantly transfers motion to 3D characters. This lets users see how the motion will look on a 3D character and adjust the performance accordingly while acting and recording, without having to redo multiple iterations of acting, recording and offline tracking.

Live tracking results can then be stored to disk for additional offline post processing, such as tracking refinement (to improve tracking accuracy), manual corrections and jitter removal.
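For readers new to this kind of workflow, below is a minimal, purely illustrative sketch of what a realtime preview loop looks like conceptually: grab a depth frame, solve a pose, retarget it to a character, and keep the frames for later offline refinement. It is not iPi Soft’s API; the sensor read, pose solve and retarget functions are stubs that return synthetic data.

```python
# Illustrative only: a generic realtime mocap preview loop, not iPi Soft's API.
# Sensor access and the pose solver are stubbed with synthetic data.
import time
import random

JOINTS = ["hips", "spine", "head", "l_hand", "r_hand", "l_foot", "r_foot"]

def read_depth_frame():
    """Stand-in for grabbing a frame from a depth sensor (hypothetical)."""
    return [[random.random() for _ in range(4)] for _ in range(4)]  # tiny fake depth map

def solve_pose(depth_frame):
    """Stand-in for the markerless tracking step: depth frame -> joint rotations."""
    return {j: (random.uniform(-5, 5), 0.0, 0.0) for j in JOINTS}

def retarget_to_character(pose):
    """Stand-in for applying the solved pose to a 3D character rig."""
    return {f"rig:{joint}": rot for joint, rot in pose.items()}

def live_preview(seconds=2.0, fps=30):
    recorded = []                      # frames kept for offline refinement later
    frame_time = 1.0 / fps
    end = time.time() + seconds
    while time.time() < end:
        depth = read_depth_frame()
        pose = solve_pose(depth)       # realtime tracking
        retarget_to_character(pose)    # instant transfer to the 3D character
        recorded.append(pose)          # stored to disk in a real pipeline
        time.sleep(frame_time)
    return recorded

if __name__ == "__main__":
    takes = live_preview(seconds=0.2)
    print(f"captured {len(takes)} preview frames")
```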

iPi Mocap Version 4 currently includes the realtime tracking feature for a single depth sensor only. iPi Soft is scheduled to bring realtime functionality for multiple depth sensors to users by the end of this year.

Development of plug-ins for popular 3D game engines, including Unreal Engine and Unity, is also underway.

Tracking improvements include:
• Realtime tracking of human performance for live preview with a single depth sensor (Basic and Pro configurations). Motion can be transferred to a 3D character.
• Improved individual body-part tracking: after performing initial tracking, users can redo tracking for selected body parts to fix tracking errors more quickly.
• Improved tracking of the head and hands when used in conjunction with Sony’s PS Move motion controller, which now takes joint limits into account.

New sensors and cameras supported include:
• Support for Intel RealSense D415 / D435 depth cameras, Asus Xtion2 motion sensors and Orbbec Astra / Astra Pro 3D cameras.
• Support for action cameras such as GoPro and SJCAM, including wide-angle cameras, allows users to get closer to the camera, decreasing space requirements.
• The ability to calibrate the internal parameters of any individual camera helps users correctly reconstruct 3D information from video for improved overall tracking quality (see the calibration sketch after this list).
• The ability to load unsynchronized videos from multiple cameras and then use iPi Recorder to sync and convert footage to .iPiVideo format used by iPi Mocap Studio.
• Support of fast motion action cameras — video frame rate can reach up to 120fps to allow for tracking extremely fast motions.
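As a rough illustration of the intrinsic-calibration step mentioned in the list above, the sketch below uses OpenCV’s standard checkerboard workflow to solve for a camera’s focal length, principal point and lens distortion. The checkerboard size and image paths are placeholders; this is not iPi Recorder’s internal calibration code.

```python
# A minimal sketch of per-camera intrinsic calibration with OpenCV (assumes a
# 9x6 checkerboard filmed by the camera; the file path is a placeholder).
import glob
import numpy as np
import cv2

PATTERN = (9, 6)                                   # inner corners of the checkerboard
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
image_size = None
for path in glob.glob("calibration_frames/*.png"):  # placeholder path
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

if obj_points:
    # Solves for the camera matrix (focal length, principal point) and lens
    # distortion, which is what lets 3D positions be reconstructed correctly.
    rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    print("reprojection error:", rms)
    print("intrinsics:\n", camera_matrix)
else:
    print("no checkerboard views found; add calibration frames first")
```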

Version 4’s perpetual license is not time-limited and includes two years of full support and software updates. Afterwards, users have the option to subscribe to a support plan to continue receiving full support and software updates. Alternatively, they can continue using their latest software version.

iPi Motion Capture Version 4 is also available as a subscription-based model. Prices range from $165 to $1995 depending on the version of software (Express, Basic, Pro) and the duration of subscription.

The Basic edition supports up to 6 Sony PS3 Eye cameras or 2 Kinect sensors and tracking of a single actor. The Pro edition offers full 16-camera/four-depth-sensor capability and can track up to three actors. A 30-day free trial of Version 4 is available.

 

House of Moves adds Selma Gladney-Edelman, Alastair Macleod

Animation and motion capture studio House of Moves (HOM) has strengthened its team with two new hires — Selma Gladney-Edelman was brought on as executive producer and Alastair Macleod as head of production technology. The two industry vets are coming on board as the studio shifts to offer more custom short- and long-form content, and expands its motion capture technology workflows to its television, feature film, video game and corporate clients.

Selma Gladney-Edelman was most recently VP of Marvel Television for their primetime and animated series. She has worked in film production, animation and visual effects, and was a producer on multiple episodic series at Walt Disney Television Animation, Cartoon Network and Universal Animation. As director of production management across all of the Discovery Channels, she oversaw thousands of hours of television and film programming including TLC projects Say Yes To the Dress, Little People, Big World and Toddlers and Tiaras, while working on the team that garnered an Oscar nom for Werner Herzog’s Encounters at the End of the World and two Emmy wins for Best Children’s Animated Series for Tutenstein.

Scotland native Alastair Macleod is a motion capture expert who has worked in production, technology development and as an animation educator. His production experience includes work on films such as Lord of the Rings: The Two Towers, The Matrix Reloaded, The Matrix Revolutions, 2012, The Twilight Saga: Breaking Dawn — Part 2 and Kubo and the Two Strings for facilities that include Laika, Image Engine, Weta Digital and others.

Macleod pioneered full body motion capture and virtual reality at the research department of Emily Carr University in Vancouver. He was also the head of animation at Vancouver Film School and an instructor at Capilano University in Vancouver. Additionally, he developed PeelSolve, a motion capture solver plug-in for Autodesk Maya.

Sony Imageworks’ VFX work on Spider-Man: Homecoming

By Daniel Restuccio

With Sony’s Spider-Man: Homecoming getting ready to release digitally on September 26 and on 4K Ultra HD/Blu-ray, Blu-ray 3D, Blu-ray and DVD on October 17, we thought this was a great opportunity to talk about some of the film’s VFX.

Sony Imageworks has worked on every single Spider-Man movie in some capacity since the 2002 Sam Raimi version. On Spider-Man: Homecoming, Imageworks worked mostly on the “third act,” which encompasses the warehouse, hijacked plane and beach destruction scenes. This meant delivering over 500 VFX shots, created by a team of over 30 artists and compositors (which at one point peaked at 200), and rendering out 2K finished scenes.

All of the Imageworks artists used Dell R7910 workstations with 24-core Intel Xeon E5-2620 CPUs, 64GB of memory and Nvidia Quadro P5000 graphics cards. They used cineSync for client reviews and their in-house Itview software internally. Rendering was handled by SPI Arnold (not the commercial version) and their custom shading system. Software used included Autodesk Maya 2015, Foundry’s Nuke X 10.0 and Side Effects Houdini 15.5. They avoided plug-ins so that their auto-vend process, which breaks comps into layers for the 3D conversion, would be as smooth as possible. Everything was rendered on their on-premises renderfarm. They also used Sony’s Kinect-based scanning technique, which allowed their artists to do performance capture on themselves and rapidly prototype ideas and generate reference.

We sat down with Sony Imageworks VFX supervisor Theo Bailek, who talks about the studio’s contribution to this latest Spidey film.

You worked on The Amazing Spider-Man in 2012 and The Amazing Spider-Man 2 in 2014. From a visual effects standpoint, what was different?
You know, not a lot. Most of the changes have been iterative improvements. We used many of the same technologies that we developed on the first few movies. How we do our city environments is a specific example of how we build off of our previous assets and techniques, leveraging the library of buildings and props. As the machines get faster and the software more refined, our artists get more iterations. This alone gave our team a big advantage over the workflows from five years earlier. As the software and pipeline here at Sony have gotten more accessible, it has allowed us to integrate new artists more quickly.

It’s a lot of very small, incremental improvements along the way. The biggest technological change between now and the early Spider-Man films is our rendering technology. We use a more physically accurate incarnation of our global illumination Arnold renderer. As the shaders and rendering algorithms become more naturalistic, we’re able to conform our assets and workflows. In the end, this translates to a more realistic image out of the box.

The biggest thing on this movie was the inclusion of Spider-Man in the Marvel Universe: a different take on the film and how they wanted it to go. That would probably be the biggest difference.

Did you work directly with director Jon Watts, or did you work with production VFX supervisor Janek Sirrs in terms of the direction on the VFX?
During the shooting of the film, I had the advantage of working directly with both Janek and Jon. The entire creative team pushed for open collaboration, and Janek was very supportive of this goal. He would encourage and facilitate interaction with both the director and Tom Holland (who played Spider-Man) whenever possible. Everything moved so quickly on set that oftentimes, if you waited to suggest an idea, you’d lose the chance, as they would have to set up for the next scene.

The sooner Janek could get his vendor supervisors comfortable interacting, the bigger our contributions. While on set I often had the opportunity to bring our asset work and designs directly to Jon for feedback. There were times on set when we’d iterate on a design three or four times over the span of the day. Getting this type of realtime feedback was amazing. Once post work began, most of our reviews were directly with Janek.

When you had that first meeting about the tone of the movie, what was Jon’s vision? What did he want to accomplish in this movie?
Early on, it was communicated from him through Janek. It was described as, “This is sort of like a John Hughes, Ferris Bueller’s take on Spider-Man. Being a teenager, he’s not meant to be fully in control of his powers or the responsibility that comes with them. This translates to not always being super-confident or proficient in his maneuvers.” That was the basis of it.

Their goal was a more playful, relatable character. We accomplished this by being more conservative in our performances and in what Spider-Man was capable of doing. Yes, he has heightened abilities, but we never wanted every landing and jump to be perfect. Even superheroes have off days, especially teenage ones.

This being part of the Marvel Universe, was there a pool of common assets that all the VFX houses used?
Yes. The Marvel movies are incredibly collaborative and always use multiple vendors, so we’re constantly sharing assets. That said, there are a lot of things you just can’t share because of the different systems under the hood. Textures and models are easily exchanged, but the way textures are combined in materials and shaders makes them not reusable given the different renderers at each company. Character rigs are not reusable across vendors either, as facilities have very specific binding and animation tools.

It is typical to expect only models, textures, base joint locations and finished turntable renders for reference when sending or receiving character assets. As an example, we were able to leverage the Avengers Tower model we received from ILM to some extent. We also supplied our Toomes costume model and our Spider-Man character and costume models to other vendors.

The scan data of Tom Holland, was it a 3D body scan of him or was there any motion capture?
Multiple sessions were done throughout the production process. A large volume of stunts and test footage was shot with Tom before filming, which proved invaluable to our team. He’s incredibly athletic and can do a lot of his own stunts, so the mocap takes we came away with were often directly usable. Given that Tom could do backflips and somersaults in the air, we were able to use this footage as reference for how to instruct our animators later on down the road.
Toward the later end of filming, we did a second capture session, focusing on the shots we wanted to acquire using specific mocap performances. Then, several months later, we followed up with a third mocap session to get any new performances required as the edit solidified.

As we were trying to create a signature performance that felt like Tom Holland, we exclusively stuck to his performances whenever possible. On rare occasions when the stunt was too dangerous, a stuntman was used. Other times we resorted to using our own in-house method of performance capture using a modified Xbox Kinect system to record our own animators as they acted out performances.

In the end, performance capture accounted for roughly 30% of the character animation of Spider-Man and Vulture in our shots, with the remaining 70% completed using traditional keyframed methods.

How did you approach the fresh take on this iconic film franchise?
It was clear from our first meeting with the filmmakers that Spider-Man in this film was intended to be a more relatable and light-hearted take on the genre. Yes, we wanted to take the characters and their stories seriously, but not at the expense of having fun with Peter Parker along the way.

For us that meant that despite Spider-Man’s enhanced abilities, how we displayed those abilities on screen needed to always feel grounded in realism. If we faltered on this goal, we ran the risk of eroding the sense of peril and therefore any empathy toward the characters.

When you’re animating a superhero it’s not easy to keep the action relatable. When your characters possess abilities that you never see in the real world, it’s a very thin line between something that looks amazing and something that is amazingly silly and unrealistic. Over-extend the performances and you blow the illusion. Given that Peter Parker is a teenager and he’s coming to grips with the responsibilities and limits of his abilities, we really tried to key into the performances from Tom Holland for guidance.

The first tool at our disposal and the most direct representation of Tom as Spider-Man was, of course, motion capture of his performances. On three separate occasions we recorded Tom running through stunts and other generic motions. For the more dangerous stunts, wires and a stuntman were employed as we pushed the limit of what could be recorded. Even though the cables allowed us to record huge leaps, you couldn’t easily disguise the augmented feel to the actor’s weight and motion. Even so, every session provided us with amazing reference.

Though the bulk of the shots were keyframed, the animation was always informed by reference. We looked at everything that was remotely relevant for inspiration. For example, we have a scene in the warehouse where the Vulture’s wings are racing toward you as Spider-Man leaps into the air, stepping on top of the wings before flipping to avoid the attack. We found this amazing reference of people who leap over cars racing in excess of 70mph. It’s absurdly dangerous and hard to justify why someone would attempt a stunt like that, and yet it was the perfect inspiration for our shot.

In trying to keep the performances grounded and stay true to the goals of the filmmakers, we also found it was better to err on the side of simplicity when possible. Typically, when animating a character, you look for opportunities to create strong silhouettes so the actions read clearly, but we tended to ignore these rules in favor of keeping everything dirty, with an unscripted feel. We let his legs cross over and knees knock together. Our animation supervisor, Richard Smith, pushed our team to follow the guidelines of “economy of motion.” If Spider-Man needed to get from point A to B, he’d take the shortest route — there’s no time to strike an iconic pose in between!


Let’s talk a little bit about the third act. You had previsualizations from The Third Floor?
Right. All three of the main sequences we worked on in the third act had extensive previs completed before filming began. Janek worked extremely closely with The Third Floor and the director throughout the entire process of the film. In addition, Imageworks was tapped to help come up with ideas and takes. From early on it was a very collaborative effort on the part of the whole production.
The previs for the warehouse sequence was immensely helpful in the planning of the shoot. Given we were filming on location and the VFX shots would largely rely on carefully choreographed plate photography and practical effects, everything had to be planned ahead of time. In the end, the previs for that sequence resembled the final shots in most cases.

The digital performances of our CG Spider-Man varied at times, but the pacing and spirit remained true to the previs. As our plane battle sequence was almost entirely CG, the previs stage was more of an ongoing process for this section. Given that we weren’t locked into plates for the action, the filmmakers were free to iterate and refine ideas well past the time of filming. In addition to The Third Floor’s previs, Imageworks’ internal animation team also contributed heavily to the ideas that eventually formed the sequence.

For the beach battle, we had a mix of plate and all-CG shots. Here the previs was invaluable once again in informing the shoot and subsequent reshoots later on. As there were several all-CG beats to the fight, we again had sections where we continued to refine and experiment till late into post. As with the plane battle, Imageworks’ internal team contributed extensively to pre and postvis of this sequence.

The one scene you mentioned — the fight in the warehouse — the production notes talk about that scene being inspired by an actual scene from the comic The Amazing Spider-Man #33.
Yes, in our warehouse sequence there is a series of shots directly inspired by the comic book’s panels. Different circumstances in the comic and our sequence lead to Spider-Man being trapped under debris, but Tom’s performance and the camera angles that were shot pay homage to the comic as he escapes. As a side note, many of those shots were added later in the production and filmed as reshoots.

What sort of CG enhancements did you bring to that scene?
For the warehouse sequence, we added a digital Spider-Man, the Vulture wings and CG destruction, enhanced the practical effects, and extended or repaired the plate as needed. The columns that the Vulture wings slice through as they circle Spider-Man were practically destroyed with small detonation charges. These explosives were rigged within cement that encased the actual warehouse’s steel girder columns. Fans on set were used to help mimic the turbulence that would be present from a flying wingsuit powered by turbines. These practical effects were immensely helpful for our effects artists, as they provided the best possible in-camera reference. We kept much of what was filmed, adding our fully reactive FX on top to help tie it into the motion of our CG wings.

There’s quite a bit of destruction when the Vulture wings blast through walls as well. For those shots we relied entirely on CG rigid body dynamics simulations for the effects, as filming them would have been prohibitive and unreliable. Though most of the shots in this sequence had photographed plates, a few still required the background to be generated in CG. One shot in particular, with Spider-Man sliding back and rising up, stands out. As the shot was conceived later in the production, there was no footage for us to use as our main plate. We did, however, have many tiles shot of the environment, which we were able to use to quickly reconstruct the entire set in CG.

I was particularly proud of our team for their work on the warehouse sequence. The quality of our CG performances and the look of the rendering are difficult to distinguish from the live action. Even the rare all-CG shots blended seamlessly between scenes.

When you were looking at that ending plane scene, what sort of challenges were there?
Since over 90 shots within the plane sequence were entirely CG, we faced many challenges, for sure. With such a large number of shots, and without the typical constraints that practical plates impose, we knew a turnkey pipeline was needed. There just wouldn’t be time for custom workflows for each shot type. This was something Janek, our client-side VFX supervisor, stressed from the onset: “Show early, show often and be prepared to change constantly!”

To accomplish this, a balance of 3D and 2D techniques was developed to make shot production as flexible as possible. Using the 3D abilities of our compositing software, Nuke, we were able to offload significant portions of shot production into the compositors’ hands. For example, the city ground plane you see through the clouds, the projections of imagery on the plane’s cloaking LEDs and the damaged, flickering LEDs were all handled in the composite.
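As a rough illustration of the kind of work that can move into the composite, the sketch below shows the basic pinhole projection that puts a 3D ground plane into a 2D frame. It is conceptual only, written in plain Python rather than as a Nuke node graph, and the camera values are made up.

```python
# Conceptual sketch of pinhole-camera projection: the idea behind handling the
# city ground plane in the composite rather than in full 3D renders. Not Nuke code.
import numpy as np

def project_points(points_world, cam_pos, focal, width, height):
    """Project 3D points (camera at cam_pos looking down -Z) to pixel coords."""
    p = points_world - cam_pos                     # world -> camera space (no rotation here)
    p = p[p[:, 2] < 0]                             # keep points in front of the camera
    x = focal * p[:, 0] / -p[:, 2] + width / 2.0   # perspective divide
    y = focal * p[:, 1] / -p[:, 2] + height / 2.0
    return np.stack([x, y], axis=1)

# A coarse grid standing in for the ground plane seen through the clouds.
gx, gz = np.meshgrid(np.linspace(-500, 500, 11), np.linspace(-1000, -100, 10))
ground = np.stack([gx.ravel(), np.full(gx.size, -300.0), gz.ravel()], axis=1)

pixels = project_points(ground, cam_pos=np.array([0.0, 0.0, 0.0]),
                        focal=1200.0, width=2048, height=858)
print(pixels.shape[0], "grid points projected into the frame")
```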

A unique challenge of the sequence that stands out is definitely the cloaking. Making an invisible jet was only half of the equation. The LEDs that made up the basis of the effect also needed to be able to illuminate our characters, in both wide and extreme close-up shots. We’re talking about millions of tiny light sources, which is a particularly expensive rendering problem to tackle. Mix in the fact that the design of these flashing light sources is highly subjective and thus prone to needing many revisions to get the look right.

Painting control texture maps for the locations of these LEDs wouldn’t be feasible at the detail needed for our extreme close-up shots. Modeling them in would have been prohibitive as well, resulting in excessive geometric complexity. Instead, using Houdini, our effects software, we built algorithms to automatically distribute point clouds of data representing each LED position. This could be reprocessed as necessary without incurring the large amounts of time a texture or model solution would have required, which was a real factor, as the plane’s base model often needed adjustments to accommodate design or performance changes. The point cloud data was then used by our rendering software to instance geometric approximations of inset LED compartments on the surface.
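The sketch below is a rough, generic take on that idea (not Imageworks’ Houdini setup): scatter an area-weighted point cloud over a triangle mesh and keep a position and orientation per point, which a renderer could then use to instance LED geometry.

```python
# Rough sketch: area-weighted scatter of a point cloud over a triangle mesh to
# mark LED positions and orientations for render-time instancing.
import numpy as np

rng = np.random.default_rng(0)

def scatter_on_triangles(verts, tris, count):
    """Scatter `count` points uniformly by area over a triangle mesh."""
    a, b, c = verts[tris[:, 0]], verts[tris[:, 1]], verts[tris[:, 2]]
    areas = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
    tri_idx = rng.choice(len(tris), size=count, p=areas / areas.sum())
    # Uniform barycentric sampling inside each chosen triangle.
    u, v = rng.random(count), rng.random(count)
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    pts = a[tri_idx] + u[:, None] * (b - a)[tri_idx] + v[:, None] * (c - a)[tri_idx]
    normals = np.cross(b - a, c - a)[tri_idx]
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    return pts, normals            # positions + orientations for LED instances

# Two triangles standing in for a patch of the plane's skin.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
tris = np.array([[0, 1, 2], [0, 2, 3]])
led_pos, led_nrm = scatter_on_triangles(verts, tris, count=1000)
print(led_pos.shape[0], "LED instance points generated")
```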

Interestingly, this was a technique we adapted from the rendering technology we use to create room interiors for our CG city buildings. When rendering large CG buildings, we can’t afford to model the hundreds and sometimes thousands of rooms you see through the office windows. Instead of modeling the complex geometry you see through the windows, we procedurally generate small inset boxes for each window with randomized pictures of different rooms. This is the same underlying technology we used to create the millions of highly detailed LEDs on our plane.
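A much-simplified sketch of that window trick might look like the following; the geometry sizes, texture count and field names are invented for illustration.

```python
# Simplified sketch of the "rooms behind windows" idea: rather than modeling
# interiors, generate a shallow inset box per window and assign a random room
# texture for the renderer to instance. All names and dimensions are illustrative.
import random

def make_window_interiors(window_grid, inset_depth=0.4, room_textures=8):
    """window_grid: list of (floor, column, x, y, z) window anchor positions."""
    instances = []
    for floor, col, x, y, z in window_grid:
        instances.append({
            "floor": floor,
            "column": col,
            "box_min": (x, y, z - inset_depth),                # shallow box behind the glass
            "box_max": (x + 2.0, y + 2.5, z),
            "room_texture": random.randrange(room_textures),   # randomized room picture
        })
    return instances

# A 10-floor, 6-window facade stood in by a small grid of anchors.
grid = [(f, c, c * 3.0, f * 4.0, 0.0) for f in range(10) for c in range(6)]
rooms = make_window_interiors(grid)
print(len(rooms), "window interiors generated procedurally")
```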

First, our lighters supplied base renders for our compositors to work with inside Nuke. The compositors quickly animated flashing damage to the LEDs by projecting animated imagery onto the plane using Nuke’s 3D capabilities. Once we got buy-off on the animation of the imagery, we’d pass this work back to the lighters as 2D layers that could be used as texture maps for our LED lights in the renderer. These images would instruct each LED when it was on and what color it needed to be. This back-and-forth technique allowed us to iterate more rapidly on the look of the LEDs in 2D before committing to final 3D renders with all of the expensive interactive lighting.
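The handoff described above boils down to sampling an animated 2D layer at each LED’s UV to decide, frame by frame, whether that LED is lit and what color it emits. The sketch below fakes the compositors’ damage layer with a moving band of pixels; the resolution, LED count and threshold are arbitrary.

```python
# Hedged sketch of the 2D-to-3D handoff: an animated image sequence is sampled
# at each LED's UV to drive per-frame on/off state and emission color.
import numpy as np

def fake_damage_frame(width=64, height=32, frame=0):
    """Stand-in for one frame of the compositors' animated LED damage layer."""
    img = np.zeros((height, width, 3), dtype=np.float32)
    img[:, (frame * 3) % width] = (1.0, 0.3, 0.1)   # a moving band of 'damaged' LEDs
    return img

def sample_leds(frame_img, led_uvs, threshold=0.05):
    """Return (on/off, color) per LED by nearest-pixel lookup at its UV."""
    h, w, _ = frame_img.shape
    px = (led_uvs[:, 0] * (w - 1)).astype(int)
    py = (led_uvs[:, 1] * (h - 1)).astype(int)
    colors = frame_img[py, px]
    on = colors.max(axis=1) > threshold
    return on, colors

led_uvs = np.random.default_rng(1).random((5000, 2))   # one UV per LED on the plane
for f in range(3):
    on, colors = sample_leds(fake_damage_frame(frame=f), led_uvs)
    print(f"frame {f}: {int(on.sum())} LEDs lit")
```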

Is that a proprietary system?
Yes, this is a shading system that was actually developed for our earlier Spider-Man films back when we used RenderMan. It has since been ported to work in our proprietary version of Arnold, our current renderer.

OptiTrack’s parent company merging with Planar Systems

Planar Systems, a Leyard company and a worldwide provider of display systems, has entered into a definitive merger agreement to acquire NaturalPoint, which makes optical tracking and motion capture solutions, for $125 million in an all-cash transaction. NaturalPoint makes OptiTrack, TrackIR and SmartNav products.

The acquisition brings together companies specializing in complementary technologies to increase attention on the high growth and strategic opportunities in augmented and virtual reality, and in other market segments, like drone tracking, computer visualization and animation.

NaturalPoint is headquartered two hours south of the Planar campus in Oregon and employs a team of 60 in North America. The company has a 25,000-square-foot facility for its optical tracking business.

The acquisition is subject to customary closing conditions, and is expected to finalize in the fourth calendar quarter of 2016 or early in the first calendar quarter of 2017. NaturalPoint will remain a separate business with its own executive team, customers and market initiatives.

AWE puts people in a VR battle using mocap

What’s the best way to safely show people what it’s like to be on a battlefield? Virtual reality. Mobile content developer AWE employed motion capture to create a virtual reality experience for the Fort York national historic site in Toronto. Visitors to the nine-acre site will use Google Cardboard to experience key battles and military fortifications in a simulated and safe immersive 3D environment.

“We created content that, when viewed on a mobile phone using Google Cardboard, immerses visitors inside a 360-degree environment that recreates historical events from different eras that occurred right where visitors are standing,” explains Srinivas Krishna, CEO of Toronto-based AWE, who directed the project.

Fort York played a pivotal role during the War of 1812, when US naval ships attacked the fort, which was then under the control of the British army. To recreate that experience, along with several other noteworthy events, Krishna designed a workflow that leaned heavily on iPi Soft markerless mocap software.

The project, which began three years ago, included creating animated model rigs for the many characters that would be seen virtually. Built for Unity’s 3D game engine, the character models, as well as the environments (designed to look like Fort York at the time), were animated using Autodesk Maya. That content was then composited with the mocap sequences, along with facial mocap data captured using Mixamo Face Plus.

“On the production side, we had a real problem because of the huge number of characters the client wanted represented,” Krishna says. “It became a challenge on many levels. We wondered how are we going to create our 3D models and use motion capture in a cost-effective way. Markerless mocap is perfect for broad strokes, and as a filmmaker I found working with it to be a marvelous creative experience.”

While it would seem historical sites like Fort York are perfect for these types of virtual reality experiences, Krishna notes that the project was a bit of a risk given that when they started Google Cardboard wasn’t yet on anyone’s radar.

“We started developing this in 2012, which actually turned out to be good timing because we were able to think creatively and implement ideas, while at the same time the ecosystem for VR was developing,” says Krishna. “We see this as a game-changer in this arena, and I think more historical sites around the world are going to be interested in creating these types of experiences for their visitors.”

Performance-capture companies Animatrik, DI4D collaborate

Motion- and facial-capture companies Animatrik Film Design and Dimensional Imaging (DI4D) have launched a new collaboration based on their respective mocap expertise. The alliance will deliver facial performance-capture services to the VFX and video game communities across North America.

Animatrik technology has been used on such high-profile projects as Image Engine’s Chappie, Microsoft’s Gears of War series and Duncan Jones’ upcoming Warcraft. DI4D’s technology has appeared in such shows as the BBC’s Merlin and video games like Left 4 Dead 2 and Quantum Break. The new collaboration will allow both companies to bring even more true-to-life animation to similar projects in the coming months.

Animatrik has licensed DI4D’s facial performance-capture software and purchased DI4D systems, which it will operate from its Vancouver, British Columbia, and Toronto motion-capture studios. Animatrik will also offer an “on-location” DI4D facial performance-capture service, which has been used before on projects such as Microsoft’s Halo 4.

Check out our video with Animatrik’s president, Brett Ineson, at SIGGRAPH.

iPi Soft V3 offers Kinect 2 sensor support, more

iPi Soft is offering Version 3.0 of its iPi Motion Capture markerless motion capture technology, which includes support for the new Microsoft Kinect 2 sensor, increased tracking speed, improved arm and leg tracking and a simplified calibration process.

Michael Nikonov, iPi Soft’s founder/chief technology architect, explains that iPi Motion Capture Version 3.0’s compatibility with the Microsoft Kinect 2 sensor lets users get closer to the camera, further reducing overall space requirements, while new calibration algorithms simplify the camera set up process and ensure more accurate tracking.

Here are the updates broken down: 10-20 percent faster tracking algorithms; improved calibration of multi-camera systems with no need to manually align cameras; comprehensive feedback on calibration quality, which can save time by preventing the use of incorrect calibration data for tracking; improved tracking for arms and legs with fewer tracking errors; and a redesigned user interface.

Additionally, there are several features related to the Kinect 2 sensor that iPi Soft plans to offer users in the near future. These include distributed recording (which will allow recording multiple Kinect 2 sensors using several PCs); up to 12-camera support; substantially increased tracking speed; selected limb clean up; and three actor tracking.

The new version of iPi Motion Capture has been used by production companies and videogame developers, including Rodeo FX, The Graphics Film Company and Ghost Town Media.

Pricing
iPi Soft has introduced a new subscription payment model, with options for a yearly subscription for frequent software users or a three-month plan for those who only need the software on a limited, per-project basis.

Prices range from $45.00 to $1,195.00 depending on the version of software (Express, Basic, Pro) and the duration of subscription. Customers who purchased iPi Motion Capture Version 2 software after July 1, 2014 will receive a free one-year subscription to V3. Customers with iPi Motion Capture V2 or V1 will receive a one-time 50 percent discount on a one-year subscription to Version 3.

Upgrading to Version 3 is optional, and a free 30-day trial of Version 3 is available.

Get to know performance capture with James Knight

James Knight, who worked on James Cameron’s Avatar while at Giant Studios, knows motion capture, or more precisely “performance capture.” It’s his specialty, as they say.

Knight is currently at BluStreak Media, a vertically integrated film and post studio, based on the Universal lot. The team has credits that include Real Steel, The Amazing Spider-Man (2012) and the Iron Man franchise. He also works with DuMonde Visual Effects, Richard Edlund’s company.

I thought it would be fun to get some insight and pointers from this LA-based Englishman on an area of the industry that not many know a lot about. He was kind enough to oblige.

So here’s James:

Performance Capture IS Animation
If you are a classic animator it tends to be like “jazz hands at 20 paces,” so some studios known…