Category Archives: Virtual Reality

Review: Lenovo’s ThinkPad P71 mobile workstation

By Mike McCarthy

Lenovo was nice enough to send me their newest VR-enabled mobile workstation to test out on a VR workflow project I am doing. The new ThinkPad P71 is a beast with a 17-inch UHD IPS screen. The model they sent to me was equipped with the fastest available processor, a Xeon E3-1535M v6, with four cores processing eight threads at an official base clock of 3.1GHz. It has 32GB of DDR4-2400 ECC RAM, with two more slots allowing that to be doubled to 64GB if desired.

The headline feature is the Nvidia Quadro P5000 mobile GPU, with 2,048 CUDA cores, fed by another 16GB of dedicated GDDR5 memory. The storage configuration is a single 1TB NVMe SSD populating one of two available M.2 slots. This configuration currently lists for $5,279, discounted to $4,223.20 on Lenovo.com at the time of this writing. So while it is not cheap, it is one of the most powerful mobile workstations you can buy right now.

Connectivity-wise, it has dual Thunderbolt 3 ports, which can also be used for USB 3.1 Type-C devices. It has four more USB 3.1 Type-A ports and a Gigabit Ethernet port. You have a number of options for display connectivity: besides the Thunderbolt ports, there is a MiniDP 1.2 port and an HDMI 1.4 port (1.4 due to Intel graphics limitations). It has an SDXC slot, an ExpressCard/34 slot and a single 1/8-inch headphone/mic combo jack. The system also has a docking connector and a rectangular port for the included 230W power adapter.

It has the look and feel of a traditional ThinkPad, which goes back to the days when they were made by IBM. It has the customary TrackPoint as well as a touchpad. Both have three mouse buttons, which I like in theory, but I constantly find myself trying to click with the center button to no avail. I would either have to get used to it or set the center action to click as well, defeating the purpose of the third button. The Fn key in the bottom corner will take some getting used to as well, as I keep hitting it instead of Ctrl, but I adapted to a similar configuration on my current laptop.

I didn’t like the combo jack at first, because it required a cheap adapter, but now that I have gotten one, I see why it is the future, once all the peripherals support it. As recently as last week I plugged my mic and headphones in backwards, the sort of mistake unlabeled ports invite, and the combo jack solves that once and for all. It is the same style of jack found on most cell phones, and you only need an adapter for the mic functionality; regular headphones work by default.

The system doesn’t weigh as much as I expected, probably due to the lack of spinning disks or an optical drive, which can be added if desired. It came relatively clean, with Windows 10 Pro installed and without too many other applications or utilities pre-installed. It had all of the needed drivers and a simple utility for operating the integrated X-Rite Pantone color calibrator for the screen. There was a utility for adding any other applications that would normally be included, which I used to download the Lenovo Performance Tuner. I use the Performance Tuner more for monitoring usage than adjusting settings, but it can be nice to have everything in one place, especially in Windows 10.

The system boots up in about 10 seconds and shuts down even faster. Hibernating takes twice as long, which is to be expected with that much RAM to be written to disk, even with an NVMe SSD, but that may be worth the extra time to keep your applications open. My initial tests of the SSD showed a 1,700MB/s write speed with 2,500MB/s reads. Longer endurance tests saw write speeds decrease to 1,200MB/s, but read speeds remained consistently above 2,500MB/s. That should be more than enough throughput for most media work; it even allowed me to play back uncompressed 6K content, and it should allow uncompressed 4K media capture if you connect an I/O device to the Thunderbolt bus.
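To put those drive numbers in context, here is a quick back-of-the-envelope calculation of what uncompressed playback and capture actually demand. The frame sizes and bit depths below are illustrative assumptions, not figures from my tests:

```python
# Rough throughput math for uncompressed video (assumed frame sizes/bit depths).

def uncompressed_mb_per_sec(width, height, bytes_per_pixel, fps):
    """Required sustained throughput in MB/s for uncompressed video."""
    return width * height * bytes_per_pixel * fps / (1024 * 1024)

# Assuming 6K (6144x3160) 10-bit RGB packed into 4 bytes/pixel at 24fps:
print(uncompressed_mb_per_sec(6144, 3160, 4, 24))  # ~1,780 MB/s to play back

# Assuming 4K (4096x2160) 10-bit RGB at 24fps for capture:
print(uncompressed_mb_per_sec(4096, 2160, 4, 24))  # ~810 MB/s to write
```

Both results land comfortably under the measured sustained rates (2,500MB/s reads, 1,200MB/s writes), which squares with the playback behavior described above.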

The main application I use on a daily basis is Adobe Premiere Pro, so most of my performance evaluation revolves around that program, although I used a few others as well. I was able to load a full feature film off a USB 3.0 drive with no issues. The 6K CineForm and DNxHR media played back at ½ res without issue, and the 6K R3D files played at ¼ res without dropping frames, which is comparable to my big tower.

My next playback test is fairly unique to my workflow, but it is a good benchmark of what is possible. I was able to connect three 1080p televisions to the MiniDP port using an MST (Multi-Stream Transport) hub with three HDMI ports. Using the Nvidia Mosaic functionality offered by the Quadro P5000, I can span them into a single display, which Premiere can send output to via Adobe’s Mercury Playback Engine. This configuration allows me to play back 6K DNxHR 444 files to all three screens, directly off the timeline, at half res, without dropping frames. My 6K H.265 files play back at full res outside Premiere. That is a pretty impressive display for a laptop. Once I had maxed out the possibilities for playback, I measured a few encodes. In general, the P71 takes about twice as long to encode things in Adobe Media Encoder as my 20-core desktop workstation, but it is twice as fast as my existing quad-core i7-4860 laptop.

The other application I have been taxing my systems with recently is DCP-O-Matic. It takes 30 hours to render my current movie to a 4K DCP on my desktop, which is 18x the film’s runtime, but I know most of my system’s 20 cores are sitting idle based on how the software threads. Doing a similar encode on the Lenovo system took 12.5x the runtime, meaning my 100-minute film should take about 21 hours. The higher base frequency of the quad-core CPU definitely makes a difference in this instance.
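That estimate is simple arithmetic from the multipliers above; here is a minimal sketch using the article’s own numbers:

```python
# Projecting DCP render time from an encode-speed multiplier.
# Multipliers (18x and 12.5x realtime) come from the tests described above.
runtime_min = 100                        # the film's runtime in minutes

desktop_hours = runtime_min * 18 / 60    # 30.0 hours, matching the desktop test
laptop_hours = runtime_min * 12.5 / 60   # ~20.8 hours, i.e., roughly 21 hours

print(desktop_hours, laptop_hours)
```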

The next step was to try my HMD with it to test out the VR capability. My Oculus Rift installed without issues, which is saying something given the peculiarities of Oculus’ software. Maybe there is something to that “VR Ready” program. I did, however, frequently have issues booting up the system with the Rift connected, so I recommend plugging it in after you have your system up and running. Everything VR-related ran great, except for the one thing I actually wanted to do, which was edit 360 video in Premiere with the HMD. There was some incompatibility between the laptop’s drivers and the software.

There are a variety of ways to test battery life, but since this is a VR-ready system, that seemed to be the best approach: how long would it support a VR headset before needing to plug in? I got just short of an hour of heavy 3D VR usage before I started getting low-battery warnings. I was hoping to be able to close the display to save power, since I am not looking at it while using the headset. (I usually set the Close Lid action to Do Nothing on all my systems, because I want to be able to walk into the other room to show someone something on my timeline without affecting the application. If I want to sleep the system, I can press the button.) But whenever the Rift is active, closing the lid puts the machine to sleep immediately, regardless of the settings. So you have to run the display and the HMD anytime you are working in VR. And don’t plan on doing extensive work without plugging in.

Now, to be fair, setting up to use VR involves preparing the environment and configuring sensors, so adding power to that mix is a reasonable requirement, and very similar to 3D gaming. Portable doesn’t always mean untethered. For browsing the Internet, downloading project files and editing articles, I would expect about four hours of battery life before needing to recharge. It is hard to accurately estimate runtime when the system’s performance and power needs scale so much with the user’s activities; the GPU alone scales from 5 watts to 100 watts depending on what is being processed. Still, the runtime is not out of line with what is to be expected from products in this class of performance.

Summing Up
All in all, the P71 is an impressive piece of equipment, and one of only a few ways you can currently get a portable professional VR solution. I recognize that most of my applications aren’t using all of the power I would be carrying around in a P71, so for my own work I would probably look for a smaller, lighter system at the expense of some of that processing power. But for people who have uncompromising needs for the fastest system they can possibly get, the Lenovo P71 fits the bill. It is a solid performer that can do an impressive amount of processing, while still being able to come with you wherever you need to go.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been working on new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

postPerspective Impact Award winners from SIGGRAPH 2017

Last April, postPerspective announced the debut of our Impact Awards, celebrating innovative products and technologies for the post production and production industries that will influence the way people work. We are now happy to present our second set of Impact Awards, celebrating the outstanding offerings presented at SIGGRAPH 2017.

Now that the show is over, and our panel of VFX/VR/post pro judges has had time to decompress, dig out and think about what impressed them, we are happy to announce our honorees.

And the winners of the postPerspective Impact Award from SIGGRAPH 2017 are:

  • Faceware Technologies for Faceware Live 2.5
  • Maxon for Cinema 4D R19
  • Nvidia for OptiX 5.0  

“All three of these technologies are very worthy recipients of our first postPerspective Impact Awards from SIGGRAPH,” said Randi Altman, postPerspective’s founder and editor-in-chief. “These awards celebrate companies that define the leading-edge of technology while producing tools that actually make users’ working lives easier and projects better, and our winners certainly fall into that category.

“While SIGGRAPH’s focus is on VFX, animation, VR/AR and the like, the types of gear they have on display vary. Some are suited for graphics and animation, while others have uses that slide into post production. We’ve tapped real-world users in these areas to vote for our Impact Awards, and they have determined what tools might be most impactful to their day-to-day work. That’s what makes our awards so special.”

There were many new technologies and products at SIGGRAPH this year, and while only three won an Impact Award, our judges felt there were other updates people should know about as well.

Blackmagic Design’s Fusion 9 was certainly turning heads, and Nvidia’s VRWorks 360 Video was called out as well. Chaos Group also caught our judges’ attention with V-Ray for Unreal Engine 4.

Stay tuned for future Impact Award winners in the coming months — voted on by users for users — from IBC.


Quick Look: Jaunt One’s 360 camera

By Claudio Santos

To those who have been following the virtual reality market from the beginning, one very interesting phenomenon is how the hardware development seems to have outpaced both the content creation and the software development. The industry has been in a constant state of excitement over the release of new and improved hardware that pushes the capabilities of the medium, and content creators are still scrambling to experiment and learn how to use the new technologies.

One of the products of this tech boom is the Jaunt One camera. It is a 360 camera that was developed with an explicit focus on addressing the many production complexities that plague real-life field shooting. What do I mean by that? Well, the camera quickly disassembles and allows you to replace a broken camera module. After all, when you’re across the world and the elephant standing in your shot decides to play with the camera, it is quite useful to be able to quickly swap parts instead of having to replace the whole camera or send it in for repair from the middle of the jungle.

Another of the main selling points of the Jaunt One camera is the streamlined cloud finishing service they provide. It takes the content creator all the way from shooting on set through stitching, editing, onlining and preparing the different deliverables for all the different publishing platforms available. The pipeline is also flexible enough to allow you to bring your footage in and out of the service at any point so you can pick and choose what services you want to use. You could, for example, do your own stitching in Nuke, AVP or any other software and use the Jaunt cloud service to edit and online these stitched videos.

The Jaunt One camera takes a few important details into consideration, such as synchronizing the shutters across all of the lenses. This prevents stitching abnormalities with fast-moving objects, which would otherwise be captured at slightly different moments in time by adjacent lenses.

The camera doesn’t have an internal ambisonic microphone, but the cloud service supports ambisonic recordings made in a dual system, or Dolby Atmos. It was interesting to notice that one of the toolset apps they released was the Jaunt Slate, a tool that allows for easy slating on all the cameras (without having to run around the camera like a child, clapping repeatedly) and is meant to automate the synchronization of the separate audio recordings in post.

The Jaunt One camera shows that the market is maturing past its initial DIY stage and the demand for reliable, robust solutions for higher budget productions is now significant enough to attract developers such as Jaunt. Let’s hope tools such as these encourage more and more filmmakers to produce new content in VR.


Blackmagic’s Fusion 9 is now VR-enabled

At SIGGRAPH, Blackmagic was showing Fusion 9, its newly upgraded visual effects, compositing, 3D and motion graphics software. Fusion 9 features new VR tools, an entirely new keyer technology, planar tracking, camera tracking, multi-user collaboration tools and more.

Fusion 9 is available now at a new price point — Blackmagic has lowered the price of the Studio version from $995 to $299. (Blackmagic also offers a free version of Fusion.) The software now works on Mac, PC and Linux.

Those working in VR get a full 360º true 3D workspace, along with a new panoramic viewer and support for popular VR headsets such as Oculus Rift and HTC Vive. Working in VR with Fusion is completely interactive. GPU acceleration makes it extremely fast so customers can wear a headset and interact with elements in a VR scene in realtime. Fusion 9 also supports stereoscopic VR. In addition, the new 360º spherical camera renders out complete VR scenes, all in a single pass and without the need for complex camera rigs.

The new planar tracker in Fusion 9 calculates motion planes for accurately compositing elements onto moving objects in a scene. For example, the new planar tracker can be used to replace signs or other flat objects as they move through a scene. Planar tracking data can also be used on rotoscope shapes. That means users don’t have to manually animate motion, perspective, position, scale or rotation of rotoscoped elements as the image changes.
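Fusion’s implementation isn’t public, but conceptually a planar track boils down to estimating a per-frame homography: the 3x3 transform that maps the tracked plane into each frame. Here is a rough sketch of the technique using OpenCV (which is not part of Fusion; the function and variable names are my own illustration):

```python
# Conceptual sketch of planar-track compositing (corner pinning), not Fusion's API.
import cv2
import numpy as np

def composite_onto_plane(frame, element, plane_corners):
    """Warp 'element' onto the tracked plane and composite it into 'frame'.
    plane_corners: the plane's four corners in this frame (from the tracker),
    ordered top-left, top-right, bottom-right, bottom-left."""
    h, w = element.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # The homography maps the flat element into the tracked plane.
    H, _ = cv2.findHomography(src, np.float32(plane_corners))
    warped = cv2.warpPerspective(element, H, (frame.shape[1], frame.shape[0]))
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H,
                               (frame.shape[1], frame.shape[0]))
    out = frame.copy()
    out[mask > 0] = warped[mask > 0]  # drop the warped element over the plane
    return out
```

The tracker’s real job is producing those four corner positions on every frame; once you have them, the same transform can drive roto shapes, which is why rotoscoped elements can follow along without manual keyframing.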

Fusion 9 also features an entirely new camera tracker that analyzes the motion of a live-action camera in a scene and reconstructs the identical motion path in 3D space for use with cameras inside of Fusion. This lets users composite elements with precisely matched movement and perspective of the original. Fusion can also use lens metadata for proper framing, focal length and more.

The software’s new delta keyer features a complete set of matte finesse controls for creating clean keys while preserving fine image detail. There’s also a new clean plate tool that can smooth out subtle color variations on blue- and greenscreens in live action footage, making them easier to key.

For multi-user collaboration, Fusion 9 Studio includes Studio Player, a new app that features a playlist, storyboard and timeline for playing back shots. Studio Player can track version history, display annotation notes, has support for LUTs and more. The new Studio Player is suited for customers who need to see shots in a suite or theater for review and approval. Remote synchronization lets artists sync Studio Players in multiple locations.

In addition, Fusion 9 features a bin server so shared assets and tools don’t have to be copied onto each user’s local workstation.


PNY’s PrevailPro mobile workstations feature 4K displays, are VR-capable

PNY has launched the PNY PrevailPro P4000 and P3000, thin and light mobile workstations. With their Nvidia Max-Q design, these innovative systems are built from the Quadro GPU out.

“Our PrevailPro [has] the ability to drive up to four 4K UHD displays at once, or render vividly interactive VR experiences, without breaking backs or budgets,” says Steven Kaner, VP of commercial and OEM sales at PNY Technologies. “The increasing power efficiency of Nvidia Quadro graphics and our P4000-based P955 Nvidia Max-Q technology platform allows PNY to deliver professional performance and features in thin, light, cool and quiet form factors.”


PrevailPro features the Pascal architecture within the P4000 and P3000 mobile GPUs, with Intel Core i7-7700HQ CPUs and the HM175 Express chipset.

“Despite ever increasing mobility, creative professionals require workstation class performance and features from their mobile laptops to accomplish their best work, from any location,” says Bob Pette, VP, Nvidia Professional Visualization. “With our new Max-Q design and powered by Quadro P4000 and P3000 mobile GPUs, PNY’s new PrevailPro lineup offers incredibly light and thin, no-compromise, powerful and versatile mobile workstations.”

The PrevailPro systems feature either a 15.6-inch 4K UHD or FHD display – and the ability to drive three external displays (2x mDP 1.4 and HDMI 2.0 with HDCP), for a total of four simultaneously active displays. The P4000 version supports fully immersive VR, the Nvidia VRWorks software development kit and innovative immersive VR environments based on the Unreal or Unity engines.

With 8GB (P4000) or 6GB (P3000) of GDDR5 GPU memory, up to 32GB of DDR4-2400MHz DRAM, 512GB SSD availability, HDD options up to 2TB, a comprehensive array of I/O ports and the latest Wi-Fi and Bluetooth implementations, PrevailPro is compatible with all commonly used peripherals and network environments — and provides pros with the interfaces and storage capacity needed to complete business-critical tasks. Depending on the use case, MobileMark 2014 projects that the embedded Li-polymer battery can deliver up to five hours of runtime, over a lifetime of 1,000 charge/discharge cycles.

PrevailPro’s thin and light form factor measures 14.96 × 9.8 × 0.73 inches (379mm × 248mm × 18mm) and weighs 4.8 lbs.

 


Quick Chat: SIGGRAPH’S production sessions chair Emily Hsu

With SIGGRAPH 2017 happening in LA next week, we decided to reach out to Emily Hsu, this year’s production sessions chair, to find out more about the sessions and the process of picking what to focus on. You can check out this year’s sessions here. By the way, Hsu’s day job is production coordinator at Portland, Oregon’s Laika Studios, so she comes at this from an attendee’s perspective.

How did you decide what panels to offer?
When deciding the production sessions line-up, my team and I consider many factors. One of the first is a presentation’s appeal to a wide range of SIGGRAPH attendees, which means that it strikes a nice harmony between the technical and the artistic. In addition, we consider the line-up as a whole. While we retain strong VFX and animated feature favorites, we also want to round out the show with new additions in VR, gaming, television and more.

Ultimately, we are looking for work that stands out — will it inspire and excite attendees? Does it use technology that is groundbreaking or apply existing technologies in a groundbreaking way? Has it received worthy praise and accolades? Does it take risks? Does it tell a story in a unique way? Is it something that we’ve never seen within the production sessions program before? And, of course, does it epitomize the conference theme: “At the Heart of Computer Graphics & Interactive Techniques?”

These must be presentations that truly get to the heart of a project — not just the obvious successes, but also the obstacles, struggles and hard work that made it possible for it all to come together.

How do you make sure there is a balance between creative workflow and technology?
With the understanding that Production Sessions’ subject matter is targeted toward a broad SIGGRAPH audience, the studios and panelists are really able to determine that balance.

Production Session proposals are often accompanied by varied line-ups of speakers from either different areas of the companies or different companies altogether. What’s especially incredible is when studio executives or directors are present on a panel and can speak to over-arching visions and goals and how everything interacts in the bigger picture.

These presentations often showcase the cross-pollination and collaboration that is needed across different teams. The projects are major undertakings by mid-to-large size crews that have to work together in problem solving, developing new systems and tools, and innovating new ways to get to the finish line — so the workflow, technology and art all go hand-in-hand. It’s almost impossible to talk about one without talking about the other.

Can you talk more about the new Production Gallery?
The Production Gallery has been a very special project for the Production Sessions team this year. Over the years since Production Sessions began, we’ve had special appearances by Marvel costumes and props, Laika puppets, and an eight-foot-tall Noisy Boy robot from Real Steel, but they have only been available for viewing in the presentation time slots.

In creating a new space that runs Sunday through Wednesday of the conference, we’re hoping to give attendees a true up-close-and-personal experience and also honor more studio work that may often go unnoticed or unseen.

When you go behind-the-scenes of a film set or on a studio tour, there are tens of thousands of elements involved – storyboards, concept artwork, maquettes, costumes, props, and more. This space focuses on those physical elements that are lovingly created for each project, beyond the final rendered piece you see in the movie theater. In peeling back the curtain, we’re trying to bring a bit of the studios straight to the attendees.

The Production Gallery is one of the accomplishments from this year that I’m most proud of, and I hope it grows in future SIGGRAPH conferences.

If someone has never been to SIGGRAPH before, what can you tell them to convince them it’s not a show to miss?
SIGGRAPH is a conference to be experienced, not to hear about later. It opens up worlds, inspires creativity, creates connections and surrounds you in genius. I always come out of it reinvigorated and excited for what’s to come.

At SIGGRAPH, you get a glimpse into the future right now — what non-attendees may only be able to see or experience in many years or even decades. If it’s a show you don’t attend, you’re not just missing — you’re missing out.

If they have been in the past, how is this year different and why should they come?
My first SIGGRAPH was 2011 in Vancouver, and I haven’t skipped a single conference since then. Technology changes and evolves in the blink of an eye and I’ve blinked a lot since last year. There’s always something new to be learned or something exciting to see.

The SIGGRAPH 2017 Committee has put an exceptional amount of effort into the attendee experience this year. There are hands-on must-see-it-to-believe-it kinds of experiences in VR Village, the Studio, E-Tech and the all-new VR Theater, as well as improvements to the overall SIGGRAPH experience to make the conference smoother, more fun, collaborative and interactive.

I won’t reveal any surprises here, but I can say that there will be quite a few that you’ll have to see for yourself! And on top of all that, a giraffe named Tiny at SIGGRAPH? That’s got to be one for the SIGGRAPH history books, so come join us in making history.


Assimilate and Z Cam offer second integrated VR workflow bundle

Z Cam and Assimilate are offering their second integrated VR workflow bundle, which features the Z Cam S1 Pro VR camera and the Assimilate Scratch VR Z post tools. The new Z Cam S1 Pro offers a higher level of image quality, including better handling of low light and dynamic range, with detailed, well-saturated, noise-free video. In addition to the new camera, this streamlined pro workflow combines Z Cam’s WonderStitch optical-flow stitch feature and the end-to-end Scratch VR Z tools.

Z Cam and Assimilate have designed their combined technologies to ensure as simple a workflow as possible, including making it easy to switch back and forth between the S1 Pro functions and the Scratch VR Z tools. Users can also employ Scratch VR Z to do live camera preview prior to shooting with the S1 Pro. Once the shoot begins with the S1 Pro, Scratch VR Z is then used for dailies and data management, including metadata. You don’t have to remove the SD cards and copy; the PC connects directly to the camera via a high-speed Ethernet port. Stitching of the imagery is then done in Z Cam’s WonderStitch, now integrated into Scratch VR Z, and Scratch VR Z also covers traditional editing, color grading, compositing, support for multichannel audio from the S1 or external ambisonic sound, finishing and publishing (to all final online or standalone 360 platforms).

Z Cam S1 Pro/Scratch VR Z bundle highlights include:
• Low-light sensitivity and dynamic range – 4/3-inch CMOS image sensor
• Premium 220-degree MFT fisheye lens, f/2.8~11
• Coordinated AE (automatic exposure) and AWB (automatic white balance)
• Full integration with built-in Z Cam Sync
• 6K 30fps resolution (post stitching) output
• Gig-E port (video streaming and settings control)
• WonderStitch optical flow-based stitching
• Live streaming to Facebook, YouTube or a private server, including text overlays and green/composite layers for a virtual set
• A single Scratch VR Z license – a streamlined, end-to-end, integrated VR post workflow

“We’ve already developed a few VR projects with the S1 Pro VR camera and the entire Neotopy team is awed by its image quality and performance,” says Alex Regeffe, VR post production manager at Neotopy Studio in Paris. “Together with the Scratch VR Z tools, we see this integrated workflow as a game changer in creating VR experiences, because our focus is now all on the creativity and storytelling rather than configuring multiple, costly tools and workflows.”

The Z Cam S1 Pro/Scratch VR Z bundle is available within 30 days of ordering. Priced at $11,999 (US), the bundle includes the following:
– Z Cam S1 Pro camera main unit, Z Cam S1 Pro battery unit (w/o battery cells), AC/DC power adapter unit and power connection cables (US, UK, EU).
– A Z Cam WonderStitch license, which is an optical flow-based stitching feature that performs offline stitching of files from Z Cam S1 Pro. Z Cam WonderStitch requires a valid software license associated with a designated Z Cam S1 Pro, and is nontransferable.
– A Scratch VR Z permanent license: a pro VR end-to-end post workflow with an all-inclusive, realtime toolset for data management, dailies, conform, color grading, compositing, multichannel and ambisonic sound, and finishing, all integrated with the Z Cam S1 Pro camera. Includes one year of support/updates.

The companies are offering a tutorial about the bundle.


Mocha VR: An After Effects user’s review

By Zach Shukan

If you’re using Adobe After Effects to do compositing and you’re not using Mocha, then you’re holding yourself back. If you’re using Mettle Skybox, you need to check out Mocha VR, the VR-enhanced edition of Mocha Pro.

Both Mocha Pro and Mocha VR are standalone programs where you work entirely within the Mocha environment and then export your tracks, shapes or renders to another program to do the rest of the compositing work. There are plugins for Maxon Cinema 4D, The Foundry’s Nuke, HitFilm and After Effects that allow you to do more with the Mocha data within your chosen 3D or compositing program. Limited-feature versions of Mocha (Mocha AE and Mocha HitFilm) come installed with the Creative Cloud versions of After Effects and HitFilm 4 Pro, and every update of these plugins gets closer to looking like a full version of Mocha running inside of the effects panel.

Maybe I’m old school, or maybe I just try to get the maximum performance from my workstation, but I always choose to run Mocha VR by itself and only open After Effects when I’m ready to export. In my experience, all the features of Mocha run more smoothly in the standalone than when they’re launched and run inside of After Effects.**

How does Mocha VR compare to Mocha Pro? If you’re not doing VR, stick with Mocha Pro. However, if you are working with VR footage, you won’t have to bend over backwards to keep using Mocha.

Last year was the year of VR, when all my clients wanted to do something with it. It was a crazy push to be the first to make something, and I rode the wave all year. The thing is, there really weren’t many tools specifically designed to work with 360 video. This year, the post tools for working with VR are catching up.

In the past, I forced previous versions of Mocha to work with 360 footage, but since Mocha added its VR-specific features, stabilizing a 360 camera has become cake compared to the kludgy way it works with the industry-standard After Effects 360 plugin, Skybox. Also, before the addition of an equirectangular* camera, I used Mocha to track objects in 360, and it was super-complicated because I had to splice together a whole bunch of tracks to compensate for the 360-camera distortion. Now it’s possible to create a single track to follow objects as they travel around the camera. Read the footnote for an explanation of equirectangular, a fancy word you need to know if you’re working in VR.

Now let’s talk about the rest of Mocha’s features…

Rotoscoping
I used to rotoscope by tracing every few frames and then refining the frames in between, until I found out about the Mocha way to rotoscope. Because Mocha combines rotoscoping with tracking of arbitrary shapes, all you have to do is draw a shape and then use the track to follow and deform it all the way through. It’s way smarter and, more importantly, faster. Also, with the Uberkey feature, you can adjust your shapes on multiple frames at once. If you’re still rotoscoping with After Effects alone, you’re doing it the hard way.

Planar Tracking
When I first learned about Mocha, it was all about the planar tracker, and that really is still the heart of the program. Mocha is basically my go-to when nothing else works. Recently, I was working on a shot where a woman had her dress tucked into her pantyhose, and I pretty much had to recreate a leg of the dress that swayed and flowed along with her as she walked. If it weren’t for Mocha’s planar tracker, I wouldn’t have been able to make a locked-on track of the soft-focus (solid color and nearly without detail) side of the dress. After Effects couldn’t make a track because there weren’t enough contrast-y details.

GPU Acceleration
I never thought Mocha’s planar tracking was slow, even though it is slower than point tracking, but then they added GPU acceleration a version or two ago and now it flies through shots. It has to be at least five times as fast now that it’s using my Nvidia Titan X (Pascal), and it’s not like my CPU was a slouch (an 8-core i7-5960X).

Object Removal
I’d be content using Mocha just to track difficult shots and for rotoscoping, but their object-removal feature has saved me hours of cloning/tracking work in After Effects, especially when I’ve used it to remove camera rigs or puppet rigs from shots.

Mocha’s remove module is the closest thing out there to automated object removal***. It’s as simple as 1) create a mask around the object you want to remove, 2) track the background that your object passes in front of, and then 3) render. Okay, there’s a little more to it, but compared to the cloning and tracking and cloning and tracking and cloning and tracking method, it’s pretty great. Also, a huge reason to get the VR edition of Mocha is that the remove module will work with a 360 camera.
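Conceptually, the remove module’s trick is filling the masked area with background pixels borrowed from frames where that area is unobscured, aligned using the background track. Here is a minimal sketch of that idea (my own OpenCV illustration, not Mocha’s actual code):

```python
# Conceptual sketch of track-based object removal, NOT Mocha's actual code.
import cv2
import numpy as np

def remove_object(frame, object_mask, clean_frame, H_clean_to_frame):
    """Patch the masked object area using a 'clean' frame where the
    background is visible, aligned via the tracked homography."""
    h, w = frame.shape[:2]
    aligned = cv2.warpPerspective(clean_frame, H_clean_to_frame, (w, h))
    out = frame.copy()
    out[object_mask > 0] = aligned[object_mask > 0]  # fill hole with tracked bg
    return out
```

The real module does far more (blending, handling lighting changes, choosing the best clean frames automatically), but borrowing from a tracked background is the core of it.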

Here I used Mocha object removal to remove ropes that pulled a go-cart in a spot for Advil.

VR Outside of After Effects?
I’ve spent most of this article talking about Mocha with After Effects, because it’s what I know best, but there is one VR pipeline that can match nearly all of Mocha VR’s capabilities: Cara VR, The Foundry’s plugin for Nuke. There is a cost to that workflow, though; more on this shortly.

Where you will hit the limit of Mocha VR (and of After Effects in general) is 3D compositing with CGI and real-world camera depth positioning. Mocha’s 3D Camera Solve module is not optimized for 360, and the After Effects 3D workspace can be limiting for true 3D compositing compared to software like Nuke or Fusion.

While After Effects sort of tacked its 3D features onto its established 2D workflow, Nuke is a true 3D environment, as robust as Autodesk Maya or any of the high-end 3D software. This probably sounds great, but you should also know that Cara VR costs $4,300 vs. $1,000 for Mocha VR (the standalone + Adobe plugin version), and Nuke starts at $4,300/year vs. $240/year for After Effects.

Conclusion
I think of Mocha as an essential companion to compositing in After Effects, because it makes routine work much faster and it does some things you just can’t do with After Effects alone. Mocha VR is a major release because VR has so much buzz these days, but in reality it’s pretty much just a version of Mocha Pro with the ability to also work with 360 footage.

*Equirectangular is a clever way of unwrapping a 360 spherical projection, a.k.a. the view we see in VR, by flattening it out into a rectangle. It’s a great way to see the whole 360 view in an editing program, but A: it’s very distorted, so it can cause problems for tracking, and B: anything that moves off an edge of the equirectangular frame wraps around to the opposite side (a bit like Pac-Man when he exits the screen), and non-VR tracking programs will stop tracking when something exits the frame on one side.
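For the curious, the unwrapping math itself is simple: a viewing direction’s longitude and latitude map linearly to x and y in the rectangle. A minimal sketch (my own illustration, not code from Mocha or any other tool) that also demonstrates the wrap-around problem:

```python
# Mapping a spherical viewing direction to equirectangular pixel coordinates.
def direction_to_equirect(yaw_deg, pitch_deg, width, height):
    """yaw in [-180, 180), pitch in [-90, 90] -> (x, y) pixel position."""
    x = (yaw_deg + 180.0) / 360.0 * width
    y = (90.0 - pitch_deg) / 180.0 * height
    return x, y

# An object crossing yaw = +/-180 jumps to the opposite edge of the frame,
# the "Pac-Man" wrap that makes non-VR trackers lose their lock:
for yaw in (170, 179, -179, -170):
    x, _ = direction_to_equirect(yaw, 0, 3840, 1920)
    print(yaw, round(x, 1))  # 3733.3, 3829.3, 10.7, 106.7 on a 3840-wide frame
```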

**Note: According to the developer, one of the main advantages to running Mocha as a plug-in (inside AE, Premiere, Nuke, etc.) for 360 video work is that you are using the host program’s render engine and proxy workflow. Having the ability to do all your tracking, masking and object removal at proxy resolutions is a huge benefit when working with large 360 formats that can be as big as 8K stereoscopic. Additionally, the Mocha modules that render, such as the reorient module for horizon stabilization or the remove module, render inside the plug-in, making for a streamlined workflow.

***FayOut was a “coming soon” product that promised an even more automated method for object removal, but as of the publishing of this article, it appears they are no longer “coming soon” and may have folded, or perhaps their technology was purchased and will be included in a future product. We shall see…
________________________________________
Zach Shukan is the VFX specialist at SilVR and is constantly trying his hand at the latest technologies in the video post production world.


Red’s Hydrogen One: new 3D-enabled smartphone

In their always subtle way, Red has stated that “the future of personal communication, information gathering, holographic multi-view, 2D, 3D, AR/VR/MR and image capture just changed forever” with the introduction of Hydrogen One, a pocket-sized, glasses-free “holographic media machine.”

Hydrogen One is a standalone, full-featured, unlocked multi-band smartphone, operating on Android OS, that promises “look around depth in the palm of your hand” without the need for separate glasses or headsets. The device features a 5.7-inch professional hydrogen holographic display that switches between traditional 2D content, holographic multi-view content, 3D content and interactive games, and it supports both landscape and portrait modes. Red has also embedded a proprietary H30 algorithm in the OS system that will convert stereo sound into multi-dimensional audio.

The Hydrogen system incorporates a high-speed data bus to enable a comprehensive and expandable modular component system, including future attachments for shooting high-quality motion, still and holographic images. It will also integrate into the professional Red camera program, working together with Scarlet, Epic and Weapon as a user interface and monitor.

Future users are already talking about this “nifty smartphone with glasses-free 3D,” and one has gone so far as to describe the announcement as “the day 360 video became Betamax, and AR won the race.” Others are more tempered in their enthusiasm, viewing this as a really expensive smartphone with a holographic screen that may or may not kill 360 video. Time will tell.

Initially priced between $1,195 and $1,595, the Hydrogen One is targeted to ship in Q1 of 2018.

Boxx Apexx 4 features i9 X-Series procs, targets post apps

Boxx’s new Apexx 4 6201 workstation features the new 10-core Intel Core i9 X-Series processor. Intel’s most scalable desktop platform ever, X-Series processors offer significant performance increases over previous Intel technology.

“The Intel Core X-Series is the ultimate workstation platform,” reports Boxx VP of engineering Tim Lawrence. “The advantages of the new Intel Core i9, combined with Boxx innovation, will provide architects, engineers and motion media creators with an unprecedented level of performance.”

One of those key Intel X-Series advantages is Intel Turbo Boost 3.0. This technology identifies the two best cores to boost, making the new CPUs ideal for multitasking and virtual reality, as well as editing and rendering high-res 4K/VR video and effects with fast video transcode, image stabilization, 3D effects rendering and animation.

When comparing previous-generation Intel processors to X-Series processors (10-core vs. 10-core), the X-Series is up to 14% faster in multi-threaded performance and up to 15% faster in single-threaded performance.

The first in a series of Boxx workstations featuring the new Intel X-Series processors, Apexx 4 6201 also includes up to three professional-grade Nvidia or AMD Radeon Pro graphics cards, and up to 128GB of system memory. The highly configurable Apexx 4 series workstations provide support for single-threaded applications, as well as multi-threaded tasks in applications like 3ds Max, Maya and Adobe CC.

“Professionals choose Boxx because they want to spend more time creating and less time waiting on their compute-intensive workloads,” says Lawrence. “Boxx Apexx workstations featuring new Intel X-Series processors will enable them to create without compromise, to megatask, support a bank of 4K monitors and immerse themselves in VR — all faster than before.”