Tatjana Dzambazova helps us dig deeper into Autodesk Memento

Not long ago, Autodesk made Memento available globally as a free public beta. Memento is an end-to-end solution for converting reality-capture input (such as photos or scans) into high-quality 3D models. According to the company, the resulting 3D mesh can be cleaned up, fixed, compared over time and optimized for further digital use, for physical fabrication methods like 3D printing, and for interactive 3D experiences on the Web.

To find out more, we reached out to Tatjana Dzambazova, Autodesk’s product manager for reality capture and digital fabrication.

Who do you see getting the most use out of Memento?
Digital artists can make high-quality assets for films and games that are true replicas of the real things, not “interpretations of.” Memento’s supporting mesh-streaming engine and carefully developed toolset make the creation process easy. Memento makes digital asset creation accessible and faster for a wider audience of younger artists who may not have a lot of resources at their disposal.

What are some of the ways that high-definition 3D replicas can be used?
I think it would be easiest to just list them…
– Asset creation for film and games
– Digital doubles
– Capture shapes from nature and use them as a starting point for new artwork
– Digitize physical pieces of artwork to explore further development of the art, test different colors or finishes, etc.
– Art galleries can use 3D models to plan layouts for exhibitions, lighting, etc.
– Art insurers can use 3D models to track changes over time that affect an object’s value, or to help trace theft

Where it really starts to get interesting is that we are seeing so many opportunities for museums, curators, scientists and entrepreneurs of personalized products (medical or accessories) to use Memento’s digitization and preparation tools to push their professions to the limit.

We are designing Memento with these users in mind — pros who need high-quality digital replicas of real objects, but don’t have any CAD/3D modeling experience. Some of these users deal with collections of millions of objects, which would be an insurmountable task to digitize without an accessible tool that can be used by the masses.

What kind of camera do you need?
Our software algorithm will work with almost any kind of camera (fisheye lenses are among the rare exclusions). If you really want good results, a camera with a good lens will surely make a difference. The body of the camera doesn’t really matter; any good lens that isn’t plastic will produce good results.

If you’re planning on buying a new camera solely for the purpose of creating 3D models from photos, we recommend prime lenses with low distortion, like a 50mm lens. Zooming is not recommended in the process of image-based modeling (a.k.a. photogrammetry).

You can try it out for yourself with any type of camera. Even photos from an iPhone (when taken correctly) can result in a 3D mesh of quite amazing quality.

How many photos are needed to generate a 3D model?
This depends heavily on the size of the object, the intricacy of its detail and how accessible it is. When I shoot sculptures (such as an antique bust), I use somewhere between 100 and 200 photos. You can process and make a 3D model with far fewer photos, but the mesh geometry will have a lower level of detail (although the resulting textured 3D model might still look nice).

I recommend at least 100 photos for a bust-sized object with a medium level of detail. Use 150-200 if it has many small details that you want to capture more closely and cover with more photos.

Are there limits to the amount you can import?
Our current system limits imports to 250 photos, but that limit will soon be raised to 1,000+ photos. More is not always better, but there is a minimum number of photos you need for high definition; I wouldn’t go under 70-80 photos. The 1,000+ photo capacity is generally useful for aerial captures of sites, landscapes and buildings, but we are seeing institutions like museums use 300+ photos to capture artifacts at a really high level of detail.

How do you transfer the data from the camera to the application?
Use a USB cable or any memory card. If you have a Wi-Fi camera, just use Wi-Fi. There are no extra steps to get data into Memento; transfer it the way you would for any application.

To import, you open Memento and select the “Photo” option from the dashboard. It will prompt you to select photos from your hard drive or your A360 drive (Autodesk’s free cloud storage, which gives anyone who signs up with an Autodesk ID 5GB of space). After you select your photos, Memento silently uploads them to the cloud, where they are processed into a mesh, and sends you an email when the mesh is ready.

What kind of lighting conditions work best for image capture? Low-light problems?
Good question! Light is one of the most important factors for successful capture, if not the most important. Ideally, you want diffuse light that creates no shadows. Adequate diffused lighting is essential for shooting at optimal apertures (between f/5.6 and f/8) and at native ISO (usually ISO 100 or ISO 200, depending on the camera model). This eliminates resolution loss due to insufficient depth of field, as well as noise introduced by shooting at high ISOs.

Using flash is an absolute no-go! You must also avoid any condition that creates shadows, such as strong overhead sun at midday or spotlights.

In principle, we build a 3D model by comparing color pixels between photos to determine the camera location, orientation, lens type, lens distortion, etc. This means the pixels of a captured object have to be the same color across all photos. For this reason, shiny, transparent and glossy objects will not work at all and cannot be captured with this method — not just with Memento, but with any photogrammetry tool. For these objects, the color of the same surface point changes from photo to photo as the object reflects its environment. A workaround is to try a polarizer when dealing with shiny objects, or one of a variety of temporary dulling/white powder sprays (though sprays are, of course, out of the question when dealing with antiquities or high-value art objects).
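The reflection problem can be illustrated numerically. The toy sketch below (hypothetical code, not part of Memento) models one surface point seen by three cameras: a matte (Lambertian) point returns the same color to every camera, while a mirror-like point returns whatever the environment reflects toward each camera, so its apparent color changes per photo and pixel matching fails.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def reflect(d, n):
    # mirror incoming direction d about surface normal n
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2 * dot * b for a, b in zip(d, n))

def environment(direction):
    # toy environment map: brightness varies with direction
    return max(0.0, direction[0])

normal = (0.0, 0.0, 1.0)
light = normalize((1.0, 1.0, 1.0))
albedo = 0.8

# Matte point: color depends only on light and normal,
# so every camera sees the same value.
diffuse = albedo * max(0.0, sum(a * b for a, b in zip(light, normal)))

# Mirror-like point: color is whatever the environment reflects
# toward each camera, so it changes with viewing direction.
views = [normalize((math.cos(a), 0.0, -math.sin(a)))
         for a in (math.radians(30), math.radians(60), math.radians(80))]
specular = [environment(reflect(v, normal)) for v in views]

print(diffuse)   # identical for every view
print(specular)  # a different value per view
```

A photogrammetry matcher relies on the first behavior; the second is why reflective surfaces break reconstruction.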

The other important aspects of photo quality are focus and sharpness. Everything has to be in focus throughout the entire photo, not just the object of interest. Fancy shots with a blurred depth of field might be nice as art photos, but they will not work for photogrammetry; if you use them, the results will not be great.

Finally, you have to take enough photos from all sides and angles of the object, with good overlap between them. Having a lot of photos is not enough if they don’t overlap well with one another. I usually shoot every 5 degrees in three rounds around the object — at mid-height, from above and from below. If I am particularly interested in capturing certain parts of the object at higher resolution, I shoot additional detailed photos of those parts.
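As a rough illustration of that shooting pattern (a hypothetical sketch, not Memento functionality), the camera positions for three rings of shots every 5 degrees can be enumerated like this:

```python
import math

def capture_positions(radius, heights=(0.0, 0.6, -0.6), step_deg=5):
    """Camera positions on rings around an object centered at the origin:
    one ring at mid-height, one from above, one from below."""
    positions = []
    for h in heights:                        # one ring per height
        for deg in range(0, 360, step_deg):  # a shot every step_deg degrees
            a = math.radians(deg)
            positions.append((radius * math.cos(a), radius * math.sin(a), h))
    return positions

shots = capture_positions(radius=1.5)
print(len(shots))  # 3 rings x 72 shots = 216 photos
```

At 5-degree spacing this yields 216 shots; a coarser step (e.g. `step_deg=10`, giving 108) brings the count into the 100-200 range quoted above for objects with less fine detail.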

What is crucially important is that whenever you break a sequence, you take “in-between” photos to help the system link the detail shots back to the overall photos of the object. When you zoom in closer to shoot a detail, take a few pictures as you move in toward the object to make the connection.

How long does it take to generate a 3D model?
Now that is the million-dollar question, and it hinges upon a few things. Depending on the number of photos and the complexity of the model, processing times can range anywhere from 30 minutes to 12 hours! This doesn’t include the time it takes to upload and download the files, which is dependent upon the users’ Internet connection speed.

We offer a solution based on cloud compute, which is scalable (meaning a user can submit many jobs simultaneously). More importantly, it makes processing agnostic to the user’s system specifications. We routinely update and refine the engine’s algorithms and upgrade the high-performance compute servers in the cloud to make it more powerful, so upgrades are transparent to the user.

We started off with infrastructure sized to support Memento users during its alpha stage, but we had such success during the beta launch that our servers got overloaded. We are scaling up the back end to support the growing user base and meet demand.

Will the 3D model work in Flame and Smoke too? After Effects? Fusion? Maya? Nuke? Cinema 4D?
Absolutely, in all of the programs you listed above. It will work in any application that reads .OBJ (including .OBJ with quads), .FBX (with camera export), .PLY (mesh) or .STL. We also export point clouds as .RCP files that can be imported into Autodesk ReCap and the majority of Autodesk’s hero products.
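.OBJ, one of the formats mentioned, is worth a closer look because it is plain text: one `v x y z` line per vertex and one `f` line per face (1-based indices, and quads are allowed). A minimal hypothetical writer, just to show the structure of the file an application like Memento produces:

```python
def write_obj(path, vertices, faces):
    # .OBJ is plain text: "v x y z" per vertex,
    # "f i j k ..." per face (1-based indices; quads allowed)
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for face in faces:
            f.write("f " + " ".join(str(i + 1) for i in face) + "\n")

# a unit square written as a single 4-vertex (quad) face
write_obj("quad.obj", [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
          [(0, 1, 2, 3)])
```

Because the format is this simple, almost every 3D package can read it, which is why it is the lowest-common-denominator exchange format for meshes.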

Users can also generate turntables or video walkthroughs around the object within Memento using simple keyframes. With keyframes, users without any previous video-making experience can create them.

Can you extract the lighting from the image to relight people composited into a 3D world?
Not yet.

Do you get lighting maps from the model so it can be relit for compositing?
Not yet.

Can you control the polygon count in the model that comes out of the application?
Absolutely. Preparing and optimizing the data for downstream use is one of Memento’s core strengths, and there is much more to come. On export, we can decimate the 3D mesh, which is usually huge when generated from reality input (photos or scans); it can reach 1-2 billion polygons, which many 3D modeling or post-processing applications can’t even read.

We do smart decimation, and we bake texture maps (normal, diffuse and displacement maps can be baked in) so that the models, even with small polygon counts, maintain a sense of beauty and a level of detail that still looks awesome online and in Web/mobile solutions, and can be read and worked on further in the modeling applications you mentioned.

We also do quadrification. Better quads will be available soon, and we are working on T-spline conversion for further design workflows.

For those who want to learn more, check out Autodesk’s Memento webinar.
