I thought it would be interesting to make my Amazon bear dance, so I scanned it in the morning. I’m still adjusting the results.

The first rendering didn’t turn out correctly. Maybe the reason was that I shot three angles but only applied one background?

The second time, I organized the files into three groups and went through all the photos to delete the ones that didn’t look right.

Then, in the software, I processed it angle by angle, three rounds in total, and imported the background images separately for each angle. Finally, it turned out really nice.
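Since the fix was essentially giving each angle its own background for masking, here is a rough sketch of how that per-group setup could be scripted in PhotoScan’s Python console. The folder names, the three-chunk split, and the exact importMasks parameters are my own assumptions and may differ between PhotoScan versions:

# Rough sketch for PhotoScan's Python console (hypothetical paths;
# parameter names may differ between PhotoScan versions).
import glob
import PhotoScan

doc = PhotoScan.app.document  # the currently open project

# One chunk per shooting angle, each with its own background image for masking.
angles = {
    "angle_1": ("photos/angle_1", "backgrounds/angle_1.jpg"),
    "angle_2": ("photos/angle_2", "backgrounds/angle_2.jpg"),
    "angle_3": ("photos/angle_3", "backgrounds/angle_3.jpg"),
}

for label, (photo_dir, background) in angles.items():
    chunk = doc.addChunk()
    chunk.label = label
    chunk.addPhotos(glob.glob(photo_dir + "/*.jpg"))
    # Build masks by subtracting this angle's background image from every photo.
    chunk.importMasks(path=background,
                      source=PhotoScan.MaskSourceBackground,
                      operation=PhotoScan.MaskOperationReplacement,
                      tolerance=10)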

 

Basic Processes in PhotoScan:

(Windows) Tools > Preferences > GPU > check the GPU device and the “use CPU” option

Tools > Import > Import Masks

Workflow:

Align Photos >

Build Dense Cloud >

Clean model (switch the view mode to Dense Cloud, use the free-form selection tool) >

Build Mesh > Build Texture
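For reference, the same Workflow steps can also be run from PhotoScan’s Python console. This is only a minimal sketch of the menu sequence above; parameter names (and whether a separate Build Depth Maps step is required) depend on the PhotoScan version:

# Minimal sketch of the workflow above, for PhotoScan's Python console.
# Treat this as an outline rather than exact syntax for your version.
import PhotoScan

chunk = PhotoScan.app.document.chunk  # active chunk with photos and masks loaded

# Align Photos (filter_mask=True uses the imported masks to ignore the background)
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                  preselection=PhotoScan.GenericPreselection,
                  filter_mask=True)
chunk.alignCameras()

# Build Dense Cloud (newer versions may require chunk.buildDepthMaps() first)
chunk.buildDenseCloud(quality=PhotoScan.MediumQuality)

# Clean model: done by hand in the GUI -- switch the view to Dense Cloud,
# then delete stray points with the free-form selection tool.

# Build Mesh > Build Texture
chunk.buildModel(surface=PhotoScan.Arbitrary, source=PhotoScan.DenseCloudData)
chunk.buildUV(mapping=PhotoScan.GenericMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending)

PhotoScan.app.document.save("bear_scan.psz")  # hypothetical output file name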

 

Just curious what else I could do with this technique, I did some experiments.

1. The first try (failed; it looked like a fire disaster):

I guess the reason was that I didn’t shoot enough photos, and the contrast between the object and the background was not strong enough.


2. I raised the contrast in Lightroom and took more photos, but it turned out like this:

3. Tried with the Occipital Structure Sensor and an iPad:

It really needs practice and good lighting, otherwise the model will look horrible.

 

4. iPhone

I also took some photos around a booth on 33rd Street that is decorated with Halloween stuff. I think the details would be great if I had more photos to help the software understand the structure.