
inaccurate meshing #896

Closed
TR7 opened this issue May 15, 2020 · 15 comments
Labels
360/fisheye bug for actual bugs (unsure? use type:question)

Comments

@TR7

TR7 commented May 15, 2020

Describe the bug
Inaccurate/broken meshing. I assume it is not supposed to look like this.

To Reproduce

Expected behavior
The meshing should be much better with the default values and this many pictures.

Screenshots
(screenshot)

Overview of camera poses:
(screenshot)

Logs
http://hosting141203.a2e6d.netcup.net/Thomas/HDKornmarkt/Logs.zip

Desktop (please complete the following and other pertinent information):

  • OS: Windows 10
  • Python version: 3.7
  • Meshroom version: release 2019.2.0
@TR7 TR7 added the bug for actual bugs (unsure? use type:question) label May 15, 2020
@fabiencastan
Member

Look at the tooltip on the orange flag on your images. The camera model is probably missing from the sensor database.

@natowi
Member

natowi commented May 16, 2020

If it is this model, try adding this entry to the sensor database:

GoPro;FUSION;6.17;dpreview

Tested on nine images:
(screenshot)
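
For context, a minimal sketch of how such a sensor-database line could be parsed and validated before appending it. The field meanings (make, model, sensor width in mm, data source) are my assumption inferred from the entry above, not confirmed by this thread:

```python
def parse_sensor_entry(line):
    """Split a semicolon-separated sensor-database line into named fields.

    Assumed layout: Make;Model;SensorWidthMM;Source (my naming, inferred
    from the example entry, not official documentation).
    """
    make, model, width, source = line.strip().split(";")
    return {
        "make": make,
        "model": model,
        "sensor_width_mm": float(width),  # sensor width in millimetres
        "source": source,                 # where the value was looked up
    }

entry = parse_sensor_entry("GoPro;FUSION;6.17;dpreview")
print(entry["sensor_width_mm"])  # 6.17
```

A check like this catches a missing field or a non-numeric width before the line ever reaches the database file.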

@TR7
Author

TR7 commented May 16, 2020

I tried this model (GoPro;FUSION;6.17;dpreview) with the following results:

(screenshot)

I also tried with only nine images, as you did, and that works without problems.

It seems the problem occurs with more than one loop around the target object. It also looks like there are three layers of ground-floor point clouds in the SfM output instead of one.

I know it will take some time to compute, but it would be very helpful to know whether the reconstruction with all 162 images causes the same problems for you.

@TR7
Author

TR7 commented May 16, 2020

Good news: it seems to work with more SIFT points AND "min observation for triangulation" = 3.

@TR7 TR7 closed this as completed May 16, 2020
@fabiencastan
Member

@TR7 Do you have the other side of the GoPro Fusion 360? If yes, could you share it? It would be interesting to declare it as a rig of two cameras and see if it improves the results.

@TR7
Author

TR7 commented May 16, 2020

On the other side of the GoPro there is me standing in front of the camera, so those images are not really useful. But I will take some other pictures and can of course share them (both sides, with me on only one side).
Is there a place, or do you have a system, for collecting links to test datasets? Or should I just post them here?

@natowi
Member

natowi commented May 16, 2020

Here I proposed collecting user-contributed datasets, similar to the Monstree demo dataset, to test 360° and fisheye images.
@fabiencastan, can we create a new repository under AliceVision to collect a few small test datasets?

@TR7
Author

TR7 commented May 16, 2020

What is your preferred number of front/back photos per dataset? Any special kind of objects/places?

@fabiencastan
Member

@TR7 It would be ideal if you could make an indoor and an outdoor dataset.
Share a link here and we will see whether we should store it as a reference dataset for this use case. I'm not sure how to organize that for now.

@TR7
Author

TR7 commented May 20, 2020

Another outdoor dataset with 2×128 JPGs from the GoPro Fusion (both sides, 0.5 s photo time-lapse):
http://hosting141203.a2e6d.netcup.net/Thomas/Scans/08/rig08_2x128Images_Outdoor_GPFUSION.zip

@TR7
Author

TR7 commented May 20, 2020

And here a small indoor dataset for test purposes:
http://hosting141203.a2e6d.netcup.net/Thomas/Scans/06/rig06B_FrontBack_2x21Images_Indoor_GPFUSION.zip

@fabiencastan
Just tell me if you need more or bigger datasets.

@fabiencastan
Member

@TR7 Thanks a lot! I see that you have already organized them as a rig of two cameras. Have you already tried to reconstruct them and compared the results with and without the rig?

@TR7
Author

TR7 commented May 24, 2020

@TR7 Thanks a lot! I see that you have already organized them as a rig of two cameras. Have you already tried to reconstruct them and compared the results with and without the rig?

Yes, I tested a lot, with over 14 datasets from the GoPro Fusion.
I had several problems. Some of them I could solve or work around myself:

  1. When importing the images into Meshroom, their EXIF timestamps must be at least 1 second apart, otherwise images are skipped. I wrote a Python script which automatically changes the recording-time strings. The problem probably arises because Meshroom creates an image identifier based on the name and the time (second-based)?

  2. The serial numbers in the EXIF tags must be different (but the GoPro Fusion writes the same serial number into the EXIF of both the back and the front images). I wrote a Python script that inserts a "B" or "F" before the serial-number string in the EXIF tag.
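
The two workarounds above could be sketched like this. This is only a minimal illustration of the string handling (all function names are mine; actually writing the values back into the image files would need an EXIF tool or library, which is out of scope here):

```python
from datetime import datetime, timedelta

# EXIF DateTimeOriginal uses colon-separated dates, e.g. "2020:05:20 10:00:00"
EXIF_FMT = "%Y:%m:%d %H:%M:%S"

def spread_timestamps(original, index, step_seconds=1):
    """Workaround 1: shift a shared EXIF time string so that image N is
    N * step_seconds later, making every timestamp unique."""
    t = datetime.strptime(original, EXIF_FMT)
    return (t + timedelta(seconds=index * step_seconds)).strftime(EXIF_FMT)

def tag_serial(serial, side):
    """Workaround 2: prefix the serial number with 'F' (front) or 'B' (back)
    so the two fisheye lenses are treated as distinct cameras."""
    if side not in ("F", "B"):
        raise ValueError("side must be 'F' or 'B'")
    return side + serial

print(spread_timestamps("2020:05:20 10:00:00", 2))  # 2020:05:20 10:00:02
print(tag_serial("C123456", "B"))                   # BC123456
```

Applied over a sorted list of image files, this yields strictly increasing timestamps and distinct per-lens serial numbers, which is what the two workarounds need.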

Some other problems (mostly concerning the rig) are still unsolved:

  1. camera rig constraint does not prevent the camera pairs from drifting apart [question] #906

  2. In some datasets with 1000+ images, the reconstruction only works without "Use Rig Constraint"; otherwise it fails in SfM. I'm still looking into it.

  3. So far I haven't found out why the rig symbol (see the marker in the first picture) is sometimes shown after import and sometimes not.

(screenshot: rig symbol)

@natowi
Member

natowi commented Jun 11, 2020

@TR7 Can I add a few of your images to https://github.com/natowi/meshroom-360-datasets under CC-BY-SA-4.0 license?

@TR7
Author

TR7 commented Jun 11, 2020

Yes!
