Hello, your dataset is fantastic! I am currently using COLMAP to reproduce your camera-pose generation pipeline. However, on the same sets of images, the poses COLMAP produces differ significantly from the camera poses you provide, and the differences persist even after converting coordinate conventions. I have tried multiple sequences with consistent results, and I am not sure what is causing the discrepancy. Feature extraction, feature matching, and sparse reconstruction each expose many parameters, and I left all of them at their defaults. How should they be configured?
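For reference, the pipeline I ran is roughly the following, with every parameter left at its default (the paths here are placeholders, not the ones I actually used):

```shell
# Default COLMAP sparse reconstruction pipeline (paths are placeholders)
colmap feature_extractor \
    --database_path ./database.db \
    --image_path ./images
colmap exhaustive_matcher \
    --database_path ./database.db
mkdir -p ./sparse
colmap mapper \
    --database_path ./database.db \
    --image_path ./images \
    --output_path ./sparse
```

If any of these stages should be configured differently to match your setup (e.g. camera model, matcher choice, or mapper options), please let me know.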
The image below compares the pose results for the "TeddyBear624_103316_207474" sequence: the green poses are my raw COLMAP output, the yellow poses are after my COLMAP-to-PyTorch3D coordinate transformation, and the red poses are yours. The lower teddy bear point cloud is from my COLMAP reconstruction; the upper one is from yours. As you can see, the difference is significant. I look forward to your guidance!
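For completeness, this is a sketch of the coordinate conversion I applied, based on my reading of the two conventions (COLMAP: `x_cam = R @ x_world + t` with column vectors and camera axes +X right, +Y down, +Z forward; PyTorch3D: `x_cam = x_world @ R + T` with row vectors and camera axes +X left, +Y up, +Z forward). This is my assumption about the correct mapping, not necessarily the one used to build your dataset, so please correct me if it is wrong:

```python
import numpy as np

def colmap_to_pytorch3d(R, t):
    """Convert a COLMAP world-to-camera pose (R, t) to PyTorch3D's (R, T).

    Assumes COLMAP's column-vector convention x_cam = R @ x_world + t and
    PyTorch3D's row-vector convention x_cam = x_world @ R_p3d + T_p3d,
    with the camera X and Y axes flipped between the two frames.
    """
    F = np.diag([-1.0, -1.0, 1.0])  # flip the X and Y camera axes
    R_p3d = R.T @ F                 # transpose for the row-vector convention
    T_p3d = F @ t
    return R_p3d, T_p3d

if __name__ == "__main__":
    # Self-consistency check with a random rigid pose.
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(Q) < 0:        # ensure a proper rotation (det = +1)
        Q[:, 0] *= -1
    R, t = Q, rng.standard_normal(3)
    x_w = rng.standard_normal(3)

    x_cam_colmap = R @ x_w + t
    expected = np.diag([-1.0, -1.0, 1.0]) @ x_cam_colmap

    R_p3d, T_p3d = colmap_to_pytorch3d(R, t)
    assert np.allclose(x_w @ R_p3d + T_p3d, expected)
    print("conversion is self-consistent")
```

Even if this conversion is self-consistent, it would only match your poses up to the similarity transform (global rotation, translation, and scale) that COLMAP's reconstruction is free to choose, so I may also be missing an alignment step.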