CLI Parameters
Default pipeline: aliceVision_cameraInit -> aliceVision_featureExtraction -> aliceVision_imageMatching -> aliceVision_featureMatching -> StructureFromMotion -> aliceVision_prepareDenseScene -> aliceVision_depthMapEstimation -> aliceVision_depthMapFiltering -> aliceVision_meshing -> aliceVision_meshFiltering -> aliceVision_texturing
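Each step consumes the output of the previous one. A minimal sketch of the first two stages (paths are placeholders and option names may differ between AliceVision versions; check each binary's --help):

# illustrative paths and option names; confirm with --help
aliceVision_cameraInit --imageFolder /path/to/images --sensorDatabase cameraSensors.db --output cameraInit.sfm
aliceVision_featureExtraction --input cameraInit.sfm --output features/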
An SfMData file (*.sfm) [if specified, --imageFolder cannot be used].
Input images folder [if specified, --input cannot be used].
Camera sensor width database path.
Output file path for the new SfMData file
Focal length in pixels. (or '-1' to unset)
Empirical value for the field of view in degrees. (or '-1' to unset)
Intrinsics K matrix "f;0;ppx;0;f;ppy;0;0;1".
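The "f;0;ppx;0;f;ppy;0;0;1" string is read row by row as the 3x3 calibration matrix

f   0   ppx
0   f   ppy
0   0   1

where f is the focal length in pixels and (ppx, ppy) is the principal point in pixels.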
Camera model type (pinhole, radial1, radial3, brown, fisheye4, fisheye1).
When there is no serial number in the image metadata, we cannot know if the images come from the same camera. This is problematic for grouping images sharing the same internal camera settings and we have to decide on a fallback strategy:
- global: all images may come from a single device (make/model/focal will still be a differentiator).
- folder: different folders will be considered as different devices.
- image: each image is considered to have its own internal camera parameters.
Allow the program to process a single view.
Warning: if a single view is processed, the output file cannot be used in many other programs.
Verbosity level (fatal, error, warning, info, debug, trace).
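As an illustration, a cameraInit call combining the options above might look like the following sketch. Paths are placeholders, and option names such as --sensorDatabase, --groupCameraFallback, --defaultFieldOfView and --verboseLevel are given from memory and may differ in your build; confirm them with aliceVision_cameraInit --help.

# illustrative paths and option names; confirm with --help
aliceVision_cameraInit --imageFolder /path/to/images \
  --sensorDatabase cameraSensors.db \
  --groupCameraFallback folder \
  --defaultFieldOfView 45.0 \
  --verboseLevel info \
  --output cameraInit.sfm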
Note
This program takes as input a media (image, image sequence, video) and a database (vocabulary tree, 3D scene data) and returns for each frame a pose estimation for the camera.
An sfm_data.json file generated by AliceVision.
The folder path or the filename for the media to track
If a folder is provided, it enables visual debugging and saves all the debugging info in that folder.
Filename for the SfMData export file (where camera poses will be stored).
Default: trackedcameras.abc.
Filename for the localization results (raw data) as .json
Folder containing the descriptors for all the images (i.e. the *.desc files).
The describer types to use for the matching
Preset for the feature extractor when localizing a new image {LOW,MEDIUM,NORMAL,HIGH,ULTRA}
The type of *sac framework to use for resection (acransac, loransac)
The type of *sac framework to use for matching (acransac, loransac)
Calibration file
Enable/Disable camera intrinsics refinement for each localized image
Maximum reprojection error (in pixels) allowed for resectioning. If set to 0 it lets the ACRansac select an optimal value.
[voctree] Number of images to retrieve in database
[voctree] For algorithm AllResults, it stops the image matching when this number of matched images is reached. If 0 it is ignored.
[voctree] Minimum number of images in which a point must be seen to be used in cluster tracking
[voctree] Filename for the vocabulary tree
[voctree] Filename for the vocabulary tree weights
[voctree] Algorithm type: FirstBest, AllResults
[voctree] Maximum matching error (in pixels) allowed for image matching with geometric verification. If set to 0 it lets the ACRansac select an optimal value.
[voctree] Number of previous frames of the sequence to use for matching (0 = disabled)
[voctree] Enable/Disable robust matching between query and database images; if disabled, all putative matches will be considered.
[bundle adjustment] If --refineIntrinsics is not set, this option allows running a final global bundle adjustment to refine the scene.
[bundle adjustment] Distortion is not taken into account during the BA; the distortion coefficients are all considered equal to 0.
[bundle adjustment] It does not refine intrinsics during BA
[bundle adjustment] Minimum number of observations that a point must have in order to be considered for bundle adjustment
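Putting the main options together, a hypothetical aliceVision_cameraLocalization call could look like the sketch below. The option names (--sfmdata, --mediafile, --descriptorPath, --voctree, --calibration, --refineIntrinsics, --outputAlembic) are assumptions based on the descriptions above and may not match your build exactly; verify them with aliceVision_cameraLocalization --help.

# illustrative paths and option names; confirm with --help
aliceVision_cameraLocalization --sfmdata sfm_data.json \
  --mediafile /path/to/sequence/ \
  --descriptorPath /path/to/descriptors/ \
  --voctree vocabulary_tree.tree \
  --calibration camera.cal \
  --refineIntrinsics 1 \
  --outputAlembic trackedcameras.abc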