Multicam Extrinsic Calibration #6007
Comments
Hi @rl2222 Before we begin, I should highlight that there is a commercial 3D multicam solution available called RecFusion Pro. https://www.recfusion.net/index.php/en/features It costs 499 euros but has a trial version, and RecFusion has been complimented by RealSense users. If you prefer to continue with your own project, I am happy to help. To start with, could you please tell me whether your scans are creating a solid 3D mesh or a point cloud. Thanks!
Hi @MartyG-RealSense, At the moment, I am using librealsense to create an rs2::pointcloud and convert it to a PCL point cloud, as this makes it easy to transform clouds and do further post-processing.
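A minimal sketch of that conversion, assuming an XYZ-only cloud; the helper name `points_to_pcl` is illustrative, not from the original post:

```cpp
#include <librealsense2/rs.hpp>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Illustrative helper: copy the vertices of an rs2::points object into a
// PCL cloud so that PCL transforms and filters can be applied afterwards.
pcl::PointCloud<pcl::PointXYZ>::Ptr points_to_pcl(const rs2::points& points)
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    auto sp = points.get_profile().as<rs2::video_stream_profile>();
    cloud->width    = sp.width();
    cloud->height   = sp.height();
    cloud->is_dense = false;
    cloud->points.resize(points.size());
    const rs2::vertex* src = points.get_vertices();
    for (auto& p : cloud->points)
    {
        p.x = src->x; p.y = src->y; p.z = src->z; // meters, camera frame
        ++src;
    }
    return cloud;
}
```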
Y16 is the only unrectified stream type. It is also greyscale when selected as an RGB format, according to a RealSense support team member. If you are using any post-processing, align should be done after the post-processing to help avoid a type of distortion called aliasing (which can cause jagged lines).
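A minimal sketch of that ordering, assuming a pipeline configured with depth and color; the particular filters here are illustrative:

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    rs2::pipeline pipe;
    pipe.start(); // assumes a config with depth and color enabled

    rs2::decimation_filter decimation;
    rs2::spatial_filter    spatial;
    rs2::temporal_filter   temporal;
    rs2::align             align_to_color(RS2_STREAM_COLOR);

    while (true)
    {
        rs2::frameset frames = pipe.wait_for_frames();
        // Run the post-processing filters on the depth frame first...
        frames = frames.apply_filter(decimation);
        frames = frames.apply_filter(spatial);
        frames = frames.apply_filter(temporal);
        // ...and align last, so the alignment works on the filtered depth.
        frames = frames.apply_filter(align_to_color);
        // frames now holds post-processed depth aligned to color.
    }
}
```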
Ok thanks. In the custom calibration white paper, however, they also speak of YUY2 (which is YUYV?). When using Y16, do I automatically get the color frame of the left camera? Thanks for that hint as well; for the calibration frames I am not using any librealsense post-processing. Maybe my matrix multiplication is not correct? Do I have to consider the extrinsics of the color stream with respect to the depth stream?
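For reference on that last question, librealsense can report the depth-to-color extrinsics of a device; a minimal sketch of querying them, assuming a default pipeline configuration:

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    rs2::pipeline pipe;
    rs2::pipeline_profile profile = pipe.start();

    // Rigid-body transform between the depth and color streams:
    // rotation is a column-major 3x3 matrix, translation is in meters.
    rs2_extrinsics e = profile.get_stream(RS2_STREAM_DEPTH)
                              .get_extrinsics_to(profile.get_stream(RS2_STREAM_COLOR));
    (void)e;
}
```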
For the RealSense D41x, the RGB stream is available only via the "left" IR imager. The left and right IR imagers on the D43x series cameras are monochrome sensors; hence, no RGB color. When stitching multiple RealSense point clouds together, a way to do so is with an affine transform. Basically, just rotate and move the point clouds in 3D space, and once you've done that, append the point clouds together so you have one large point cloud. I apologise for any lack of clarity in my answers, as this subject is a bit outside of my direct experience.
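A minimal sketch of that affine-transform stitching with PCL and Eigen, assuming the per-camera 4x4 transforms into the main camera's frame are already known (the names here are illustrative):

```cpp
#include <vector>
#include <Eigen/Dense>
#include <Eigen/StdVector>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/common/transforms.h>

// Sketch of the stitching step described above: transform every camera's
// cloud into the main camera's frame, then append them into one cloud.
pcl::PointCloud<pcl::PointXYZ>::Ptr stitch(
    const std::vector<pcl::PointCloud<pcl::PointXYZ>::Ptr>& clouds,
    const std::vector<Eigen::Matrix4f,
                      Eigen::aligned_allocator<Eigen::Matrix4f>>& from_main_cam)
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr merged(new pcl::PointCloud<pcl::PointXYZ>);
    for (size_t i = 0; i < clouds.size(); ++i)
    {
        pcl::PointCloud<pcl::PointXYZ> transformed;
        pcl::transformPointCloud(*clouds[i], transformed, from_main_cam[i]);
        *merged += transformed; // append into one large cloud
    }
    return merged;
}
```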
As I am using two D435 cameras: so I have to use the infrared stream to get the image of the left camera? Because when using RS2_STREAM_COLOR, I can't select one camera specifically. Do I even have to use the left camera specifically? Why can't I just compute a matrix mapping from the depth frame of one cam to the depth frame of another cam? Yeah, that's exactly what I am doing with my matrices (FromMainCam and the extrinsic matrix), which are Matrix4f matrices rotating and translating the point clouds so that they are in one shared coordinate system. No problem, thank you for your ideas.
If there is an offset between point clouds once they are stitched, I wonder if there is a difference in scale between the individual clouds. If your aim is to get a colorized PCL point cloud, the link below may be useful.
But how could a difference in scale happen? The resolution and image processing are the same.
Each of the cameras should be using the same 'Depth Unit' scale, which is '0.001' by default. Changing the depth unit scale can affect the scale of a point cloud when it is exported from the SDK and imported into another 3D modeling program if both are not using the same depth unit scale (e.g. a point cloud exported as a .ply file and imported into Blender or MeshLab might be much smaller than the original RealSense cloud if there is a depth unit mismatch).
Ok yeah, that's true. I am setting the depth units to 0.0001 manually.
There is a C++ script provided by a RealSense team member for setting the depth units to 0.0001. This would set the depth units automatically each time the program is run. Otherwise, I assume they return to their 0.001 default if you only set them manually once after starting the pipeline.
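The script itself is not reproduced here, but a minimal equivalent using the standard librealsense option API might look like this:

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    // Set the depth unit scale on every connected camera before streaming,
    // so that all devices use the same 0.0001 (0.1 mm) scale.
    rs2::context ctx;
    for (auto&& dev : ctx.query_devices())
        for (auto&& sensor : dev.query_sensors())
            if (sensor.supports(RS2_OPTION_DEPTH_UNITS))
                sensor.set_option(RS2_OPTION_DEPTH_UNITS, 0.0001f);
    // Start the pipelines afterwards as usual.
}
```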
Ok, thanks. Btw, I found out something really interesting: when I use the camera sideways, the offset in height is reduced to 1-2 mm, which is acceptable, I guess. Do you know why that is the case? Do you know of any paper which shows that this reduces distortion or something?
I recall a past case from years ago in which different results could be obtained by rotating the camera. The thinking at the time was that it may have been related to changing how the projected laser light fell upon objects in the scene, though I don't know of a formal documentation reference that confirms that.
This case will be closed 7 days from the date of writing if there are no further responses. Thanks!
Issue Description
Hi,
I don't know if I am in the right place to ask this, but I hope you guys can help me.
I am trying to get a multicam setup working to scan a 3D object. Thus, I need a matrix describing the transformation from each camera's base frame to the base frame of a specified main camera: FromMainCam.
I have done this with a ChArUco board calibration.
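A minimal sketch of one common derivation, assuming each camera estimates the pose of the shared ChArUco board (`boardToCam`, mapping board coordinates into that camera's frame; the names are illustrative, not necessarily the original setup):

```cpp
#include <Eigen/Dense>

// Illustrative derivation: if boardToCam maps board coordinates into a
// camera's frame, points from camera i reach the main camera's frame by
// going camera i -> board -> main camera.
Eigen::Matrix4f from_main_cam(const Eigen::Matrix4f& boardToMain,
                              const Eigen::Matrix4f& boardToCam_i)
{
    return boardToMain * boardToCam_i.inverse();
}
```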
This works pretty well. However, I notice significant systematic offsets. Although the depth map of the ChArUco board seems to be properly aligned, for the component I want to scan there is an offset of up to 1 cm between the depth maps of the individual cameras.
As this is a systematic error, I assume that it has nothing to do with the quality of the ChArUco calibration, but rather with something else.
I tried recalibrating the cameras using the Dynamic Calibrator, but it didn't help.
I tried different color stream formats, Y16 and YUYV, but that didn't help either.
I even tried using infrared streams, but didn't manage to get them working properly.
What am I missing? I have read in similar posts that I have to use unrectified images(?), but the YUYV color stream should be unrectified, am I right?
What else could be the reason for this offset?
Thank you guys for any help.