Multicam Extrinsic Calibration #6007 (Closed)

fl0ydj opened this issue Mar 10, 2020 · 14 comments

fl0ydj commented Mar 10, 2020

Required Info

  • Camera Model: D435
  • Firmware Version: 05.12.02.100
  • Operating System & Version: Win 10
  • Platform: PC
  • SDK Version: 2.29.0
  • Language: C++
  • Segment: 3D scanning

Issue Description

Hi,
I don't know if this is the right place to ask, but I hope you can help me.
I am trying to get a multicam setup working to scan a 3D object. Thus, I need a matrix describing the transformation from each camera's base frame to the base frame of a specified main camera: FromMainCam.
I have done this in the following way:

  • enable the depth stream (Z16) and a color stream (I tried RGB8, Y16, YUYV)
  • wait_for_frames
  • align to depth
  • detect ChArUco corners and estimate the extrinsic matrix of each camera (similar to this tutorial: https://docs.opencv.org/3.4/df/d4a/tutorial_charuco_detection.html)
  • calculate FromMainCam = MainCam_ExtrinsicMatrix.inverse() * ExtrinsicMatrix
  • use this matrix to calculate the extrinsic matrix of each camera from the updated extrinsic matrix of the main camera: ExtrinsicMatrix = MainCam_ExtrinsicMatrix * FromMainCam (see the sketch after this list)
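
For reference, a minimal sketch of the pose composition described above, assuming the OpenCV aruco module and Eigen. The rvec/tvec inputs are the ones returned by cv::aruco::estimatePoseCharucoBoard, and the helper names (toMatrix4f, camToMainCam) are illustrative placeholders, not part of the original code:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <Eigen/Dense>

// Build a 4x4 board-to-camera transform from the rvec/tvec that
// cv::aruco::estimatePoseCharucoBoard returns (p_cam = R * p_board + t).
Eigen::Matrix4f toMatrix4f(const cv::Vec3d& rvec, const cv::Vec3d& tvec)
{
    cv::Mat R;
    cv::Rodrigues(rvec, R);   // 3x3 rotation matrix, CV_64F
    Eigen::Matrix4f T = Eigen::Matrix4f::Identity();
    for (int r = 0; r < 3; ++r) {
        for (int c = 0; c < 3; ++c)
            T(r, c) = static_cast<float>(R.at<double>(r, c));
        T(r, 3) = static_cast<float>(tvec[r]);
    }
    return T;
}

// With board-to-camera transforms T_main and T_cam, the transform taking
// points from a secondary camera's frame into the main camera's frame is
// p_main = T_main * T_cam^-1 * p_cam.
Eigen::Matrix4f camToMainCam(const Eigen::Matrix4f& T_main,
                             const Eigen::Matrix4f& T_cam)
{
    return T_main * T_cam.inverse();
}
```

Note that the direction of the composition depends on the convention of the extrinsic matrices: OpenCV's pose estimate maps board to camera, so the camera-to-main transform is T_main * T_cam.inverse(); if your matrices store camera poses (camera to board), the inverse ends up on the other factor.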

This works pretty well. However, I notice significant systematic offsets. Although the depth map of the ChArUco board seems to be properly aligned, for the component I want to scan there is an offset of up to 1 cm between the depth maps of the individual cameras.
As this is a systematic error, I assume it has nothing to do with the quality of the ChArUco calibration, but rather with something else.
I tried recalibrating the cameras using the Dynamic Calibrator, but it didn't help.
I tried different color stream formats (Y16, YUYV), but that didn't help either.
I even tried using the infrared streams, but didn't manage to get them working properly.

What am I missing? I have read in similar posts that I have to use unrectified images(?), but the YUYV color stream should be unrectified, am I right?
What else could be the reason for this offset?

Thank you guys for any help.

MartyG-RealSense (Collaborator) commented:

Hi @fl0ydj Before we begin, I should highlight that there is a commercial 3D multicam solution available called RecFusion Pro.

https://www.recfusion.net/index.php/en/features

It costs 499 euro but has a trial version, and RecFusion has been complimented by RealSense users.

If you prefer to continue with your own project, I am happy to help. To start with, could you please tell me if your scans are creating a solid 3D mesh or a point cloud? Thanks!

fl0ydj commented Mar 10, 2020

Hi @MartyG-RealSense,
thanks for your quick response and the reference to the commercial software. However, I would prefer to continue with my own project.

At the moment, I am using librealsense to create an rs2::pointcloud and convert it to a PCL point cloud, as this makes it easy to transform the clouds and do further post-processing.
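
A minimal sketch of such a conversion, assuming the input is the rs2::points buffer returned by rs2::pointcloud::calculate (the helper name toPCL is a placeholder):

```cpp
#include <librealsense2/rs.hpp>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Convert an rs2::points buffer (from rs2::pointcloud::calculate) into a
// PCL cloud, skipping invalid vertices (depth == 0 maps to z == 0).
pcl::PointCloud<pcl::PointXYZ>::Ptr toPCL(const rs2::points& points)
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    const rs2::vertex* v = points.get_vertices();
    cloud->reserve(points.size());
    for (size_t i = 0; i < points.size(); ++i)
        if (v[i].z != 0.0f)
            cloud->push_back(pcl::PointXYZ(v[i].x, v[i].y, v[i].z));
    return cloud;
}
```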

MartyG-RealSense (Collaborator) commented Mar 10, 2020

Y16 is the only unrectified stream type. It is also greyscale when selected as an RGB format, according to a RealSense support team member.

#1554 (comment)

If you are using any post-processing, align should be done after the post-processing to help avoid a type of distortion called aliasing (which can cause jagged lines).
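
A minimal sketch of that ordering, using two of the SDK's standard post-processing filters as examples:

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    rs2::pipeline pipe;
    pipe.start();

    rs2::decimation_filter dec;   // example post-processing filters
    rs2::spatial_filter spat;
    rs2::align align_to_depth(RS2_STREAM_DEPTH);

    rs2::frameset frames = pipe.wait_for_frames();

    // Post-processing first ...
    frames = frames.apply_filter(dec).apply_filter(spat);

    // ... then align, so resampling artifacts are not baked into the result.
    rs2::frameset aligned = align_to_depth.process(frames);
    (void)aligned;
    return 0;
}
```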

fl0ydj commented Mar 10, 2020

OK, thanks. In the custom calibration white paper, however, they also speak of YUY2 (which is the same as YUYV?). When using Y16, do I automatically get the color frame of the left camera?

Thanks for that hint as well; for the calibration frames I am not using any librealsense post-processing.

Maybe my matrix multiplication is not correct? Do I have to consider the extrinsics of the color stream with respect to the depth stream?
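
For reference, the color-to-depth extrinsics can be read from the stream profiles with get_extrinsics_to. A minimal sketch, assuming the pipeline's default configuration enables both streams:

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>

int main()
{
    rs2::pipeline pipe;
    rs2::pipeline_profile profile = pipe.start();

    rs2::stream_profile color = profile.get_stream(RS2_STREAM_COLOR);
    rs2::stream_profile depth = profile.get_stream(RS2_STREAM_DEPTH);

    // Rigid transform taking points from the color frame to the depth frame;
    // rotation is a column-major 3x3 matrix, translation is in meters.
    rs2_extrinsics e = color.get_extrinsics_to(depth);

    std::cout << "t = [" << e.translation[0] << ", " << e.translation[1]
              << ", " << e.translation[2] << "]\n";
    return 0;
}
```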

MartyG-RealSense (Collaborator) commented:

For the RealSense D41x, the RGB stream is available only via the "left" IR imager.

The left and right IR imagers on the D43x series cameras are monochrome sensors; hence, no RGB color.

When stitching multiple RealSense point clouds together, one way to do so is with an affine transform: you rotate and move the point clouds in 3D space, and once you have done that, you append the point clouds together to get one large point cloud.
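
A minimal sketch of such a stitch with PCL, assuming one camera-to-main transform per cloud (the helper name stitch is a placeholder):

```cpp
#include <vector>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/common/transforms.h>
#include <Eigen/Dense>

// Transform each camera's cloud into the main camera's frame, then append
// everything into one large cloud.
pcl::PointCloud<pcl::PointXYZ>::Ptr stitch(
    const std::vector<pcl::PointCloud<pcl::PointXYZ>::Ptr>& clouds,
    const std::vector<Eigen::Matrix4f>& camToMain)  // one transform per cloud
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr merged(new pcl::PointCloud<pcl::PointXYZ>);
    for (size_t i = 0; i < clouds.size(); ++i) {
        pcl::PointCloud<pcl::PointXYZ> transformed;
        pcl::transformPointCloud(*clouds[i], transformed, camToMain[i]);
        *merged += transformed;   // append into the shared coordinate system
    }
    return merged;
}
```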

I apologise for any lack of clarity in my answers, as this subject is a bit outside of my direct experience.

fl0ydj commented Mar 10, 2020

As I am using two D435 cameras: so I have to use the infrared stream to get the image of the left camera? When using RS2_STREAM_COLOR, I can't select one imager specifically. Do I even have to use the left camera specifically? Why can't I just compute a matrix mapping from the depth frame of one camera to the depth frame of another camera?

Yeah, that's exactly what I am doing with my matrices (FromMainCam and ExtrinsicMatrix), which are Matrix4f matrices rotating and translating the point clouds so that they end up in one shared coordinate system.

No problem, thank you for your ideas.
As I have seen that you are quite active in this forum, @MartyG-RealSense: do you know of anyone who has had a similar problem? I couldn't find any issues directly related to mine.

MartyG-RealSense (Collaborator) commented:

If there is an offset between point clouds once they are stitched, I wonder if there is a difference in scale between the individual clouds.

If your aim is to get a colorized PCL point cloud, the link below may be useful.

#1143

fl0ydj commented Mar 10, 2020

But how could there be a difference in scale? The resolution and image processing are the same for both cameras.

MartyG-RealSense (Collaborator) commented Mar 10, 2020

Each of the cameras should be using the same 'Depth Unit' scale, which is 0.001 by default. Changing the depth unit scale can also affect the scale of a point cloud when it is exported from the SDK and imported into another 3D modeling program, if the two are not using the same depth unit scale (e.g. a point cloud exported as a .ply file and imported into Blender or MeshLab might be much smaller than the original RealSense cloud if there is a depth unit mismatch).
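
A minimal sketch for checking this: print the depth scale of every connected camera so any mismatch stands out.

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>

int main()
{
    // Query every connected RealSense device and report its depth scale.
    rs2::context ctx;
    for (rs2::device dev : ctx.query_devices()) {
        rs2::depth_sensor ds = dev.first<rs2::depth_sensor>();
        std::cout << dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER)
                  << ": depth scale = " << ds.get_depth_scale() << " m/unit\n";
    }
    return 0;
}
```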

fl0ydj commented Mar 11, 2020

OK yeah, that's true. I am setting the depth units to 0.0001 manually.
Do you know if I have to do so again after stopping and restarting the pipeline?

MartyG-RealSense (Collaborator) commented:

There is a C++ script provided by a RealSense team member for setting the depth units to 0.0001:

#2385 (comment)

This would set the depth units automatically each time the program is run. Otherwise, if you set them manually, I assume they return to their 0.001 default once the pipeline is stopped and started again.
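
A minimal sketch of the same idea (not the script from the linked comment): set RS2_OPTION_DEPTH_UNITS on each depth sensor before streaming starts.

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    // Set the depth units on every connected camera before streaming.
    rs2::context ctx;
    for (rs2::device dev : ctx.query_devices()) {
        rs2::depth_sensor ds = dev.first<rs2::depth_sensor>();
        if (ds.supports(RS2_OPTION_DEPTH_UNITS))
            ds.set_option(RS2_OPTION_DEPTH_UNITS, 0.0001f);
    }

    rs2::pipeline pipe;
    pipe.start();   // streams now report depth in 0.1 mm units
    return 0;
}
```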

fl0ydj commented Mar 11, 2020

Ok, thanks.

By the way, I found out something really interesting: when I use the camera sideways, the offset in height is reduced to 1-2 mm, which is acceptable, I guess. Do you know why that is the case? Do you know of any paper which shows that this reduces distortion(?) or something similar?

MartyG-RealSense (Collaborator) commented:

I recall a past case from years ago in which different results could be obtained by rotating the camera. The thinking at the time was that it may have been related to changing how the projected laser light fell upon objects in the scene, though I don't know of a formal documentation reference that confirms that.

MartyG-RealSense (Collaborator) commented:

This case will be closed after 7 days from the date of writing this if there are no further responses. Thanks!
