Pointcloud Reprojection onto External Camera #5743
@nancibles For the 2nd question, the intrinsic parameters of the depth image and the color image are not the same. You can use rs-enumerate-devices to check these parameters in more detail: https://github.com/IntelRealSense/librealsense/tree/master/tools/enumerate-devices
The aligned depth image will have the intrinsics of the colour image, if that's what you mean. How are you doing the colour -> external alignment? One thing I would check is that your extrinsic calibration is good enough - you need plenty of photos at different angles and distances to get a good calibration. Also, make sure the external camera can't move relative to the RealSense camera, or you'll have to recalibrate.
Currently, I am using the MATLAB Stereo Camera Calibrator to obtain the extrinsic parameters. I have fixed the two cameras in place and used a set of ~25 pairs of images from the D415's RGB camera and my external camera; the error from this calibration is very low. I have a separate script that applies the extrinsic parameters to reproject the pointcloud so it is "from the same perspective" as my external camera, and then applies the intrinsic parameters to produce a depth map that should overlay directly on my external camera's image. Should I be using the intrinsics of the depth image instead? What happens to the pointcloud when I run rs.align, and can I assume that the pointcloud has the same frame (or field of view) as the depth image after alignment?
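One convention pitfall worth ruling out with MATLAB calibrations: `stereoParameters` stores the rotation for MATLAB's row-vector convention (`p2 = p1 * R + t`) and the translation in the checkerboard's units (often millimetres), while most projection code expects the column-vector, metres convention (`p2 = R @ p1 + t`). A minimal sketch of that conversion, with hypothetical placeholder values rather than a real calibration:

```python
import numpy as np

# Hypothetical values as exported from MATLAB's stereoParameters
# (RotationOfCamera2 / TranslationOfCamera2); placeholders, not real data.
R_matlab = np.eye(3)                     # MATLAB row-vector convention: p2 = p1 * R + t
t_matlab = np.array([0.0, -55.0, 0.0])   # often millimetres (checkerboard square units)

# Convert to the column-vector, metres convention: p2 = R @ p1 + t
R = R_matlab.T
t = t_matlab / 1000.0

def to_external_frame(points):
    """Transform Nx3 points (metres, colour-camera frame) into the
    external camera's frame using the converted extrinsics."""
    return points @ R.T + t
```

Getting the transpose or the units wrong produces exactly the kind of constant pixel offset described here, so it is cheap to check.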
If you generate the pointcloud from the aligned depth frame, it will have the colour intrinsics and correspond to the colour FOV. Once you have the pointcloud aligned to the external camera, to project it to the external camera's image plane you'd use the external intrinsics.
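That final projection step can be sketched as a plain pinhole projection. The external intrinsics below are made-up placeholders (a real setup would use the values from the MATLAB calibration, and would also need its distortion model):

```python
import numpy as np

# Hypothetical external-camera intrinsics; substitute the calibrated values.
fx, fy = 900.0, 900.0
cx, cy = 640.0, 360.0

def project_to_external(points):
    """Pinhole-project Nx3 points (already transformed into the external
    camera's frame, metres) onto its image plane; returns Nx2 pixels."""
    pts = np.asarray(points, dtype=float)
    z = pts[:, 2]
    u = fx * pts[:, 0] / z + cx
    v = fy * pts[:, 1] / z + cy
    return np.stack([u, v], axis=1)
```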
@nancibles Any other questions about this? Looking forward to your update. Thanks!
Still working on the reprojection, but as jb455 said, it seems the depth frame after alignment will have the intrinsics of the colour stream. Not completely related, and I'm not sure if I should open a new issue, but I'm wondering if there's a reason for a grid/artifact occasionally appearing on the pointcloud (see attached screenshot).
@nancibles Yes, please create another new ticket if it's another issue.
Issue Description
I'm trying to save a pointcloud and then align it to a separate camera that is mounted just below the RealSense camera's RGB camera. My goal is to have a depth map that can be overlaid on my own camera's images (similar to what rs.align does, except that I'm mapping to an external camera, NOT the D415's RGB camera). My current steps so far are:
-Capture an RGB image from my own camera that is mounted to the D415.
-Capture an RGB image from the D415's camera at the same time
-Use rs.align to align the depth image gathered from D415 to the colour image on the D415.
-Use the D415's RGB image, along with the RGB image from my separate camera in the MATLAB Stereo Camera Calibrator Toolbox in order to get the intrinsic parameters along with the extrinsic parameters between D415 and my own camera.
-Reproject the pointcloud using the extrinsic parameters, and then apply the intrinsic parameters of my own camera to project it onto that camera's image plane.
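A pure-NumPy sketch of the step that turns the aligned depth image into a pointcloud (per-pixel, the equivalent of librealsense's rs2_deproject_pixel_to_point) might look like the following. Since the aligned depth frame carries the colour stream's intrinsics, those are the values to use; the numbers here are placeholders, not a real calibration:

```python
import numpy as np

# Hypothetical D415 colour intrinsics (the aligned depth frame shares these).
# In pyrealsense2 they come from
# profile.get_stream(rs.stream.color).as_video_stream_profile().get_intrinsics()
fx, fy, ppx, ppy = 615.0, 615.0, 320.0, 240.0
depth_scale = 0.001  # D415 default: one depth unit = 1 mm

def deproject(depth_image):
    """Turn an aligned HxW uint16 depth image into an Nx3 pointcloud (metres),
    deprojecting every pixel with the colour intrinsics."""
    h, w = depth_image.shape
    v, u = np.mgrid[0:h, 0:w]               # pixel row (v) and column (u) indices
    z = depth_image.astype(float) * depth_scale
    x = (u - ppx) / fx * z
    y = (v - ppy) / fy * z
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]               # drop invalid (zero-depth) pixels
```

The resulting points are in the colour camera's frame, so applying the MATLAB extrinsics to them (rather than to the raw depth-frame points) is consistent with having run rs.align first.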
I keep having trouble with the alignment: the reprojected pointcloud is still not quite overlaid on the images taken from my own camera. The y-axis has the biggest discrepancy, and the x-axis is also not quite right. I'm wondering about the following (though of course if anyone can spot something wrong with my process, I'd appreciate that too):
When I use rs.align to align the depth map onto the RGB image of the D415, does the pointcloud also get modified to line up with the RGB image so that when I save it, it is the same as the depth image? I am using pyrealsense2.
I am assuming that the intrinsic parameters of the depth image should be the same as the colour image. Is that a correct assumption?
I'm pretty new to using the RealSense depth sensor, but I do appreciate the help. Thanks!