
Pointcloud Reprojection onto External Camera #5743

Closed
nancibles opened this issue Jan 29, 2020 · 7 comments

Comments

@nancibles

Required Info
Camera Model: D415
Firmware Version: Open RealSense Viewer --> Click info
Operating System & Version: Win 10
Platform: PC
SDK Version: { legacy / 2.. }
Language: python
Segment: Robot

Issue Description

I'm trying to save a pointcloud and then align it to a separate camera that is mounted just below the RealSense camera's RGB camera. My goal is to produce a depth map that can be overlaid on my own camera's images (similar to what rs.align does, except mapping to an external camera and NOT the D415's RGB camera). My steps so far are:

- Capture an RGB image from my own camera, which is mounted to the D415.
- Capture an RGB image from the D415's RGB camera at the same time.
- Use rs.align to align the depth image from the D415 to the D415's colour image.
- Feed the D415's RGB image and the RGB image from my separate camera into the MATLAB Stereo Camera Calibrator Toolbox to get each camera's intrinsic parameters and the extrinsic parameters between the D415 and my own camera.
- Reproject the pointcloud using the extrinsic parameters, then apply my own camera's intrinsic parameters to map it onto my camera's image.

import numpy as np
import pyrealsense2 as rs

# Setup omitted from the original snippet, reconstructed so it runs:
pipe = rs.pipeline()
cfg = rs.config()
filter_stack_size = 10  # number of frames to capture and feed through the filters

decimation = rs.decimation_filter()  # note: downsamples; the SDK scales the frame's intrinsics to match
depth_to_disparity = rs.disparity_transform(True)
spatial = rs.spatial_filter()
temporal = rs.temporal_filter()
disparity_to_depth = rs.disparity_transform(False)
hole_filling = rs.hole_filling_filter()
pc = rs.pointcloud()

profile = pipe.start(cfg)
depth_frames = []
color_frames = []

# Align every depth frame to the D415's colour stream
align_to = rs.stream.color
align = rs.align(align_to)

for x in range(filter_stack_size):
    frameset = pipe.wait_for_frames()
    aligned_frames = align.process(frameset)
    depth_frames.append(aligned_frames.get_depth_frame())
    color_frames.append(aligned_frames.get_color_frame())
pipe.stop()

# Run the post-processing stack; the temporal filter accumulates history
# across iterations, so the final `frame` is the fully filtered result.
for x in range(filter_stack_size):
    frame = depth_frames[x]
    frame = decimation.process(frame)
    frame = depth_to_disparity.process(frame)
    frame = spatial.process(frame)
    frame = temporal.process(frame)
    frame = disparity_to_depth.process(frame)
    frame = hole_filling.process(frame)

# Texture-map against the first colour frame and compute the pointcloud
pc.map_to(color_frames[0])
points = pc.calculate(frame)
vtx = np.asanyarray(points.get_vertices())
tex = np.asanyarray(points.get_texture_coordinates())
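A side note on the last two lines: the buffers returned by get_vertices() and get_texture_coordinates() are structured arrays, so for matrix maths they are usually reinterpreted as plain float arrays first, e.g.:

verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)                  # (N, 3) xyz in metres
texcoords = np.asanyarray(points.get_texture_coordinates()).view(np.float32).reshape(-1, 2)   # (N, 2) uv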

I keep having trouble with the alignment: the reprojected pointcloud is still not quite overlaid on the images taken from my own camera. The y-axis has the biggest discrepancy, and the x-axis is also not quite right. I'm wondering about a few things (though if anyone can see something wrong with my process, I'd appreciate that too):

  1. When I use rs.align to align the depth map onto the RGB image of the D415, does the pointcloud also get modified to line up with the RGB image so that when I save it, it is the same as the depth image? I am using pyrealsense2.

  2. I am assuming that the intrinsic parameters of the depth image should be the same as the colour image. Is that a correct assumption?

I'm pretty new to using the RealSense depth sensor, but I do appreciate the help. Thanks!

@RealSenseCustomerSupport
Collaborator


@nancibles
For your 1st question, did you apply the intrinsic parameters along with the extrinsic parameters between the D415 and your camera? There seems to be no such processing in your code snippet.

For the 2nd question, the intrinsic parameters of the depth image and the color image are not the same. You can use rs-enumerate-devices to check these parameters in more detail: https://github.com/IntelRealSense/librealsense/tree/master/tools/enumerate-devices
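The same information is also available directly from pyrealsense2. A minimal sketch against a started pipeline (the stream choices and defaults here are assumptions, not part of the original report):

import pyrealsense2 as rs

pipe = rs.pipeline()
profile = pipe.start()  # default streams

color = profile.get_stream(rs.stream.color).as_video_stream_profile()
depth = profile.get_stream(rs.stream.depth).as_video_stream_profile()

print(color.get_intrinsics())           # fx, fy, ppx, ppy, distortion model + coefficients
print(depth.get_intrinsics())           # note: differs from the colour intrinsics
print(depth.get_extrinsics_to(color))   # depth -> colour extrinsics

pipe.stop()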


@jb455
Contributor

jb455 commented Feb 12, 2020

The aligned depth image will have the intrinsics of the colour image if that's what you mean.

How are you doing the colour -> external alignment? One thing I would check is that your extrinsic calibration is good enough - you need plenty of photos at different angles and distances to get a good calibration. Also, make sure the external camera can't move relative to the realsense camera or you'll have to recalibrate.
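One quick way to verify the intrinsics point in pyrealsense2 (a sketch reusing aligned_frames from the snippet above): after rs.align, the aligned depth frame's profile carries the intrinsics of the stream it was aligned to.

aligned_depth = aligned_frames.get_depth_frame()
intr = aligned_depth.profile.as_video_stream_profile().get_intrinsics()
print(intr)  # matches the colour stream's intrinsics after alignment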

@nancibles
Author

Currently, I am using the MATLAB Stereo Camera Calibrator to obtain the extrinsic parameters. I have fixed the two cameras in place and used a set of ~25 image pairs from the D415's RGB camera and my external camera; the error reported by this calibration is very low.

I have a separate script that applies the extrinsic parameters to reproject the pointcloud so it is "from the same perspective" as my external camera, and then applies the intrinsic parameters to produce a depth map that should overlay directly on my external camera's image.
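A minimal sketch of that second script's core step (assumptions: R and t map points from the D415 colour-camera frame into the external camera's frame in column-vector convention, fx/fy/cx/cy are the external camera's intrinsics from the MATLAB calibration, verts is an (N, 3) array of vertices in metres, and lens distortion is ignored; note that MATLAB's stereo calibration reports R/t in a row-vector convention x2 = x1*R + t, so the rotation matrix may need transposing first):

import numpy as np

def reproject_to_external(verts, R, t, fx, fy, cx, cy, width, height):
    # Transform the pointcloud vertices into the external camera's frame
    pts = verts @ R.T + t
    pts = pts[pts[:, 2] > 0]                # keep points in front of the camera
    # Pinhole projection with the external camera's intrinsics
    u = np.round(fx * pts[:, 0] / pts[:, 2] + cx).astype(int)
    v = np.round(fy * pts[:, 1] / pts[:, 2] + cy).astype(int)
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth_map = np.zeros((height, width), dtype=np.float32)
    depth_map[v[ok], u[ok]] = pts[ok, 2]    # occlusion (nearest-z) handling omitted
    return depth_map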

Should I be using the intrinsics of the depth image instead? What is happening to the pointcloud when I run rs.align, and can I assume that the pointcloud has the same frame (or field of view) as the depth image after alignment?

@jb455
Contributor

jb455 commented Feb 13, 2020

If you generate the pointcloud from the aligned depth frame, it will have the colour intrinsics and correspond to the colour FOV. Once you have the pointcloud aligned to the external camera, you'd use the external intrinsics to project it onto the external camera's image plane.
Maybe your extrinsics are the wrong way around? ie, colour->external vs external->colour. To reverse it you could either swap camera 1 and camera 2 when calibrating, or invert the transform directly: transpose the rotation matrix and apply it to the negated translation vector (R' = Rᵀ, t' = -Rᵀt), as sketched below.
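A minimal sketch of that inversion, assuming the transform is applied as x2 = R @ x1 + t (column-vector convention):

import numpy as np

def invert_extrinsics(R, t):
    # For a rigid transform x2 = R @ x1 + t, the inverse is x1 = R.T @ x2 - R.T @ t
    R_inv = R.T          # rotation matrices are orthonormal, so inverse == transpose
    t_inv = -R.T @ t
    return R_inv, t_inv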

@RealSenseCustomerSupport
Collaborator


@nancibles Any other questions about this? Looking forward to your update. Thanks!

@nancibles
Author

nancibles commented Mar 4, 2020

Still working on the reprojection, but as jb455 said, the depth frame after alignment appears to have the intrinsics of the colour stream.

Not completely related, and I'm not sure if I should open a new issue, but I'm wondering if there's a reason for a grid artifact that occasionally appears on the pointcloud:

[Two screenshots showing a grid-like artifact on the pointcloud]

@RealSenseCustomerSupport
Collaborator


@nancibles Yes, please create another new ticket if it's another issue.
