Need help with aligning RGB and depth images from D435i using pyrealsense2 and Orange Pi #11758
Comments
Hi @CharmingZh, have you tested the RealSense SDK's pyrealsense2 depth-color alignment example program align_depth2color.py, please?
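For reference, a minimal sketch of the align_to approach that align_depth2color.py demonstrates might look like the following; the stream resolutions and frame rates here are illustrative, not the exact example code:

```python
# Minimal sketch of depth-to-color alignment with rs.align (illustrative settings).
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

# Align depth frames to the color stream's viewpoint.
align = rs.align(rs.stream.color)

try:
    frames = pipeline.wait_for_frames()
    aligned = align.process(frames)
    depth_frame = aligned.get_depth_frame()
    color_frame = aligned.get_color_frame()
    if depth_frame and color_frame:
        depth_image = np.asanyarray(depth_frame.get_data())  # uint16 depth, color viewpoint
        color_image = np.asanyarray(color_frame.get_data())  # uint8 BGR
finally:
    pipeline.stop()
```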
Thanks for responding, @MartyG-RealSense
I was hoping to send the frames instance to my desktop via TCP and get the aligned frames on the desktop, but I don't know how to convert the frames to a byte stream. That's why I was hoping to align them manually, with the help of the camera parameters.
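One possible workaround, not confirmed anywhere in this thread, is to skip serializing the composite_frame itself and instead send the raw numpy buffers extracted from the individual frames. A rough sketch, assuming `frames` comes from `pipeline.wait_for_frames()`, `sock` is an already connected TCP socket, and the simple length-prefixed framing is purely illustrative:

```python
# Rough sketch: send raw frame buffers instead of the (unpicklable) composite_frame.
import struct
import numpy as np

depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()

depth_bytes = np.asanyarray(depth_frame.get_data()).tobytes()  # uint16 depth image
color_bytes = np.asanyarray(color_frame.get_data()).tobytes()  # uint8 BGR image

# Prefix each payload with its length so the receiver knows how much to read.
for payload in (depth_bytes, color_bytes):
    sock.sendall(struct.pack('>I', len(payload)) + payload)
```

The receiver can rebuild the arrays with numpy (e.g. np.frombuffer followed by reshape), provided the image dimensions and dtypes are sent alongside or agreed in advance.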
The main alternative to performing alignment with align_to is to map color onto depth with the pc.map_to and pc.calculate instructions to create a textured pointcloud, as demonstrated in a script at #4612 (comment) - though this would not be suitable if you do not want a 3D pointcloud. Aligning manually would not be easy on a D435i, as its color sensor's field of view (FOV) is smaller than its depth sensor's FOV. This difference is normally adjusted for automatically when performing align_to or map_to. If you only need the XYZ coordinates of a specific pixel instead of the entire image, then you can save processing by using the rs2_project_color_pixel_to_depth_pixel instruction to convert a single XY color pixel coordinate to an XYZ depth pixel coordinate instead of performing alignment.
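To illustrate the pc.map_to / pc.calculate route mentioned above, here is a minimal sketch, assuming depth_frame and color_frame were taken from the same frameset:

```python
# Minimal sketch of building a color-textured pointcloud with rs.pointcloud.
import numpy as np
import pyrealsense2 as rs

pc = rs.pointcloud()
pc.map_to(color_frame)              # use the color frame as the texture source
points = pc.calculate(depth_frame)  # compute XYZ vertices from the depth frame

# XYZ vertices and the UV texture coordinates mapping each vertex into the color image.
vertices = np.asanyarray(points.get_vertices())
tex_coords = np.asanyarray(points.get_texture_coordinates())
```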
Thank you very much, @MartyG-RealSense! You really rescued me.
You are very welcome, @CharmingZh - it's great news that you achieved a solution. :)
@MartyG-RealSense
In the past, other RealSense Python users have attempted a numpy to rs2::frame conversion, if that is what you are aiming for, but to date there have been no reports of success in doing so. A RealSense user at #8394 tried converting numpy to BufData instead but also did not succeed.
@CharmingZh |
Hi @CharmingZh, do you require further assistance with this case, please? Thanks!
Case closed due to no further comments received. |
Dear community,
I am facing an issue while trying to align RGB and depth images from a D435i camera using pyrealsense2 on an Orange Pi (similar to a Raspberry Pi). The frame alignment operation takes a long time, resulting in a rate of only 2.5 images per second.
To overcome this issue, I tried to serialize the composite_frame object using pickle.dumps() and send it over TCP, but the call fails:
`frames_serial = pickle.dumps(frames)` raises `TypeError: cannot pickle 'pyrealsense2.pyrealsense2.composite_frame' object`.
I would appreciate any suggestions on how I can serialize this object so that I can send it over TCP.
I also attempted to align the frames manually by first extracting the depth and color frames and aligning them using the camera's intrinsic and extrinsic parameters, but I found that the remapped depth map and the RGB image still do not line up. I have attached my code and the result of my attempt below. Can someone please point out my mistake or help me align the images correctly?
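For comparison, a per-pixel manual mapping with the SDK's own projection helpers would look roughly like the sketch below. It assumes depth_intrin, color_intrin and depth_to_color_extrin are rs.intrinsics / rs.extrinsics objects (however they were obtained), that the extrinsics describe the depth-to-color transform, and that depth_scale is the sensor's depth unit in meters; the function name is illustrative.

```python
# Rough sketch: map one depth pixel into the color image by hand.
import pyrealsense2 as rs

def depth_pixel_to_color_pixel(u, v, raw_depth,
                               depth_intrin, color_intrin,
                               depth_to_color_extrin, depth_scale):
    depth_m = raw_depth * depth_scale  # raw uint16 depth -> meters
    # 2D depth pixel -> 3D point in the depth camera's coordinate frame
    point_depth = rs.rs2_deproject_pixel_to_point(depth_intrin, [u, v], depth_m)
    # 3D point in the depth frame -> 3D point in the color camera's frame
    point_color = rs.rs2_transform_point_to_point(depth_to_color_extrin, point_depth)
    # 3D point -> 2D pixel in the color image
    return rs.rs2_project_point_to_pixel(color_intrin, point_color)
```

Two frequent causes of this kind of misalignment are applying the extrinsics in the wrong direction (color-to-depth instead of depth-to-color) and forgetting to multiply the raw 16-bit depth value by the depth scale before deprojecting.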
Thank you all in advance for your help.
The intrinsic parameters of the RGB and depth cameras, and the extrinsic parameters from the RGB camera to the depth camera, were all obtained with rs-sensor-control and written into the program manually.
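As a possible alternative to copying the values from rs-sensor-control by hand, the same parameters can be queried from the running pipeline; the stream settings below are illustrative:

```python
# Sketch: query intrinsics, extrinsics and depth scale from the active profile.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
profile = pipeline.start(config)

depth_profile = profile.get_stream(rs.stream.depth).as_video_stream_profile()
color_profile = profile.get_stream(rs.stream.color).as_video_stream_profile()

depth_intrin = depth_profile.get_intrinsics()
color_intrin = color_profile.get_intrinsics()
depth_to_color_extrin = depth_profile.get_extrinsics_to(color_profile)
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

pipeline.stop()
```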