D435i outputs mapped Raw16 point cloud ERROR #11543
Hi @weisterki, which method are you using to save the pointcloud, please? Is it a .ply pointcloud file or an image file such as .png? If it is a .ply and you are exporting it with Python code, then .ply color export is unfortunately known to be problematic in Python, with almost no solutions working. If you are exporting a ply and you are okay with exporting a color ply without normals, then the export_to_ply based Python script at #6194 (comment) may work for you.
Yes, I want a .ply point cloud file, and I am indeed exporting it with export_to_ply inside a while True: capture loop.
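The original code block is not shown above; a minimal sketch of that kind of export loop, with the stream settings and output file name assumed for illustration, could be:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1920, 1080, rs.format.raw16, 30)
pipeline.start(config)
pc = rs.pointcloud()

try:
    while True:
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        if not depth_frame or not color_frame:
            continue

        # Attach the colour stream as the texture source, then triangulate the depth frame
        pc.map_to(color_frame)
        points = pc.calculate(depth_frame)

        # Write the vertices plus the mapped texture to disk and stop
        points.export_to_ply("pointcloud.ply", color_frame)
        break
finally:
    pipeline.stop()
```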
For the viewer, the Raw frame is first converted to a demosaiced frame. But saving the point cloud to a PLY file does not go through that conversion: the Raw frame is mapped to the point cloud directly. Here is the question: the mapped_frame is actually raw_frame, and raw_frame contains a property or metadata called 'data' that holds the colour information. We can get the colour data from that 'data' property.
How about defining two separate numpy arrays, such as imgRAW1 and imgRAW2, and storing the same RAW data in both arrays? For example:
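The original example block is not reproduced here; a minimal sketch of the idea, assuming the RAW frame comes from frames.get_color_frame(), could be:

```python
import numpy as np

# raw_frame is assumed to be the rs.format.raw16 colour frame returned by frames.get_color_frame()
imgRAW1 = np.asanyarray(raw_frame.get_data()).copy()  # working copy that may be altered later
imgRAW2 = np.asanyarray(raw_frame.get_data()).copy()  # second copy that keeps the original RAW data
```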
Then if the data stored in imgRAW1 gets altered by the conversion, imgRAW2 should still hold the original RAW data.
I mean that if I want to save the point cloud, I should first map it to the colour frame using pc.map_to().
I wonder whether you could reverse the order of the formats in the cvtColor instruction.
But no matter how you change the data of imgRGB, you cannot pack it back into the frame format, and therefore you cannot map it to the point cloud with pc.map_to().
As I emphasized earlier, the ...
The section of src/points.cpp linked below may be relevant source code for export_to_ply: lines 34 to 60 at commit c94410a.
There is not a clear source-code location for pc.map_to, though the pyrs_processing.cpp pyrealsense2 wrapper file has a reference.
map_to is a method of the rs2::pointcloud class; is there a pointcloud file in the rs2 source folder?
There is src/proc/pointcloud.cpp: https://github.com/IntelRealSense/librealsense/blob/master/src/proc/pointcloud.cpp
It's difficult to solve this from the source code. Could you give any advice on mapping the image to the point cloud?
Ok, I will try that. By the way, could you please explain the function float2* points::get_texture_coordinates()? In the code at #11543 (comment), get_texture_coordinates() seems to generate coordinates which are then passed to the pointcloud function to be transformed into UV coordinates: def pointcloud(out, verts, texcoords, color, painter=True):
Can I utilize this function to get the colour of each point?
Another RealSense team member provides a detailed explanation about get_texture_coordinates() in relation to map_to at #1231 (comment)
That issue indeed gives information about get_texture_coordinates(). The problem of saving a Raw16 format point cloud may be a bug in both the RealSense SDK and the Viewer: as I thought, they may treat the RAW16 Bayer image just like a Y16 grayscale image. But that is incorrect, because the pixel values of a Bayer-pattern image carry R, G and B information, which causes the chaos in the saved point cloud colour.
Is #6234 (comment) helpful for mapping UV coordinates to the color frame?
I used the code in #6234 (comment), and I don't know what colorLocation refers to.
I'm not familiar with UV color programming or the colorLocation instruction. At #1429, though, a RealSense team member provides the following advice: when you calculate the location of the color pixel, note that each pixel is N bytes wide (3 for RGB) when you access its raw data via a char* pointer.
So the unsigned char data holds [3] values per pixel for RGB, which is split into a 3-part array:
the first colorLocation index is the first part, colorLocation + 1 is the second part and colorLocation + 2 is the third part. The value of colorLocation is given by int colorLocation = y * color_intrinsic.width + x; where x and y are calculated from the texture coordinates, e.g. int x = tex_coords[i].u * color_intrinsic.width;
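In Python there is no char* pointer, but the same lookup can be written against a flat numpy view of the colour data. A minimal sketch, with all names assumed for illustration (note the explicit factor of 3, since each RGB pixel occupies three bytes in the flat buffer):

```python
import numpy as np

# Assumed inputs (names are illustrative):
#   color_bytes: flat uint8 view of an RGB8 colour frame, like the C++ char* buffer,
#                e.g. np.asanyarray(color_frame.get_data()).reshape(-1)
#   width, height: resolution of the colour stream
#   u, v: normalised texture coordinates of one point (0-1 range)
x = int(u * width)                       # from tex_coords[i].u
y = int(v * height)                      # from tex_coords[i].v
colorLocation = y * width + x            # pixel index, as in the C++ snippet

# Each RGB pixel occupies three bytes in the flat buffer, hence the factor of 3
r = color_bytes[3 * colorLocation]       # first part
g = color_bytes[3 * colorLocation + 1]   # second part
b = color_bytes[3 * colorLocation + 2]   # third part
```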
I use Python, so my code is: while True: ...
But it throws an ERROR, because Python does not have C++-style pointers and doesn't store the image in the way a char* buffer does.
I use Python, so my code is:
By the way, my UV coordinates are all 0, like in #1429
#6556 suggests using texcoords in Python in the following way.
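The code block from #6556 is not reproduced above; the usual pattern, as used in the SDK's opencv_pointcloud_viewer.py example, is roughly:

```python
import numpy as np

# pc is an rs.pointcloud() on which map_to() has already been called
points = pc.calculate(depth_frame)
v = points.get_vertices()
t = points.get_texture_coordinates()

verts = np.asanyarray(v).view(np.float32).reshape(-1, 3)      # xyz vertex positions
texcoords = np.asanyarray(t).view(np.float32).reshape(-1, 2)  # normalised uv coordinates
```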
I've already used that in my code, as in my answer at #11543 (comment), but I want the Python code for going from UV to RGB after getting the texcoords. Any suggestions about my question at #11543 (comment)? Thanks!
I researched your question extensively but could not find a specific Python example of UV being converted back to RGB after texcoords are obtained. However, a RealSense user who knows more about this particular programming subject than myself once offered the following advice: "The texture coordinates return uv coordinates mapping to the color image with the coordinates normalised to 0-1. So to get from uv to xy coordinates (i.e pixel coordinates in the color image) you have to multiply the u and v values by the color image width and height".
According to the reply at https://community.intel.com/t5/Items-with-no-label/How-to-project-points-to-pixel-Librealsense-SDK-2-0/m-p/479194#M4774, the texture coordinates return UV coordinates mapping to the colour image, with the coordinates normalised to 0-1. That means the value of u multiplied by the colour image width gives the x pixel coordinate, and the value of v multiplied by the image height gives the y pixel coordinate.
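A minimal vectorised sketch of that conversion (variable names are assumptions):

```python
import numpy as np

# texcoords: Nx2 float array of normalised (u, v) values; h, w: colour image height and width
cols = (texcoords[:, 0] * w).astype(int)   # u * width  -> x pixel coordinate (column)
rows = (texcoords[:, 1] * h).astype(int)   # v * height -> y pixel coordinate (row)
```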
https://github.com/IntelRealSense/librealsense/tree/master/examples/pointcloud |
Here is my solution for saving a PLY point cloud with raw RGB (the plyfile package is needed):
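The actual code block is not reproduced above, so the following is only a sketch of the approach described here and in the earlier comments: demosaic the RAW16 frame with OpenCV, map the point cloud to the raw frame, convert UV to pixel coordinates, drop out-of-bounds points, and save with plyfile. Stream settings, variable names and the 8-bit scaling step are assumptions, not the author's original code.

```python
import numpy as np
import cv2
import pyrealsense2 as rs
from plyfile import PlyData, PlyElement

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1920, 1080, rs.format.raw16, 30)
pipeline.start(config)
pc = rs.pointcloud()

try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    raw_frame = frames.get_color_frame()

    # Demosaic the RAW16 Bayer frame ourselves instead of relying on the SDK texture
    imgRAW = np.asanyarray(raw_frame.get_data())
    imgRGB = cv2.cvtColor(imgRAW, cv2.COLOR_BAYER_GBRG2RGB)
    if imgRGB.dtype != np.uint8:
        imgRGB = (imgRGB / 256).astype(np.uint8)  # scale a 16-bit demosaic result down to 8-bit

    # Map the point cloud to the raw colour frame to obtain the UV texture coordinates
    pc.map_to(raw_frame)
    points = pc.calculate(depth_frame)
    verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
    texcoords = np.asanyarray(points.get_texture_coordinates()).view(np.float32).reshape(-1, 2)

    # Convert normalised UV to pixel coordinates and drop points that fall outside
    # the colour image (the depth FOV is wider than the RGB FOV)
    h, w = imgRGB.shape[:2]
    cols = (texcoords[:, 0] * w).astype(int)
    rows = (texcoords[:, 1] * h).astype(int)
    valid = (cols >= 0) & (cols < w) & (rows >= 0) & (rows < h)
    verts, cols, rows = verts[valid], cols[valid], rows[valid]
    colors = imgRGB[rows, cols]

    # Write the coloured vertices to a PLY file with the plyfile package
    vertex = np.zeros(len(verts), dtype=[('x', 'f4'), ('y', 'f4'), ('z', 'f4'),
                                         ('red', 'u1'), ('green', 'u1'), ('blue', 'u1')])
    vertex['x'], vertex['y'], vertex['z'] = verts[:, 0], verts[:, 1], verts[:, 2]
    vertex['red'], vertex['green'], vertex['blue'] = colors[:, 0], colors[:, 1], colors[:, 2]
    PlyData([PlyElement.describe(vertex, 'vertex')]).write('pointcloud_rgb.ply')
finally:
    pipeline.stop()
```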
I hope it is useful to anyone who may need it. I cut off the UV locations that are out of bounds, since I believe those are caused by the depth camera's FOV being larger than the RGB camera's FOV.
Thanks so much for sharing your code with the RealSense community!
Thanks, Marty, and I also hope Intel can fix this bug in the SDK and RealSense Viewer :-)
Hi @weisterki, as you have achieved a solution, do you require further assistance with this case, please? Thanks!
Thanks, Marty. You can close this issue.
Thanks very much for the confirmation!
Issue Description
I want to capture a point cloud with the Raw16 stream via a D435i. I changed the camera configuration to:
![image](https://user-images.githubusercontent.com/20593723/223669310-99cf51bf-24ce-416f-8142-e82a3a65bc2f.png)
![image](https://user-images.githubusercontent.com/20593723/223671869-e2ef032e-8868-4c02-9f05-436804fd81e7.png)
config.enable_stream(rs.stream.color, 1920, 1080, rs.format.raw16, 30)
and transformed the Raw16 stream to a demosaiced RGB stream in the capture loop:
imgRGB = cv2.cvtColor(imgRAW, cv2.COLOR_BAYER_GBRG2RGB)
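For context, a minimal sketch of the capture loop being described (variable names other than those quoted above are assumptions):

```python
import numpy as np
import cv2
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 1920, 1080, rs.format.raw16, 30)
pipeline.start(config)

while True:
    frames = pipeline.wait_for_frames()
    raw_frame = frames.get_color_frame()
    imgRAW = np.asanyarray(raw_frame.get_data())
    # Demosaic the GBRG Bayer pattern into an RGB image
    imgRGB = cv2.cvtColor(imgRAW, cv2.COLOR_BAYER_GBRG2RGB)
    # ... display or save imgRGB here ...
```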
In the viewer I got the stream I wanted:
However, when I saved the point cloud, I found that its colour is incorrect.
That's weird, because the demosaiced stream should be the same as an RGB8 stream when saving; the demosaiced stream is formatted as uint8. I am looking for the reason for this problem, and I would be very appreciative if you could give me some advice!