Relationship between a point in the RGB image and its corresponding point in the depth image #2948
Comments
Hi @Medo91code. The mathematics of these transformations are concentrated in rsutil.h. Specifically, for the D400 cameras the depth image is generated from the left and right infrared streams and is perfectly aligned to the left infrared image. Given a pixel and its depth, you can convert it to a 3D point. You can also query the extrinsics between any two streams; this gives you the rotation and translation between the two sensors. Multiply by the resulting matrix, then do the whole procedure in reverse, and you get a pixel in the color image. Let me know if this answers your question.
Thank you for your response. First, let me describe exactly what I will do:
1. I will detect a shape in the RGB image (for example, a circle) and get the center of this circle (Xrgb, Yrgb). So I now have (Xrgb, Yrgb).
2. Your answer assumes that, as a first step, I already have the depth of the pixel (Xrgb, Yrgb). I see two cases:
Case 1: I call `get_distance(x, y)`, and here I use the RGB frame, not the depth frame, right? For example, I now have the pixel (Xrgb, Yrgb) and its depth, so I can use `rs2_deproject_pixel_to_point`.
Case 2: I cannot use `RGB_frame.get_distance(x, y)` and can only use `dpt_frame.get_distance(x, y)`. For example, the center of the circle in the RGB image is Xrgb = 500, Yrgb = 900, and the center of the same circle in the depth image is some (Xd, Yd), but I do not know (Xd, Yd), because I apply my detection algorithm to the RGB image only. I want to convert from (Xrgb, Yrgb) to (Xd, Yd) and then call `get_distance(Xd, Yd)`. How can I do that, or is it possible at all?
Hi @Medo91code. We have a Python tutorial showing how to use depth to estimate the distance to an object detected in the RGB frame: Distance to Object.
Thank you @dorodnic for your response. I know this takes some of your time, but could you please provide a practical example, or tell me what the parameters of `rs2_project_color_pixel_to_depth_pixel` are and where I can get their values? The parameters are: float to_pixel[2], … Thank you.
librealsense/unit-tests/unit-tests-live.cpp Line 4936 in 79dda7f
if you ask this
Hi @Medo91code
Hi, thank you @dorodnic @yuemasy @RealSense-Customer-Engineering
@dorodnic, should I perform depth-to-RGB alignment before using the function rs2_project_color_pixel_to_depth_pixel()? Thank you in advance.
Hello, @Medo91code @dorodnic.
Hello @Medo91code @dorodnic. I have the problem in the reverse direction: I want to project a depth pixel to a color pixel, the opposite of rs2_project_color_pixel_to_depth_pixel. Please help. I tried doing the reverse of
Hello
I use a D435 and I need to detect a shape (a point/pixel in this shape) in the RGB image at resolution 1920x1080. Say the coordinate of this point (the red point in the RGB image) is (1000, 700).
After that I get the distance with:
rs2::depth_frame dpt_frame = frame.as<rs2::depth_frame>();
float pixel_distance_in_meters = dpt_frame.get_distance(x, y);
But the pixel (740, 500) in the depth image represents this point, while the pixel (1000, 700) in the depth image represents a different point.
(I use a 1280x720 depth image.)
I mean, there is a displacement and other transformations between the RGB image and the depth image (even if I use the same resolution for both).
I want to know the mathematical formulas or code/methods to extract these displacements and transformations between the RGB image (1920x1080) and the depth image (1280x720).
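For reference, the displacement asked about here is fully determined by each sensor's intrinsics (focal lengths and principal point) and the extrinsic rotation R and translation t between the sensors. In the pinhole model (ignoring lens distortion), a sketch of the mapping for a depth pixel (u_d, v_d) with measured depth Z is:

```latex
% Deproject the depth pixel to a 3D point in the depth sensor's frame
X_d = \frac{(u_d - p_{x,d})\,Z}{f_{x,d}}, \qquad
Y_d = \frac{(v_d - p_{y,d})\,Z}{f_{y,d}}, \qquad
Z_d = Z

% Move the point into the color sensor's frame (extrinsics)
\begin{pmatrix} X_c \\ Y_c \\ Z_c \end{pmatrix}
  = R \begin{pmatrix} X_d \\ Y_d \\ Z_d \end{pmatrix} + t

% Project into the color image (color intrinsics)
u_c = \frac{f_{x,c}\,X_c}{Z_c} + p_{x,c}, \qquad
v_c = \frac{f_{y,c}\,Y_c}{Z_c} + p_{y,c}
```

These are, to my understanding, the formulas implemented by rs2_deproject_pixel_to_point, rs2_transform_point_to_point and rs2_project_point_to_pixel in rsutil.h (the SDK versions also handle the distortion models); the intrinsics and extrinsics come from the stream profiles via get_intrinsics() and get_extrinsics_to(). Going from an RGB pixel to a depth pixel cannot use these formulas directly, because Z is not known for the RGB pixel.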
I tested 3 examples:
1- rs-align.cpp
2- rs-measure.cpp
3- rs-pointcloud.cpp
These three examples are great, but with them I get an "aligned image" or a "point-cloud image" with high noise. (I want first to detect the object from the RGB image only, and after that use the depth image / get_distance(x, y).)
Important note: I noticed that the depth image represents the same points as the left IR image, i.e. there is no displacement or other transformation between the left IR image and the depth image. I noticed this by testing, but I am not sure whether it is 100% correct.
Is this note correct?
If it is correct, why?
Attached files:
1- RGB image
2- Depth image
3- Left IR image