
Getting distance and 3D coordinate from specific pixel #9945

Closed · IvanNataP opened this issue Nov 10, 2021 · 7 comments

@IvanNataP commented Nov 10, 2021

Required Info
Camera Model: D415
Firmware Version: 05.12.15.50
Operating System & Version: Win 10
Platform: PC
SDK Version: 2.0
Language: Python
Segment: others

Hi, first time asking here. I'm quite new to programming and am currently making a program in Python on Windows 10 using a D415 camera.

My current goal is to get the 3D coordinate (x, y, z) and the distance of an object's mid-point. My limitation right now is that I only have the color_frame and depth_frame saved as .jpg files.

I already know which pixel of the color_frame, let's say (x_col, y_col), I want the 3D coordinate and distance for.

I need help with two things (example code would be even better):

  1. How do I convert this pixel (x_col, y_col) to the 3D coordinate (x, y, z)?
  2. How do I get the distance of this specific pixel, given that I only have the color_frame and depth_frame as .jpg files? Is this possible?

Please let me know if anything is unclear or if there is info I should have included above. Thank you in advance.

Edit:
Code for snapping:

import cv2
import numpy as np
import pyrealsense2 as rs

class DepthCamera:
    def __init__(self):
        # Configure depth and color streams
        self.pipeline = rs.pipeline()
        config = rs.config()

        # Get device product line for setting a supporting resolution
        pipeline_wrapper = rs.pipeline_wrapper(self.pipeline)
        pipeline_profile = config.resolve(pipeline_wrapper)
        device = pipeline_profile.get_device()
        device_product_line = str(device.get_info(rs.camera_info.product_line))

        config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
        config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)

        # Start streaming
        self.pipeline.start(config)

    def get_frame(self):
        frames = self.pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()

        if not depth_frame or not color_frame:
            return False, None, None

        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())
        return True, depth_image, color_image

    def release(self):
        self.pipeline.stop()

# Initialize Camera Intel Realsense
dc = DepthCamera()

# Get the depth and color images
ret, depth_image, color_image = dc.get_frame()

# Save both the depth and color images
cv2.imwrite('Depth.jpg', depth_image)
cv2.imwrite('Color.jpg', color_image)

dc.release()
@MartyG-RealSense (Collaborator) commented Nov 10, 2021

Hi @IvanNataP The case #1904 (comment) has a Python script in which depth and color were also saved as jpg image files with cv2.imwrite. The mid-point distance was then obtained by placing the instruction depth_frame.get_distance(int(x+w/2), int(y+h/2)) immediately after the imwrite instructions.

[Screenshot of the script from #1904 with get_distance placed after the imwrite instructions]
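As a rough, minimal sketch of that ordering (the bounding box (x, y, w, h) below is just a placeholder standing in for the object detection in the referenced script, and the depth image is colorized to 8-bit before saving):

import cv2
import numpy as np
import pyrealsense2 as rs

# Start the depth and color streams and grab one pair of frames
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    color_frame = frames.get_color_frame()

    depth_image = np.asanyarray(depth_frame.get_data())
    color_image = np.asanyarray(color_frame.get_data())

    # Save both images (the depth image is colorized to 8-bit for viewing)
    depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)
    cv2.imwrite('Depth.jpg', depth_colormap)
    cv2.imwrite('Color.jpg', color_image)

    # Placeholder bounding box around the object of interest
    x, y, w, h = 600, 300, 100, 100

    # get_distance() reads the depth (in meters) at the mid-point of the box,
    # taken from the live depth frame immediately after the imwrite calls
    distance = depth_frame.get_distance(int(x + w / 2), int(y + h / 2))
    print("Mid-point distance: {:.3f} m".format(distance))
finally:
    pipeline.stop()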

If you are aiming to obtain XYZ 3D coordinates for a single pixel, then you could potentially use the instruction rs2_project_color_pixel_to_depth_pixel to convert a color pixel to a depth pixel. An example of Python scripting that uses this instruction can be found at #5603 (comment).

@IvanNataP (Author) commented Nov 11, 2021

Hi @MartyG-RealSense, thanks for the links provided.

So, in my understanding,

From #1904 (comment), the distance was obtained from depth_frame.get_distance(int(x+w/2), int(y+h/2)), specifically the .get_distance() part, which is a function of the rs library, correct?

From #5603 (comment), I need to have all the parameters for

depth_point = rs.rs2_project_color_pixel_to_depth_pixel(depth_frame.get_data(), depth_scale, depth_min, 
               depth_max, depth_intrin, color_intrin, depth_to_color_extrin, color_to_depth_extrin, color_point)

and then I can call rs2_project_color_pixel_to_depth_pixel() to get my desired result, correct?

I'll try to implement these ideas in my code and see the results. Once again, thank you.

@MartyG-RealSense (Collaborator)

Yes, get_distance() is an attribute of the depth_frame class of the librealsense library. The official documentation relating to the pyrealsense2 form of this instruction is at the link below.

https://intelrealsense.github.io/librealsense/python_docs/_generated/pyrealsense2.depth_frame.html#pyrealsense2.depth_frame.get_distance
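
For reference, the pyrealsense2 form of the call takes the pixel's x and y coordinates as integers and returns the distance in meters, along the lines of:

distance_in_meters = depth_frame.get_distance(x, y)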

In regard to the required parameters for rs2_project_color_pixel_to_depth_pixel, the parameters listed in the quoted code are consistent with the description in the C++ version of the instruction in the official documentation (the pyrealsense2 documentation's entry for the instruction does not have an equivalent description).

[Screenshot of the rs2_project_color_pixel_to_depth_pixel parameter descriptions from the C++ documentation]
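
As a rough sketch of how those parameters could be gathered in pyrealsense2 (here profile is assumed to be the object returned by pipeline.start(config), and the depth_min / depth_max values are just an assumed working range in meters):

import pyrealsense2 as rs

# Assumed: profile = pipeline.start(config)

# Depth scale of the depth sensor (meters per depth unit)
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

# Assumed search range for the projection, in meters
depth_min = 0.1
depth_max = 10.0

# Stream profiles for the depth and color streams
depth_stream = profile.get_stream(rs.stream.depth).as_video_stream_profile()
color_stream = profile.get_stream(rs.stream.color).as_video_stream_profile()

# Intrinsics of each stream
depth_intrin = depth_stream.get_intrinsics()
color_intrin = color_stream.get_intrinsics()

# Extrinsics between the two streams
depth_to_color_extrin = depth_stream.get_extrinsics_to(color_stream)
color_to_depth_extrin = color_stream.get_extrinsics_to(depth_stream)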

@IvanNataP (Author)

Thank you for clarifying that get_distance() is an attribute of the depth_frame class of the librealsense library. I think I already have what I need; I just need to apply this to my code. I'll keep this issue open for a few days in case I encounter any problems during my testing. Thanks.

@MartyG-RealSense (Collaborator)

Thanks very much for the update. Good luck with your testing!

@IvanNataP (Author)

Hi, after some testing, I've managed to get the depth point of the desired pixel on the color frame.

Even though there's a deviation from my initial issue, which was about getting depth points from the saved jpg files of both the color and depth frames, I decided to implement the code by getting both frames from the camera in real time, which is my end goal.

Below is a snippet of the result I got (ignore the radius part):

[Screenshot of the resulting output]

and part of the code:

import pyrealsense2 as rs

...

### Project Color Pixel coordinate to Depth Pixel coordinate
def ProjectColorPixeltoDepthPixel(depth_frame, depth_scale, 
                                depth_min, depth_max, depth_intrinsic, color_intrinsic, 
                                depth_to_color_extrinsic, color_to_depth_extrinsic, 
                                color_pixel):

    depth_pixel = rs.rs2_project_color_pixel_to_depth_pixel(depth_frame.get_data(), depth_scale, 
                    depth_min, depth_max, depth_intrinsic, color_intrinsic, 
                    depth_to_color_extrinsic, color_to_depth_extrinsic, 
                    color_pixel)
    
    return depth_pixel

### Deproject Depth Pixel coordinate to Depth Point coordinate
def DeProjectDepthPixeltoDepthPoint(depth_frame, depth_intrinsic, x_depth_pixel, y_depth_pixel):

    depth = depth_frame.get_distance(int(x_depth_pixel), int(y_depth_pixel))

    depth_point = rs.rs2_deproject_pixel_to_point(depth_intrinsic, [int(x_depth_pixel), int(y_depth_pixel)], depth)
    
    return depth, depth_point

...


color_pixel = (x_color_pixel, y_color_pixel)

### Get depth pixel from color pixel
depth_pixel = ProjectColorPixeltoDepthPixel(depth_frame, 
                    depth_scale, depth_min, depth_max, depth_intrinsic, color_intrinsic, 
                    depth_to_color_extrinsic, color_to_depth_extrinsic, 
                    color_pixel)

x_depth_pixel, y_depth_pixel = depth_pixel

### Get depth points from depth pixel
depth, depth_point = DeProjectDepthPixeltoDepthPoint(depth_frame, depth_intrinsic, x_depth_pixel, y_depth_pixel)

x_depth_point, y_depth_point, z_depth_point = depth_point 

Thanks so much for all the help. Have a nice day.

@MartyG-RealSense (Collaborator)

You are very welcome, @IvanNataP - thanks so much for sharing your solution with the RealSense community :)
