
POINT CLOUD OF A PREDEFINED REGION USING PYREALSENSE2 #2769

Closed
sunit1409 opened this issue Nov 24, 2018 · 24 comments
@sunit1409

From the RGB image obtained from the RealSense I obtain the 2D bounding box of an object. I segment out that region from both the RGB and depth images. Now I wish to obtain a point cloud for that region only, using pyrealsense2. How can this be done without losing performance in terms of speed?

@dorodnic
Contributor

You could manually apply rs2_deproject_pixel_to_point in a loop.
I'm not sure what you mean by "without losing performance". Projecting to 3D space will always take some time. Given a small enough bounding box you can achieve this in under a millisecond, but realistically generating a pointcloud from the entire image should not take that much more (about 5-8 ms for HD on reasonable hardware).
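A minimal sketch of such a loop (names like depth_frame and the bounding box values are placeholders; the depth frame is assumed to be aligned to the color stream):

import numpy as np
import pyrealsense2 as rs

# intrinsics of the (aligned) depth stream
intrin = depth_frame.profile.as_video_stream_profile().intrinsics

points_3d = []
for y in range(ymin, ymax):
    for x in range(xmin, xmax):
        dist = depth_frame.get_distance(x, y)  # depth in meters
        if dist == 0:                          # skip pixels with no depth
            continue
        points_3d.append(rs.rs2_deproject_pixel_to_point(intrin, [x, y], dist))
points_3d = np.array(points_3d)                # N x 3 array of XYZ, in meters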

@sunit1409
Author

Thanks for your response @dorodnic. My problem is that I have the RGB and depth images aligned. From the RGB image I have obtained the x_min, x_max and y_min, y_max for a particular object. Now I want to create a point cloud for that region only, using the Python wrapper of librealsense. I want to know whether that is feasible, and how?

@ogoshen
Contributor

ogoshen commented Nov 27, 2018

You could find bounding boxes on the RGB image aligned to depth, use the rs.pointcloud processing block to generate vertices, and crop them to the ROI:

import numpy as np
import pyrealsense2 as rs

# pipeline is assumed to be an rs.pipeline() that has already been started
align = rs.align(rs.stream.depth)
pc = rs.pointcloud()
frames = pipeline.wait_for_frames()
aligned = align.process(frames)
color_aligned_to_depth = aligned.first(rs.stream.color)  # run your detector on this image
xmin, xmax, ymin, ymax = 100, 200, 100, 200  # bounding box (BB)
depth_frame = frames.first(rs.stream.depth)
points = pc.calculate(depth_frame)
w = rs.video_frame(depth_frame).width
h = rs.video_frame(depth_frame).height
verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(h, w, 3)
roi = verts[ymin:ymax, xmin:xmax, :]

@sunit1409
Author

sunit1409 commented Nov 27, 2018

Hey @ogoshen,
Kindly have a look at the following lines of code:
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pc = rs.pointcloud()
points = rs.points()
# Start streaming
profile = pipeline.start(config)
align_to = rs.stream.color
align = rs.align(align_to)
count = 0

try:
    while count < 20000:
        count += 1

        # Wait for a coherent pair of frames: depth and color
        frames = pipeline.wait_for_frames()
        # Align the depth frame to the color frame
        aligned_frames = align.process(frames)
        # Get aligned frames
        aligned_depth_frame = aligned_frames.get_depth_frame()
        aligned_color_frame = aligned_frames.get_color_frame()
        if not aligned_depth_frame or not aligned_color_frame:
            continue

        points = pc.calculate(aligned_depth_frame)
        vertices = np.asanyarray(points.get_vertices(dims=2))

        w = aligned_depth_frame.get_width()
        image_Points = np.reshape(vertices, (-1, w, 3))

        depth_image = np.asanyarray(aligned_depth_frame.get_data())
        color_image = np.asanyarray(aligned_color_frame.get_data())
        rgb = color_image[..., ::-1]
        image = Image.fromarray(color_image)
        image_detected, left, right, bottom, top, label = yolo.detect_image(image, depth_image, count)
        for i in range(len(label)):
            left_cord = left[i]
            right_cord = right[i]
            bottom_cord = bottom[i]
            top_cord = top[i]
            vertices_interest = image_Points[top_cord:bottom_cord, left_cord:right_cord, :].reshape(-1, 3)
            color_interest = color_image[top_cord:bottom_cord, left_cord:right_cord, :].reshape(-1, 3)

            pcd = PointCloud()
            pcd.points = Vector3dVector(vertices_interest)
            pcd.colors = Vector3dVector(color_interest)
            draw_geometries([pcd])
            write_point_cloud(str(count) + label[i] + '.ply', pcd)

        image_detected = np.asarray(image_detected)

        # Show images
        cv2.imshow('RealSense', image_detected)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

finally:

    pipeline.stop()

But the point clouds that I am getting have no color. Sometimes I get a couple of green/yellow patches, but mostly they are white. I am wondering why that is the case. I am sharing a couple of PLY files for your perusal; they are scans of an office chair.

Link to it https://drive.google.com/drive/folders/1LFSQQDjGnLLDF1JYN6y5iCW-FN_uADZt?usp=sharing

@sunit1409
Author

from open3d import *
pcd = PointCloud()
pcd.points = Vector3dVector(vertices_interest)
pcd.colors = Vector3dVector(color_interest)
draw_geometries([pcd])
write_point_cloud(str(count)+label[i]+'.ply', pcd)

@sunit1409
Author

Is it an Open3D issue, or have I made a mistake?

@ogoshen
Contributor

ogoshen commented Nov 27, 2018

The rs code looks fine to me.
I'm not that familiar with Open3D, but I have a feeling it's expecting normalized float32 for colors.
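If so, a quick sketch of the conversion (assuming color_interest is the uint8 BGR crop taken from the OpenCV color image earlier in the thread):

import numpy as np

# Open3D expects per-point colors as floats in [0, 1], in RGB order
colors_rgb = color_interest[:, ::-1]                    # BGR -> RGB
pcd.colors = Vector3dVector(colors_rgb.astype(np.float64) / 255.0)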

@sunit1409
Author

@ogoshen and @dorodnic Thank you so much, this really helped. I solved all my issues. Closing it now.

@ghost

ghost commented May 6, 2019

@sunit1409 I'm trying to achieve the same thing, but I'm not able to get a proper cropped point cloud. Can you help me with this?

@bingws

bingws commented May 10, 2019

You could find bounding boxes on the RGB aligned to depth image, use the rs.pointcloud processing block to generate vertices and crop it to the ROI:

align = rs.align(rs.stream.depth)
pc = rs.pointcloud()
frames = pipeline.wait_for_frames()
aligned = align.process(frames)
color_aligned_to_depth = aligned.first(rs.stream.color)
xmin, xmax, ymin, ymax = 100, 200, 100, 200 # BB
depth_frame = frames.first(rs.stream.depth)
points = pc.calculate(depth_frame)
w = rs.video_frame(depth_frame).width
h = rs.video_frame(depth_frame).height
verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(h, w, 3)
roi = verts[ymin:ymax, xmin:xmax, :]

How do I save the roi into a ".ply" file?

@ghost

ghost commented May 10, 2019

@bingws Thanks for the code snippet. I solved it by using open3d.crop_point_cloud for cropping and open3d.write_point_cloud for saving the roi in ".ply" or ".pcd" format.
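A short sketch of that approach, assuming the older flat-namespace Open3D API used earlier in this thread, where crop_point_cloud takes min/max 3D bounds (the bound values below are made-up numbers in meters):

import numpy as np
import open3d

# pcd is assumed to be an Open3D PointCloud built from the frame
min_bound = np.array([-0.5, -0.5, 0.2])
max_bound = np.array([0.5, 0.5, 1.5])
roi_pcd = open3d.crop_point_cloud(pcd, min_bound, max_bound)
open3d.write_point_cloud("roi.ply", roi_pcd)   # or "roi.pcd" for PCD format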

@bingws

bingws commented May 10, 2019

@arjunskumar could you share your code with me? Thanks! Look here, I have the same problem.

@ghost

ghost commented May 10, 2019

@bingws I'm not mapping the 2D image to 3D point clouds. For your case, I was not able to do it; I'm still trying to solve it using the pyrealsense API.

@lgribeiro

@ogoshen and @dorodnic Thank you so much, this really helped. I solved all my issues. Closing it now.

Hi @sunit1409, I got the same issue. How did you solve it? Please help me.

@lgribeiro

lgribeiro commented Aug 15, 2019

@ogoshen and @dorodnic Thank you so much, this really helped. I solved all my issues. Closing it now.

Hi @sunit1409, I got the same issue. How did you solve it? Please help me.

I understand now! I changed these lines
pcd.points = Vector3dVector(vertices_interest)
pcd.colors = Vector3dVector(color_interest)
to
pcd.points = Vector3dVector(vertices_interest.astype(np.float32)/255)
pcd.colors = Vector3dVector(color_interest.astype(np.float32)/255)

It works!

@JimXu1989

You could find bounding boxes on the RGB aligned to depth image, use the rs.pointcloud processing block to generate vertices and crop it to the ROI:

align = rs.align(rs.stream.depth)
pc = rs.pointcloud()
frames = pipeline.wait_for_frames()
aligned = align.process(frames)
color_aligned_to_depth = aligned.first(rs.stream.color)
xmin, xmax, ymin, ymax = 100, 200, 100, 200 # BB
depth_frame = frames.first(rs.stream.depth)
points = pc.calculate(depth_frame)
w = rs.video_frame(depth_frame).width
h = rs.video_frame(depth_frame).height
verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(h, w, 3)
roi = verts[ymin:ymax, xmin:xmax, :]

@ogoshen Hi, I want to know whether there is any difference between the vertices and the 3D points of the object? Will the vertices be more sparse? Thanks a lot!

@ogoshen
Contributor

ogoshen commented Nov 4, 2019

When talking about the data, they are the same.
In the sample code above, verts is the 3D data (an array of xyz floats), and points is an object (of type pyrealsense2.points) that provides that data plus extra functionality like exporting to PLY.
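For example (a small sketch; color_frame is assumed to be available if you want a textured export):

points = pc.calculate(depth_frame)               # pyrealsense2.points object
points.export_to_ply("frame.ply", color_frame)   # extra functionality: write a PLY
verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)  # the raw 3D data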

@MartyG-RealSense
Collaborator

@ogoshen Thanks so much for your quick response to the question from @JimXu1989 :)

@JimXu1989

@ogoshen Thanks a lot for your reply! And I want to ask one more question. If I have an RGB image and a depth image, how can I use the two images to get the 3D points? Must I use the frames just captured by my D435i to get the 3D points?
Thanks a lot!!

@ogoshen
Contributor

ogoshen commented Nov 4, 2019

It depends on how you want to get the data...

Basically, you have to call pc.map_to(frame), where frame in your case would be the color (RGB) frame.
This internally maps the points to the color frame with a 'uvmap' (texture coordinates).

Now, the easiest way to visualize the results is to export to PLY (using points.export_to_ply).
You can also take a look at opencv_pointcloud_viewer.py for how to use the texture coordinates with numpy/opencv.
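A rough sketch of that flow (frame names are placeholders):

pc = rs.pointcloud()
pc.map_to(color_frame)                 # build the uvmap against the RGB frame
points = pc.calculate(depth_frame)

points.export_to_ply("out.ply", color_frame)   # easiest way to inspect the result

# per-vertex (u, v) texture coordinates, usable with numpy/opencv
# as in opencv_pointcloud_viewer.py
tex = np.asanyarray(points.get_texture_coordinates()).view(np.float32).reshape(-1, 2)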

@JimXu1989

Depends on how you want to get the data...

Basically, you have to call pc.map_to(frame)
Where frame is your case would be the color (RGB) frame.
This will internally map the points to the color frame with a 'uvmap' (texture coordinates)

Now, the easiest to visualize the results is to export to ply (using points.export_to_ply)
You can also take a look at opencv_pointcloud_viewer.py, on how to use the texture coodinates with numpy/opencv.

Hello, I want to know: if I have some 3D points (x, y, z), how can I use them to set up a pc (rs.pointcloud)? And how can I use map_to(frame) to get the uv_map?

Thanks a lot!

@JimXu1989


Could you tell me how to generate an rs.points object from 3D points stored in a numpy array (x, y, z)?

@ogoshen
Contributor

ogoshen commented Dec 29, 2019

If you have external data (like points in an array) that you want to pass through the SDK, you probably have to create a rs2::software_device. IIRC it's not exposed in the python wrapper though.

@nirandiw

I am assuming that in this code PointCloud() was from PCL?

Is there a good reference for using the PCL Python bindings with Intel RealSense? I'm having trouble installing PCL on a Mac and using it.
