Pyrealsense2 ROI #4838

Closed
BenDavisson opened this issue Sep 10, 2019 · 6 comments
@BenDavisson
Contributor

| Camera Model | D435 |
| Firmware Version | 0.5.11.06.250 |
| Operating System & Version | Windows 10 |
| Platform | PC |
| SDK Version | 2.25.0 |
| Language | python |
| Segment | Machine vision |

Is there a way to define an ROI using Pyrealsense2 with my camera model?

I know it is possible using OpenCV; however, my issue is a little more advanced. The problem is that the stereo camera is determining depth based on objects that aren't important. That is, objects close to the camera at the far edges of the view are changing the color of the stereo output in an undesirable way.

Is there a way to set an ROI so that depth detection only takes place in a specified part of the camera's field of view?

Thanks!

@MartyG-RealSense
Collaborator

I hope the link below will be of help regarding setting an ROI with Python.

#2681 (comment)
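
For reference, a minimal sketch of what setting the auto-exposure ROI looks like in pyrealsense2 (this is my own illustration, not code from the linked comment; the pixel bounds are arbitrary):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
depth_sensor = profile.get_device().first_depth_sensor()

# Sensors that support an ROI expose the roi_sensor interface.
roi_sensor = depth_sensor.as_roi_sensor()
roi = roi_sensor.get_region_of_interest()
roi.min_x, roi.max_x = 100, 540   # illustrative pixel bounds
roi.min_y, roi.max_y = 100, 380
# This call can fail if issued immediately after start(); a short delay
# or retry loop is a common workaround.
roi_sensor.set_region_of_interest(roi)
```

Note that this only tells the auto-exposure algorithm which pixels to meter on; it does not crop the depth output.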

@BenDavisson
Contributor Author

Sorry if I misunderstand, but doesn't the linked issue only handle setting an auto-exposure ROI?

I'm trying to configure an ROI for the entire camera. For example, OpenCV can return a region of interest of an image given the correct grid points (click the link, then scroll down or search for "Image ROI").

Is this possible to do using Pyrealsense2? In other words, can the camera be configured to only return the view of a region of interest?
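
For context, the OpenCV-style "Image ROI" cropping mentioned above is just array slicing; applied to a RealSense depth frame it might look like the sketch below (the pixel bounds are illustrative, and the crop happens on the host, not in the camera):

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()

# Convert the frame to a numpy array and slice out the region of interest,
# the same way OpenCV's "Image ROI" indexing works.
depth_image = np.asanyarray(depth_frame.get_data())
roi = depth_image[100:380, 100:540]   # rows (y) first, then columns (x)
```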

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Sep 10, 2019

You did not mention if you are using a point cloud. If you are, the discussion below on generating a point cloud with Python and cropping it may help.

#2769
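
For reference, a minimal sketch of generating a point cloud with pyrealsense2 and cropping it to a box (my own illustration, not code from the linked issue; the bounds are arbitrary):

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
pc = rs.pointcloud()

frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
points = pc.calculate(depth_frame)

# Vertices are (x, y, z) coordinates in metres, in the camera's frame.
verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)

# Keep only points inside an illustrative box: within 0.3 m laterally and
# vertically of the optical axis, and closer than 1 m.
mask = (np.abs(verts[:, 0]) < 0.3) & (np.abs(verts[:, 1]) < 0.3) & (verts[:, 2] < 1.0)
cropped = verts[mask]
```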

@BenDavisson
Contributor Author

Thank you for the timely responses, Marty! I really appreciate it.

To my understanding, I am not using a point cloud. I am simply trying to define a region of interest for the camera to use, just like you can do with other machine vision camera SDKs.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Sep 11, 2019

In RealSense SDK 2.0, the term 'region of interest' is typically associated with auto-exposure. Another kind of ROI is to generate a bounding box and then occlude the data outside of that bounding box. Here is the approach that one RealSense user took to implementing this:

#2016 (comment)

Again, it is a point-cloud-related example, so I apologize for that. I include the link primarily to demonstrate the viability of using bounding boxes for ROI purposes.
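
Applied to the depth image rather than a point cloud, the same bounding-box idea could be sketched like this (my own illustration; the box coordinates are arbitrary):

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
frames = pipeline.wait_for_frames()

# Copy the depth data so it can be modified, then zero out (occlude) every
# pixel outside an illustrative bounding box. A depth value of 0 means
# "no data", so downstream processing ignores the edges of the view.
depth_image = np.asanyarray(frames.get_depth_frame().get_data()).copy()
mask = np.zeros_like(depth_image, dtype=bool)
mask[100:380, 100:540] = True      # bounding box in pixel coordinates
depth_image[~mask] = 0
```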

@RealSenseCustomerSupport
Collaborator


Hi @BenDavisson,

Is the issue resolved per @MartyG-RealSense's comment? So far, we do not provide a feature for restricting depth detection to a specified region.
