
Can you extract by measuring the distance to the object in front? #10728

Closed
jiminiscat opened this issue Jul 27, 2022 · 34 comments

Comments
@jiminiscat


Required Info
Camera Model: D455
Firmware Version: (Open RealSense Viewer --> Click info)
Operating System & Version: Linux (Ubuntu 18.04)
Kernel Version (Linux Only): (e.g. 4.14.13)
Platform: PC
SDK Version: 2.x
Language: Python
Segment: Robot

Issue Description


We want to use the TensorFlow human height estimation example after first extracting the distance to the object (person) in front of the camera.
The TensorFlow human height estimation example has high accuracy at a 'specific distance', so we want to measure a person's height only at that distance.
Is there any code or example that extracts the distance value of the object in front of the camera?

In short: I want to extract distance measurements and then measure the person's height from those values.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jul 27, 2022

Hi @jiminiscat You could apply a Threshold Filter to exclude depth data that is outside of a defined minimum and maximum depth range, like in the Python script at #8170 (comment)

A minimum distance could be defined by adding the line below to the script and changing '1' to the minimum distance in meters of your choice:

threshold_filter.set_option(rs.option.min_distance, 1)

As in the linked script, the threshold filter could be inserted into the Example 2 height estimation script at the line after the pipeline start instruction.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/tensorflow/example2%20-%20person%20height.py#L18
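For intuition, the threshold filter simply zeroes out depth pixels whose distance falls outside the configured [min, max] range. Here is a minimal NumPy sketch of that effect (not the SDK call itself; the function name, distances, and sample values are illustrative):

```python
import numpy as np

def threshold_depth(depth_m, min_m=1.3, max_m=1.6):
    """Zero out depth values (in meters) outside [min_m, max_m],
    mimicking what rs.threshold_filter does to a depth frame."""
    depth_m = np.asarray(depth_m, dtype=float)
    mask = (depth_m >= min_m) & (depth_m <= max_m)
    return np.where(mask, depth_m, 0.0)

# The 0.5 m and 2.0 m readings are zeroed out; only 1.5 m survives.
print(threshold_depth([0.5, 1.5, 2.0]))
```

The real filter works on an rs.frame, but the masking behaviour is the same idea.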


Another approach could be to put the code that prints the person's height on lines 108-109 within an if statement, so that the text is only printed on the screen when zs is below or above a certain value.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/tensorflow/example2%20-%20person%20height.py#L109-L110
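That gating idea can be sketched as a small predicate (the function name is my own, and the 1.3 to 1.6 m range is only an example taken from later in this thread):

```python
def should_report_height(zs, min_m=1.3, max_m=1.6):
    """Return True only when the measured distance zs (meters) is
    inside the range where the height estimate is trusted."""
    return min_m <= zs <= max_m

# Usage inside the detection loop, wrapping the existing print:
# if should_report_height(zs):
#     print("person height:", height)
```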

@sosoyeon1125

Is this the right way to do it? There was an error ... ;(
Can you help me? thank you...

Screenshot from 2022-07-28 15-24-06

@MartyG-RealSense
Collaborator

In the original 'Example 2' script, aligned_stream = rs.align(rs.stream.color) and point_cloud = rs.pointcloud() are not within the Try / While True section but outside of it, just after the pipeline start instruction:

pipeline.start(config)

aligned_stream = rs.align(rs.stream.color) # alignment between color and depth
point_cloud = rs.pointcloud()

@sosoyeon1125

sosoyeon1125 commented Aug 1, 2022

SyntaxError: invalid syntax
-> When I corrected the error, a new problem appeared that is not in example2.
I don't know what to do next.
Adding ':' results in an error ... ;(

Screenshot from 2022-08-01 14-36-32

@sosoyeon1125

try:
    frames = pipe.wait_for_frames()
    frames_filtered = threshold_filter.process(frames)
<- This seems to be the part where the error occurs....
I used 'try:'; is an 'except' clause needed?

@sosoyeon1125

I added this code
Screenshot from 2022-08-01 17-38-52

New error...
Traceback (most recent call last):
File "try.py", line 29, in
pipe.start(config)
RuntimeError: xioctl(VIDIOC_S_FMT) failed Last Error: Device or resource busy

Screenshot from 2022-08-01 17-39-26

@MartyG-RealSense
Collaborator

At the top of the script in the above image you have a pipeline.start(config) instruction, and then the same line appears again further down, after config.enable_stream. You cannot start a pipeline that has already been started, as it will be busy. Once the pipeline has been started, you have to stop() it before you can use the start instruction again.
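The stop-before-restart rule can be captured in a tiny helper (restart_pipeline is a hypothetical name; pipe stands for a pyrealsense2 pipeline object):

```python
def restart_pipeline(pipe, config=None):
    """Stop a running pipeline, then start it again. Calling start()
    on an already-started pipeline fails with a 'Device or resource
    busy' style error, so stop() must come first."""
    pipe.stop()
    return pipe.start(config) if config is not None else pipe.start()
```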

@sosoyeon1125

sosoyeon1125 commented Aug 7, 2022

  1. Add: pipeline.stop()
    pipe.start(config)
    ERROR-> This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
    To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
    Traceback (most recent call last):
    File "try.py", line 71, in
    frames = pipeline.wait_for_frames()
    RuntimeError: wait_for_frames cannot be called before start()

Screenshot from 2022-08-07 16-33-43
->File "try.py", line 71, in
frames = pipeline.wait_for_frames()
Screenshot from 2022-08-07 16-36-53

2. # pipeline.stop()
# pipe.start(config)
[INFO] start streaming...
error

Screenshot from 2022-08-07 16-35-48

I don't know how to solve it...
I want to measure a person's height only at a distance of 1.6 m, but it's too hard,,,for beginners,,,

@MartyG-RealSense
Collaborator

The error is at line 71 of try.py and relates to the instruction frames = pipeline.wait_for_frames()

It indicates that this instruction was used while the pipeline was stopped; the pipeline has to have been started before it can work.

It looks as though line 71 is further down the script than the section of code shown in the image above.

@sosoyeon1125

"It indicates that this instruction was used while the pipeline was stopped; the pipeline has to have been started before it can work." -> isn't this the pipe.start ??
Screenshot from 2022-08-07 16-51-10

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Aug 7, 2022

The highlighted line is the pipe start. I cannot see the entire script though (the image shows around the first 40 lines and the error is at line 71), so there may be a stop() further down the script. Could you show me more of the script below the image's bottom line, please?

@sosoyeon1125

sosoyeon1125 commented Aug 7, 2022

All code

Screenshot from 2022-08-07 17-30-59
Screenshot from 2022-08-07 17-31-16
Screenshot from 2022-08-07 17-31-24

->File "try.py", line 71, in
frames = pipeline.wait_for_frames()
Screenshot from 2022-08-07 16-36-53

@MartyG-RealSense
Collaborator

I think it is because you are using two pipeline definitions, 'pipeline' and 'pipe'. You called Stop() on pipeline and then started pipe.

On line 71, you use frames = pipeline.wait_for_frames(). But you stopped 'pipeline' and started 'pipe'. So I believe that you should change the wait_for_frames instruction to reference the pipeline that is currently open.

frames = pipe.wait_for_frames()

@sosoyeon1125

another error-> I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Traceback (most recent call last):
File "try.py", line 79, in
color_image = np.asanyarray(color_frame.get_data())
RuntimeError: null pointer passed for argument "frame_ref"

Screenshot from 2022-08-07 17-56-48

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Aug 7, 2022

The simplest way to handle this may be to first delete the config.enable_stream(rs.stream.depth) instruction at the line immediately before pipeline.stop()

Then change pipe.start(config) to pipe.start() with empty brackets. When the pipeline is restarted, the script should then apply the camera's default stream profile, which should enable both the depth and color streams at their default resolution and FPS values.

@sosoyeon1125

#config.enable_stream(rs.stream.depth)
pipeline.stop()
pipe.start() -> OK

' When the pipeline is restarted, the script should then apply the camera's default stream profile, which should enable both the depth and color streams at their default resolution and FPS values. '
-> How do I do this? When I ran the code I fixed, the screen turned blue.

@MartyG-RealSense
Collaborator

The depth and color streams are enabled automatically when the pipe.start brackets are empty. You do not have to write any code to enable them.

@sosoyeon1125

sosoyeon1125 commented Aug 7, 2022

Screenshot from 2022-08-07 18-57-16

No boxes are drawn at the person's position.
The min 1.3 m and max 1.6 m settings also seem to have no effect............

Screenshot from 2022-08-07 18-51-00

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Aug 7, 2022

An alternative to using the default stream configuration is to put config back in the pipe.start brackets and then copy the two config.enable_stream lines for depth and color from near the beginning of your script and paste them at the line above pipe.start(config) so that 'pipe' uses the same stream configuration that 'pipeline' did.

@sosoyeon1125

threshold_filter = rs.threshold_filter()
threshold_filter.set_option(rs.option.max_distance, 1.6)
threshold_filter.set_option(rs.option.min_distance, 1.3)

try:
    frames = pipe.wait_for_frames()
    frames_filtered = threshold_filter.process(frames)

-> Where should "frames_filtered" be located???
"frames_filtered" holds the max 1.6 m & min 1.3 m values inside, but the code seems not to be working....

@sosoyeon1125

another error....

Traceback (most recent call last):
File "try5.py", line 73, in
depth_frame = frames_filtered.get_depth_frame()
AttributeError: 'pyrealsense2.pyrealsense2.frame' object has no attribute 'get_depth_frame'

@MartyG-RealSense
Collaborator

Please try changing frames_filtered = threshold_filter.process(frames) to this:

frames_filtered = threshold_filter.process(frames_filtered)

Does the threshold filter then work correctly?

Is try5.py a different script to try.py? If it is then please post images of the try5.py script.
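For reference, the SDK's own post-processing examples usually apply filters to the individual depth frame rather than to the whole frameset. Processing a frameset returns a generic rs.frame, which is exactly why a later get_depth_frame() call fails. A hedged sketch of the per-frame pattern (the function name is my own):

```python
def filtered_depth(frameset, threshold_filter):
    """Extract the depth frame first, then run the threshold filter
    on it. The filter's return value already IS the filtered depth
    frame, so no further get_depth_frame() call is needed."""
    depth = frameset.get_depth_frame()
    return threshold_filter.process(depth)
```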

@sosoyeon1125

try5 is the version with 'try:' put in the code; other than that, it is the same as the original code.
Screenshot from 2022-08-07 23-22-37

"Please try changing frames_filtered = threshold_filter.process(frames) to this:
frames_filtered = threshold_filter.process(frames_filtered)
Does the threshold filter then work correctly?"
-> same ..

Screenshot from 2022-08-07 23-36-02

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Aug 7, 2022

It appears that the try-except mechanism in your threshold code is printing "Error" because when the script tries to apply the threshold filter, an error occurs but the except instruction handles it and prints "Error".

In the initial 'pipeline' section of code, a pipeline called 'pipeline' is opened, depth is aligned to color using aligned_stream and a pointcloud is generated, and then 'pipeline' is closed.

Later, in the second pipeline called 'pipe', aligned_stream from the first pipeline is called in the line frames = aligned_stream.process(frames)

But 'frames' is defined in the previous line as pointing to the second 'pipe' pipeline instead of the first 'pipeline' pipeline where the aligned depth-color image was created.

This may be why you were originally using frames = pipeline.wait_for_frames() on this line until I suggested changing it to 'pipe' in #10728 (comment); at that time I did not see how the second 'pipe' pipeline needed to access aligned-frame data from the first 'pipeline' pipeline. I do apologize. Please try changing the wait_for_frames instruction from 'pipe' back to 'pipeline'.
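One way to sidestep the pipe/pipeline confusion entirely is to keep a single pipeline object for the whole script. A rough sketch of that shape, assuming the Example 2 structure (the run() wrapper and stream settings are my own, and this needs a camera attached to actually run):

```python
def run():
    # pyrealsense2 imported lazily: this sketch requires a physical camera.
    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    config = rs.config()
    # Stream settings here are illustrative, not taken from the thread.
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    pipeline.start(config)

    aligned_stream = rs.align(rs.stream.color)  # align depth to color
    threshold_filter = rs.threshold_filter()
    threshold_filter.set_option(rs.option.min_distance, 1.3)
    threshold_filter.set_option(rs.option.max_distance, 1.6)

    try:
        while True:
            frames = pipeline.wait_for_frames()   # one pipeline everywhere
            frames = aligned_stream.process(frames)
            depth = threshold_filter.process(frames.get_depth_frame())
            # ... TensorFlow detection / height estimation goes here ...
    finally:
        pipeline.stop()
```

With a single pipeline there is nothing to stop and restart, and the aligned frames and the filtered depth always come from the same source.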

@sosoyeon1125

[try5.py All code]
Screenshot from 2022-08-08 02-01-32

Screenshot from 2022-08-08 02-01-44

error
Traceback (most recent call last):
File "try5.py", line 74, in
depth_frame = frames_filtered.get_depth_frame()
AttributeError: 'pyrealsense2.pyrealsense2.frame' object has no attribute 'get_depth_frame'
Screenshot from 2022-08-08 02-03-55

@MartyG-RealSense
Collaborator

The script may benefit from being checked section by section to confirm what each part is doing and whether it is needed.

For example, in lines 20 to 23 of pipeline 'pipeline' a pointcloud is set up and stored in 'points'. But the pointcloud is not generated from 'points' until line 76 of pipeline 'pipe'.

Splitting the script code between two pipelines and having instructions in pipeline 2 depend on instructions in pipeline 1, and also inserting TensorFlow code in-between the librealsense code, increases the complexity of debugging.

@sosoyeon1125

sosoyeon1125 commented Aug 8, 2022

error
Traceback (most recent call last):
File "try5.py", line 74, in
depth_frame = frames_filtered.get_depth_frame()
AttributeError: 'pyrealsense2.pyrealsense2.frame' object has no attribute 'get_depth_frame'
Screenshot from 2022-08-08 02-03-55

->#8170 (comment)
The below application successfully filtered the point cloud to under the configured distance option (1). The units seem to be in meters. I don't know what defines this, but assume it ties back to option.depth_units. Further, the filter would only accept the frames type and not a frames.get_depth_frame() type. I'm still new to rs and cannot explain why.

-> "frames.get_depth_frame() type"
Isn't this code I can't use??
Is there any way I can use it?

@sosoyeon1125

Do you have any other examples of reference?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Aug 8, 2022

Yes by default real world distance is measured in meters in the RealSense SDK. You could take the distance value provided by the SDK and perform a calculation on it to convert it into another unit of measurement such as mm though (for example, distance value in m x 1000 = mm)

Yes, depth unit scale affects distance scale. The default depth scale is 0.001, which is millimeter scale. A scale of 0.01 is centimeter scale. The bottom of the section of Intel's Projection documentation linked to below confirms this.

https://dev.intelrealsense.com/docs/projection-in-intel-realsense-sdk-20#depth-image-formats
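The unit arithmetic above can be shown as a small worked example (pure Python, no camera needed): raw 16-bit depth values multiply by the depth scale to give meters, and meters multiplied by 1000 give millimeters.

```python
def raw_to_meters(raw_value, depth_scale=0.001):
    """Convert a raw depth value to meters using the device depth
    scale (default 0.001, i.e. raw units are millimeters)."""
    return raw_value * depth_scale

def meters_to_mm(distance_m):
    """Convert a distance in meters to millimeters."""
    return distance_m * 1000.0

# A raw reading of 1600 with the default scale is 1.6 m, i.e. 1600 mm.
print(raw_to_meters(1600))   # 1.6
print(meters_to_mm(1.6))     # 1600.0
```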


What happens if you change line 67 to this:

frames = pipe.wait_for_frames()

Line 68 to this:

frames_filtered = threshold_filter.process(frames)

Then comment out line 72, which is not needed as it is the same as the new line 67

@sosoyeon1125

same error..

-> catkin_ws/src/librealsense/wrappers/tensorflow$ python3 try5.py
[INFO] start streaming...
[INFO] loading model...
2022-08-09 03:02:11.849028: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
1
threshold_filter <pyrealsense2.pyrealsense2.threshold_filter object at 0x7fd7cc965308>
Traceback (most recent call last):
File "try5.py", line 74, in
depth_frame = frames_filtered.get_depth_frame()
AttributeError: 'pyrealsense2.pyrealsense2.frame' object has no attribute 'get_depth_frame'

Screenshot from 2022-08-09 03-02-37

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Aug 9, 2022

Thanks again for your patience!

If the section of code in lines 37 to 62 (containing TensorFlow code) was moved further down the Try5.py script, to one line below the current line 85 (# Perform the actual detection by running the model with image as input), then you could prove whether or not the block of TensorFlow code is interfering with the processing of the librealsense code in a way that causes: File "try5.py", line 74, in depth_frame = frames_filtered.get_depth_frame() AttributeError: 'pyrealsense2.pyrealsense2.frame' object has no attribute 'get_depth_frame'

@jiminiscat
Author

Solved. thank you
I solved it using the depth example.

@MartyG-RealSense
Collaborator

That's great to hear after all of your hard work! Thanks for the update :)

@MartyG-RealSense
Collaborator

Case closed due to solution achieved and no further comments received.
