Can you extract the distance to the object in front? #10728
Comments
Hi @jiminiscat You could apply a Threshold Filter to exclude depth data that is outside of a defined minimum and maximum depth range, like in the Python script at #8170 (comment). A minimum distance could be defined by adding the line below to the script and changing '1' to the minimum distance in meters of your choice:
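A reconstruction of that line, assuming the standard pyrealsense2 threshold filter option names:

```python
# Assumed reconstruction: set the threshold filter's minimum distance option.
# The value is in meters; change '1' to the minimum distance of your choice.
threshold_filter = rs.threshold_filter()
threshold_filter.set_option(rs.option.min_distance, 1)
```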
Like in the linked-to script, the threshold filter could be inserted into the Example 2 height estimation script at the line after the pipeline start instruction. Another approach could be to put the code that prints the person's height on lines 108-109 within an if statement, so that it only prints the text on the screen if zs is less than or greater than a certain value, as sketched below.
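A minimal sketch of that guard; 'zs' is the distance variable from the Example 2 script, while 'height' and the 1.0 m / 2.0 m bounds are hypothetical placeholders:

```python
# Hypothetical guard around the height print on lines 108-109: only report
# the height when the measured distance zs is within the chosen range
if 1.0 < zs < 2.0:
    print("Estimated height: {:.2f} m".format(height))
```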
In the original 'Example 2' script, aligned_stream = rs.align(rs.stream.color) and point_cloud = rs.pointcloud() are not within the try / while True section but outside of it, just after the pipeline start instruction.
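A sketch of that layout, with placeholder stream settings rather than the exact values from the Example 2 script:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# Placeholder stream settings; Example 2 uses its own resolution/FPS values
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
pipeline.start(config)

# These two lines sit outside the try / while True section,
# directly after the pipeline start instruction
aligned_stream = rs.align(rs.stream.color)
point_cloud = rs.pointcloud()

try:
    while True:
        frames = pipeline.wait_for_frames()
        # ... per-frame processing ...
finally:
    pipeline.stop()
```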
At the top of the script in the image above you have a pipeline.start(config) instruction, and then the same line appears further down, after config.enable_stream. You cannot start a pipeline that has already been started, as it will be busy. Once the pipeline has been started, you have to stop() the pipeline first before you can use the start instruction again.
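A minimal sketch of that rule:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()

pipeline.start(config)    # first start: OK
# pipeline.start(config)  # starting again while running raises RuntimeError

pipeline.stop()           # stop first...
pipeline.start(config)    # ...then start() is permitted again
```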
The error is at line 71 of try.py and relates to the instruction frames = pipeline.wait_for_frames(). It indicates that this instruction was used whilst the pipeline was stopped, and the pipeline has to have been started before it can work. It looks as though line 71 is further down the script than the section of code that is shown in the image above.
The highlighted line is the pipe start. I cannot see the entire script though (the image has around the first 40 lines and the error is at line 71), so there may be a stop() further down the script. Could you show me more of the script below the image's bottom line, please?
I think it is because you are using two pipeline definitions, 'pipeline' and 'pipe'. You called stop() on 'pipeline' and then started 'pipe'. On line 71 you use frames = pipeline.wait_for_frames(), but you stopped 'pipeline' and started 'pipe'. So I believe that you should change the wait_for_frames instruction to reference the pipeline that is currently open: frames = pipe.wait_for_frames()
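A condensed sketch of the mismatch being described; the surrounding code is assumed, not copied from the actual script:

```python
pipeline = rs.pipeline()
config = rs.config()
pipeline.start(config)
# ... first section of the script ...
pipeline.stop()

pipe = rs.pipeline()
pipe.start()
# frames = pipeline.wait_for_frames()  # fails: 'pipeline' has been stopped
frames = pipe.wait_for_frames()        # works: 'pipe' is the running pipeline
```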
Another error: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
The simplest way to handle this may be to first delete the config.enable_stream(rs.stream.depth) instruction at the line immediately before pipeline.stop(), then change pipe.start(config) to pipe.start() with empty brackets. When the pipeline is restarted, the script should then apply the camera's default stream profile, which should enable both the depth and color streams at their default resolution and FPS values.
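A sketch of that change, keeping the thread's 'pipeline' / 'pipe' names:

```python
# config.enable_stream(rs.stream.depth)  # deleted, as suggested above
pipeline.stop()

pipe = rs.pipeline()
pipe.start()  # empty brackets: the camera's default stream profile is applied,
              # enabling depth and color at their default resolution and FPS
```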
I commented out the line #config.enable_stream(rs.stream.depth). "When the pipeline is restarted, the script should then apply the camera's default stream profile, which should enable both the depth and color streams at their default resolution and FPS values."
The depth and color streams are enabled automatically when the pipe.start() brackets are empty. You do not have to write any code to enable them.
An alternative to using the default stream configuration is to put config back in the pipe.start brackets and then copy the two config.enable_stream lines for depth and color from near the beginning of your script and paste them at the line above pipe.start(config), so that 'pipe' uses the same stream configuration that 'pipeline' did, as sketched below.
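A sketch of that alternative; the resolution and FPS values are placeholders, so copy the exact enable_stream lines from the top of your own script:

```python
# Placeholder values; use the same settings as the original 'pipeline' config
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 30)
pipe.start(config)  # 'pipe' now uses the stream configuration 'pipeline' did
```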
I added threshold_filter = rs.threshold_filter() just before the try: line. Where should "frames_filtered" be located?
Another error: Traceback (most recent call last):
Please try changing frames_filtered = threshold_filter.process(frames) to this: frames_filtered = threshold_filter.process(frames_filtered). Does the threshold filter then work correctly? Is try5.py a different script from try.py? If it is, please post images of the try5.py script.
It appears that the try-except mechanism in your threshold code is printing "Error" because when the script tries to apply the threshold filter an error occurs, but the except instruction handles it and prints "Error". In the initial 'pipeline' section of code, a pipeline called 'pipeline' is opened, depth is aligned to color using aligned_stream and a pointcloud is generated, and then 'pipeline' is closed. Later, in the second pipeline called 'pipe', aligned_stream from the first pipeline is called in the line frames = aligned_stream.process(frames). But 'frames' is defined in the previous line as pointing to the second 'pipe' pipeline instead of the first 'pipeline' pipeline where the aligned depth-color image was created. This may have been why you were originally using frames = pipeline.wait_for_frames() on this line until I suggested changing it to 'pipe' in #10728 (comment), because at that time I did not see how the second 'pipe' pipeline needed to access aligned-frame data from the first 'pipeline' pipeline. I do apologize. Please try changing the wait_for_frames instruction from 'pipe' back to 'pipeline'.
The script may benefit from being checked section by section to confirm what each part is doing and whether it is needed. For example, in lines 20 to 23 of pipeline 'pipeline' a pointcloud is set up and stored in 'points', but the pointcloud is not generated from 'points' until line 76 of pipeline 'pipe'. Splitting the script code between two pipelines, having instructions in pipeline 2 depend on instructions in pipeline 1, and also inserting TensorFlow code in between the librealsense code, increases the complexity of debugging.
Error (see #8170 (comment)): "frames.get_depth_frame() type"
Do you have any other examples for reference?
Yes, by default real-world distance is measured in meters in the RealSense SDK. You could take the distance value provided by the SDK and perform a calculation on it to convert it into another unit of measurement such as mm (for example, distance value in m x 1000 = mm).

Yes, the depth unit scale affects the distance scale. The default depth scale is 0.001, which is millimeter scale. A scale of 0.01 is centimeter scale. The bottom of the section of Intel's Projection documentation linked to below confirms this.
https://dev.intelrealsense.com/docs/projection-in-intel-realsense-sdk-20#depth-image-formats

What happens if you change line 67 to this: frames = pipe.wait_for_frames()
Line 68 to this: frames_filtered = threshold_filter.process(frames)
Then comment out line 72, which is not needed as it is the same as the new line 67.
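A minimal sketch of the unit conversion and of reading the depth scale; depth_frame, the pixel coordinates x and y, and 'profile' (the object returned by pipeline.start) are assumed to come from the surrounding script:

```python
# Distance at pixel (x, y), in meters by default
distance_m = depth_frame.get_distance(x, y)
distance_mm = distance_m * 1000  # meters x 1000 = millimeters

# Reading the active depth unit scale (0.001 = millimeter scale)
depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()
```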
Same error: catkin_ws/src/librealsense/wrappers/tensorflow$ python3 try5.py
Thanks again for your patience! If the section of code in lines 37 to 62 of try5.py (containing TensorFlow code) was moved further down the script, to one line below the current line 85 (# Perform the actual detection by running the model with image as input), then you could prove whether or not the block of TensorFlow code is interfering with the processing of the librealsense code in a way that causes:
File "try5.py", line 74, in
depth_frame = frames_filtered.get_depth_frame()
AttributeError: 'pyrealsense2.pyrealsense2.frame' object has no attribute 'get_depth_frame'
Solved. Thank you!
That's great to hear after all of your hard work! Thanks for the update :) |
Case closed due to solution achieved and no further comments received. |
Issue Description
After extracting the distance to the object (person) in front, we want to use the TensorFlow human height estimation example.
The TensorFlow human height estimation example has high accuracy at a 'specific distance', so we want to measure a person's height only at that distance.
Is there any code or example where I can extract the distance value of the object in front of the camera?
I want to extract distance measurements, and measure the person's height from that value.