
Delay / Latency issue caused by buffering, queue size too big #6448

Closed
QBoulanger opened this issue May 23, 2020 · 14 comments
@QBoulanger

Required Info
Camera Model Realsense D435
Firmware Version 05.12.03.00
Operating System & Version Linux Ubuntu 18.04
Kernel Version (Linux Only) 4.9.140-tegra
Platform NVIDIA Jetson TX2

Issue Description

I recently reinstalled librealsense on our team's Jetson TX2. I installed it by running the scripts/libuvc_installation.sh script, changing the cmake flags to:
cmake ../ -DFORCE_LIBUVC=true -DCMAKE_BUILD_TYPE=release -DBUILD_WITH_CUDA=true -DBUILD_PYTHON_BINDINGS=bool:true
The installation went well and the tests too. FPS performance is great (much better than before the reinstallation, which was the goal of my reset).

However, I notice a long delay of at least 2-3 seconds between events happening in reality and their appearance in the processed stream.

I see the problem both in realsense-viewer and in simple Python test code. Test code here:


import pyrealsense2 as rs
import cv2 as cv
import time
import numpy as np

# Create a pipeline
pipeline = rs.pipeline()

# Create a config and configure the pipeline to stream different
# resolutions of the depth and color streams
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 360, rs.format.z16, 60)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 60)

# Start streaming
profile = pipeline.start(config)

depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()
print("Depth Scale is:", depth_scale)

# Align depth frames to the color stream
align = rs.align(rs.stream.color)

try:
    while True:
        start = time.time()

        frames = pipeline.wait_for_frames()
        aligned_frames = align.process(frames)

        color_frame = aligned_frames.get_color_frame()
        if not color_frame:
            continue  # skip iterations with no valid color frame

        cv.imshow("frame", np.asanyarray(color_frame.get_data()))

        print("FPS:", 1 / (time.time() - start))

        if cv.waitKey(1) & 0xFF == ord('q'):
            break
finally:
    # Always release the camera and windows, even on error or 'q'
    pipeline.stop()
    cv.destroyAllWindows()

To illustrate the problem: with the code above or with realsense-viewer, if I clap my hands, I wait 2-3 seconds before seeing it on screen.
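One way to put a number on that perceived delay (my own sketch, not part of the original report): compare each frame's timestamp against the wall clock at the moment the frame is received. `frames.get_timestamp()` returns milliseconds; the comparison is only meaningful when the timestamp domain is the system clock. `approx_latency_ms` is a hypothetical helper name:

```python
def approx_latency_ms(frame_timestamp_ms: float, now_ms: float) -> float:
    """Approximate end-to-end latency: wall-clock receive time minus the
    frame's capture timestamp, both in milliseconds. Only meaningful when
    the frame timestamp is in the system-clock domain."""
    return now_ms - frame_timestamp_ms

# With a live pipeline (requires a connected camera), usage would be:
#   frames = pipeline.wait_for_frames()
#   print(approx_latency_ms(frames.get_timestamp(), time.time() * 1000.0))

# Synthetic check: a frame captured 2500 ms before it was consumed
print(approx_latency_ms(1000.0, 3500.0))  # -> 2500.0
```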

It's an important problem for us as we are working on a realtime application.

I didn't have the problem before reinstalling LibRealsense.

Thank you for your help and for your great work on RealSense! It's a real pleasure to work with you.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented May 23, 2020

My understanding is that the recommendation is to leave the frame queue size at its default, due to the risk of breaking a program if an inappropriate value is used.

Changing the frame queue size shifts the balance between performance and latency. A smaller value gives lower latency, with the trade-off being a greater risk that frames are dropped. If you do choose to change the queue size, the documentation recommends a value of '2' if both depth and color streams are enabled, as in your program.

https://github.com/IntelRealSense/librealsense/wiki/Frame-Buffering-Management-in-RealSense-SDK-2.0#latency-vs-performance

In Pyrealsense2, the frame queue size could be set with code like that in the Python script at #5939

[screenshot: setting the frame queue size in Python]

If you are experiencing latency in the RealSense Viewer as well as your own program though, I would recommend trying to set the streams at 30 FPS instead of 60 if you have not already done so, to see whether the latency is related to another factor such as USB not being able to keep up.
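For intuition about how buffering produces a multi-second delay (my own back-of-envelope model, not from the wiki page above): if frames accumulate faster than they are consumed, a backlog of N frames draining at F frames per second adds roughly N/F seconds of latency.

```python
def buffered_latency_s(queue_depth: int, fps: float) -> float:
    """Rough added latency when `queue_depth` frames sit in a buffer that
    drains at `fps` frames per second (simplified model: steady drain,
    no dropped frames)."""
    return queue_depth / fps

print(buffered_latency_s(2, 60))              # small queue at 60 FPS, ~0.033 s
print(round(buffered_latency_s(150, 60), 1))  # ~150 backed-up frames -> 2.5 s
```

Under this model, a 2-3 second delay corresponds to on the order of a hundred or more buffered frames, which points at a backlog somewhere in the capture path rather than per-frame processing cost.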

@QBoulanger
Author

Thank you for your quick and relevant reply @MartyG-RealSense .

I tried changing the queue size of the depth and color sensors, thanks to your code. But unfortunately (and quite surprisingly to me) it doesn't make any difference to the latency I'm experiencing.

Changing the FPS in realsense-viewer made little difference. I tried setting it to 6 FPS, and the latency is shorter, though still visible.

About the potential USB factor: it would be surprising, as I didn't have any visible latency before reinstalling librealsense.

While running realsense-viewer I saw these errors in the output:

23/05 15:02:19,679 ERROR [547675434784] (types.h:307) get_device_time_ms() took too long (more then 2 mSecs)
23/05 15:02:19,701 ERROR [547650256672] (types.h:307) get_device_time_ms() took too long (more then 2 mSecs)

These two lines repeat again and again while the Stereo Module and/or RGB Camera are turned on.

Could this be a key to understanding the problem?

@QBoulanger
Author

I just discovered that when running the depth sensor alone, I don't have any latency. The problem comes from the RGB camera.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented May 23, 2020

If you are using SDK 2.34.0, the get_device_time_ms() took too long (more then 2 mSecs) error is a known issue with the Viewer in this SDK version that has affected Windows, Linux and MacOS users alike. So it is likely not a factor in your particular problem.

In the RealSense Viewer, you could try having auto-exposure enabled and an option called Auto-Exposure Priority disabled in the "RGB" section of the Viewer side-panel. Disabling Auto-Exposure Priority whilst Auto-Exposure is enabled forces the camera to maintain a constant FPS.
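Both suggestions in this thread (a smaller frame queue, and Auto-Exposure Priority off while auto-exposure stays on) can also be applied from Python. Below is a minimal sketch, assuming pyrealsense2 is installed and a pipeline has been started; `configure_low_latency` is a hypothetical helper name:

```python
def configure_low_latency(profile):
    """Apply latency-oriented options to every sensor that supports them.
    Sketch only: requires pyrealsense2 and a started pipeline profile."""
    import pyrealsense2 as rs
    for sensor in profile.get_device().query_sensors():
        # Smaller internal queue: lower latency, higher risk of drops.
        if sensor.supports(rs.option.frames_queue_size):
            sensor.set_option(rs.option.frames_queue_size, 2)
        # Auto-exposure priority off (with auto-exposure on) keeps FPS constant.
        if sensor.supports(rs.option.auto_exposure_priority):
            sensor.set_option(rs.option.auto_exposure_priority, 0)

# Usage (requires a connected camera):
#   pipeline = rs.pipeline()
#   profile = pipeline.start()
#   configure_low_latency(profile)
```

The `supports()` check is there because not every sensor exposes every option (for example, auto-exposure priority is an RGB sensor option).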

@QBoulanger
Author

Unfortunately, it doesn't fix the issue.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented May 23, 2020

Do you get the same problems if you try the SDK's own Python depth-color alignment example program?

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/align-depth2color.py

@QBoulanger
Author

I do.

@MartyG-RealSense
Collaborator

Any difference if you use rgb8 as the color format instead of bgr8?

@QBoulanger
Author

QBoulanger commented May 25, 2020

I tried reinstalling JetPack 4.3 (the version I was using before), since version 4.4, which I was using while testing librealsense, is a Developer Preview. Unfortunately it didn't fix the problem; I am still experiencing the delay.

I also tried changing the color format to rgb8 as you recommended, but the delay is still there.

Any other clues?

@MartyG-RealSense
Collaborator

I went over your case again from the start to try to get some new ideas.

If the slowdown is only occurring if both RGB and depth are enabled but does not occur if only depth is enabled, can you check what your CPU usage is when you run the program with depth and RGB enabled, please? Is it at or close to 100% usage?

@QBoulanger
Author

I just fixed the problem. It came from the cmake flag -DFORCE_LIBUVC=true

The proper way to install librealsense on the Jetson TX2 is:

cmake ../ -DCMAKE_BUILD_TYPE=release -DBUILD_WITH_CUDA=true -DBUILD_PYTHON_BINDINGS=bool:true

FPS performance is great and I don't notice any delay.

Thank you @MartyG-RealSense for your great help.

@MartyG-RealSense
Collaborator

You're very welcome - it's great news that you found a solution that works for you. Thanks for the update!

@malapatiravi

malapatiravi commented Jun 17, 2020

@MartyG-RealSense
@MartyG-RealSense
I am compiling the source code on Ubuntu 16.04, using the recommended firmware for 2.33.1 and 2.35.2.
I have three configurations, as follows:

  1. cmake ../ -DENFORCE_METADATA=true -DBUILD_PYTHON_BINDINGS=true -DFORCE_RSUSB_BACKEND=ON -DBUILD_WITH_CUDA=true -DCMAKE_BUILD_TYPE=Release
  2. cmake ../ -DENFORCE_METADATA=true -DBUILD_PYTHON_BINDINGS=true -DFORCE_RSUSB_BACKEND=OFF -DBUILD_WITH_CUDA=true -DCMAKE_BUILD_TYPE=Release
  3. cmake ../ -DENFORCE_METADATA=false -DBUILD_PYTHON_BINDINGS=true -DFORCE_RSUSB_BACKEND=OFF -DBUILD_WITH_CUDA=true -DCMAKE_BUILD_TYPE=Release

What I observed is that with -DFORCE_RSUSB_BACKEND=OFF I see stale frames that have the same timestamp, while with -DFORCE_RSUSB_BACKEND=ON no stale frames are observed and the timestamps are good. Do you see anything wrong with my configuration?
What is the RealSense recommendation on enabling -DENFORCE_METADATA? I read earlier that this is not intended for Linux.
Thanks.
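A quick way to quantify the stale-frame symptom described above (my own sketch; `count_stale_frames` is a hypothetical helper name): collect the frame timestamps and count how often a timestamp fails to advance between consecutive frames.

```python
def count_stale_frames(timestamps_ms):
    """Count consecutive frame pairs whose timestamp did not advance,
    i.e. the same frame was effectively delivered twice (a simple
    stale-frame heuristic over a list of timestamps in milliseconds)."""
    return sum(1 for prev, cur in zip(timestamps_ms, timestamps_ms[1:])
               if cur <= prev)

# On a live stream, the timestamps would come from frame.get_timestamp().
good = [0.0, 16.7, 33.3, 50.0]            # monotonically increasing
stale = [0.0, 16.7, 16.7, 16.7, 50.0]     # two repeated timestamps
print(count_stale_frames(good))   # -> 0
print(count_stale_frames(stale))  # -> 2
```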

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jun 17, 2020

@malapatiravi The CMake build configuration documentation says about the ENFORCE_METADATA flag: "Having UVC per-frame metadata requires building with Windows SDK installed. When this flag is disabled, the resulting binary might not be capable of properly parsing UVC metadata if Windows SDK was not installed during the build. If this flag is true, not having Windows SDK install will break the build".

My recommendation is not to include the flag when building in Linux.

Below is the CMake statement that I usually recommend when building with RSUSB backend = true. This installation method requires an internet connection, but it is not dependent on Linux or kernel versions and does not require kernel patching.

cmake ../ -DFORCE_RSUSB_BACKEND=true -DCMAKE_BUILD_TYPE=release -DBUILD_EXAMPLES=true -DBUILD_GRAPHICAL_EXAMPLES=true

Feel free to add -DBUILD_PYTHON_BINDINGS=true and -DBUILD_WITH_CUDA=true on the end of the statement.

Bear in mind that in 2.35.2 there is a bug that can prevent the build when CUDA = true. A fix method is here:

#6573 (comment)
