D455F occasionally won't receive depth image in a python recording program. #12253
Hi @ZitongLan Does removing the check for whether the stream is a depth frame or color frame improve stability? As you have used cfg instructions to ensure that only depth and color streams are enabled, there will not be any other type of stream (such as infrared) enabled. So the check is likely to be unnecessary.
Hi @MartyG-RealSense, thanks for your reply. By the way, what do you mean by "removing the check for whether the stream is a depth frame or color frame"? Should I try to comment out the code related to depth_frame = frames.get_depth_frame() and color_frame = frames.get_color_frame(), and only keep the rest of the loop?
By 'removing the check', I just meant commenting out that check. If the program works correctly 9 out of 10 runs then the code is likely correct and stable and there is not anything that can be done to improve it. You could try resetting the camera when the script runs though, by placing the code below beneath your cfg lines.
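The exact snippets from this exchange are not preserved in the thread. As a minimal sketch only (assuming pyrealsense2 with 640x480 depth and color streams at 30 fps), the 'check' being discussed and a hardware reset placed after the cfg lines would look something like this:

```python
# Minimal sketch (assumed, not the poster's exact code): the depth/color
# validity check being discussed, plus a hardware reset after the cfg lines.
import time
import pyrealsense2 as rs

pipeline = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

# Reset the camera before starting the pipeline (the suggestion above).
ctx = rs.context()
for dev in ctx.query_devices():
    dev.hardware_reset()
time.sleep(3)  # give the device time to re-enumerate; the duration is a guess

pipeline.start(cfg)
try:
    while True:
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        # This is 'the check' that could be commented out, since cfg only
        # enables the depth and color streams anyway:
        # if not depth_frame or not color_frame:
        #     continue
        # ... process / record the frames ...
finally:
    pipeline.stop()
```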
I have tried adding that code beneath the cfg lines. However, the depth frames still occasionally fail to arrive. I think the rate may not be 1 out of 10; randomly it can become 1 out of 2.
I have examined your code very carefully and cannot see any significant problems with it. It is clear from your successful tests that the program is able to function correctly. Stability of recording may increase if latency is introduced into the streaming to reduce the number of dropped frames. The default queue size is '1' but when two streams are being used (depth and color), setting latency to a higher value may help. Intel recommend a value of '2' for two streams, though '50' has also worked well for some RealSense users. Python code for configuring the frame queue size can be found at #6448 (comment)
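The code from the linked comment is not reproduced here. A rough sketch of the frame queue approach (assumed stream settings; the queue capacity values are the ones mentioned above) might look like this:

```python
# Sketch of frame queue latency control (not the exact code from #6448):
# a rs.frame_queue passed to pipeline.start() buffers frames so that slow
# processing is less likely to cause drops.
import pyrealsense2 as rs

pipeline = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

queue = rs.frame_queue(2)      # '2' suggested for two streams; '50' has also been reported to work
pipeline.start(cfg, queue)     # frames are delivered to the queue instead of the pipeline
try:
    while True:
        frame = queue.wait_for_frame()   # blocks until something is available
        frames = frame.as_frameset()     # with multiple streams the pipeline delivers framesets
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        # ... process / record the frames ...
finally:
    pipeline.stop()
```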
I added the following code.
By the way, I have just noticed there is a Python demo, https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/frame_queue_example.py, that tries to solve the frame dropping problem. Is that issue related to my case? If so, can I borrow some of the methods in that code to solve my problem? I noticed it also contains a line of code setting the queue size.
Whilst there is no harm in including frame queue control code in your script, your case seems more related to depth frames sometimes ceasing to arrive than it does to queue size. It might be useful to play a 'bad' bag file back in the RealSense Viewer tool (by dragging and dropping the bag into the Viewer's center panel) to confirm whether the bag is actually okay and it is only the script's depth frame counter that has stopped increasing.
Unfortunately, I have played back the files where the depth frames did not arrive. The RealSense Viewer reports that frames did not arrive within 5000 ms.
If you download a sample bag from the link below and can play it back without problems in the Viewer, this might suggest that your bag recording created by the script is damaged or incomplete. https://github.com/IntelRealSense/librealsense/blob/master/doc/sample-data.md |
Yes, I can play those bags in the Viewer, so there are definitely some small bugs in my recording code. Could you provide similar code that records both color and depth streams to a bag file on a Jetson Nano?
If your problem is the lack of synchronization between image and depth frames, pipeline.poll_for_frames() may help. This function returns a composite_frame of the streams you configured (depth and color), synchronized by timestamps. I have found some problems with this approach, though. If color and depth frames arrive in the pipeline unsynchronized, these frames will be dropped. This means that you may have more frame drops overall, but the frames you receive are guaranteed to consist of corresponding depth and color images. I hope this can help you! I also have a question: from my search, I have not found a way to combine a queue and poll_for_frames() to reduce frame drops of synchronized frames in Python. Is there a way to implement this?
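For reference, a minimal sketch of the non-blocking polling pattern described above (assumed stream settings, not code from this thread):

```python
# Minimal sketch of a non-blocking polling loop with poll_for_frames().
import time
import pyrealsense2 as rs

pipeline = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(cfg)
try:
    while True:
        frames = pipeline.poll_for_frames()   # returns immediately, does not block
        if frames.size() == 0:                # no synchronized frameset available yet
            time.sleep(0.001)
            continue
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        # ... process the synchronized depth/color pair ...
finally:
    pipeline.stop()
```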
@ZitongLan There are examples of Python bag recording scripts at #3029 (comment) and #8183. @Gowthers Thanks so much for the advice you kindly provided to @ZitongLan. What happens if you use the RealSense SDK's frame_queue_example.py Python example program at the link below and change wait_for_frames to poll_for_frames?
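The linked recording examples are not reproduced in this thread. As a rough sketch only (assumed resolutions and a hypothetical output file name), recording both streams to a bag with config.enable_record_to_file() looks something like this:

```python
# Minimal sketch (not the code from the linked comments): record depth and
# color to a .bag file.
import pyrealsense2 as rs

pipeline = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
cfg.enable_record_to_file("recording.bag")    # hypothetical output path

pipeline.start(cfg)
try:
    for _ in range(300):                      # roughly 10 seconds at 30 fps
        pipeline.wait_for_frames()            # frames are written to the bag automatically
finally:
    pipeline.stop()
```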
From what I tried, I could not get poll_for_frames() to work from the frame queue, since the queue doesn't work with composite_frames. Furthermore, wait_for_frames() blocks the pipeline thread until frames are available, while poll_for_frames(), as far as I understand, does not. It only checks whether a frameset is available for the instance.
@Gowthers That is correct, poll_for_frames does not block - see #2422 (comment)
Hi @ZitongLan and @Gowthers Do either of you require further assistance with this case, please? Thanks! |
Thanks for your help, my questions on this topic are answered :) |
You are very welcome. I'm pleased that I could help. As you do not require further assistance, I will close this case. Thanks again! |
Issue Description
Hi there, I am trying to use a Jetson Nano to capture depth and color streams to a .bag file using Python code with a D455F camera. In my program, however, the depth frames will occasionally stop arriving and the camera stops capturing depth images, while it can still capture color images. Is something wrong with my code, or is there some other issue?
I set a timer check: if the recording time exceeds the set duration, the recording and frame capture stop.
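My script is not included here, but the idea is roughly the following (a sketch with assumed settings and a hypothetical output path, not my actual code):

```python
# Sketch of a timer check that stops recording once the elapsed time exceeds
# a set duration, while counting the depth and color frames received.
import time
import pyrealsense2 as rs

RECORD_SECONDS = 10                            # assumed duration

pipeline = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
cfg.enable_record_to_file("capture.bag")       # hypothetical output path

pipeline.start(cfg)
depth_count = color_count = 0
start = time.time()
try:
    while time.time() - start < RECORD_SECONDS:
        frames = pipeline.wait_for_frames()
        if frames.get_depth_frame():
            depth_count += 1
        if frames.get_color_frame():
            color_count += 1
finally:
    pipeline.stop()
print("depth frames:", depth_count, "color frames:", color_count)
```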
Here is the command line output of a successful case; the depth frame count and the color frame count are the same (or roughly the same).
The one below is a failure case: at some point depth frames stop being received, while the camera is still capturing color images. (Note this happens in roughly 1 out of 10 runs, but it is really annoying!)
Let me know if you need any other information to fix this issue.