
D455F occasionally won't receive depth image in a python recording program. #12253

Closed · ZitongLan opened this issue Oct 3, 2023 · 18 comments

@ZitongLan commented Oct 3, 2023



Required Info
Camera Model: D455f
Firmware Version: (Open RealSense Viewer --> Click info)
Operating System & Version: Linux (Ubuntu 18.04)
Kernel Version (Linux Only): 4.9.299-tegra
Platform: NVIDIA Jetson Nano
SDK Version: { legacy / 2.. }
Language: Python 3.7
Segment: others

Issue Description

Hi there, I am trying to use a Jetson Nano to capture depth and color streams to a .bag file using Python code with a D455f camera. In my program, however, the depth frames occasionally stop arriving and the camera stops capturing depth images, while it still captures color images. Is something wrong with my code, or is there some other issue?

I set a timer check: if the recording time exceeds the set duration, the recording stops and the loop exits.

import pyrealsense2 as rs
import time
import datetime
import argparse
import sys
import os
if __name__ == "__main__":
    parser = argparse.ArgumentParser(description = "Script to record data")
    parser.add_argument("--record_time", type = int,default = 10, help = "Time of recording")
    parser.add_argument("--output_folder",type=str, help = "folder to write data files")
    parser.add_argument("--is_empty", type=str, help= "is data for sensor calibration")
    args = parser.parse_args()

    # Configure depth and color streams
    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 15)
    config.enable_stream(rs.stream.color, 848, 480, rs.format.rgb8, 15)


    # Get the current date and time
    current_datetime = datetime.datetime.now()

    # Extract year, month, date, hour, minute, and second
    current_year = current_datetime.year
    current_month = current_datetime.month
    current_day = current_datetime.day
    current_hour = current_datetime.hour
    current_minute = current_datetime.minute
    current_second = current_datetime.second

    # Format them as a string
    formatted_datetime = f"{current_year:04}{current_month:02}{current_day:02}_{current_hour:02}{current_minute:02}{current_second:02}"

    # Print the formatted datetime
    print(formatted_datetime)
    if args.is_empty == "True":
        config.enable_record_to_file(f"{args.output_folder}/{formatted_datetime}_empty.bag")
    elif args.is_empty == "False":
        config.enable_record_to_file(f"{args.output_folder}/{formatted_datetime}.bag")
    else:
        print("is_empty argument is empty or wrong, Type True or False again")
        sys.exit()
    # Start streaming

    profile = pipeline.start(config)
    recorder = profile.get_device().as_recorder()

    print("Recording start......")
    print("Time of recording is ", args.record_time)

    e1 = time.time()
    recording_time = args.record_time

    cur_color_number = -1
    cur_depth_number = -1
    color_frame_number = 0
    depth_frame_number = 0
    
    try:
        while True:
            # Wait for a coherent pair of frames: depth and color
            frames = pipeline.wait_for_frames()
            depth_frame = frames.get_depth_frame()
            color_frame = frames.get_color_frame()

            if not depth_frame or not color_frame:
                continue
            
            if cur_color_number < color_frame.get_frame_number():
                color_frame_number += 1
            if cur_depth_number < depth_frame.get_frame_number():
                depth_frame_number += 1
            
            cur_color_number = color_frame.get_frame_number()
            cur_depth_number = depth_frame.get_frame_number()   

            e2 = time.time()
            t = (e2 - e1) 
            
            sys.stdout.write(f"\rColor_Frame: {color_frame_number} Depth_Frame: {depth_frame_number}")
                
            if t>recording_time: # change it to record what length of video you are interested in
                print("\nDone!")
                break

    finally:
        recorder.pause()
        time.sleep(0.5)
        pipeline.stop()

Here is the command line output of a successful case; the depth frame count and color frame count are the same (or roughly the same).

Screenshot from 2023-10-03 16-46-41

The one below is a failure case: at some point depth frames are no longer received, while the camera is still capturing color images. (Note this happens roughly 1 time out of 10, but it is really annoying!)

Screenshot from 2023-10-03 16-46-19

Let me know if you need any other information to fix this issue.

@MartyG-RealSense (Collaborator)

Hi @ZitongLan Does removing the check for whether the stream is a depth frame or color frame improve stability?

As you have used cfg instructions to ensure that only depth and color streams are enabled, there will not be any other type of stream (such as infrared) enabled. So the check is likely to be unnecessary.

if not depth_frame or not color_frame:
    continue

@ZitongLan (Author)

> Hi @ZitongLan Does removing the check for whether the stream is a depth frame or color frame improve stability?
>
> As you have used cfg instructions to ensure that only depth and color streams are enabled, there will not be any other type of stream (such as infrared) enabled. So the check is likely to be unnecessary.
>
> if not depth_frame or not color_frame:
>     continue

Hi @MartyG-RealSense, thanks for your reply.

If I comment out the "if not ... or not ...: continue" check, the problem is still there, and it may even make things worse. Here are two consecutive recordings where the depth and color frame counts differ by a large amount.

Screenshot from 2023-10-04 10-05-22

By the way, what do you mean by "removing the check for whether the stream is a depth frame or color frame"? Should I comment out the code related to

depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()

and only keep frames = pipeline.wait_for_frames() and the time check in the while loop?

@MartyG-RealSense (Collaborator)

By 'removing the check', I just meant commenting out if not depth_frame or not color_frame: like you tried.

If the program works correctly 9 out of 10 runs then the code is likely correct and stable and there is not much more that can be done to improve it. You could, however, try resetting the camera when the script runs by placing the code below beneath your cfg lines.

ctx = rs.context()
devices = ctx.query_devices()
for dev in devices:
    dev.hardware_reset()

@ZitongLan (Author)

I have tried adding this code beneath the cfg lines. However, the depth frames still occasionally fail to arrive. The failure rate may not be 1 out of 10; at times it can be as high as 1 out of 2.

@MartyG-RealSense (Collaborator)

I have examined your code very carefully and cannot see any significant problems with it. It is clear from your successful tests that the program is able to function correctly.

Stability of recording may increase if latency is introduced into the streaming to reduce the number of dropped frames. The default queue size is '1', but when two streams are being used (depth and color), setting the queue size to a higher value may help. Intel recommends a value of '2' for two streams, though '50' has also worked well for some RealSense users.

Python code for configuring the frame queue size can be found at #6448 (comment)
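For illustration, a minimal sketch of raising that option on both sensors might look like the block below. This is an assumption-based sketch rather than the exact code from the linked comment; it guards with supports() in case a sensor does not expose rs.option.frames_queue_size.

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 15)
config.enable_stream(rs.stream.color, 848, 480, rs.format.rgb8, 15)

profile = pipeline.start(config)

# Raise the frame queue size on every sensor of the device (depth and RGB).
# The value 2 follows the two-stream recommendation above; 50 is another
# value that some users have reported working well.
for sensor in profile.get_device().query_sensors():
    if sensor.supports(rs.option.frames_queue_size):
        sensor.set_option(rs.option.frames_queue_size, 2)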

@ZitongLan (Author) commented Oct 6, 2023

I added the following code below profile = pipeline.start(config), where x is the queue size:

depth_sensor = profile.get_device().first_depth_sensor()
depth_sensor.set_option(rs.option.frames_queue_size, x)

I have tested with a queue size of 2, 3 and 10. However, it seems this code does not solve the instability problem.

@ZitongLan (Author)

By the way, I have just noticed there is a Python demo, https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/frame_queue_example.py, that tries to address the frame dropping problem. Is that issue related to my case? If so, can I borrow some of the approach in that code to solve my problem? I noticed it also sets a queue size with a line like queue = rs.frame_queue(50), roughly as in the sketch below.
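For reference, the queue pattern in that example looks roughly like this sketch (depth stream only, as in the official example; I have not verified how it combines with enable_record_to_file):

import time
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 15)

# Frames are delivered into the queue instead of being fetched with
# pipeline.wait_for_frames(), so up to 50 frames can be buffered while
# the application is busy processing.
queue = rs.frame_queue(50)
pipeline.start(config, queue)

start = time.time()
try:
    while time.time() - start < 10:
        frame = queue.wait_for_frame()
        print(frame.get_frame_number())
finally:
    pipeline.stop()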

@MartyG-RealSense (Collaborator) commented Oct 7, 2023

Whilst there is no harm in including frame queue control code in your script, your case seems more related to depth frames sometimes ceasing to arrive than it does to queue size.

It might be useful to play a 'bad' bag file back in the RealSense Viewer tool (by dragging and dropping the bag into the Viewer's center panel) to confirm whether the bag is actually okay and it is only the script's depth frame counter that has stopped increasing.
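As an alternative to the Viewer, a quick programmatic check of a recorded bag could look roughly like the sketch below (the bag path is a placeholder; playback is switched out of real-time so no frames are skipped while counting):

import pyrealsense2 as rs

config = rs.config()
# Placeholder path to one of the 'bad' recordings.
config.enable_device_from_file("recording.bag", repeat_playback=False)

pipeline = rs.pipeline()
profile = pipeline.start(config)

# Read the file as fast as possible instead of at the recorded frame rate.
playback = profile.get_device().as_playback()
playback.set_real_time(False)

depth_count = 0
color_count = 0
try:
    while True:
        frames = pipeline.wait_for_frames()
        if frames.get_depth_frame():
            depth_count += 1
        if frames.get_color_frame():
            color_count += 1
except RuntimeError:
    # wait_for_frames() times out once the end of the file is reached.
    pass
finally:
    pipeline.stop()
    print("depth frames:", depth_count, "color frames:", color_count)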

@ZitongLan (Author)

Unfortunately, I have played back the files in which the depth frames didn't arrive. The RealSense Viewer reports that the frame didn't arrive within 5000 ms.

@MartyG-RealSense (Collaborator)

If you download a sample bag from the link below and can play it back without problems in the Viewer, this might suggest that the bag recordings created by your script are damaged or incomplete.

https://github.com/IntelRealSense/librealsense/blob/master/doc/sample-data.md

@ZitongLan (Author)

Yes, I can play those bags in the Viewer, so there are definitely some small bugs in my recording code. Could you provide some similar code that records both color and depth streams to a bag file on a Jetson Nano?

@Gowthers

If your problem is the lack of synchronization between image and depth frames, pipeline.poll_for_frames() may help. This function returns a composite_frame of the streams you configured (depth and color) synchronized by timestamps.

I have found some problems with this approach, though. If color and depth frames arrive in the pipeline unsynchronized, these frames will be dropped. This means that you may have more frame drops overall, but the frames you receive are guaranteed to consist of corresponding depth and color images.
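A minimal sketch of what I mean, assuming the same stream setup as in the script above (poll_for_frames() is non-blocking, so the loop simply skips an iteration when no new synchronized frameset is available):

import time
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 15)
config.enable_stream(rs.stream.color, 848, 480, rs.format.rgb8, 15)
pipeline.start(config)

start = time.time()
try:
    while time.time() - start < 10:
        # Non-blocking: returns an empty frameset if nothing new has arrived yet.
        frames = pipeline.poll_for_frames()
        if frames.size() == 0:
            continue
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        # ... process the synchronized depth/color pair here ...
finally:
    pipeline.stop()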

I hope this can help you!

I also have a question: From my search, I have not found a way to combine queue and poll_for_frames() to reduce frame drops of synchronized frames in Python. Is there a way to implement this?

@MartyG-RealSense (Collaborator)

@ZitongLan There are examples of Python bag recording scripts at #3029 (comment) and #8183

@Gowthers Thanks so much for the advice you kindly provided to @ZitongLan.

What happens if you use the RealSense SDK's frame_queue_example.py Python example program at the link below and change wait_for_frames to poll_for_frames?

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/frame_queue_example.py

@Gowthers

From what I tried, I could not get poll_for_frames() to work from the frame_queue, since the queue doesn't work with composite_frames. Furthermore, wait_for_frames() blocks the pipeline thread until frames are available, while poll_for_frames(), as far as I understand, does not; it only checks whether a frameset is available for the instance.
Sadly I don't have a more in-depth answer, since this is what I found in the wikis and by experimenting.
Hope it helps.

@MartyG-RealSense (Collaborator)

@Gowthers That is correct, poll_for_frames does not block - see #2422 (comment)

@MartyG-RealSense (Collaborator)

Hi @ZitongLan and @Gowthers Do either of you require further assistance with this case, please? Thanks!

@Gowthers

Thanks for your help, my questions on this topic are answered :)

@MartyG-RealSense (Collaborator)

You are very welcome. I'm pleased that I could help. As you do not require further assistance, I will close this case. Thanks again!
