
Playback specific frame by index #3121

Closed
amaanda opened this issue Jan 22, 2019 · 40 comments

Comments

@amaanda

amaanda commented Jan 22, 2019

Required Info
Camera Model: D435
Firmware Version: 05.10.06.00
Operating System & Version: Win 10
Platform: PC
SDK Version: 2.17.1
Language: Python
Segment: others

Hello,
I am currently trying to access a specific frame index from a playback .bag using the Python wrapper. I need to access the infrared, RGB and depth images from each frame. I am trying to do this with the seek method, however I would prefer a better option where I could directly access each frame by its index.

Thanks in advance!

@MartyG-RealSense
Collaborator

The SDK has an instruction called set_real_time that Dorodnic, the RealSense SDK Manager, says should enable frames to be fetched one frameset at a time.

A user came up with a sample script for frame by frame bag replay that uses set_real_time.

#1579 (comment)
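For reference, a minimal sketch of that approach might look like the following in Python (the bag file name is a placeholder; repeat_playback=False stops playback at the end of the file instead of looping):

import pyrealsense2 as rs

cfg = rs.config()
cfg.enable_device_from_file("recording.bag", repeat_playback=False)
pipe = rs.pipeline()
profile = pipe.start(cfg)

# non-real-time playback delivers framesets as fast as they are read,
# one at a time, instead of dropping frames to keep up with the clock
playback = profile.get_device().as_playback()
playback.set_real_time(False)

while True:
    success, frames = pipe.try_wait_for_frames()
    if not success:
        break  # end of the recording
    depth_frame = frames.get_depth_frame()
    print(frames.get_frame_number(), depth_frame.get_timestamp())

pipe.stop()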

@dorodnic
Contributor

Hi @amaanda
Completely agree. I believe there is an existing enhancement request for this.
Previously, we made sure set_real_time(false) works as intended, allowing frame-by-frame iteration.
It does not, however, help if you want "random access" to streams.
Jumping back and forth is more complicated, and there are possibly additional bugs hidden there. The simplest way is to reset to the start and skip forward until the desired frame.
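A rough sketch of that reset-and-skip idea, assuming the pipeline and playback device are already set up in non-real-time mode as above (the helper name and frame_index are just for illustration):

import datetime

def frameset_at_index(pipe, playback, frame_index):
    # rewind to the start of the recording ...
    playback.seek(datetime.timedelta(0))
    frames = None
    # ... then skip framesets until the requested index is reached
    for _ in range(frame_index + 1):
        success, frames = pipe.try_wait_for_frames()
        if not success:
            return None  # the index is past the end of the file
    return frames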

@amaanda
Author

amaanda commented Jan 23, 2019

Thanks for the quick reply!
I have already seen that issue, which was solved by using set_real_time, so I will try that.
Still, it seems troublesome for my specific application. I need to access the frames in order to cut a quite long .bag file, and then choose specific frames for the processing stage, in case you have any tips on how to do that.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 23, 2019

When recording a bag with ROS, there is the option to set the recording to automatically end a clip and start a new one after a certain period of time - a process called Split - so you end up with a number of smaller clips instead of one giant one. This may help with choosing which parts you want to keep.

#2424 (comment)

@amaanda
Author

amaanda commented Jan 23, 2019

Will do that. Thank you!

@RealSenseCustomerSupport
Collaborator


@amaanda Any other questions about this ticket? Looking forward to your update. Thanks!

@amaanda
Author

amaanda commented Feb 6, 2019

Hi, so, what I am trying to do now is: I need to access pre-recorded .bag files, do some processing, and generate a .ply in a while loop for each available depth and infrared (both channels) frame. I have tried using wait_for_frames but I am not able to solve the frame drop problem. Do you have any tips on how to solve this?
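In case it helps, a minimal sketch of such an export loop in non-real-time playback mode, under the assumption that real-time pacing is not needed (file and output paths are placeholders):

import os
import pyrealsense2 as rs

cfg = rs.config()
cfg.enable_device_from_file("recording.bag", repeat_playback=False)
pipe = rs.pipeline()
profile = pipe.start(cfg)
profile.get_device().as_playback().set_real_time(False)

pc = rs.pointcloud()
os.makedirs("plys", exist_ok=True)

while True:
    success, frames = pipe.try_wait_for_frames()
    if not success:
        break
    depth_frame = frames.get_depth_frame()
    ir_frame = frames.get_infrared_frame(1)  # left infrared stream
    if not depth_frame or not ir_frame:
        continue
    pc.map_to(ir_frame)                      # texture the point cloud with the IR image
    points = pc.calculate(depth_frame)
    points.export_to_ply(os.path.join("plys", "%06d.ply" % frames.get_frame_number()), ir_frame)

pipe.stop()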

@amaanda
Author

amaanda commented Feb 6, 2019

What I was previously trying to do was to use the timestamps in order to get only the aligned frames.

# consider depth and IR synchronized if their timestamps are within 5 ms
if abs(depth_frame.get_timestamp() - ir_frame.get_timestamp()) <= 5:
    start_time = time.time()
    pc.map_to(ir_frame)
    points = pc.calculate(depth_frame)
    points.export_to_ply(os.path.join(path, str(ir_frame.get_timestamp()) + ".ply"), ir_frame)
    elapsed_time = time.time() - start_time

I was losing a lot of frames by doing this, and depending on the speed of the moving object, I would still be getting desynchronized frames.

@RealSenseCustomerSupport
Collaborator


@amaanda Noticed that you're testing on Windows; please refer to the link below to enable metadata, which might help.
https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_windows.md#enabling-metadata-on-windows
Or refer to the rs-align example to get aligned frames.

@amaanda
Author

amaanda commented Feb 13, 2019

Yes, I have already followed all the steps on that tutorial. I still get dropped frames.

@RealSenseCustomerSupport
Collaborator


@amaanda Then could you please follow the rs-align example instead of using timestamps?

@amaanda
Author

amaanda commented Feb 14, 2019

The rs-align example was my first attempt at getting the frames, and I still get the same problem.

@RealSenseCustomerSupport
Collaborator


@amaanda Could you please provide the code snippet that refers to rs-align so that we can look into this further? Thanks!

@amaanda
Author

amaanda commented Feb 18, 2019

# frames is the frameset returned by pipeline.wait_for_frames()
align = rs.align(rs.stream.color)
aligned_frames = align.process(frames)
aligned_depth_frame = aligned_frames.get_depth_frame()
aligned_color_frame = aligned_frames.get_color_frame()
aligned_depth_image = np.asanyarray(aligned_depth_frame.get_data())
aligned_color_image = np.asanyarray(aligned_color_frame.get_data())

@amaanda
Author

amaanda commented Feb 18, 2019

I also want to get aligned infrared frames, which I believe will be done the same way as for color frames.

@RealSenseCustomerSupport
Collaborator


@amaanda Could you please first try rs-align in librealsense/examples/align? I suspect there's a problem with the current Python wrapper, as we also see a similar problem in #3221.

@amaanda
Author

amaanda commented Feb 20, 2019

I will try using the C++ applications then. Do you think the rs-align example will also solve the frame drop problem? From what I've read on this forum, it seems to be a problem in every programming language librealsense is implemented in, inherent to the .bag format.

@RealSenseCustomerSupport
Collaborator


@amaanda Did you try live streaming to see if there is any problem?

@amaanda
Author

amaanda commented Feb 21, 2019

The live streaming looks fine. The problem is when I try saving the frames: I get around 514 depth frames and only 323 infrared frames. To export the frames I used the rs-convert C++ application, adding a flag to save only the infrared frames.

I'm starting to think it is a problem when the file is recorded, inherent to the .bag format, maybe?

@amaanda
Author

amaanda commented Feb 21, 2019

The .bag file is around 17 seconds long, and the viewer shows the FPS to be 30 for all channels. This means I should get roughly 510 frames for each channel. I don't know whether these infrared frames are really not there or what.

@RealSenseCustomerSupport
Collaborator


@amaanda
Here's another way you can try to avoid such frame drops: first call the frame.keep() API to retain all the frames in memory, then save them to the hard disk.
frame.keep API: https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/python.cpp#L356
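A small sketch of that keep()-then-save pattern, assuming pipe is a pipeline already started from the bag file in non-real-time mode (as in the earlier snippets), numpy is imported as np, and the whole recording fits in memory:

kept = []
while True:
    success, frames = pipe.try_wait_for_frames()
    if not success:
        break
    frames.keep()          # stop the SDK from recycling this frameset
    kept.append(frames)

# playback finished; only now write everything to disk
for frames in kept:
    depth_frame = frames.get_depth_frame()
    np.save("depth_%06d.npy" % frames.get_frame_number(),
            np.asanyarray(depth_frame.get_data()))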

@amaanda
Author

amaanda commented Feb 22, 2019

Tried using frame.keep and still got the same problem.

    frames = pipeline.wait_for_frames()
    frames.keep()

    # note: the two infrared streams are conventionally indexed 1 (left) and 2 (right);
    # index 0 matches the first infrared frame found in the frameset
    ir_frame_0 = frames.get_infrared_frame(0)
    ir_image_0 = np.asanyarray(ir_frame_0.get_data())
    ir_frame_1 = frames.get_infrared_frame(1)
    ir_image_1 = np.asanyarray(ir_frame_1.get_data())
    depth_frame = frames.get_depth_frame()
    depth_image = np.asanyarray(depth_frame.get_data())

@amaanda
Author

amaanda commented Feb 22, 2019

The .bag file is 11.635678 seconds long, which at 30 FPS should give around 349 frames. By running the code above I get a maximum of 228 frames per channel.

@amaanda
Author

amaanda commented Feb 22, 2019

I think it might be a problem in the Python wrapper. According to this comment, it seems to work fine in C++. Is there some example using keep()? I couldn't find one.

@lramati
Contributor

lramati commented Feb 22, 2019 via email

@amaanda
Author

amaanda commented Feb 22, 2019

@lramati Sorry, I don't see where they are using keep() on that code.

@RealSenseCustomerSupport
Collaborator


@amaanda Is it possible for you to try C++ first? I'm afraid we need some time to check what the problem with the Python wrapper is.

@amaanda
Author

amaanda commented Feb 26, 2019

@RealSense-Customer-Engineering That is my next step now. If you have any tips or a project/tutorial I could take a look at, please share.

@RealSenseCustomerSupport
Collaborator


@amaanda Sure. I will share here if I get an update. Any progress with your C++ testing? Any issues there?

@RealSenseCustomerSupport
Collaborator


@amaanda Any good news about your test with C++? Thanks!

@amaanda
Author

amaanda commented Mar 6, 2019

@RealSenseCustomerSupport Sorry for taking so long, I had to focus on a few other issues that came up. I tried recording using the Viewer on different PCs, and apparently the trouble I was facing was due to PC performance.

@amaanda
Author

amaanda commented Mar 7, 2019

Ok. I've been running a few tests using C++, and now I only seem to have the frame dropping problem when I try recording all 4 channels provided by the RealSense camera: infrared 1, infrared 2, color and depth. So I went to check the log files of these tests and saw the following warning messages:

06/03 17:55:02,363 WARNING [7936] (playback_device.cpp:198) Playback device does not provide a matcher
06/03 17:55:02,363 WARNING [928] (playback_device.cpp:198) Playback device does not provide a matcher
06/03 17:55:02,363 WARNING [6224] (playback_device.cpp:198) Playback device does not provide a matcher
06/03 17:55:02,395 WARNING [8424] (playback_device.cpp:198) Playback device does not provide a matcher

Any guesses on what might be causing the problem, or how to solve this?

@RealSenseCustomerSupport
Collaborator


@amaanda What's the resolution and FPS configuration when you get these warnings during 4-channel recording?

@amaanda
Author

amaanda commented Mar 25, 2019

@RealSense-Customer-Engineering 1280x720, 30 FPS

@RealSenseCustomerSupport
Collaborator


@amaanda The warnings should be related to frame drops during the recording, which are very likely to happen when recording several channels at high resolution. The number of frames included in your .bag file can be checked with the Rosbag Inspector: https://github.com/IntelRealSense/librealsense/tree/master/tools/rosbag-inspector
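If the ROS rosbag Python package happens to be installed, the per-topic message counts (and therefore the number of recorded frames per stream) can also be read programmatically; a small sketch, assuming a standard librealsense bag layout with image topics ending in /image/data:

import rosbag

bag = rosbag.Bag("recording.bag")
info = bag.get_type_and_topic_info()
for topic, data in info.topics.items():
    if "/image/data" in topic:   # image topics hold the recorded frames
        print(topic, data.message_count)
bag.close()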

@RealSenseCustomerSupport
Collaborator


@amaanda Any other questions about this ticket? Looking forward to your update. Thanks!

@amaanda
Author

amaanda commented May 7, 2019

Not for now. Thanks!

@julienguegan

Hello, I am trying to do the same thing: access any frame from a previously recorded .bag file. The conclusion of this discussion and how to do it are not clear to me... It seems to be about set_real_time(false) or keep(), but I am not sure.

Can someone provide a minimal example of how to do it in Python?

@MartyG-RealSense
Collaborator

Hi @julienguegan Yes, reading bag file frames typically involves set_real_time(false).

There is another Python case that discusses indexing bag frames, with scripting provided, at the link below:

#3850
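For completeness, a minimal sketch of indexed access along those lines, assuming the whole bag fits in memory as numpy arrays (the file name is a placeholder):

import numpy as np
import pyrealsense2 as rs

cfg = rs.config()
cfg.enable_device_from_file("recording.bag", repeat_playback=False)
pipe = rs.pipeline()
profile = pipe.start(cfg)
profile.get_device().as_playback().set_real_time(False)

color_images = []
while True:
    success, frames = pipe.try_wait_for_frames()
    if not success:
        break
    color_frame = frames.get_color_frame()
    if color_frame:
        # copy the pixels out so the SDK can recycle the frame
        color_images.append(np.asanyarray(color_frame.get_data()).copy())
pipe.stop()

# any frame can now be accessed by its index
print(len(color_images), color_images[0].shape)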

@julienguegan

julienguegan commented Oct 29, 2020

@MartyG-RealSense Thank you, I have found good resources... But I have a problem: for reading, I first get the number of frames in the .bag file:

import pyrealsense2 as rs

cfg = rs.config()
cfg.enable_device_from_file(filename)
pipe = rs.pipeline()
profile = pipe.start(cfg)
playback = profile.get_device().as_playback()
playback.set_real_time(False)
t = playback.get_duration()
frame_counts = t.seconds * 30  # assumes the recording was made at 30 FPS

and then I get all the frames by doing:

for i in range(frame_counts):
    frame_present, frames = pipe.try_wait_for_frames()
    f = frames.get_color_frame()

It tells me I have 1080 frames to read, but when displaying the images, I realized that not all the images from my video are there, and when looping over a larger number (like 5000), I can actually read more images... Does anyone know how to get the real number of frames in my video? Have you maybe already seen this problem, @MartyG-RealSense?
I also noticed something weird: when calling rs.frame.get_frame_number(frames) in my loop, it gives me the same number several times in a row, as if the same frame is read multiple times...
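One possible workaround, assuming the repeated numbers really are the same frameset delivered more than once, is to count and process only unique frame numbers; a sketch, with pipe already started in non-real-time playback mode as in the snippet above:

last_number = -1
unique_count = 0
while True:
    frame_present, frames = pipe.try_wait_for_frames()
    if not frame_present:
        break
    number = frames.get_frame_number()
    if number == last_number:
        continue             # same frameset delivered again, skip it
    last_number = number
    unique_count += 1
    # process frames here

print("unique framesets read:", unique_count)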
