
How to obtain all the color maps and depth maps from bag file with Python3? #8137

Closed
ChairManMeow-SY opened this issue Jan 11, 2021 · 11 comments


@ChairManMeow-SY

Required Info
Camera Model: D455
Firmware Version: 05.12.09.00
Operating System & Version: Ubuntu 18.04
Kernel Version: 5.4.0-58-generic
Platform: PC
SDK Version: 2.40
Language: Python
Segment: VR

Issue Description

Hi, I'm trying to read frames from a bag file with Python. I think there should be APIs that support:

  1. Reading the number of frames.
  2. Reading each color and depth frame until the end of the file.

I need to process the frames one by one, so I'd like to read them from the bag file one by one. Even though I have set set_real_time to False, it seems most of the frames are dropped.

The following is my code:

import pyrealsense2 as rs
import numpy as np
import cv2
import argparse
import os.path

def GetAllFrames(bag_file):
  pipeline = rs.pipeline()
  config = rs.config()
  rs.config.enable_device_from_file(config, bag_file)
  config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
  config.enable_stream(rs.stream.color, 1280, 720, rs.format.rgb8, 30)

  profile = pipeline.start(config)
  playback = profile.get_device().as_playback()  # get playback device
  playback.set_real_time(False)  # disable real-time playback

  cur_frame_number = -1
  while True:
    frames = pipeline.wait_for_frames()
    color_frame = frames.get_color_frame()
    depth_frame = frames.get_depth_frame()

    print("color number: %d" % color_frame.get_frame_number())
    print("depth number: %d" % depth_frame.get_frame_number())
    if cur_frame_number < color_frame.get_frame_number():
      cur_frame_number = color_frame.get_frame_number()
    else:
      break
  print("current_number: %d" % cur_frame_number)
  pipeline.stop()

if __name__ == "__main__":
  file_names = ['/home/zhaosy/Videos/20201229_002213.bag', '~/Videos/20201229_002211.bag', '~/Videos/20201229_002212.bag']

  GetAllFrames(file_names[0])

The frame number is not the same as what the realsense-viewer shows for the file. Also, it's a 15-second video but only around 100 frames are read.

I searched this problem and was surprised to find that many people run into the same issue. Could anyone help correct my code?

Or is there another way to do this (for example with MATLAB)? I know there is a binary tool to convert a bag file into JPEG images, but I need to bundle 3 RealSense cameras, so I don't know if the binary tool can provide the timestamps.
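For the multi-camera timestamp concern, here is a minimal sketch (not from this thread, just the standard pyrealsense2 frame API) showing how per-frame timestamps can be read while replaying a bag; the file path is a placeholder:

import pyrealsense2 as rs

def print_frame_timestamps(bag_file):
  config = rs.config()
  # repeat_playback=False makes playback stop at the end of the file instead of looping.
  config.enable_device_from_file(bag_file, repeat_playback=False)
  pipeline = rs.pipeline()
  profile = pipeline.start(config)
  profile.get_device().as_playback().set_real_time(False)  # read as fast as possible, no drops

  while True:
    # try_wait_for_frames() returns (False, ...) once playback reaches the end of the file.
    ok, frames = pipeline.try_wait_for_frames()
    if not ok:
      break
    color = frames.get_color_frame()
    if color:
      # Timestamp is in milliseconds; the domain tells you which clock produced it.
      print(color.get_frame_number(), color.get_timestamp(), color.get_frame_timestamp_domain())
  pipeline.stop()

print_frame_timestamps('/path/to/camera1.bag')  # placeholder path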

@MartyG-RealSense
Collaborator

Hi @OldOG Your situation sounds similar to case #7932

In that Python case, the RealSense user - who is also processing depth and color frames from a bag - posted details of their solutions for reading the number of frames and for performance issues at #7932 (comment)

@ChairManMeow-SY
Author

ChairManMeow-SY commented Jan 14, 2021

Thanks for your help. I found two ways to convert the bag files to images. First, using this Python script:

def GetAllFrames(bag_file):
  pipeline = rs.pipeline()
  config = rs.config()
  # repeat_playback=False stops playback at the end of the file instead of looping.
  rs.config.enable_device_from_file(config, bag_file, repeat_playback=False)
  config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
  config.enable_stream(rs.stream.color, 1280, 720, rs.format.rgb8, 30)

  profile = pipeline.start(config)
  playback = profile.get_device().as_playback()  # get playback device
  playback.set_real_time(False)  # disable real-time playback

  i = 0
  try:
    while True:
      # try_wait_for_frames() returns False instead of raising once playback reaches the end.
      is_frame, frames = pipeline.try_wait_for_frames()
      if not is_frame:
        break
      color_frame = frames.get_color_frame()
      depth_frame = frames.get_depth_frame()
      i = i + 1
      ###
      # Do what you want with color_frame / depth_frame here
      ###
  finally:
    pipeline.stop()
    print("number of the frames: %d" % i)

Second, use the rs-convert tool:

rs-convert -i mybagfile.bag -p ./
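If all three bags need converting, the same rs-convert call can be scripted from Python; a small sketch (it only uses the -i and -p flags shown above, and the paths are placeholders):

import os
import subprocess

bag_files = ['/path/to/cam1.bag', '/path/to/cam2.bag', '/path/to/cam3.bag']  # placeholder paths

for bag in bag_files:
  out_dir = os.path.splitext(bag)[0] + '_frames'
  os.makedirs(out_dir, exist_ok=True)
  # -i: input bag file, -p: image output prefix (same flags as the command above).
  subprocess.run(['rs-convert', '-i', bag, '-p', out_dir + '/'], check=True)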

I still have two questions:

  1. Is the depth map aligned to the color map? (see the alignment sketch below)
  2. With the same input, the Python script outputs 388 frames and rs-convert outputs 306 frames. Which one is right, and what causes the difference?
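On question 1: frames read back from a bag are not aligned automatically; alignment is a separate processing step. Below is a minimal sketch using the standard pyrealsense2 rs.align processing block (not something confirmed in this thread):

import numpy as np
import pyrealsense2 as rs

# Create the align object once; rs.stream.color means depth is mapped into the color viewpoint.
align_to_color = rs.align(rs.stream.color)

def get_aligned_images(frames):
  # 'frames' is a frameset returned by wait_for_frames() / try_wait_for_frames().
  aligned = align_to_color.process(frames)
  depth_frame = aligned.get_depth_frame()
  color_frame = aligned.get_color_frame()
  if not depth_frame or not color_frame:
    return None, None
  depth_image = np.asanyarray(depth_frame.get_data())  # 16-bit depth, same resolution as color
  color_image = np.asanyarray(color_frame.get_data())
  return depth_image, color_image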

@ChairManMeow-SY
Author

OK, I have found the answer: the convert tool sometimes drops frames.
Here is the discussion and solution:
#7067 (comment)

@MartyG-RealSense
Collaborator

Thanks very much @OldOG :) Do you require further assistance with this case, please?

@MartyG-RealSense
Collaborator

Case closed due to no further comments received.

@ChairManMeow-SY
Author

Hi @MartyG-RealSense , I have one more question: are the color maps obtained directly from bag files calibrated? Do I need to calibrate the frames manually?

@MartyG-RealSense
Collaborator

The information recorded in the bag will depend upon the calibration that the camera had at the time. If your camera has a healthy calibration then you should not need to perform any manual calibration on the images.

You can perform a quick depth calibration of the camera with the On-Chip Calibration tool accessible from the More side-panel option of the RealSense Viewer, or a robust calibration of the camera with the SDK's Dynamic Calibration software tool (which includes RGB sensor calibration).

@ChairManMeow-SY
Author

So in other words, if I had never run the calibration software for the camera, the color maps are not calibrated?

I am confused because I can read the distortion coefficients from the bag file.
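For reference, here is a minimal sketch of how those intrinsics and distortion coefficients can be read back from a bag with the standard pyrealsense2 API (the path is a placeholder):

import pyrealsense2 as rs

config = rs.config()
config.enable_device_from_file('/path/to/recording.bag', repeat_playback=False)  # placeholder path
pipeline = rs.pipeline()
profile = pipeline.start(config)

# The intrinsics stored in the bag reflect whatever calibration the camera had when it was recorded.
color_profile = profile.get_stream(rs.stream.color).as_video_stream_profile()
intr = color_profile.get_intrinsics()
print("fx, fy:", intr.fx, intr.fy)
print("ppx, ppy:", intr.ppx, intr.ppy)
print("distortion model:", intr.model)
print("distortion coefficients:", intr.coeffs)

pipeline.stop()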

@MartyG-RealSense
Collaborator

The depth and color sensors are calibrated in the factory as part of the manufacturing process, so typically RealSense cameras have a good calibration out of the box when new.

There are factors that may affect calibration and necessitate a re-calibration to restore image quality though. These could include physical stresses such as a hard knock, drop on the ground or severe vibration.

A high temperature event may also affect calibration, though a Thermal Compensation feature was introduced for the D455 camera model in SDK 2.43.0. The release notes state about this feature: "D455 introduces a compensation mechanism intended to mitigate the effect of thermal propagation in optics. When active (default = On) it will track and adjust Depth and RGB calibration parameters automatically".
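If you want to check or toggle that setting from Python, a minimal sketch is below; it assumes the option is exposed as rs.option.thermal_compensation (added for the D455 in SDK 2.43.0) and that a live D455 is connected:

import pyrealsense2 as rs

ctx = rs.context()
devices = ctx.query_devices()
if len(devices) == 0:
  print("No RealSense device connected.")
else:
  depth_sensor = devices[0].first_depth_sensor()
  # Option name assumed here: rs.option.thermal_compensation (SDK 2.43.0+ on D455).
  if depth_sensor.supports(rs.option.thermal_compensation):
    print("Thermal compensation:", depth_sensor.get_option(rs.option.thermal_compensation))
    depth_sensor.set_option(rs.option.thermal_compensation, 1)  # 1 = on (default), 0 = off
  else:
    print("Thermal compensation option not available on this device / SDK build.")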

@ChairManMeow-SY
Author

I get it. Thank you so much! 😄

@Esperer105

good question!
