
Can I get the imu data from the bag file? #10055

Closed
Noctstar opened this issue Dec 14, 2021 · 9 comments
@Noctstar

| Camera Model | D455 |
| --- | --- |
| Operating System & Version | Ubuntu 18.04 |
| Platform | Jetson AGX Xavier |
| Language | Python |

Issue Description

I am trying to use D455 to save the bag file and get the gyro and acceleration.
However, when I add the following lines (before enable_device_from_file), I get a RuntimeError.
Where can I find the problem?
conf.enable_stream(rs.stream.accel)
conf.enable_stream(rs.stream.gyro)

@MartyG-RealSense
Collaborator

Hi @Noctstar Would it be possible to post your Python script in a comment, please? Thanks!

@Noctstar
Author

import pyrealsense2 as rs
import numpy as np
import cv2
import sys
import time
import datetime
start = time.time()

pipeline = rs.pipeline()
config = rs.config()

pipeline_wrapper = rs.pipeline_wrapper(pipeline)
pipeline_profile = config.resolve(pipeline_wrapper)
device = pipeline_profile.get_device()
device_product_line = str(device.get_info(rs.camera_info.product_line))

found_rgb = False
for s in device.sensors:
    if s.get_info(rs.camera_info.name) == 'RGB Camera':
        found_rgb = True
        break
if not found_rgb:
    print("The demo requires Depth camera with Color sensor")
    exit(0)

config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
config.enable_stream(rs.stream.accel)
config.enable_stream(rs.stream.gyro)
config.enable_device_from_file('/media/633F-BB24/20211214-170538.bag')

align_to = rs.stream.color
align = rs.align(align_to)
profile = pipeline.start(config)

try:
    while True:
        frames = pipeline.wait_for_frames()
        aligned_frames = align.process(frames)
        depth_frame = aligned_frames.get_depth_frame()
        color_frame = aligned_frames.get_color_frame()
        if not depth_frame or not color_frame:
            continue

        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())

        depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)

        images = np.hstack((color_image, depth_colormap))
        cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
        cv2.imshow('RealSense', images)

        if cv2.waitKey(1) & 0xFF == ord('q'):
            cv2.destroyAllWindows()
            break

        if time.time() - start > 40:
            cv2.destroyAllWindows()
            break

finally:
    pipeline.stop()

@MartyG-RealSense
Collaborator

Which RuntimeError are you receiving, please?

Could you first try passing 'config' as the first argument of the enable_device_from_file instruction and adding 'rs.' in front of it:

rs.config.enable_device_from_file(config, '/media/633F-BB24/20211214-170538.bag')

This is the formatting used by Intel's Python read_bag_example.py example program.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/read_bag_example.py
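For reference, that calling form fits into a minimal bag-playback sketch like the one below. The bag path, the 640x480 depth profile, and the 30-frame loop are illustrative assumptions; the stream settings must match what was actually recorded in the bag.

```python
# Minimal bag-playback sketch for pyrealsense2.
# The bag path and the 640x480 depth profile are placeholder assumptions.
def play_bag(bag_path):
    # Deferred import so the sketch can be defined without the SDK installed.
    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    config = rs.config()
    # Read from the file instead of a live camera, using the
    # rs.config.enable_device_from_file(config, path) form from read_bag_example.py.
    rs.config.enable_device_from_file(config, bag_path)
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    pipeline.start(config)
    try:
        for _ in range(30):
            frames = pipeline.wait_for_frames()
            depth = frames.get_depth_frame()
            if depth:
                print("depth frame", depth.get_frame_number())
    finally:
        pipeline.stop()
```

The key point is that the streams requested on the config must be a subset of what the bag contains, or pipeline.start() raises "Couldn't resolve requests".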

@Noctstar
Author

Traceback (most recent call last):
  File "record.py", line 51, in <module>
    profile = pipeline.start(config)
RuntimeError: Couldn't resolve requests

I can't run it even with the code you suggested.
However, I was able to run read_bag_example.py and opencv_viewer_example.py without any problem.
The error seems to occur when I try to get accel and gyro.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Dec 15, 2021

Looking at your script, I see that you are streaming depth, color and IMU simultaneously. Having these three streams enabled at the same time can cause problems in the RealSense SDK (it can also break streams with the 'No Frames Received' message in the RealSense Viewer). A workaround is to set up two separate pipelines, with IMU on one pipeline and depth & color on the other pipeline. The best example of a Python script for implementing this is at #5628 (comment)
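The two-pipeline workaround might be sketched as below, for a live camera. The stream resolutions and FPS here are assumptions to adjust to your device; this is an illustrative sketch of the approach in #5628, not a tested implementation.

```python
# Two-pipeline workaround sketch: IMU on one pipeline, depth + color on the other.
# Stream resolutions/FPS are assumptions -- adjust to your device or recording.
def start_two_pipelines():
    # Deferred import so the function can be defined without the SDK installed.
    import pyrealsense2 as rs

    # Pipeline 1: depth + color only
    rgbd_pipeline = rs.pipeline()
    rgbd_config = rs.config()
    rgbd_config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    rgbd_config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    rgbd_pipeline.start(rgbd_config)

    # Pipeline 2: IMU only (accel + gyro)
    imu_pipeline = rs.pipeline()
    imu_config = rs.config()
    imu_config.enable_stream(rs.stream.accel)
    imu_config.enable_stream(rs.stream.gyro)
    imu_pipeline.start(imu_config)

    return rgbd_pipeline, imu_pipeline
```

Each pipeline is then polled separately with its own wait_for_frames() call, so a stall on one stream type does not break the other.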

@Noctstar
Author

Thanks for the helpful comments.
I tried updating the code based on #6773 in the section you pointed out, but the error occurred again.
Please tell me how to fix this issue.

Traceback (most recent call last):
  File "imu.py", line 44, in <module>
    profile = rgbd_pipeline.start(config)
RuntimeError: Couldn't resolve requests

Here is the code:

import pyrealsense2 as rs
import numpy as np
import cv2
import argparse
import os.path

def motion_parser(frame, motion_type):
    try:
        motion_d = frame.as_motion_frame().get_motion_data()
    except RuntimeError:
        print("Frame for {} captured containing invalid/corrupted data!".format(motion_type))
        return None
    return str(motion_d)

# Create object for parsing command-line options
parser = argparse.ArgumentParser(description="Read recorded bag file and display depth stream in jet colormap.\
                                Remember to change the stream fps and format to match the recorded.")
# Add argument which takes path to a bag file as an input
parser.add_argument("-i", "--input", type=str, help="Path to the bag file")
# Parse the command line arguments to an object
args = parser.parse_args()
# Safety check if no parameter has been given
if not args.input:
    print("No input parameter has been given.")
    print("For help type --help")
    exit()
# Check if the given file has a .bag extension
if os.path.splitext(args.input)[1] != ".bag":
    print("The given file is not of the correct file format.")
    print("Only .bag files are accepted")
    exit()

# Need to prepare separate pipelines for depth/rgb and imu data!
# prepare depth/rgb pipeline and config
rgbd_pipeline = rs.pipeline()
config = rs.config()
rs.config.enable_device_from_file(config, args.input) #get config to read from file
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.rgb8, 30)
colorizer = rs.colorizer()
profile = rgbd_pipeline.start(config)

#prepare imu pipeline and config
imu_pipeline = rs.pipeline()
imu_config = rs.config()
rs.config.enable_device_from_file(imu_config, args.input) #get imu_config to read from file
imu_config.enable_stream(rs.stream.gyro) #, format=rs.format.motion_xyz32f)
imu_config.enable_stream(rs.stream.accel)
imu_profile = imu_pipeline.start(imu_config)

accel_data, gyro_data = None, None
cv2.namedWindow("RealSense Recordings", cv2.WINDOW_AUTOSIZE)

try:
    while True:

        # Wait for frames from both the rgb/depth and imu pipelines
        rgbd_frames = rgbd_pipeline.wait_for_frames()
        imu_frames = imu_pipeline.wait_for_frames()
        depth_frame, color_frame = rgbd_frames.get_depth_frame(), rgbd_frames.get_color_frame()

        if not depth_frame or not color_frame:
            continue

        # Convert images to numpy arrays
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())

        # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
        if colorizer is None:
            depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_HOT)
        else:
            depth_colormap = np.asanyarray(colorizer.colorize(depth_frame).get_data())

        # Stack both images horizontally
        images = np.hstack((color_image, depth_colormap))
        cv2.imshow("RealSense Recordings", images)

        #use escape key to stop streaming
        key = cv2.waitKey(1)

        if key == 27:
            cv2.destroyAllWindows()
            break

        # Check and print for acceleration and gyro data
        motion_data = imu_frames
        if motion_data is not None:
            if len(motion_data) == 2:
                accel_data, gyro_data = motion_parser(motion_data[0], "Acceleration"), motion_parser(motion_data[1], "Gyroscope")
            else:
                accel_data, gyro_data = motion_parser(motion_data[0], "Acceleration"), None
            print("Acceleration data:", accel_data)
            print("Gyroscope data: ", gyro_data)
            print()

        else:
            accel_data, gyro_data = None, None

finally:

    # Stop streaming
    rgbd_pipeline.stop()

@MartyG-RealSense
Collaborator

Does the configuration of the depth and color streams that are recorded inside your bag file match the configurations that you are requesting in the script?

Depth 640x480, 30 FPS
Color 640x480, 30 FPS

If, for example, your bag file contains depth recorded at 848x480 and 30 FPS, then I recommend editing the depth config instruction of the script to be 848, 480 instead of 640, 480.
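One way to check what the bag actually contains, rather than guessing, is to enumerate its recorded stream profiles. The sketch below uses rs.context().load_device(), which opens a bag as a playback device, plus a small pure helper for comparing a requested mode against a recorded one; the function and variable names are mine, not from the SDK examples.

```python
def profile_matches(requested, recorded):
    """Pure helper: compare (width, height, fps) tuples for an exact match."""
    return tuple(requested) == tuple(recorded)

def list_bag_profiles(bag_path):
    # Deferred import so profile_matches stays usable without the SDK installed.
    import pyrealsense2 as rs

    ctx = rs.context()
    playback = ctx.load_device(bag_path)  # open the bag as a playback device
    for sensor in playback.sensors:
        for p in sensor.get_stream_profiles():
            if p.is_video_stream_profile():
                v = p.as_video_stream_profile()
                print(p.stream_name(), v.width(), "x", v.height(), "@", p.fps(), "FPS")
            else:
                # motion streams (accel/gyro) have no resolution
                print(p.stream_name(), "@", p.fps(), "FPS")
```

Running list_bag_profiles on the bag and comparing the printed modes against the enable_stream arguments in the script shows immediately which request the SDK cannot resolve.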

@Noctstar
Author

Noctstar commented Dec 17, 2021

This is consistent with the configuration requested in the script.

I created a new bag file with two depth configuration patterns, 640x480 30 FPS and 848x480 30 FPS (both color configurations are 640x480 30 FPS), and confirmed that I can play both using enable_device_from_file.
However, I cannot get the RGBD and imu information at the same time.

@MartyG-RealSense
Collaborator

Your script code looks like a script for bag reading in #9846 (comment) that has been modified to use separate pipelines.

It may be better to use the Ezward two-pipeline script at #5628 (comment) and modify it to read data from bag file with enable_device_from_file() instead of using a live camera as the source for the camera data.

