How to save D435 rgb and depth data with python? #2731

Closed
travisCxy opened this issue Nov 15, 2018 · 11 comments
@travisCxy

  • Before opening a new issue, we wanted to provide you with some useful suggestions (Click "Preview" above for a better view):

  • All users are welcome to report bugs, ask questions, suggest or request enhancements, and generally feel free to open a new issue, even if they haven't followed any of the suggestions above :)


Required Info
Camera Model { R200 / F200 / SR300 / ZR300 / D400 }
Firmware Version (Open RealSense Viewer --> Click info)
Operating System & Version {Win (8.1/10) / Linux (Ubuntu 14/16/17) / MacOS}
Kernel Version (Linux Only) (e.g. 4.14.13)
Platform PC/Raspberry Pi/ NVIDIA Jetson / etc..
SDK Version { legacy / 2.. }
Language {C/C#/labview/nodejs/opencv/pcl/python/unity }
Segment {Robot/Smartphone/VR/AR/others }

Issue Description

<Describe your issue / question / feature request / etc..>

@HippoEug

Is this a mistake?

@travisCxy travisCxy changed the title save video How to save video with python? Nov 15, 2018
@travisCxy travisCxy changed the title How to save video with python? How to save D435 rgb and depth data with python?I Nov 15, 2018
@travisCxy travisCxy changed the title How to save D435 rgb and depth data with python?I How to save D435 rgb and depth data with python? Nov 15, 2018
@travisCxy
Author

@HippoEug
Hello, I'd like to save D435 RGB and depth data in Python.
Here is my code:
import pyrealsense2 as rs
import numpy as np
import cv2

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 320, 240, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 320, 240, rs.format.bgr8, 30)

color_path = 'RGB/V00P00A00C00.avi'
depth_path = 'Depth/V00P00A00C00.avi'
colorwriter = cv2.VideoWriter(color_path, cv2.VideoWriter_fourcc('XVID'), 30, (320,240), 1)
depthwriter = cv2.VideoWriter(color_path, cv2.VideoWriter_fourcc('XVID'), 30, (320,240), 1)

pipeline.start(config)

try:
    while True:
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        if not depth_frame or not color_frame:
            continue

        #convert images to numpy arrays
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())
        depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)

        colorwriter.write(color_image)
        depthwriter.write(depth_colormap)
        #if cv2.waitKey(1) == ord("q"):
        #    break

finally:
    pipeline.stop()
but this produces corrupted video data. Could you tell me how to solve this problem?

@dorodnic
Contributor

Hi @travisCxy
I can't test this right now, but in the line depthwriter = cv2.VideoWriter(color_path, cv2.VideoWriter_fourcc('XVID'), 30, (320,240), 1) you seem to be passing the wrong filename (color_path instead of depth_path).
What results are you getting?
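A side note (my reading of the OpenCV API, which the later working examples in this thread are consistent with): cv2.VideoWriter_fourcc takes four single characters, not one string, so the snippet above would also need `*'XVID'` to unpack the codec string. A pure-Python sketch of the packing, with `fourcc` as a hypothetical stand-in for the OpenCV call:

```python
# hypothetical stand-in for cv2.VideoWriter_fourcc: four characters
# are packed into one little-endian 32-bit codec code
def fourcc(c1, c2, c3, c4):
    return (ord(c1) & 255) | ((ord(c2) & 255) << 8) \
         | ((ord(c3) & 255) << 16) | ((ord(c4) & 255) << 24)

# star-unpacking a 4-letter string supplies the four arguments separately
code = fourcc(*'XVID')
assert code == fourcc('X', 'V', 'I', 'D')
```

Passing the whole string as one argument, fourcc('XVID'), leaves three parameters missing and fails, which is why the corrected scripts below write cv2.VideoWriter_fourcc(*'XVID').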

@RealSense-Customer-Engineering
Collaborator

[Realsense Customer Engineering Team Comment]
No response since Nov 14. Closing issue as solved.

@g2-bernotas

g2-bernotas commented Sep 10, 2019

import pyrealsense2 as rs
import numpy as np
import cv2

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

color_path = 'V00P00A00C00_rgb.avi'
depth_path = 'V00P00A00C00_depth.avi'
colorwriter = cv2.VideoWriter(color_path, cv2.VideoWriter_fourcc(*'XVID'), 30, (640,480), 1)
depthwriter = cv2.VideoWriter(depth_path, cv2.VideoWriter_fourcc(*'XVID'), 30, (640,480), 1)

pipeline.start(config)

try:
    while True:
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        if not depth_frame or not color_frame:
            continue
        
        #convert images to numpy arrays
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())
        depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)
        
        colorwriter.write(color_image)
        depthwriter.write(depth_colormap)
        
        cv2.imshow('Stream', depth_colormap)
        
        if cv2.waitKey(1) == ord("q"):
            break
finally:
    colorwriter.release()
    depthwriter.release()
    pipeline.stop()

@prathamsss

Can you save each frame to a JSON file and check? That would store the original information (one JSON file per image).
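A minimal sketch of that idea, using only the standard library: the colormapped AVI above cannot be inverted back to depth values, whereas dumping the raw 16-bit readings to JSON keeps them losslessly. The function name and file layout here are my own illustration, not a RealSense API; in the loop above you would pass depth_image.flatten().tolist().

```python
import json

def save_depth_frame_json(depth_values, width, height, frame_index, out_dir="."):
    """Save raw depth readings losslessly, one JSON file per frame.

    depth_values is a flat list of 16-bit depth values; width/height
    record the frame shape so the image can be reconstructed later.
    """
    path = f"{out_dir}/depth_{frame_index:06d}.json"
    with open(path, "w") as f:
        json.dump({"frame": frame_index, "width": width,
                   "height": height, "depth": depth_values}, f)
    return path

# Usage with a tiny synthetic 2x2 frame:
import tempfile
tmp = tempfile.mkdtemp()
path = save_depth_frame_json([0, 1000, 2000, 65535], 2, 2, 0, out_dir=tmp)
with open(path) as f:
    restored = json.load(f)
assert restored["depth"] == [0, 1000, 2000, 65535]
```

JSON is bulky for 640x480 frames; numpy's .npy format or 16-bit PNG would be more compact, but the principle of saving the raw values rather than a colormap is the same.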

@666tua

666tua commented May 18, 2022

import pyrealsense2 as rs
import numpy as np
import cv2

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

color_path = 'V00P00A00C00_rgb.avi'
depth_path = 'V00P00A00C00_depth.avi'
colorwriter = cv2.VideoWriter(color_path, cv2.VideoWriter_fourcc(*'XVID'), 30, (640,480), 1)
depthwriter = cv2.VideoWriter(depth_path, cv2.VideoWriter_fourcc(*'XVID'), 30, (640,480), 1)

pipeline.start(config)

try:
    while True:
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        if not depth_frame or not color_frame:
            continue
        
        #convert images to numpy arrays
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())
        depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)
        
        colorwriter.write(color_image)
        depthwriter.write(depth_colormap)
        
        cv2.imshow('Stream', depth_colormap)
        
        if cv2.waitKey(1) == ord("q"):
            break
finally:
    colorwriter.release()
    depthwriter.release()
    pipeline.stop()

Sorry to bother you! I am currently parsing a bag file recorded with the Intel RealSense Viewer and want to extract the color and depth maps from it. The program you shared is real-time; how should I modify it to extract all video frames from the bag file?

@gycka

gycka commented May 18, 2022

@L-xn As far as I can see, you only need to add the following lines to read from a pre-recorded bag file. I haven't tested it, but I had something similar working when I worked with this.

import pyrealsense2 as rs
import numpy as np
import cv2

pipeline = rs.pipeline()
config = rs.config()
rs.config.enable_device_from_file(config, bagfile) # bagfile is your/path/to/your/bagfile.bag
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

color_path = 'V00P00A00C00_rgb.avi'
depth_path = 'V00P00A00C00_depth.avi'
colorwriter = cv2.VideoWriter(color_path, cv2.VideoWriter_fourcc(*'XVID'), 30, (640,480), 1)
depthwriter = cv2.VideoWriter(depth_path, cv2.VideoWriter_fourcc(*'XVID'), 30, (640,480), 1)

profile = pipeline.start(config)

# if playback is left in real time, frames are skipped while you process;
# disable real time so every frame in the bag is delivered
playback = profile.get_device().as_playback()
playback.set_real_time(False)

try:
    # I found it a good idea to skip the first frames of a recording, as they
    # aren't always perfect; in my recordings roughly the first 5-45 frames
    # (45 being extremely rare) were unusable, so I used to skip about 30
    for _ in range(30):
        playback.resume()
        frames = pipeline.wait_for_frames()
        playback.pause()
    while True:
        # getting your frames
        playback.resume()
        frames = pipeline.wait_for_frames()
        playback.pause()
        frames.keep()

        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        if not depth_frame or not color_frame:
            continue
        
        #convert images to numpy arrays
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())
        depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)
        
        colorwriter.write(color_image)
        depthwriter.write(depth_colormap)
        
        cv2.imshow('Stream', depth_colormap)
        
        if cv2.waitKey(1) == ord("q"):
            break
finally:
    colorwriter.release()
    depthwriter.release()
    pipeline.stop()

@666tua

666tua commented May 19, 2022


Thank you very much! I added alignment and filtering on top of the code you gave, but the processing speed is very slow: a bag I recorded for 3 minutes takes 30 minutes to process. How can I speed it up?

@gycka

gycka commented Jun 13, 2022

Hi @L-xn, I would consider dropping the alignment unless it is a must. As for filtering, try to get as good depth data quality as possible before you do the post-processing. If you cannot do without both processing steps, I would look into multithreading, but other than that, I have no other suggestions for now.
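The multithreading idea above can be sketched as a simple producer-consumer pipeline: one thread reads frames from the bag while another filters and writes them, so slow processing no longer blocks reading. This is a minimal stdlib sketch; `read_frame` and `process_frame` are hypothetical stand-ins for the pyrealsense2 fetch and the align/filter/write steps.

```python
import queue
import threading

def run_pipeline(read_frame, process_frame, num_frames, maxsize=8):
    """Producer-consumer sketch: one thread reads frames, one processes them.

    read_frame(i) stands in for the wait_for_frames() fetch;
    process_frame(frame) stands in for align/filter/write.
    """
    q = queue.Queue(maxsize=maxsize)  # bounded, so the reader cannot run far ahead
    SENTINEL = object()               # signals "no more frames" to the consumer
    results = []

    def producer():
        for i in range(num_frames):
            q.put(read_frame(i))
        q.put(SENTINEL)

    def consumer():
        while True:
            item = q.get()
            if item is SENTINEL:
                break
            results.append(process_frame(item))

    t_read = threading.Thread(target=producer)
    t_proc = threading.Thread(target=consumer)
    t_read.start()
    t_proc.start()
    t_read.join()
    t_proc.join()
    return results

# Usage with toy stand-ins:
frames = run_pipeline(read_frame=lambda i: i, process_frame=lambda x: x * 2, num_frames=5)
print(frames)  # [0, 2, 4, 6, 8]
```

Note that CPython threads only overlap well here because both librealsense calls and OpenCV writes release the GIL during I/O; for purely Python-side number crunching, multiprocessing would be the heavier alternative.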

@666tua

666tua commented Jun 14, 2022


Thanks!
