
Realsense D455 duplicated depth arrays #10669

Closed
tmxftw opened this issue Jul 12, 2022 · 7 comments

@tmxftw

tmxftw commented Jul 12, 2022


Required Info
Camera Model: D455
Firmware Version: 5.13.0.50
Operating System & Version: Raspbian
Kernel Version (Linux Only): 5.4.0-1065-raspi
Platform: Raspberry Pi 4 4GB
SDK Version: 2.50.0
Language: Python
Segment: Robot

Issue Description

I am trying to use two D455 cameras to get the RGB and depth data as numpy arrays. The cameras are attached to a moving rover and capture depth and RGB data as it moves, so the cameras should be getting different depth data over time. The individual arrays are saved on the Pi as the code runs and then downloaded to a desktop for further processing. However, after checking the acquired data, I found that the depth arrays are duplicated and repeated, making them useless for further processing. The RGB images, on the other hand, are captured properly.

The duplicates seem to be more common for the second camera, self.pipeline_1.

The main issue I am trying to tackle is the duplication of depth arrays.
A secondary issue is to increase the frame rate of the captured data, which is currently less than 1 FPS.

This is the code which I used to check for duplicates in the depth arrays:

import numpy as np

def check_duplicates(D_array, camera_no=1):
    duplicates = []
    for i in range(D_array.shape[0]):
        if i != 0:
            check = np.count_nonzero(D_array[i]-prev_D_array)
            if check == 0:
                print(f'Image {i} of camera {camera_no} is a duplicate')
                duplicates.append(i)
        prev_D_array = D_array[i]
    print(f'Camera {camera_no} has {len(duplicates)} duplicates')
    return duplicates
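
For reference, the same check can be written more compactly with np.array_equal (a sketch of the identical logic, not a different method):

import numpy as np

def check_duplicates_simple(D_array, camera_no=1):
    # Same comparison as above: an image counts as a duplicate if it is
    # identical to the immediately preceding image.
    duplicates = [i for i in range(1, D_array.shape[0])
                  if np.array_equal(D_array[i], D_array[i - 1])]
    print(f'Camera {camera_no} has {len(duplicates)} duplicates')
    return duplicates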

This is the camera class I created to record the numpy arrays:

import pyrealsense2 as rs
import numpy as np
import time
from datetime import date
import os

class Camera:
    def __init__(self, serial_no, config_num, parent_dir):
        # Pipelines
        self.pipeline_0, self.pipeline_1 = rs.pipeline(), rs.pipeline()
        # Configs
        config = {0:{'width':  640, 'height': 480, 'frame_rate': 30},
                  1:{'width': 1280, 'height': 720, 'frame_rate': 5}}
        self.config_0, self.config_1 = rs.config(), rs.config()

        # Set up first camera config
        self.config_0.enable_device(serial_no[0])
        self.config_0.enable_stream(rs.stream.depth, config[config_num]['width'], config[config_num]['height'],
                                    rs.format.z16, config[config_num]['frame_rate'])
        self.config_0.enable_stream(rs.stream.color, config[config_num]['width'], config[config_num]['height'], 
                                    rs.format.bgr8, config[config_num]['frame_rate'])
                                    
        # Set up second camera config
        self.config_1.enable_device(serial_no[1])
        self.config_1.enable_stream(rs.stream.depth, config[config_num]['width'], config[config_num]['height'],
                                    rs.format.z16, config[config_num]['frame_rate'])
        self.config_1.enable_stream(rs.stream.color, config[config_num]['width'], config[config_num]['height'], 
                                    rs.format.bgr8, config[config_num]['frame_rate'])
        
        # Create align object
        self.align = rs.align( rs.stream.color )

        # Create output folder
        folder_count = 0
        for file in os.listdir(parent_dir):
            if file.startswith(f'{date.today()}_'):
                folder_count += 1
        self.output_dir = os.path.join(parent_dir, f'{date.today()}_Scan_{folder_count}' )
        os.mkdir(self.output_dir)
        print(f'Output to {self.output_dir}')

    def start_recording(self, record_time):

        # Start stream pipeline from both cameras
        self.pipeline_0.start(self.config_0)
        self.pipeline_1.start(self.config_1)

        # Init variables
        aligned_frames_container_1, aligned_frames_container_2 = [], []
        print('##### Start Recording #####')

        # Start recording
        try:
            t_StartRecording = time.time()
            frame_count = 0
            while time.time()-t_StartRecording <= record_time:
                # Wait for a coherent pair of frames: depth and color
                frames_0 = self.pipeline_0.wait_for_frames()
                frames_1 = self.pipeline_1.wait_for_frames()

                # Real time alignment of frames
                frames_aligned_0 = self.align.process(frames_0)
                frames_aligned_1 = self.align.process(frames_1)

                # Extract depth and color frames
                depth_frame_0, color_frame_0 = frames_aligned_0.get_depth_frame(), frames_aligned_0.get_color_frame()
                depth_frame_1, color_frame_1 = frames_aligned_1.get_depth_frame(), frames_aligned_1.get_color_frame()

                # Save data as numpy arrays
                np.save(os.path.join(self.output_dir, f'CAM1_BGR_image_{frame_count:04d}.npy'), np.asanyarray(color_frame_0.get_data()))
                np.save(os.path.join(self.output_dir, f'CAM1_D_image_{frame_count:04d}.npy'),   np.asanyarray(depth_frame_0.get_data()))
                np.save(os.path.join(self.output_dir, f'CAM2_BGR_image_{frame_count:04d}.npy'), np.asanyarray(color_frame_1.get_data()))
                np.save(os.path.join(self.output_dir, f'CAM2_D_image_{frame_count:04d}.npy'),   np.asanyarray(depth_frame_1.get_data()))

                # Increase frame_count
                frame_count += 1

        except KeyboardInterrupt: # use ctrl+c to stop recording prematurely
            print('Stopping recording upon keyboard interrupt')
            pass

        else:
            print(f'Set record time of {record_time} seconds reached, stopping recording')

        finally: # stop recording
            record_time = time.time()-t_StartRecording
            print(f'Stopped recording, total time recorded: {record_time} seconds')
            print(f'Number of frames captured by both cameras: {frame_count}')
            print(f'Average FPS is: {frame_count/record_time}')
            self.pipeline_0.stop()
            self.pipeline_1.stop()


if __name__ == '__main__':
    # Serial number
    serial_no = ['046322252061','053422251131']
    # Config number setting (0 for 640*480; 1 for 1280*720)
    config_setting = 0
    # Camera output folder
    parent_dir = '/home/ubuntu/rover/camera_output'
    
    # Create camera instance
    Camera_instance = Camera(serial_no, config_setting, parent_dir)
    # Start recording
    Camera_instance.start_recording(record_time=150)
    print('finished')

@MartyG-RealSense
Collaborator

Hi @tmxftw I see you are using two separate pipelines with their own unique configurations, like the Python code in #1735 (comment)

When you say that the arrays are duplicated, do you mean that the second camera's depth array has the same values as the first camera's depth array? Or is it that in the second array, the frame numbers are repeating (1, 1, 1, 2, 2, 2, 3, 3, 3, etc.)?

If the problem is that the frame numbers are repeating then a RealSense team member advises at #5107 (comment) to use poll_for_frames() instead of wait_for_frames() to avoid duplicate frames. Using poll_for_frames() in applications that handle more than one camera is also Intel's recommendation, as advised at #2422 (comment)
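
For illustration only (this is a sketch of the general pattern, not code taken from those issues), a single-pipeline polling loop looks roughly like this; poll_for_frames() returns immediately, and an empty frameset just means that no new frames have arrived yet:

import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
try:
    while True:
        frames = pipeline.poll_for_frames()    # non-blocking, returns immediately
        if frames.size() == 0:                 # empty frameset: no new frames yet
            continue
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        # ... align, convert to numpy and save here ...
finally:
    pipeline.stop()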

@tmxftw

tmxftw commented Jul 14, 2022

Hi @MartyG-RealSense

  1. I plan to have the same configuration for both pipelines; is there an easier way of implementing this?

  2. The duplication occurs in a manner that the second (or third, fourth, etc.) depth numpy array is the same as the first. I know the arrays are being duplicated like this by doing an element-wise subtraction between arrays 2 and 1, 3 and 2, etc. If the resulting array contains only zeros, the depth array is counted as a duplicate. I repeat this process for the arrays obtained by cameras 1 and 2.

  3. I will try poll_for_frames() in the meantime; thank you for the tip.

@tmxftw

tmxftw commented Jul 14, 2022

Hi @MartyG-RealSense

I am still trying to implement poll_for_frames(); however, I have several questions about how to use it:

  1. In comment #5107 (comment), how do I synchronize the frames and subsequently align them?
  2. How do I ensure that the frames obtained by cameras 1 and 2 occur at the same time step? The two cameras are connected by a rigid link, so my current algorithm derives the location of camera 2 via a rigid transformation from camera 1, whose location is approximated from the depth readings of camera 1.
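
For illustration, this is the kind of timestamp comparison I have in mind (a sketch only; the 33 ms tolerance is an arbitrary assumption, not a recommended value):

def framesets_match(frames_0, frames_1, tolerance_ms=33.0):
    # Compare the two framesets' timestamps (in milliseconds); with Global Time
    # enabled they should be reported on a common clock.
    return abs(frames_0.get_timestamp() - frames_1.get_timestamp()) <= tolerance_ms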

@tmxftw

tmxftw commented Jul 14, 2022

I tried to implement poll_for_frames() like so, replacing wait_for_frames() with poll_for_frames() and keeping the frames.

    def start_recording_poll(self, record_time):

        # Start stream pipeline from both cameras
        self.pipeline_0.start(self.config_0)
        self.pipeline_1.start(self.config_1)

        print('##### Start Recording #####')
        print('poll for frames')

         # Start recording
        try:
            t_StartRecording = time.time()
            frame_count = 0
            while time.time()-t_StartRecording <= record_time:

                print(f'current frame: {frame_count}')

                try:
                    frames_0 = self.pipeline_0.poll_for_frames()
                    frames_1 = self.pipeline_1.poll_for_frames()

                    # Keep frames
                    frames_0.keep()
                    frames_1.keep()

                    # Real time alignment of frames
                    frames_aligned_0 = self.align.process(frames_0)
                    frames_aligned_1 = self.align.process(frames_1)

                except RuntimeError: # for null frames
                    time.sleep(0.5)
                    continue
                
                # Extract depth and color frames
                depth_frame_0, color_frame_0 = frames_aligned_0.get_depth_frame(), frames_aligned_0.get_color_frame()
                depth_frame_1, color_frame_1 = frames_aligned_1.get_depth_frame(), frames_aligned_1.get_color_frame()

                # Save data as numpy arrays
                np.save(os.path.join(self.output_dir, f'CAM1_BGR_image_{frame_count:04d}.npy'), np.asanyarray(color_frame_0.get_data()))
                np.save(os.path.join(self.output_dir, f'CAM1_D_image_{frame_count:04d}.npy'),   np.asanyarray(depth_frame_0.get_data()))
                np.save(os.path.join(self.output_dir, f'CAM2_BGR_image_{frame_count:04d}.npy'), np.asanyarray(color_frame_1.get_data()))
                np.save(os.path.join(self.output_dir, f'CAM2_D_image_{frame_count:04d}.npy'),   np.asanyarray(depth_frame_1.get_data()))

                # Increase frame_count
                frame_count += 1
                time.sleep(1)

        except KeyboardInterrupt: # use ctrl+c to stop recording prematurely
            print('Stopping recording upon keyboard interrupt')
            pass

        else:
            print(f'Set record time of {record_time} seconds reached, stopping recording')

        finally: # stop recording
            record_time = time.time()-t_StartRecording
            print(f'Stopped recording, total time recorded: {record_time} seconds')
            print(f'Number of frames captured by both cameras: {frame_count}')
            print(f'Average FPS is: {(frame_count)/record_time}')
            self.pipeline_0.stop()
            self.pipeline_1.stop()

However, the issue with duplicate depth arrays is still occurring.
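
One diagnostic that might narrow this down (a sketch; the helper below is illustrative and not from the SDK): poll_for_frames() returns an empty frameset rather than raising RuntimeError when no new data is ready, so the RuntimeError handler above may not be doing what was intended. Gating the saves on the depth frame's metadata frame number would show whether the pipeline is actually delivering new framesets:

def poll_new_frameset(pipeline, last_frame_number):
    # Return (frameset, frame_number) only when the depth frame number has
    # advanced past last_frame_number; otherwise return (None, last_frame_number).
    frames = pipeline.poll_for_frames()
    if frames.size() == 0:
        return None, last_frame_number             # nothing new yet
    depth_frame = frames.get_depth_frame()
    if not depth_frame or depth_frame.get_frame_number() == last_frame_number:
        return None, last_frame_number             # repeated or missing depth frame
    return frames, depth_frame.get_frame_number()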

@MartyG-RealSense
Collaborator

  1. I would think that you could define a single set of config instructions and use the same config name for both pipeline start instructions.
self.pipeline_0.start(self.config)
self.pipeline_1.start(self.config)

  2. In regard to sync between the frames of different sensors and between multiple cameras, if Global Time is set to true then the SDK should try to find the best timestamp match between the different frames. More information about this can be found at Global Camera Time #3909 and Global timestamps wrong after long use #4505 (comment)

  3. You can attempt to sync multiple cameras to a common timestamp with hardware sync, either by using one of the cameras as a Master camera that transmits a sync pulse to the other cameras (the Slaves that follow the Master's timing) or by generating a sync pulse with a signal generator device. The Intel white-paper documents in the links below describe this process for RealSense 400 Series camera models. A rough sketch combining these points follows the links below.

https://dev.intelrealsense.com/docs/multiple-depth-cameras-configuration

https://dev.intelrealsense.com/docs/external-synchronization-of-intel-realsense-depth-cameras
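
To tie points 1-3 together, here is a rough sketch (names and values are illustrative; the white papers above describe the full hardware-sync procedure, including which camera to start first):

import pyrealsense2 as rs

SERIALS = ['046322252061', '053422251131']  # serial numbers from the script in the first post

def make_config(serial, width=640, height=480, fps=30):
    # One helper builds identical stream settings for each camera.
    cfg = rs.config()
    cfg.enable_device(serial)
    cfg.enable_stream(rs.stream.depth, width, height, rs.format.z16, fps)
    cfg.enable_stream(rs.stream.color, width, height, rs.format.bgr8, fps)
    return cfg

# Enable Global Time and set hardware sync roles before starting the pipelines.
ctx = rs.context()
for dev in ctx.query_devices():
    serial = dev.get_info(rs.camera_info.serial_number)
    if serial not in SERIALS:
        continue
    depth_sensor = dev.first_depth_sensor()
    depth_sensor.set_option(rs.option.global_time_enabled, 1)
    # inter_cam_sync_mode: 1 = Master, 2 = Slave (per the multi-camera white paper)
    depth_sensor.set_option(rs.option.inter_cam_sync_mode, 1 if serial == SERIALS[0] else 2)

pipe_0, pipe_1 = rs.pipeline(), rs.pipeline()
pipe_0.start(make_config(SERIALS[0]))
pipe_1.start(make_config(SERIALS[1]))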

@MartyG-RealSense
Collaborator

Hi @tmxftw Do you require further assistance with this case, please? Thanks!

@MartyG-RealSense
Collaborator

Case closed due to no further comments received.
