Multithread Multicam xioctl(VIDIOC_S_FMT) failed Last Error: Device or resource busy #5939

Closed
locdoan12121997 opened this issue Feb 28, 2020 · 6 comments

@locdoan12121997 commented Feb 28, 2020


Required Info

Camera Model: 1x D435 and 1x D435i
Firmware Version: 05.12.02.100
Operating System & Version: Ubuntu 18.04.4 LTS
Kernel Version (Linux Only): Linux 5.3.0-40-generic
Platform: PC
SDK Version: 2.32.1
Language: Python
Segment: others

Issue Description

I have two depth cameras, a D435 and a D435i. I create one thread per camera so I can hardware-sync them. When the threads run, only thread 1 streams; thread 2 fails with the error xioctl(VIDIOC_S_FMT) failed Last Error: Device or resource busy.

The dmesg command gives this output:

[91401.485289] uvcvideo: Failed to query (GET_CUR) UVC control 1 on unit 3: -32 (exp. 1024).
[91401.592335] uvcvideo: Non-zero status (-71) in video completion handler.
[91403.207448] audit: type=1400 audit(1582876763.713:666): apparmor="DENIED" operation="open" profile="snap.gnome-system-monitor.gnome-system-monitor" name="/proc/12263/attr/current" pid=15216 comm="gnome-system-mo" requested_mask="r" denied_mask="r" fsuid=1001 ouid=1001
[91406.589807] uvcvideo: Failed to query (GET_CUR) UVC control 1 on unit 3: -32 (exp. 1024).

Below is my code:

import threading as th
import time

import cv2
import numpy as np
import pyrealsense2 as rs


class ReadCameraThread(th.Thread):

    def __init__(self, device, index):
        super(ReadCameraThread, self).__init__()
        self.device = device
        self.index = index

    def run(self):
        print("Device ID: " + str(self.index) + "")
        serial = self.device.get_info(rs.camera_info.serial_number)
        print("Camera with serial number: " + serial)

        depth_found = False
        color_found = False
        depth_sensor = None
        color_sensor = None

        for sensor in self.device.query_sensors():
            module_name = sensor.get_info(rs.camera_info.name)
            print(module_name)

            if (module_name == "Stereo Module"):
                depth_sensor = sensor
                depth_found = True
            elif (module_name == "RGB Camera"):
                color_sensor = sensor
                color_found = True

        if not (depth_found and color_found):
            print("Unable to find both stereo and color modules")

        depth_sensor.set_option(rs.option.exposure, 8500) # microseconds
        depth_sensor.set_option(rs.option.gain, 16)
        depth_sensor.set_option(rs.option.frames_queue_size, 1)

        if self.index == 0:
            print("Setting " + serial + " to master!")
            depth_sensor.set_option(rs.option.inter_cam_sync_mode, 1)
        else:
            print("Setting " + serial + " to slave!")
            depth_sensor.set_option(rs.option.inter_cam_sync_mode, 2)

        color_sensor.set_option(rs.option.enable_auto_exposure, 0)
        color_sensor.set_option(rs.option.exposure, 100) # microseconds
        color_sensor.set_option(rs.option.gain, 64)
        color_sensor.set_option(rs.option.frames_queue_size, 1)

        pipe = rs.pipeline()
        config = rs.config()
        config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
        config.enable_stream(rs.stream.color, 848, 480, rs.format.rgb8, 30)
        profile = pipe.start(config)

        last_time = 0

        for i in range(1500):
            frames = pipe.wait_for_frames()
            color_frame = frames.get_color_frame()
            depth_frame = frames.get_depth_frame()
            print(str(len(frames)) + " frames from " + serial)
            print("Drift: " + str(depth_frame.get_timestamp() - last_time))
            last_time = depth_frame.get_timestamp()
            color_image = np.asanyarray(color_frame.get_data())

            cv2.imwrite("data/" + serial+"/"+ str(i) + ".jpg", color_image)


ctx = rs.context()
for i, device in enumerate(ctx.query_devices()):
    read_camera_thread = ReadCameraThread(device, i)
    read_camera_thread.start()
    time.sleep(5)
@MartyG-RealSense (Collaborator) commented Mar 3, 2020

There is another Python script in the link below for streaming multiple cameras.

#1735 (comment)

Structurally, your script is similar to that one, taking care to identify the separate devices by serial number. I note, though, that the linked script takes the approach of defining separate '_1' and '_2' pipeline and config variables.


...from Camera 1

pipeline_1 = rs.pipeline()
config_1 = rs.config()
config_1.enable_device('013102060174')
config_1.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config_1.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

...from Camera 2

pipeline_2 = rs.pipeline()
config_2 = rs.config()
config_2.enable_device('046112051680')
config_2.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config_2.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)


Although both the _1 and _2 definitions are set to the same content - rs.pipeline() and rs.config() - the pipelines are treated as separate devices because a unique serial number is defined for each:

config_1.enable_device('013102060174')
config_2.enable_device('046112051680')

@locdoan12121997 (Author) commented

I want to sync multiple cameras. The white paper suggests creating a thread for each camera, so I have the following questions:

  1. Can running one thread per camera like this sync the cameras? I have tested capturing the frames while viewing them using your code above, and they do look synced, with a little bit of noise.
  2. Is starting one pipeline equivalent to creating one thread that runs the camera stream?

@MartyG-RealSense (Collaborator) commented Mar 6, 2020

As far as I know, running in a single thread or multiple threads in a multi-cam setup is a matter of personal preference rather than a compulsory requirement, and some developers believe that a multi-thread approach with a thread for each device provides better performance.

As the multi-cam white paper says, a way to test whether sync is working correctly is to observe the timestamps of the devices. If the timestamps noticeably drift apart over a period of minutes, this indicates that the devices are not synced, whilst if the offset between them remains constant, this indicates that they are successfully synced.
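A minimal way to observe that timestamp behaviour could look like the helper below (a hedged sketch; the function name is mine, and it assumes two already-started pyrealsense2 pipelines are passed in). Logging this value over several minutes shows whether the offset between the two cameras stays constant or drifts:

```python
def depth_timestamp_offset(pipe_a, pipe_b):
    # Grab one frameset from each pipeline and compare the depth
    # timestamps; get_timestamp() reports milliseconds. Log this
    # repeatedly and watch whether the offset stays constant or drifts.
    frame_a = pipe_a.wait_for_frames().get_depth_frame()
    frame_b = pipe_b.wait_for_frames().get_depth_frame()
    return frame_a.get_timestamp() - frame_b.get_timestamp()
```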

I'm not as certain about your second question. The approach that the rs-multicam example program takes is to create multiple pipelines, one for each detected device.

https://github.com/IntelRealSense/librealsense/tree/master/examples/multicam

@locdoan12121997 (Author) commented

I have been able to run the code you provided. I still have some questions about the problem I encountered:

  1. To rephrase my second question above: is pipeline.start() equivalent to creating another thread?
  2. You showed me code that runs multiple pipelines in one thread. My problem was getting the 'resource busy' error while running multiple pipelines on multiple threads in Python. Do you have a solution or example code for running multiple pipelines on multiple threads in Python?

@MartyG-RealSense (Collaborator) commented Mar 7, 2020

  1. I believe that pipeline.start() is not equivalent to creating a thread. Dorodnic, the RealSense SDK Manager, explains multi-threading as follows: "Different instances of pipeline or device objects can co-exist on different threads, and you can send frames from thread to thread using frame_queue primitive".

Some further explanation is provided in the SDK's frame management documentation.

https://dev.intelrealsense.com/docs/frame-management#section-frames-and-threads

  2. Another member of the RealSense support team recommended to someone that they look at the program in the link below as an example of multi-processing with RealSense in Python. I do not know, though, whether it meets your wish of having multiple threads and multiple pipelines.

https://github.com/PINTO0309/MobileNet-SSD-RealSense/blob/master/MultiStickSSDwithRealSense.py

@MartyG-RealSense (Collaborator) commented

This case will be closed after 7 days from now if there are no further questions. Thanks!
